The Breker Trekker
Tom Anderson, VP of Marketing
Tom Anderson is vice president of Marketing for Breker Verification Systems. He previously served as Product Management Group Director for Advanced Verification Solutions at Cadence, Technical Marketing Director in the Verification Group at Synopsys and Vice President of Applications Engineering at …
Guest Post: Documentation Is Not Just a Requirement
October 21st, 2013 by Tom Anderson, VP of Marketing
Breker customers have surely noticed that the quantity and quality of our product documentation have taken a huge leap in the last six months or so. This is due to the Herculean efforts of Bob Widman, a well-known documentation, training, and applications expert in the EDA industry. He has been working with Breker for most of this year and the results speak for themselves. We’re pleased that Bob has contributed the following guest post on the importance of documentation:
Why does a company provide documentation with its product? The typical answer is that the customer expects it. Often overlooked is how the process of creating the documentation has a positive impact on the product and the company that is developing it.
When new employees start working for a company, the typical way for them to begin learning the product is by taking a training class or reading the documentation. But what if there is none? The challenge for a writer is how to get started and how to get the necessary support from those who can provide it. I would like to discuss three aspects of documentation creation, using my experience at Breker Verification Systems:
Documentation is a team effort
Let’s assume that the primary goal is to provide useful documentation of a product for the customer. To minimize problems and streamline the documentation process, the focus will be on accuracy, simplicity, consistency, extensibility, and maintainability. Automation should be used where it makes sense.
Fortunately, at Breker there was a strong commitment to providing quality documentation, and it came from every level of the company. Before I could start creating the documentation, I needed to acquire a basic knowledge of the product and its status. The process began with a personal marketing presentation and a demo of the product, which involved many questions and answers.
After much discussion, the goal became a reference manual and a user guide for each of two products: TrekSoC and the Trek Core graph-based technology that underlies all of Breker’s products. In parallel, an application engineer was developing two training classes. The training material, including the labs, would form the basis of the user guides, with bulleted lists expanded into paragraphs and enhanced with additional material.
A reference manual includes a description of all the application programming interfaces (APIs) and other product features, including GUI usage and any Backus–Naur Form (BNF) describing the syntax of languages or commands. A user guide describes the flow and how everything fits together. It was logical to start with the reference manuals, because they describe what every flow is built upon, and they were also the easiest to produce.
Breker created the specifications for Trek Core and TrekSoC using the Microsoft OneNote product. Engineering, Application Engineers (AEs), Marketing, and I could access all this information via the Web. It was used to track proposals, define concepts, build consensus, and specify the APIs. This was my primary source for documenting the APIs and other features. It was a team effort: everyone could add their input and make corrections. Items were marked in blue when they were committed and green after they had been implemented.
While the training material was being created, I provided feedback by going through it slide by slide with the AE developing it. This was a great way to learn the product while providing input from the perspectives of a technical writer and a novice user. Lots of questions were asked and answered by both of us. The result was improved training material and a solid foundation for the development of the user guides.
The documentation must match the software
Specifications are great as a source for creating the documentation, but how do you make sure that the APIs that were documented match what was implemented in the software? Our solution was to save each of the API chapters in the documentation as a text file and compare each API with what is defined in the software, using a script. This script finds any APIs implemented in the software but not documented, those documented but not implemented, and any argument mismatches or data type discrepancies.
The script also reads in an exclusion list for those APIs that are not ready for public consumption (not to be documented). If such APIs were documented, they were conditionally hidden (by marking them as conditional text of type “future”), so they are seen by neither the customer nor the script.
The script was written by an AE and has already found several errors in the documentation, which were then corrected. When the script was run the first time, many APIs were reported as implemented but not documented. Most of these were legacy APIs that were not to be documented, so they were added to the exclusion list.
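As a rough illustration, a cross-check like the one described above might be sketched in Python as follows. The file formats, the function names, and the one-signature-per-line assumption are all hypothetical for this sketch; Breker’s actual script is not shown here.

```python
import re

# Assumed input format: one "name(arg, arg, ...)" signature per line.
SIG_RE = re.compile(r"\s*(\w+)\s*\(([^)]*)\)")

def load_signatures(path):
    """Read API signatures from a text file into {name: [args]}."""
    sigs = {}
    with open(path) as f:
        for line in f:
            m = SIG_RE.match(line)
            if m:
                args = [a.strip() for a in m.group(2).split(",") if a.strip()]
                sigs[m.group(1)] = args
    return sigs

def cross_check(doc_file, software_file, exclusion_file):
    """Report APIs missing from either side, plus argument mismatches,
    skipping names on the exclusion list (one name per line)."""
    documented = load_signatures(doc_file)
    implemented = load_signatures(software_file)
    with open(exclusion_file) as f:
        excluded = {line.strip() for line in f if line.strip()}
    common = documented.keys() & implemented.keys()
    return {
        "implemented_not_documented":
            sorted(implemented.keys() - documented.keys() - excluded),
        "documented_not_implemented":
            sorted(documented.keys() - implemented.keys()),
        "argument_mismatches":
            sorted(n for n in common if documented[n] != implemented[n]),
    }
```

In a setup like this, each run produces three lists for the writer to act on: document the first, fix or remove the second, and reconcile the third against the specification.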
The examples must work
How does one make sure the examples in the documentation are correct? The obvious response is to create working examples and make sure they run correctly. This is a good start, but it only solves part of the problem. The software is a moving target: it is constantly being updated to improve results and expand functionality, which may cause some examples to fail when run with later versions of the product.
The solution is to make sure that each example used in the documentation is part of a regression suite that is run as each software release is created. This is automated through “make” files. The test file for each example begins with any pertinent comments, including the name of the file. The complete working file is added to the documentation, with the file comments and any other parts not relevant to the user hidden.
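A regression driver of this kind could be sketched as follows. The directory layout (one subdirectory per example, each with its own Makefile), the function name, and the pluggable runner are assumptions made for illustration, not Breker’s actual setup.

```python
import subprocess
from pathlib import Path

def run_example_regressions(examples_root, runner=None):
    """Run every documentation example under examples_root and return the
    names of the examples that failed. Assumed layout: one subdirectory
    per example, each containing a Makefile whose default target exits
    nonzero on failure. A custom runner callable can be supplied for
    testing or for non-make flows."""
    if runner is None:
        runner = lambda d: subprocess.run(["make", "-C", str(d)]).returncode
    failures = []
    for makefile in sorted(Path(examples_root).glob("*/Makefile")):
        if runner(makefile.parent) != 0:
            failures.append(makefile.parent.name)
    return failures
```

Hooking a driver like this into the release flow means a failing example blocks the release the same way a failing product test would, which is exactly the guarantee the documentation needs.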
Sometimes you do not have space in the documentation for the complete example, or you want to show only the parts of the example that are pertinent to the discussion. In that case, the hidden code is replaced with an ellipsis (…) to indicate that not all of the code is shown.
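One way to automate this kind of hiding is with begin/end markers in the example source; the marker strings and the function below are hypothetical, just to show the idea.

```python
def extract_doc_example(lines, begin="// doc-hide-begin",
                        end="// doc-hide-end", ellipsis="..."):
    """Return the doc-visible lines of an example file, replacing each
    region bracketed by the (assumed) begin/end marker comments with a
    single ellipsis line."""
    out, hiding = [], False
    for line in lines:
        stripped = line.strip()
        if stripped == begin:
            hiding = True
            out.append(ellipsis)   # stand-in for the hidden region
        elif stripped == end:
            hiding = False
        elif not hiding:
            out.append(line.rstrip("\n"))
    return out
```

Because the markers live in the tested source file itself, the published excerpt can never drift from the code that the regression suite actually runs.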
So far, more than one hundred working examples have been run, and the errors they exposed have been corrected. These documentation example files will be added to the automated release regression tests. This is an ongoing effort.
Documentation does take a team effort, and everyone wins. It’s the questions that get asked, along with the follow-up discussions and answers, that make the documentation better, and the product better as well. Automation with scripts or “make” files can find errors that are often missed by the writer or reviewer. Setting up a documentation process early can save a tremendous amount of time and effort, and it results in improved documentation.
Over the past six months, Breker as a team has created two training classes, two reference manuals, and two user guides. Engineering, AEs, and Marketing were all part of this effort, and everyone has benefited, including me. There is still quite a bit of work to be done, but we have come a long way.