Third Party Products
DBDocumentor™ Runtime Issues
The issues listed here are specific to DBDocumentor. If you do not see the issue you're facing addressed here, it may be covered in the general section, or it may be new to us. If, after checking the general section, you still don't find a resolution, please drop us an email with as many details as possible.
When DBDocumentor processes a script file, the SQL parser breaks the script into individual batches at each GO directive. If no GO is found in a script file, the entire file is treated as a single batch. The DBDocumentor SQL parser does not process the statements in file order; rather, it processes each batch in turn, looking in each batch first for CREATE statements and then for any DROP statements. If a DROP statement is present for an object already present in the project, the object is removed from the project.
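The batch-processing rules described above can be sketched as a toy model. The following Python is not DBDocumentor's implementation; it merely simulates the described behavior (split on GO, then, within each batch, apply every CREATE before any DROP), and the object names it parses are illustrative.

```python
def split_batches(script: str) -> list[list[str]]:
    """Split a script into batches wherever a line consists solely of GO."""
    batches, current = [], []
    for line in script.splitlines():
        if line.strip().upper() == "GO":
            batches.append(current)
            current = []
        else:
            current.append(line)
    if current:
        batches.append(current)
    return batches

def project_objects(script: str) -> set[str]:
    """Toy model of the documented rule: within each batch, every
    CREATE is applied before any DROP, whatever their textual order."""
    objects: set[str] = set()
    for batch in split_batches(script):
        creates = [l.split()[2] for l in batch
                   if l.strip().upper().startswith("CREATE ")]
        drops = [l.split()[2] for l in batch
                 if l.strip().upper().startswith("DROP ")]
        objects.update(creates)           # CREATEs first...
        objects.difference_update(drops)  # ...then DROPs
    return objects

# A DROP in the same batch as its CREATE removes the object:
print(project_objects("CREATE TABLE T (x int)\nDROP TABLE T\nGO"))
# A DROP in an earlier batch leaves the object in the project:
print(project_objects("DROP TABLE T\nGO\nCREATE TABLE T (x int)\nGO"))
```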
If the DROP for an object appears in the same batch as its CREATE, and occurs after the CREATE, DBDocumentor will add the object to the project and then remove it, leaving the object missing from the documentation.
To work around this issue, if a DROP is present in a script file for an object you wish to have documented, ensure that the DROP is contained in a batch occurring prior to the CREATE.
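As a sketch (the table name is illustrative), the first form below loses the object from the project, while the second keeps it documented:

```sql
-- Problematic: the DROP shares a batch with the CREATE, so DBDocumentor
-- adds the table to the project and then removes it.
CREATE TABLE Customers (CustomerID int NOT NULL, Name varchar(50))
DROP TABLE Customers
GO

-- Workaround: the DROP sits in its own batch, prior to the CREATE.
DROP TABLE Customers
GO
CREATE TABLE Customers (CustomerID int NOT NULL, Name varchar(50))
GO
```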
I occasionally receive an access violation in hhc.exe during output compilation. Is there a fix for this?
There are a variety of reasons why this may occur. By way of background, hhc.exe is a component of the Microsoft HTML Help Workshop and is the actual compiler of your CHM file. Most causes of access violations in hhc.exe are not under the control of Pikauba Software, but here are a few scenarios:
If these situations do not describe what you're seeing, please contact support with the following information:
While it is our objective to mitigate this problem within DBDocumentor, an actual resolution lies with Microsoft.
It is not uncommon for a computer's CPU to go to 100% for a period of time when heavy work is being performed. In the case of DBDocumentor™, this will happen on virtually all classes of CPU, regardless of processor speed, when a file is first loaded, and only when a file is first loaded. If your SQL project is made up of a single file containing all the script batches of your database, the length of time the CPU is at 100% will be longer. If a file contains a single script, the period of time may be undetectable.
During file load operations a number of sanity checks are performed on the file, and the larger the file, the greater the number of checks. It is also possible that Task Manager will report the DBDocumentor™ process as not responding.
By way of example, if a single file is used to contain a database of 50 tables and their indices, with approximately 400 stored procedures for these tables, and all the associated DROP statements and default data, this file will take between 10 and 15 seconds to load on a Pentium IV class machine. If this same file is broken into many files with each individual file containing a single SQL object, the load time is not noticeable.
For DBDocumentor, this problem was resolved in version 2.31.
DBDocumentor processes projects as a foreground application and conducts a large number of string-based operations when parsing the SQL. Under normal circumstances it is not uncommon for the CPU to be at 100%, but in these situations the system will remain responsive and Task Manager will not indicate that DBDocumentor has stopped responding. Under certain specific scenarios, however, DBDocumentor versions prior to 3.10 can hang.
If you are experiencing this problem, please first verify that you are running the most recent version of DBDocumentor. If you are running the latest version and are still experiencing the problem, please contact Pikauba Software to see if a fix is under development.
The following is a list of parsing issues that were present in previous versions, along with workarounds users may try to resolve the problem.
My procedure, data view, or table function returns only one result set, but DBDocumentor lists more than one. Why?
This is by design, and occurs when conditional logic directs the result set output. Listing every possible result set allows you to verify that their formats are the same; if differences are noted, they could pose a problem for the consumers of this object.
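For instance (object names illustrative), the following procedure emits only one result set per call, yet DBDocumentor will document both branches so their shapes can be compared:

```sql
CREATE PROCEDURE GetCustomer @ID int
AS
    IF @ID IS NULL
        -- Branch 1: all customers
        SELECT CustomerID, Name FROM Customers
    ELSE
        -- Branch 2: one customer; same column shape as branch 1
        SELECT CustomerID, Name FROM Customers WHERE CustomerID = @ID
GO
```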
My procedure returns all the columns of a table, data view, or table function (e.g. via SELECT *), but DBDocumentor does not list the columns. Why?
This is by design. If the table, data view, or table function is included in the project, DBDocumentor will attempt to locate it and hyperlink to it. DBDocumentor will not expand the column information, as this result set is intrinsically non-deterministic. From an interface design perspective, you may wish to consider redefining the procedure to return only named columns.
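A sketch of the difference (table and procedure names illustrative):

```sql
-- DBDocumentor hyperlinks to Customers but cannot list the columns,
-- because "*" is resolved only when the procedure runs.
CREATE PROCEDURE GetCustomers
AS
    SELECT * FROM Customers
GO

-- Naming the columns gives a stable, documentable result set.
CREATE PROCEDURE GetCustomerSummary
AS
    SELECT CustomerID, Name FROM Customers
GO
```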
I have joins that return the same column name from multiple tables, and DBDocumentor lists the column name followed by a numeral. Why?
This is by design and indicates that duplicate column names are present. Note that any access to objects containing duplicate column names will present difficulties to client-side APIs such as ADO and ADO.NET.
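For example (table names illustrative), both sides of this join expose a Name column, so the result set contains two columns with the same name; aliasing removes the ambiguity:

```sql
-- Duplicate column names: the second Name is reported with a numeral
-- suffix, and client APIs face the same ambiguity.
SELECT c.Name, o.Name
FROM Customers c
JOIN Orders o ON o.CustomerID = c.CustomerID

-- Aliased version: every column name is unique.
SELECT c.Name AS CustomerName, o.Name AS OrderName
FROM Customers c
JOIN Orders o ON o.CustomerID = c.CustomerID
```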
My procedure returns a result set from a nested procedure. How can I get that procedure to be included?
Currently the only method to accomplish this is via the override tag: include the parameters of the nested result set in the upper-level procedure's documentation, then apply the override tag there. e.g.
There are a number of reasons why no output might be returned. For the purposes of this discussion, it is assumed that DBDocumentor runs to completion with no obvious errors and a compiled help file is produced, but that the help file contains only an overview section. Some of the reasons for this situation include:
Developers interested in more information on this subject may wish to view:
© 2001 - 2004 Pikauba Software. All rights reserved.