This article presents a new approach to creating software documentation and describes the structure, functionality and internals of the gds application, which was created to implement the concepts exposed here. Please note that the application itself is not intended as a commercial product or a company tool (although it may serve as such), but rather as a conceptual implementation of the following ideas.

Software documentation usually consists of one or more textual and graphical documents that explain how the software is structured and how it works. There are many types of software documentation and even more methodologies to create it (e.g. the Unified Process, Tropos - agent-oriented, etc.), and most of them are standardized sets of steps, refinements and frameworks. Without diving into one specific documentation system, it is obvious that each has pros and cons depending on the task it is used for. Following a software documentation methodology means planning a number of initial steps and then proceeding to design the application's components, which will eventually be translated into actual code. Although modeling languages, graphs and diagrams can be used to describe the structure of an already written code base, the standard modeling techniques recommend a series of steps and practices to be followed whenever a software project is started. In particular, modeling a large piece of software from the ground up is a complex task that requires deep knowledge of the underlying architecture, APIs and available features.

Having a perfect knowledge of a language doesn't mean you can always easily grasp the meaning or the concepts expressed in a text (or in a generic graphical representation) that uses that language - and this isn't true just for programming languages. It is therefore perfectly normal to spend time on a source code file to understand what the code actually does and to "mentally link" those operations within the bigger picture of a larger application. Describing a software's behavior and structure should help the people who are supposed to work with your code understand how you structured it (how many modules, whether you followed a pattern, etc.) and what the purpose of a specific part of your project is. Extensively commenting a piece of code is good programming practice, although sometimes it is not enough to completely replace proper documentation. If you really want other people to understand your code, your primary goal is to give them complete insight into what your application does and how it works. A software company that hires a new programmer and puts him to work on a specific part of a larger application is interested in providing him with as much information as possible on how that module works, what it is supposed to do and (possibly) why it was designed that way. The sooner the programmer grasps the code and gets familiar with it, the sooner he will be fully operative on it and capable of modifying or extending it.

The ability to explain a concept clearly and as effectively as possible is a personal skill and varies from person to person. However, there are practices and techniques that can greatly simplify the understanding of a concept. First of all: the level of complexity. If a module is very complex (i.e. it is composed of many other functions/modules or performs a great variety of tightly interconnected operations), it might be difficult to describe in formal documentation. In general, every complex element can be split into a number of smaller parts. Let's take, for instance, a simple program which asks for a matrix and returns the square of each of its entries.
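A minimal sketch of such a program's core operation might look like the following (purely illustrative; the function name and the omission of console input/output are my own choices):

```cpp
#include <cassert>
#include <vector>

// Square every entry of a matrix, represented as a vector of rows
std::vector<std::vector<int>> squareMatrix(const std::vector<std::vector<int>> &m)
{
    std::vector<std::vector<int>> out = m;
    for (auto &row : out)
        for (auto &value : row)
            value = value * value;
    return out;
}
```

A real version would also read the matrix from the console and print the result; the transformation itself is what the deeper documentation levels would describe.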

There are various levels of detail at which this simple program's tasks and modus operandi could be explained. Just as higher-level languages abstract more than lower-level ones, a first, high level may explain the program's purpose in a simple and concise way. A second level may expand on the first and give insight into how the program is structured. A third level may continue the second level's work and further expand each description. This process can continue until the application's code is reached, which acts as a final level containing all the information needed. The code provides the greatest level of detail, but it is also more complex than the other levels.

The above program might have a level structure like the following

In this case three levels (plus the code level) were chosen to represent the same information at different levels of detail (deeper levels being richer), but the number of levels could have been greater. The concept of "greater detail" is fundamental to almost every software documentation and to every software design process.

Another concept that needs to be taken into account when getting started with a new software code base is the context a chunk of code is inserted into. Most of the time spent searching for the specific part of the code where the program performs certain operations is needed by the programmer to build a "mental map" in which each block is categorized and its role in the overall architecture is well defined.

Finally, the execution order of the program's blocks isn't always obvious, especially when dealing with highly multi-threaded code. Sometimes only a careful reading can lead to understanding the synchronization mechanisms of the threads involved.

Using a graphical and interactive approach to software documentation is a relatively new concept. Since a concrete example is worth a thousand words, in this section a small Qt C++ program will be presented along with its associated interactive documentation. The entire package (program sources + documentation directory) can be downloaded via the link at the top of this page.

The program we are going to examine with the help of an interactive documentation is a simple one: a basic linear function plotter on a restricted Cartesian graph area

Since this is a sample (and simple) application, just a basic drawing feature has been implemented, with code that is definitely not brilliant in terms of error handling and modularity. Although the code is not hard to understand by reading it in full, if the application had been more complex a programmer would have spent a considerable amount of time trying to understand its structure, all the data types and their roles, the execution flow (as already said, multithreading can hinder this process) and the overall cooperation between the various modules.

The following video is a showcase of how the gds software produces an interactive documentation for the simple graph application

GDS stands for "Graphical Documentation System"; it is an experimental concept application designed to provide an interactive and extremely intuitive overview of an unfamiliar code base. If used properly, gds allows a programmer to create highly detailed documentation of their code for others to use and understand.

The concepts presented a few sections above guided the design and realization of the gds app. In this section the application's usage is briefly presented; afterwards, the application's structure and code organization will be described. Note that gds uses OpenGL rendering and requires OpenGL 3.3 or higher to run properly. It also needs the Microsoft Visual C++ 2010 Redistributable x86 package installed (you can freely download it from here).

The application has two main operative modes:

View Mode - this mode provides a virtual tour of the three-level documentation and is recommended for first-time code users

Edit Mode - this mode allows you to create a new documentation (if the documentation directory where all the database files are stored isn't present) or to edit an existing one

The user is prompted for a choice upon the application's start

The view mode is quite intuitive: there is a code pane (not visible in level one, since everyone should be able to understand that level), a central diagram pane and a documentation pane on the right. There is also a navigation pane that allows three actions

Zoom out to the previous level (level 1 is the maximum a user can zoom out to)

Zoom in on the selected node (level 3 is the maximum a user can zoom in to)

Select next block - this is useful for navigating inside the code and following the precise order in which things happen in the code logic. The block order can be set in edit mode (we'll see how shortly).

The following screenshots show the gds app in view mode at levels 1, 2 and 3

The edit mode allows a programmer to create a new documentation or modify an existing one. When there is no documentation (i.e. there is no gdsdata directory in the application's path), no graph is available and gds tries to recreate it. This could mean that the documentation has been moved elsewhere (and gds can't find it) or that there is no documentation yet.

The following is the gds app in edit mode with no documentation found

With the "Add Child Block" button, nodes (or a root node, if there is no graph yet) can be added to the documentation along with a label (a block name), an index and the documentation text. The index field is used by the view mode to navigate through the blocks in the right order. If this index is duplicated or set incorrectly, the view mode will simply navigate in the wrong order. At each level it is possible to delete an element (whatever it is: root/child/parent) or swap its content with another one's.
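The navigation logic implied by the index field can be sketched as follows (a simplified model with hypothetical names, not gds's actual code): blocks are simply visited in ascending index order, and duplicate or wrong indices are not validated - navigation just follows whatever order results.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hypothetical block record: a label plus the user-defined navigation index
struct Block {
    std::string label;
    int index;
};

// Return the labels in the order the view mode would navigate them:
// sorted by the user-defined index, with no validation performed
std::vector<std::string> navigationOrder(std::vector<Block> blocks)
{
    std::stable_sort(blocks.begin(), blocks.end(),
                     [](const Block &a, const Block &b) { return a.index < b.index; });
    std::vector<std::string> order;
    for (const auto &b : blocks)
        order.push_back(b.label);
    return order;
}
```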

Pressing the "Next Level" button while a node is selected will cause gds to create a sub-level for that node, meaning the block needs additional detail on how it works. gds automatically saves modified nodes when navigating through levels or closing the application.

The code pane on the left is visible only in levels 2 and 3 and lets a user select a codefile and highlight lines in it

A level 2 block may not have a codefile associated with it, hence the "Clear" button. Notice that gds is meant to reside in a fixed location inside your code project's root directory; every path to a codefile is stored relative to the gds directory. There are simple correction algorithms to retrieve the right line of code if it has been moved; however, gds should be used to document files that are meant to be released and "ready". Obviously a documentation file might also be deleted, in which case the associated node would show a "Documentation file not found" error and allow the user to define a new documentation file. As already stated, this is an experimental concept application meant to convey a new documentation method; a commercial tool embracing this philosophy should integrate these functionalities into a proper version control system (which would also keep track of changed and moved files).

Since gds has been conceived as an easy-to-use application, there is nothing else a normal user needs to know in order to use it.

The following sections describe the programming logic behind gds in greater detail, so they are mainly targeted at a programming audience or at anyone interested in modifying gds (gds is open source).

The following section requires some basic OpenGL programming knowledge - reader be advised.

The QGLDiagramWidget class is the main widget of the entire gds application. It is the central pane which displays the 3D graph and lets the tree diagram be rendered. Since the application uses the Qt libraries, the widget is a subclass of QGLWidget, which provides functionality for drawing OpenGL graphics through three main virtual functions that can be reimplemented:

paintGL() - this is the function where the openGL scene is rendered and where most of the widget's code resides

resizeGL() - called whenever the widget is resized

initializeGL() - sets up an openGL rendering context, called once before paintGL or resizeGL

The diagram widget also uses overpainting (see the Qt documentation for more information), which basically means the block name is painted over the OpenGL-rendered scene. The code that requests a repaint and then performs the overpainting is the following

The code is extensively commented, but a few words are worth spending since they may give useful insight into what's going on.

The QGLDiagramWidget uses double buffering: the scene rendered to the OpenGL context isn't shown until a swapBuffers() call is made. This prevents flickering between colorpicking modes (we'll explain this shortly) and animated transitions.

The initializeGL() function takes care of initializing all the resources needed by the OpenGL scene, i.e. VBOs (Vertex Buffer Objects: buffers that store the data of the elements to be drawn, such as vertices, UV texture coordinates, normals and indices), textures and shaders.

The GL widget uses a simple 3D model whose vertices, UV texture coordinates, normals and indices are stored in the "roundedRectangle.h" file. The widget renders it and applies a Phong lighting model through the compiled shader programs (there are two pairs of vertex and fragment shaders: the first pair is used to draw elements normally, the second one is used to render the colorpicking scene and to perform simple operations such as drawing connectors).

The vertex shader used to render objects normally is the following

#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 aVertexPosition;
layout(location = 1) in vec2 aTextureCoord;
layout(location = 2) in vec3 aVertexNormal;
// Values that stay constant for the whole mesh.
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat3 uNMatrix;
out vec2 vTextureCoord;
out vec3 vTransformedNormal;
out vec4 vPosition;
void main()
{
// Pass along the position of the vertex (used to calculate point-to-vertex light direction),
// no perspective here since we need absolute position (we used absolute position for the light point too)
vPosition = uMVMatrix * vec4(aVertexPosition, 1.0);
// Set the complete (Perspective*model*view) position of the vertex
gl_Position = uPMatrix * vPosition;
// Save the uv attributes
vTextureCoord = aTextureCoord;
// Pass along the normal multiplied by the normal matrix (the uNMatrix is
// necessary, otherwise the normals would point in the wrong direction and
// would no longer be unit-length vectors); this matrix preserves direction
// and unit length while converting the normals to absolute coordinates
vTransformedNormal = uNMatrix * aVertexNormal;
}

There are attributes to receive the model's vertices, UV texture coordinates and normal versors, along with uniforms to receive the perspective matrix, the modelview matrix (composed of the model matrix, set to the element's position, and the view matrix, set by default to the root element but changeable with the directional arrow keys) and a normal matrix needed to preserve the direction and unit length of the normal vectors (if you are interested in why a special normal matrix needs to be passed to the fragment shader to adjust lighting, take a look at Eric Lengyel's "Mathematics for 3D Game Programming and Computer Graphics"). The vertex shader basically just calculates the new vertex position and passes it, along with the UV coordinates and the transformed normal, to the fragment shader.
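To see why the inverse-transpose (the "normal matrix") is needed, consider the following sketch (a plain C++ illustration with my own helper names, not code from gds): under a non-uniform scale, a normal transformed by the model matrix is no longer perpendicular to the surface, while one transformed by the normal matrix remains perpendicular.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<double, 9>; // row-major 3x3 matrix

double det(const Mat3 &m)
{
    return m[0]*(m[4]*m[8]-m[5]*m[7])
         - m[1]*(m[3]*m[8]-m[5]*m[6])
         + m[2]*(m[3]*m[7]-m[4]*m[6]);
}

// The "normal matrix": the inverse-transpose of the (upper-left 3x3 of the)
// modelview matrix. Since inverse(M) = transpose(cofactor(M)) / det(M),
// the inverse-transpose is simply the cofactor matrix divided by the determinant.
Mat3 normalMatrix(const Mat3 &m)
{
    double d = det(m);
    return {
        (m[4]*m[8]-m[5]*m[7])/d, (m[5]*m[6]-m[3]*m[8])/d, (m[3]*m[7]-m[4]*m[6])/d,
        (m[2]*m[7]-m[1]*m[8])/d, (m[0]*m[8]-m[2]*m[6])/d, (m[1]*m[6]-m[0]*m[7])/d,
        (m[1]*m[5]-m[2]*m[4])/d, (m[2]*m[3]-m[0]*m[5])/d, (m[0]*m[4]-m[1]*m[3])/d
    };
}

Vec3 mulVec(const Mat3 &m, const Vec3 &v)
{
    return { m[0]*v[0]+m[1]*v[1]+m[2]*v[2],
             m[3]*v[0]+m[4]*v[1]+m[5]*v[2],
             m[6]*v[0]+m[7]*v[1]+m[8]*v[2] };
}
```

For a surface with tangent (1,1,0) and normal (1,-1,0), scaling x by 2 turns the tangent into (2,1,0); the naively transformed normal (2,-1,0) is no longer perpendicular to it, whereas the normal transformed by normalMatrix() is.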

The fragment shader takes care of calculating the light direction (these shaders use per-fragment lighting from a point light) and the light weighting vector that will be used to weight the light's color components. Finally, it renders the texture (using the UV texture coordinates), taking the light weighting into account.

The paintGL() function is where most of the graphic work is done. After initializing the viewport and several other default values (e.g. glClearColor), the function can switch between two modes:

A color picking one

The normal rendering one

Color picking is a graphic technique often used with OpenGL to identify objects clicked in the scene. It is a more recent technique than SELECT picking and integrates perfectly with programmable pipelines (SELECT picking, on the other hand, relies on the fixed pipeline).

Basically, each object is stored as a "dataToDraw" object and is assigned a unique color

When the user clicks on an object, the mouse position is recorded and, since raw OpenGL doesn't recognize objects as entities, the entire scene is rendered again with each mesh's unique flat color. The pixel at the clicked position is then read back from the framebuffer and its color is compared against each object's color to identify the object the user clicked on.
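The encoding and decoding steps of this technique can be illustrated as follows (hypothetical helper names, not gds's actual code): each object index is packed into the three color channels before the picking render pass, and the color read back from the framebuffer is unpacked to recover the index.

```cpp
#include <cassert>
#include <cstdint>

// A flat RGB color as stored in the picking framebuffer
struct RGB { uint8_t r, g, b; };

// Derive a unique color from an object index by packing its
// 24 low bits into the three 8-bit color channels
RGB indexToColor(uint32_t index)
{
    return { uint8_t(index & 0xFF),
             uint8_t((index >> 8) & 0xFF),
             uint8_t((index >> 16) & 0xFF) };
}

// Recover the object index from the color read back from the framebuffer
uint32_t colorToIndex(RGB c)
{
    return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
}
```

With 24 bits available, up to 16 million objects can be distinguished, far more than any diagram will ever contain.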

Another big unit of the project is the edit mode window, mainly because of the number of controls and widgets it incorporates. The code is highly commented here too, so we'll just focus on the parts that are relevant to a complete comprehension. The view window code is rather similar, although there are a great number of small differences that would make refactoring the two into a single class a living hell (that's why two classes were created).

By default the edit mode window's constructor starts in level-one mode. Each level is identified by an enum value and each object (i.e. each block) has a dbDataStructure associated with it. The core structure declarations can be found in the "gdsdbreader.h" header

The structure provides fields to store each object's data (label, user index, unique index used to build the documentation structure, compressed rich text data, etc.) along with data that is not meant to be stored on disk; that's why there are two stream operator overrides that take care of what should and what should not be written to disk.

Their code is quite large, but they perform roughly 70% of the work of the gds storage system.
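A drastically simplified illustration of this selective-serialization idea follows (gds itself uses Qt's QDataStream; the field names here are illustrative): only the persistent fields travel through the stream, while runtime-only data such as GL pointers is deliberately skipped and rebuilt on load.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Simplified stand-in for dbDataStructure
struct Node {
    std::string label;          // persistent
    unsigned long long id = 0;  // persistent
    void *glPointer = nullptr;  // runtime only - never serialized
};

// Write only the persistent fields; glPointer is skipped on purpose
std::ostream &operator<<(std::ostream &os, const Node &n)
{
    return os << n.label << '\n' << n.id << '\n';
}

// Read the persistent fields back; runtime data is reset, not loaded
std::istream &operator>>(std::istream &is, Node &n)
{
    std::getline(is, n.label);
    is >> n.id;
    is.ignore();            // consume the trailing newline
    n.glPointer = nullptr;  // will be re-created by the GL widget
    return is;
}
```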

The tryToLoadLevelDb() function takes care of loading the database files from the gds default directory (defined in "gdsdbreader.h"), depending on the level we want to explore. The "returnToElement" parameter specifies whether the function should re-select the previously zoomed element when returning from a deeper level.

The saveCurrentLevelDb() and saveEverythingOnThePanesToMemory() functions respectively save all the item data to disk and to memory (by re-constructing an updated version of the dbDataStructure tree).

All the elements are stored in a dynamic QVector<dbDataStructure*> vector

// All elements for the current active graph (and relative GL pointers)
QVector<dbDataStructure*> m_currentGraphElements;
// The selected element index for the current active graph (this is updated by the openGL widget through a function)
dbDataStructure* m_selectedElement;
// These pointers help in finding/creating the next database file while browsing zoom levels
quint64 m_currentLevelOneID;
quint64 m_currentLevelTwoID;

The vector stores just the pointers to the elements; the connections between them (parent -> children) are stored in their dbDataStructure objects.

The m_currentLevelOneID and m_currentLevelTwoID variables keep track of the current element a zoom is active on in the first and second level (the third level doesn't have an additional zoom property).

The rich text area on the right is a textEditorWin object, which in turn is a subclass of QMainWindow. This is necessary to add toolbars, actions and complex controls to the base widget - a mere QTextEdit rich text editor. The code is rather straightforward and, except for a number of small changes, resembles the rich text editor example of the Qt SDK, so we won't bother describing it further.

The code area (for both edit and view windows) is a CodeEditorWidget (QTextEdit subclass) with a CppHighlighter (QSyntaxHighlighter subclass) object associated with its document() and set up with a standard C/C++ syntax highlighting configuration. Along with the initialization settings, a system of signals and slots (a Qt exclusive) provides a convenient way to link a line counter widget to the scrollbar events

The mouseReleaseEvent() override takes care of intercepting the block (the equivalent of a line in a plain-text context) where the user clicked (if in edit mode) in order to highlight a specific line of code whose number will be stored in the

QVector<quint32> m_selectedLines

vector. Each node's associated code (if any) is stored in the following fields

// These fields will be useful for levels 2 and 3
QString fileName; // Relative filename for the associated code file
QByteArray firstLineData; // Compressed first line data, this will be used with the line number to retrieve info
QVector<quint32> linesNumbers; // First and next lines (next are relative to the first) numbers

Notice that each code block is identified by its compressed first line data and by the following line numbers (lines after the first are stored relative to the first). This is a simple approach to cope with the lack of a proper version control system, which would instead check for differences and try to merge versions. Inserting marker comment tags in the code could have been another solution, but since we believe the code shouldn't be messed with, the above approach was chosen.
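The correction idea described above can be sketched like this (an illustration of the concept, not gds's actual algorithm; names are my own): the stored first line is searched for around the remembered position, and the absolute line numbers are rebuilt from the relative offsets.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Re-locate a code block inside a (possibly shifted) file. The block is
// identified by the content of its first line plus line offsets relative
// to it. Returns the absolute 0-based line numbers, or an empty vector
// if the first line can no longer be found.
std::vector<int> relocateBlock(const std::vector<std::string> &fileLines,
                               const std::string &firstLine,
                               int rememberedLine, // 0-based original position
                               const std::vector<int> &relativeOffsets)
{
    // Search outward from the remembered position for the stored first line
    int found = -1;
    for (int d = 0; d < (int)fileLines.size(); ++d) {
        int up = rememberedLine - d, down = rememberedLine + d;
        if (up >= 0 && up < (int)fileLines.size() && fileLines[up] == firstLine) { found = up; break; }
        if (down >= 0 && down < (int)fileLines.size() && fileLines[down] == firstLine) { found = down; break; }
    }
    std::vector<int> absolute;
    if (found < 0)
        return absolute; // block lost - comparable to gds's "file not found" case
    absolute.push_back(found);
    for (int off : relativeOffsets)
        absolute.push_back(found + off);
    return absolute;
}
```

A proper version control system would instead diff the file versions; this sketch only shows why storing the first line's content makes the block recoverable after it has been moved.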

When an object is selected on the OpenGL diagram (colorpicking mode - GLWidget), the widget notifies its parent window (edit or view mode) that the selection has changed. The edit window, however, provides an additional feature: element swapping. When the user presses the "Swap Element" button (a toggle button), the system records the currently selected item as the first swap item. When the user then selects another element, it is marked as the second swap item and the swap begins. Since the GLWidget simply ignores all this, the swap logic is handled by the edit window itself, and that's exactly what happens in the function above. If swap mode isn't active, the selected graph element is retrieved from the dbDataStructure, the selected element is saved to memory and the panes are reloaded with the new selected element's data.

Other tricky functions:

void MainWindowEditMode::on_deleteSelectedElementBtn_clicked()

This slot handles the "Delete Selected Element" action in three different ways:

If the selected element is the root, the whole graph is deleted

If the selected element isn't the root and has no children, it alone is deleted

If the selected element isn't the root but has children, the user is prompted to choose whether the system should delete the children along with their parent or re-assign them to their parent's parent through a pointer system
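The third case (re-assigning children to the grandparent) can be sketched as follows (an illustrative model with my own type names, not gds's actual code):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Simplified stand-in for a documentation graph node
struct TreeNode {
    int id;
    TreeNode *father = nullptr;
    std::vector<TreeNode*> children;
};

// Delete a non-root node, handing its children over to its parent
void deleteAndReparent(TreeNode *node)
{
    TreeNode *parent = node->father;
    // Re-assign every child to the grandparent
    for (TreeNode *child : node->children) {
        child->father = parent;
        parent->children.push_back(child);
    }
    // Unlink the node from its parent and free it
    parent->children.erase(
        std::find(parent->children.begin(), parent->children.end(), node));
    delete node;
}
```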

void MainWindowEditMode::on_addChildBlockBtn_clicked()

This slot handles the "Add Child Block" action. If the variable m_firstTimeGraphInCurrentLevel is set, the graph is empty and a root element must be created (no father); otherwise a child element is created and the selected element is set as its father.

Finally, the GLWidget provides functions to control the drawing process without the hassle of dealing with painting events

// This method is called by the openGL widget when it's ready to draw and if there's a scheduled painting pending
void MainWindowViewMode::deferredPaintNow()
{
// Sets the swapping value to prevent screen flickering (it disables repaint events)
bool oldValue = GLDiagramWidget->m_swapInProgress;
GLDiagramWidget->m_swapInProgress = true;
GLDiagramWidget->clearGraphData();
updateGLGraph();
// Data insertion ended, calculate elements displacement and start drawing data
GLDiagramWidget->calculateDisplacement();
// Restore the swapping value to its previous
GLDiagramWidget->m_swapInProgress = oldValue;
GLDiagramWidget->changeSelectedElement(m_selectedElement->glPointer);
// We selected an element for the first time (the graph has been loaded), we need to recharge this item's data
// Load the selected element data in the panes
loadSelectedElementDataInPanes();
}

First a call to clearGraphData() is made; this clears the block vectors in the GLWidget and requests a repaint. Then calculateDisplacement() is called to initialize the post-order traversal and the displacement calculation. Eventually changeSelectedElement() is called (if the element to be selected differs from the root) to select another element which, in turn, will instruct the painting function to use a different gradient texture to render the selected element.
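The idea behind a post-order displacement pass can be sketched as follows (a simplified illustration; the real calculateDisplacement() in gds is more elaborate and uses different names): leaves are assigned consecutive horizontal slots, every parent is centered above its children, and the vertical displacement is simply the depth.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Simplified stand-in for a drawable diagram node
struct LayoutNode {
    std::vector<LayoutNode*> children;
    double x = 0, y = 0;
};

// Post-order layout: children are positioned first, then the parent is
// centered over them. nextLeafX hands out consecutive slots to the leaves.
// Returns the x displacement assigned to the node.
double layout(LayoutNode *n, int depth, double &nextLeafX)
{
    n->y = depth;
    if (n->children.empty()) {
        n->x = nextLeafX++;
    } else {
        double first = 0, last = 0;
        for (std::size_t i = 0; i < n->children.size(); ++i) {
            double cx = layout(n->children[i], depth + 1, nextLeafX);
            if (i == 0) first = cx;
            last = cx;
        }
        n->x = (first + last) / 2.0;
    }
    return n->x;
}
```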

All the connections between elements are automatically created in the drawConnectionLinesBetweenBlocks() function of the GLWidget, so there's no need for the main windows to call it explicitly

void QGLDiagramWidget::drawConnectionLinesBetweenBlocks()
{
// This function is going to draw simple 2D lines with the programmable pipeline
// The picking shaders are simple enough to let us draw a colored line, we'll use them
ShaderProgramPicking->bind();
// NOTICE: normally each vertex would be multiplied by modelview = view * model.
// Since every element's model matrix is applied on the CPU side below, this
// uMVMatrix is actually filled with JUST the view matrix; the result for the
// shader is the same
GLuint uMVMatrix = glGetUniformLocation(ShaderProgramPicking->programId(), "uMVMatrix");
GLuint uPMatrix = glGetUniformLocation(ShaderProgramPicking->programId(), "uPMatrix");
// Send our transformation to the currently bound shader,
// in the right uniform
float gl_temp_data[16];
for(int i=0; i<16; i++)
{
// Needed to convert from double (on non-ARM architectures qreal is double) to float
gl_temp_data[i] = (float)gl_projection.data()[i];
}
glUniformMatrix4fv(uPMatrix, 1, GL_FALSE, &gl_temp_data[0]);
for(int i=0; i<16; i++)
{
gl_temp_data[i] = (float)gl_view.data()[i]; // AGAIN: just the view matrix in the uMVMatrix, the result will be the same
}
glUniformMatrix4fv(uMVMatrix, 1, GL_FALSE, &gl_temp_data[0]);
// Set a color for the lines
GLuint uPickingColor = glGetUniformLocation(ShaderProgramPicking->programId(), "uPickingColor");
glUniform3f(uPickingColor, 1.0f, 0.0f, 0.0f);
// If there's at most one element (just the root, no connections), exit
if(m_diagramDataVector.size() < 2)
return;
// Scroll the diagramDataVector and create the connections for each element
QVector<dataToDraw*>::iterator itr = m_diagramDataVector.begin();
// Create a structure to contain all the points for all the lines
struct Point
{
float x,y,z;
Point(float x,float y,float z)
: x(x), y(y), z(z)
{}
};
// This will contain all the point-pairs to draw lines
std::vector<Point> vertexData;
while(itr != m_diagramDataVector.end())
{
// Set the origin coords (this element's coords)
QVector3D baseOrig(0.0,0.0,0.0);
// Adjust them by porting them in world coordinates (*model matrix)
QMatrix4x4 modelOrigin = gl_model;
modelOrigin.translate((qreal)(-(*itr)->m_Xdisp),(qreal)((*itr)->m_Ydisp),0.0);
baseOrig = modelOrigin * baseOrig;
// Get each children of this node (if any)
for(int i=0; i< (*itr)->m_nextItems.size(); i++)
{
dataToDraw* m_temp = (*itr)->m_nextItems[i];
// Create destination coords
QVector3D baseDest(0.0, 0.0, 0.0);
// Adjust the destination coords by porting them in world coordinates (*model matrix)
QMatrix4x4 modelDest = gl_model;
modelDest.translate((qreal)(-m_temp->m_Xdisp),(qreal)(m_temp->m_Ydisp),0.0);
baseDest = modelDest * baseDest;
// Add the pair (origin;destination) to the vector
vertexData.push_back( Point((float)baseOrig.x(), (float)baseOrig.y(), (float)baseOrig.z()) );
vertexData.push_back( Point((float)baseDest.x(), (float)baseDest.y(), (float)baseDest.z()) );
}
itr++;
}
// We have everything we need to draw all the lines
GLuint vao, vbo; // VBO is just a memory buffer, VAO describes HOW the data should be interpreted in the VBO
// Generate and bind the VAO
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Generate and bind the buffer object
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Fill VBO with data
size_t numVerts = vertexData.size();
glBufferData(GL_ARRAY_BUFFER, // Select the array buffer on which to operate
sizeof(Point)*numVerts, // The total size of the VBO
&vertexData[0], // The initial data of the VBO
GL_STATIC_DRAW); // STATIC_DRAW mode
// set up generic attrib pointers
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, // Attribute 0 in the shader
3, // Each vertex has 3 components: x,y,z
GL_FLOAT, // Each component is a float
GL_FALSE, // No normalization
sizeof(Point), // Stride: the size of one Point struct
(char*)0 + 0*sizeof(GLfloat)); // No initial offset to the data
// Call the shader to render the lines
glDrawArrays(GL_LINES, 0, (GLsizei)numVerts);
// "unbind" and delete the temporary VAO and VBO (they are recreated at each
// call, so freeing them here prevents a resource leak)
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDeleteBuffers(1, &vbo);
glDeleteVertexArrays(1, &vao);
}

Connectors are simply drawn by binding the basic colorpicking shaders (no textures or lighting calculations) and setting up a VAO (Vertex Array Object) and a VBO (Vertex Buffer Object) to store the vertices to be connected with lines. The role of the VBO is to hold the memory needed for the operation (which is carried out by the associated shaders), while the VAO specifies how the data inside the VBO is laid out. These are, however, basic OpenGL operations.

This paper's goal was to present a new software documentation approach implemented through an experimental concept application - gds. Software engineering methodologies are relatively young compared to those of other engineering fields, so there may be many improvements and changes in the future.

To be completely honest, this work also helped me learn OpenGL and strengthen my Qt knowledge, besides realizing an old idea I had been thinking about for a long time.
