A subgraph is simply a coherent subset of the graph elements: some nodes of the graph, and some of the edges between them (more information: Wikipedia: Subgraphs).

Specifically, each edge of a subgraph must already be present in the graph just above it in the hierarchy. A subgraph does not necessarily contain all the edges of the induced subgraph. However, if an edge belongs to a subgraph, both of its ends belong to the subgraph too.
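The containment rule above can be modeled with plain Python sets (a minimal sketch, not Tulip's actual API):

```python
# Minimal model of a graph: a set of nodes and a set of edges (node pairs).
parent_edges = {("A", "B"), ("B", "C"), ("C", "D")}

def is_valid_subgraph(sub_nodes, sub_edges):
    """A subgraph is valid if every one of its edges already exists in the
    parent graph and both of its endpoints belong to the subgraph."""
    return all(e in parent_edges and e[0] in sub_nodes and e[1] in sub_nodes
               for e in sub_edges)

# Keeps nodes A, B, C but only one of the induced edges: still valid.
print(is_valid_subgraph({"A", "B", "C"}, {("A", "B")}))   # True
# ("A", "D") is not an edge of the parent graph: invalid.
print(is_valid_subgraph({"A", "D"}, {("A", "D")}))        # False
```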

For instance, here is a subgraph containing the nodes in the lower half of Orion (but not all edges between them):

Subgraphs can be nested. Here are two possible subgraphs included in Low parts:

When meta-nodes are created, Tulip follows a specific behaviour. When nodes are clustered, new subgraphs are created in order to improve the visualization. If the action is performed from the root level of the hierarchy, a meta-graph, named groups by default, displays the whole graph with the appropriate meta-nodes, and a subgraph, named in the form grp_vwxyz, contains only the clustered nodes. Otherwise, if the action is performed from a lower level in the hierarchy, only the second subgraph is created, at the same level in the hierarchy.
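The two graphs created when grouping from the root level can be sketched as follows (a hypothetical model, not Tulip code; the names mirror the defaults mentioned above):

```python
# Hypothetical sketch: grouping a selection from the root level yields a
# "groups" quotient graph plus a "grp_..." subgraph of the clustered nodes.
def group_from_root(root_nodes, selection, suffix="vwxyz"):
    meta_node = "meta:" + suffix
    # "grp_vwxyz": a subgraph containing only the clustered nodes.
    grp = ("grp_" + suffix, set(selection))
    # "groups": the whole graph with the selection collapsed into a meta-node.
    groups = ("groups", (set(root_nodes) - set(selection)) | {meta_node})
    return groups, grp

groups, grp = group_from_root({"n1", "n2", "n3", "n4"}, {"n1", "n2"})
print(sorted(groups[1]))  # ['meta:vwxyz', 'n3', 'n4']
print(sorted(grp[1]))     # ['n1', 'n2']
```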

The creation of a meta-graph modifies the root graph (here Orion), which represents all nodes (including the meta-nodes and the representation of the subgraphs inside them) and all edges (including the meta-edges). So much overlapping information can make the display unclear:

You can simply create a subgraph by selecting the nodes and edges you want to isolate in your graph and clicking on the Create subgraph from selection option. It can be found either by right-clicking on the graph name in the graph list or in the Edit menu.

From those menus you will also be able to create empty subgraphs. You can add new nodes and edges here, which will be directly added to the graphs above the current one in the hierarchy. The Clone subgraph action duplicates the target graph in a subgraph just beneath it in the hierarchical scale.

To create a meta-node, you can proceed in a fashion similar to the one followed to create a subgraph from a selection. Once you have picked the nodes, click on the Group elements option in the Edit menu.

Optionally, the edge selection for the subgraph creation can be performed with the “Induced Sub-graph” algorithm.

You can delete a subgraph or a meta-node by selecting the appropriate option proposed in the menu opened with a right click on the graph name in the list.

The removal of a subgraph is pretty straightforward. If Delete is chosen, only the current subgraph is removed and its direct subgraphs move one step up in the hierarchy: they become subgraphs of its parent graph. If Delete all is chosen, the subgraph and all of its descendants are removed from the hierarchy.
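The two removal behaviours can be sketched on a toy hierarchy, modelled as a parent map (a minimal illustration, not Tulip's data structure; the graph names are taken from the earlier examples):

```python
# Subgraph hierarchy as a parent map: child name -> parent name.
parents = {"Low parts": "Orion", "grp_a": "Low parts", "grp_b": "Low parts"}

def delete(name):
    """'Delete': remove one subgraph; its children move up one level."""
    parent = parents.pop(name)
    for child, p in parents.items():
        if p == name:
            parents[child] = parent

def delete_all(name):
    """'Delete all': remove the subgraph and its whole subtree."""
    for child in [c for c, p in parents.items() if p == name]:
        delete_all(child)
    parents.pop(name)

delete("Low parts")
print(parents)  # {'grp_a': 'Orion', 'grp_b': 'Orion'}
```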

Deleting a meta-node removes all the nodes contained in this particular meta-node, but not its representation in the other graphs. To properly delete the meta-node, you first need to ungroup it. By doing so, all the edges are reattached to their former anchor nodes. This modification propagates through the hierarchy tree, up to the root. The subgraphs created with the meta-node are not deleted; the meta-node itself, however, disappears as it is removed, and the ungrouped nodes do not replace it.

If you change the position of a node (the viewLayout property) within a subgraph (with the mouse or through a layout algorithm), the same node is moved in the root graph, provided the viewLayout property accessible in the subgraph is the one inherited from the root graph.

If you use a measure algorithm on a subgraph, new local properties are created. Those properties are not applied to the root graph (if properties are not defined on the subgraph, they are inherited).
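The local-versus-inherited behaviour described in the two paragraphs above can be sketched with Python's `ChainMap` (a minimal analogy, not Tulip's property mechanism):

```python
from collections import ChainMap

# A subgraph's property lookup falls back to the root graph unless a
# local value is defined; writes on the subgraph stay local.
root_props = {"viewLayout": "root layout"}
sub_props = ChainMap({}, root_props)   # subgraph view over the root

print(sub_props["viewLayout"])         # inherited: "root layout"

# Running a measure algorithm on the subgraph creates a *local* property:
sub_props["degree"] = "local degree values"
print("degree" in root_props)          # False: the root graph is untouched
```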

You can also note that, because of the hierarchy, some actions (delete, rename...) performed on the root graph or on one of the non-leaf subgraphs will obviously propagate to every subgraph below. Likewise, the creation of a node in a subgraph will add it to each of the graphs above.

Tulip provides an import wizard for CSV files. Comma-separated values files are very commonly used to store statistical data. The internal file structure is rather simple, consisting of records (usually one per line) containing several fields, separated by a special character (such as a comma, a semicolon, a hash...).
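The record structure just described can be illustrated with Python's standard `csv` module (the data and the `node_id`/`label` columns are hypothetical, chosen to resemble the nodes file used later in this section):

```python
import csv
import io

# A minimal record file: one record per line, ";" as the field separator.
data = "node_id;label\n1;Betelgeuse\n2;Rigel\n"

rows = list(csv.reader(io.StringIO(data), delimiter=";"))
print(rows)  # [['node_id', 'label'], ['1', 'Betelgeuse'], ['2', 'Rigel']]
```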

The first panel allows the user to configure the source file location, the character encoding, the field delimiter character and the text delimiter character.

The purpose of each labeled component is explained below:

The source file location field: this field indicates the location of the file to parse. To change the source file, click on the “...” button and select the file containing the nodes.

The file encoding selection menu: this drop-down menu provides a list of encoding schemes for the characters in the text file. We use standard UTF-8 in this example, as the files do not contain any special characters.

The data orientation: this check-box allows the user to invert rows and columns, i.e. to treat rows as columns and columns as rows in the next steps.

The separator selector: this field allows the user to define the character used to separate data value fields within each row. Select a separator in the list or input a custom one. For the nodes file, the separator is “;”. If consecutive separators may occur in the file, you can check the “merge consecutive separators” box.

The text delimiter selector: this field allows the user to define the character used as the start and end delimiter for data value fields. Select a delimiter in the list or input a custom one and press the [Enter] key to validate your input. Separated-value files often additionally define a character marking the start and end of a data element that should be considered as a single text entry. This strategy allows the inclusion of text entries which themselves contain the value separator.

For example, a file, which is structured as a comma separated value file, could use the double quotation mark to delimit text values and would then be able to include text values such as: ‘Zoe, Mark, Sally’.
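This quoting behaviour can be demonstrated with Python's `csv` module (the sample data is hypothetical):

```python
import csv
import io

# A double quotation mark as text delimiter keeps the commas inside a
# quoted value from being treated as field separators.
data = 'id,names\n1,"Zoe, Mark, Sally"\n'
rows = list(csv.reader(io.StringIO(data), delimiter=",", quotechar='"'))
print(rows[1])  # ['1', 'Zoe, Mark, Sally']
```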

The preview area: this area displays a preview of the file as it will be interpreted with the current settings.

The second panel allows the user to define the line range, choose which columns to import and define their data types.

The purpose of each labeled component is explained below:

Use first line tokens as column names: use the elements of the first line as default names for the columns. If checked, the first line will be skipped during the import process. In any case, you can change the names of the columns if they do not suit you.

The line range spinbuttons: these two spin buttons allow the user to select the first and last rows of the data to import. The spin boxes can be used either by typing a new value in the text entry area where the numbers are displayed, or by clicking on the upwards arrow to increase the number and the downwards arrow to decrease it. For instance, if the text file contains a large header area with meta-information, this header can be excluded from the imported data by increasing the number of the starting, “From”, line.
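The line range selection amounts to a slice over the file's lines, as in this sketch with `itertools.islice` (the sample lines and range values are hypothetical):

```python
from itertools import islice

lines = ["# header", "# meta", "a;1", "b;2", "c;3"]

# "From" line 3 to "To" line 4 (1-based, inclusive) skips the two
# header lines and drops the trailing record.
start, stop = 3, 4
selected = list(islice(lines, start - 1, stop))
print(selected)  # ['a;1', 'b;2']
```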

The columns configuration area: this area allows the user to configure each column detected in the file. Any single column can be excluded from the imported data by clicking the checkbox under its name to remove the check mark. The user can rename a column by editing the field containing its original name; two columns cannot have the same name. The data type of a column can be changed using the combo-box under its name.

The preview area: this area displays a preview of the file as it will be interpreted with these settings. If a column is not selected, it will not appear in the preview.

The number of preview lines spinbutton: allows the user to change the number of preview lines. If unchecked, the whole file is displayed.

In our example, all the default choices are fine, so you can click on “Next” to reach the final panel.

For each row, the destination entity id is compared to the ids of the graph entities. If there is a match, the row data are imported onto the first matching entity. If no entity has such an id, you can force the creation of a new entity with the “Create missing entities” option.

In the current application, we want to import the rows as new relations (edges).

A relation is specified by a source identifier and a destination identifier. Both identifiers are defined by the values in the source and destination columns. For each row, the values in the source and destination columns are compared to the values of the source and destination properties for all the existing node entities. If the source and destination identifiers correspond to existing node entities, a new relation is created between those entities. If there are no entities in the graph with such an identifier, you can force the creation of the missing entities with the “Create missing entities” option.
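The matching just described can be sketched in plain Python (a hypothetical model, not Tulip's implementation; the node names and identifier values are made up):

```python
# Each row's source and destination values are looked up in a chosen
# node property (here a "node_id"-like mapping: node -> identifier).
node_id = {"n0": "1", "n1": "2"}
index = {v: k for k, v in node_id.items()}  # identifier -> node
edges = []

def import_row(src, dst, create_missing=False):
    for ident in (src, dst):
        if ident not in index:
            if not create_missing:
                return  # no matching entity: the row is skipped
            new_node = "n" + str(len(index))
            node_id[new_node] = ident
            index[ident] = new_node
    edges.append((index[src], index[dst]))

import_row("1", "2")                       # both ends exist: edge created
import_row("1", "3")                       # "3" unknown: row skipped
import_row("1", "3", create_missing=True)  # "3" is created first
print(edges)  # [('n0', 'n1'), ('n0', 'n2')]
```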

In our example, instead of the “viewLabel” default property, we specify the previously created “node_id” property as the one against which we will map the “Source” and “Target” fields.