Core Features

I have just subscribed to this list and would like to say Hello to
everybody.

I am working in the field of systems biology, and my work includes
processing of images and writing software for it.

I have several questions for the developers about the core features of
GEGL.

Can a node have several input and output pads? And can it have zero
input or output pads?

The current architecture allows both several input and several output
pads, though some internal restructuring/maintenance might be needed
to support multiple output pads correctly throughout the system. There
are nodes that load/produce image data and have no inputs, as well as
nodes that store image data (the png-writer, for instance) that have
no output pads.
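To make the pad model concrete, here is a toy sketch (plain Python, not the real GEGL C API; the class and pad names are mine) of nodes with arbitrary numbers of input and output pads, including a zero-input loader and a zero-output writer:

```python
# Toy illustration of GEGL-style nodes and pads (not the real GEGL API).
class Node:
    def __init__(self, name, inputs=(), outputs=()):
        self.name = name
        self.input_pads = list(inputs)    # e.g. ["input", "aux"]
        self.output_pads = list(outputs)  # e.g. ["output"]
        self.connections = {}             # input pad -> (source node, source pad)

    def connect_from(self, pad, source, source_pad):
        # Both ends of a connection must name existing pads.
        assert pad in self.input_pads and source_pad in source.output_pads
        self.connections[pad] = (source, source_pad)

# A loader has no input pads; a writer (like the png-writer) has no output pads.
loader = Node("png-load", outputs=("output",))
over   = Node("over", inputs=("input", "aux"), outputs=("output",))
writer = Node("png-save", inputs=("input",))

over.connect_from("input", loader, "output")
writer.connect_from("input", over, "output")
```

The "over" node shows the multiple-input case: it consumes both an "input" and an "aux" pad.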

Where is the result of node execution stored - in memory or in
temporary storage on disk?

Out of the box GEGL swaps tiles to RAM; by setting the environment
variable GEGL_SWAP to any non-zero value it will swap to disk instead,
and in this manner it has been successfully used for image processing
on >12GB image rasters.
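Concretely, the variable just needs to be set in the process environment before GEGL is initialized; a minimal sketch (the value "1" is illustrative, and the initialization step is only indicated as a comment):

```python
import os

# Setting GEGL_SWAP to a non-zero value before GEGL initializes makes it
# swap tiles to disk instead of keeping them in RAM.
os.environ["GEGL_SWAP"] = "1"

# ... initialize and use GEGL here (gegl_init in the C API) ...
```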

There are several software packages that use a DAG to store the list
of operations, called workflows or workspaces - namely VisiQuest,
TiViPe, SCIRUN, SIVIL and probably Matlab/Simulink. The image
processing tools and the API for nodes/connections are usually
separated. What is the reason to put them in one library?

Because they are closely connected: image processing operations in
GEGL do not necessarily work on entire image buffers but on the
smallest possible sub-rectangles needed to compute a desired region of
interest. It is also possible to do other optimizations, like
reordering operations or changing parameters according to scale
level.

Are there plans to implement distributed processing?

This is planned; the first step will perhaps be multiple threads (or
processes on a single machine). The tiled buffer architecture of GEGL
is designed with network-distributed read/write access in mind. When
multi-threaded/distributed processing becomes possible, the aim is for
this to happen without changes to the public API; existing tools built
on top of GEGL should then automatically gain the same capabilities
without any code changes.

Are there any efforts to implement a visual programming environment
for creating and editing DAGs with GEGL?

Right now the closest thing is the tree-based test app that comes with
GEGL. It implements a proxy between the graph and a tree (which allows
clones, but not editing of them yet). A tree with clones can represent
any DAG as long as no node has multiple output pads (all nodes have a
single "output" pad).
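The tree-with-clones idea can be sketched like this: a node that is used twice appears once as a real tree entry and again as a clone reference back to it, so the tree still describes the full DAG. A toy illustration (not the actual test app's data structures; names are mine):

```python
class TreeNode:
    def __init__(self, name, children=(), clone_of=None):
        self.name = name
        self.children = list(children)  # ordinary tree edges
        self.clone_of = clone_of        # reference back to a node elsewhere in the tree

def resolve(node):
    """Follow clone references back to the real node."""
    return node if node.clone_of is None else resolve(node.clone_of)

# The diamond-shaped DAG  blur -> over  and  blur -> invert -> over
# becomes a tree where the second use of "blur" is a clone:
blur = TreeNode("blur")
clone = TreeNode("clone-of-blur", clone_of=blur)
invert = TreeNode("invert", children=[clone])
over = TreeNode("over", children=[blur, invert])
```

Since every node exposes only a single output, each extra consumer of a node can always be expressed as one more clone leaf, which is why the restriction on multiple output pads matters.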

I've got an old graph editor[1] from the gggl[2] project (gggl has now
been assimilated by GEGL), which I will probably resurrect at some
point.