* '''Agent''' - An agent is a component of the connectivity framework that monitors a data source for changes. If a change occurs (e.g. objects are created, deleted, or changed) it creates a [[#R|record]] out of the object and sends it to SMILA.

* '''Action''' - An action is one step in an [[#W|asynchronous workflow]] associated with a certain [[#W|worker]] that does the actual processing.

* '''Annotation''' - Annotations are additional information on [[#A|attributes]] or [[#A|attachments]]. Annotations can have annotations themselves.

* '''Attachment''' - Attachments are parts of [[#R|records]] used to store large binary data such as document content.

* '''Attribute''' - Attributes are parts of [[#R|records]] and contain simple data objects that are easily represented in XML or JSON, such as <tt>String</tt>, <tt>Integer</tt>, <tt>Float</tt>, and <tt>Date</tt>.

== B ==

* '''Blackboard''' (or blackboard service) - The blackboard service manages SMILA [[#R|records]] during processing in a SMILA component (connectivity, workflow processor). In addition, it hides the handling of record persistence from these components. For a complete description see the Usage of Blackboard Service documentation.

* '''[http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsbpel BPEL]''' - BPEL is an XML-based language defining several constructs to write business processes. It defines a set of basic control structures like conditions or loops as well as elements to invoke web services and receive messages from services. It relies on [[#W|WSDL]] to express web services interfaces. Message structures can be manipulated, assigning parts or the whole of them to variables that can in turn be used to send other messages.

* '''Bucket''' - Data container in an [[#W|asynchronous workflow]], containing logically grouped [[#D|data objects]] of the same type. Can be ''transient'' for interim data, which means that data is not persisted and removal of data is under job management control, or ''persistent'', which means that removal of data is not under job management control.

* '''Bulk''' - A number of [[#R|records]] bundled in a single file to enhance throughput when processing records in [[#W|asynchronous workflows]].

* '''Bulkbuilder''' - An [[#W|asynchronous workflow]] [[#W|worker]] that accepts single [[#R|records]] and combines them into a [[#B|bulk]]. See [[SMILA/Documentation/Bulkbuilder|Bulkbuilder documentation]].

== C ==

* '''Crawler''' - A crawler is a special [[#W|worker]] in an [[#W|asynchronous workflow]] that imports data from a data source (e.g. filesystem, web, or database) into SMILA. It iterates over the data elements and creates [[#R|records]] for all elements that will be further processed in the workflow. In general, crawlers and crawl workflows are used for the initial (bulk) import of data sources (see [[SMILA/Documentation#Importing|Importing]] for more details).

== D ==

* '''Data Object''' - The smallest unit of data handled by an [[#W|asynchronous workflow]] (e.g. a record [[#B|bulk]]).

* '''DeltaChecker''' - The DeltaChecker is a [[#W|worker]] in an (asynchronous) import [[#W|workflow]] that handles the [[#D|delta indexing]].

* '''Delta indexing''' - Delta indexing is also known as incremental or generation-based indexing. It is driven by the <tt>DeltaChecker</tt> [[#W|worker]] (see [[SMILA/Documentation#Importing|Importing]] for more details).
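
The generation-based idea behind delta indexing can be sketched in a few lines of Python. This is only an illustration of the concept, not SMILA's actual implementation; the in-memory state store and the hashing scheme are assumptions made for the example:

```python
import hashlib

# record id -> content hash remembered from the previous crawl run
delta_state = {}

def needs_update(record_id: str, content: bytes) -> bool:
    """Return True if the record is new or changed since the last run."""
    digest = hashlib.sha256(content).hexdigest()
    if delta_state.get(record_id) == digest:
        return False  # unchanged -> the record can be skipped
    delta_state[record_id] = digest  # remember the new generation
    return True

print(needs_update("doc1", b"hello"))   # first sight: True
print(needs_update("doc1", b"hello"))   # unchanged: False
print(needs_update("doc1", b"hello!"))  # changed: True
```

Only records for which the check returns <tt>True</tt> need to travel further through the import workflow, which is what makes repeated crawls of a mostly unchanged data source cheap.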

== E ==

* '''[http://www.eclipse.org/ Eclipse]''' - Eclipse is an open source community, whose projects are focused on building an open development platform comprised of extensible frameworks, tools and runtimes for building, deploying and managing software across the lifecycle.

* '''EILF''' - EILF (Enterprise Information Logistics Framework) was the original proposed name of SMILA. Since this abbreviation was difficult to pronounce, it was not accepted by the community and thus changed to SMILA.

* '''[http://www.eclipse.org/equinox/ Equinox]''' - Equinox is a base technology from [http://www.eclipse.org Eclipse] implementing the [[#O|OSGi]] specification. Besides delivering a high-performance class loading mechanism, Equinox also provides an environment for managing component dependencies.

== F ==

* '''Fetcher''' - A fetcher is a [[#W|worker]] in an (asynchronous) import [[#W|workflow]] that receives [[#R|records]] containing a URL, file path, etc. from a [[#C|crawler]], fetches the actual content (e.g. of files) from the data source (e.g. <tt>FileFetcherWorker</tt> or <tt>WebFetcherWorker</tt>), attaches it to the records, and sends them to the [[#U|UpdatePusher]] (see [[SMILA/Documentation#Importing|Importing]] for more details).

== G ==

== H ==

== I ==

* '''ID''' - An ID identifies a [[#R|record]] in SMILA and is part of a [[#R|record's]] metadata.

== J ==

* '''Job''' - A Job is a description of a distinct and repeatable working process that the system should accomplish. It references and parametrizes an [[#W|asynchronous workflow]].

* '''Job Run''' - A Job Run is an "instance" of a Job, for example one run of an import of a data source to an index. Only one active job run can exist per job. Statistics are accumulated for each job run. A job run is automatically stopped when SMILA shuts down.
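
To illustrate how a job ties the pieces together, here is a hedged sketch of a job definition built as JSON in Python. The job name, workflow name, and parameters are invented for illustration and are not taken from a concrete SMILA configuration:

```python
import json

# Hypothetical job definition: it names an asynchronous workflow and
# supplies the parameters that the workflow's workers will consume.
job = {
    "name": "indexMyData",            # invented job name
    "workflow": "importWorkflow",     # references an asynchronous workflow
    "parameters": {
        "tempStore": "temp",          # invented example parameters
        "index": "myIndex",
    },
}

# Such a definition would be registered with the job management component;
# starting the job then creates a job run, for which statistics accumulate.
payload = json.dumps(job)
print(payload)
```

The key design point the sketch shows: a job is pure configuration (a parametrized reference to a workflow), while a job run is the stateful execution that carries statistics.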

== K ==

== L ==

== M ==

* '''Micro bulk''' - A (small) bundle of [[#R|records]] in one single file which can be pushed into the system using the [[#B|Bulkbuilder]]. The micro bulk itself is not a single JSON document but a file where each line must consist of one JSON representation of a [[#R|record]], e.g.:

<pre>
{"_recordid": "id1", "attribute1": "attribute1", ...}
{"_recordid": "id2", "attribute1": "attribute2", ...}
{"_recordid": "id3", "attribute1": "attribute3", ...}
</pre>
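
As a rough illustration, such a file can be produced and consumed with a few lines of Python. The record IDs and attribute names are made up, mirroring the example above; this is not SMILA code:

```python
import json
import os
import tempfile

# Two invented records in SMILA-like shape: "_recordid" plus attributes.
records = [
    {"_recordid": "id1", "attribute1": "attribute1"},
    {"_recordid": "id2", "attribute1": "attribute2"},
]

path = os.path.join(tempfile.gettempdir(), "microbulk.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for record in records:
        # each line is exactly one JSON object; no pretty-printing,
        # because a record must not span multiple lines
        f.write(json.dumps(record) + "\n")

# read it back line by line, the way a consumer of the bulk would
with open(path, encoding="utf-8") as f:
    parsed = [json.loads(line) for line in f]

print(parsed[1]["_recordid"])  # id2
```

The line-per-record layout is what allows the file to be split and streamed without parsing it as one large JSON document.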

== N ==

== O ==

* '''ODE''' - Apache ODE (Orchestration Director Engine) executes business processes following the BPEL/WS-BPEL standard. It talks to web services, sending and receiving messages, handling data manipulation and error recovery as described by your process definition. It supports both long and short living process executions to orchestrate all the services that are part of your application.

* '''OSGi''' - The OSGi specification is about managing a component-based software system. It defines an in-VM Service Oriented Architecture (SOA) for networked systems. An OSGi Service Platform provides a standardized, component-oriented computing environment for cooperating networked services. This architecture significantly reduces the overall complexity of building, maintaining, and deploying applications.

== P ==

* '''Pipelet''' - A pipelet is a reusable component (POJO) in a [[#B|BPEL]] workflow used to process data contained in [[#R|records]]. See [[SMILA/Documentation/Pipelets|Pipelets]] for details.

* '''Pipeline''' - A pipeline is the definition of a [[#B|BPEL]] process (or workflow) that orchestrates pipelets and other BPEL services (e.g. web services).

== Q ==

== R ==

* '''Record''' - A record is the basic data element in SMILA that contains the data to process (e.g. content and metadata of a document). A record consists of metadata elements, see [[SMILA/Documentation/Data_Model_and_Serialization_Formats]].

== S ==

* '''[http://www.osoa.org/display/Main/Service+Component+Architecture+Home SCA]''' - Service Component Architecture is a set of specifications which describe a model for building applications and systems using a Service-Oriented Architecture. SCA extends and complements prior approaches to implementing services, and SCA builds on open standards such as Web services. The SCA programming model is highly extensible and is language-neutral. Go to [[SMILA/Project Related Technologies/SCA and Tuscany|SCA and Tuscany]] for discussion.

* '''Slot''' - An (input/output) slot is a description of the input/output behaviour of a [[#W|worker]]. In a concrete [[#W|asynchronous workflow]], slots are assigned to [[#B|buckets]].

* '''[http://www.osoa.org/display/Main/Home SOA]''' - Service Oriented Architecture is a computer systems architectural style for creating and using business processes, packaged as services, throughout their lifecycle. SOA also defines and provisions the IT infrastructure to allow different applications to exchange data and participate in business processes. These functions are loosely coupled with the operating systems and programming languages underlying the applications.

* '''[http://www.eclipse.org/stp/ STP]''' - SOA Tools Platform is an Eclipse open source project that builds frameworks and exemplary extensible tools that enable the design, configuration, assembly, deployment, monitoring, and management of software designed around a Service Oriented Architecture ([[SMILA/Technology Preview/SOA|SOA]]). An interesting subproject is the [http://www.eclipse.org/stp/sca/index.php SCA Composite Designer].

== T ==

* '''Task''' - Description of a single unit of work to be processed by a [[#W|worker]]. A task can contain worker-specific properties.

== U ==

* '''UpdatePusher''' - The UpdatePusher is a [[#W|worker]] in an (asynchronous) import [[#W|workflow]] that pushes the crawled records to the [[#B|Bulkbuilder]] of a running import [[#J|job]].

== V ==

== W ==

* '''Worker''' - A single processing component in an asynchronous [[#W|workflow]]. It pulls [[#T|tasks]] to process and is defined in a worker description.

* '''Workflow (asynchronous)''' - Describes an asynchronously processed workflow by specifying a sequence of workers and associating their input/output [[#S|slots]] to [[#B|buckets]].
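
A minimal sketch of such a definition as a Python data structure may help. The worker, slot, and bucket names are invented for illustration, and the exact JSON layout of real SMILA workflow definitions may differ:

```python
# Hypothetical asynchronous workflow: each action binds one worker, and each
# worker's input/output slots are associated with named buckets.
workflow = {
    "name": "importWorkflow",
    "startAction": {
        "worker": "bulkbuilder",
        "output": {"insertedRecords": "recordsBucket"},   # slot -> bucket
    },
    "actions": [
        {
            "worker": "updateIndex",
            "input": {"recordsToIndex": "recordsBucket"},  # same bucket
        },
    ],
}

# Chaining happens implicitly: a downstream worker is connected to an
# upstream one by binding its input slot to the bucket the upstream
# worker's output slot writes to.
for action in workflow["actions"]:
    print(action["worker"], "reads from", sorted(action["input"].values()))
```

The design choice this illustrates: workers never reference each other directly; buckets are the only coupling between steps, which is what makes workers reusable across workflows.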

* '''Workflow (synchronous/BPEL)''' - see [[#P|pipeline]]

* '''Workflow run''' - Single traversal of a workflow.

* '''[http://www.w3.org/TR/wsdl WSDL]''' - WSDL is an XML format for describing network services as a set of endpoints operating on messages containing either document-oriented or procedure-oriented information. The operations and messages are described abstractly, and then bound to a concrete network protocol and message format to define an endpoint. Related concrete endpoints are combined into abstract endpoints (services). WSDL is extensible to allow description of endpoints and their messages regardless of what message formats or network protocols are used to communicate.
