Why is it necessary?

Previously, fields in CSV files were parsed as text, and any necessary conversions had to be done manually before sending data to the output datasource. And vice versa, data had to be formatted before writing it to a CSV/text file.

Now most of this boilerplate code can be replaced with a simple declarative approach: formatting and parsing rules are defined as properties of a connection element.

Example. Import CSV data into a database

Let’s assume there is an input file with exchange rate data in CSV format:
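The original file listing is not reproduced here, so the following is only a sketch of what such an import might look like. All file, table and column names (rates.csv, Rates, date, rate) as well as the exact property names and patterns are illustrative assumptions, not taken from the original post:

```xml
<etl>
    <connection id="in" driver="csv" url="rates.csv">
        <!-- Declarative parsing rules defined as connection properties:
             the "rate" column is parsed as a number and the "date" column
             as a date in dd.MM.yyyy format -->
        format.rate.type=number
        format.rate.pattern=#,##0.00
        format.date.type=date
        format.date.pattern=dd.MM.yyyy
    </connection>
    <connection id="db" driver="hsqldb" url="jdbc:hsqldb:file:rates" user="sa"/>
    <query connection-id="in">
        <!-- Iterates over CSV rows; column values arrive already converted -->
        <script connection-id="db">
            INSERT INTO Rates(rate_date, rate) VALUES (?date, ?rate);
        </script>
    </query>
</etl>
```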

Example. Export data from a database table into a CSV file

Let’s reuse the data imported in the previous step. Now it will be exported into another CSV file. This time the format of the output file will be slightly changed – numbers will be rounded to 4 digits after the decimal point and the date will be formatted using dd.MM.yyyy notation:
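Again, the original listing is not available, so here is a hedged sketch of the export. The connection/column names and the exact output-script syntax of the csv driver are assumptions:

```xml
<etl>
    <connection id="db" driver="hsqldb" url="jdbc:hsqldb:file:rates" user="sa"/>
    <connection id="out" driver="csv" url="rates_out.csv">
        <!-- Declarative formatting rules applied when writing:
             round numbers to 4 digits after the decimal point,
             format dates using dd.MM.yyyy notation -->
        format.rate.type=number
        format.rate.pattern=0.0000
        format.rate_date.type=date
        format.rate_date.pattern=dd.MM.yyyy
    </connection>
    <query connection-id="db">
        SELECT rate_date, rate FROM Rates
        <!-- Each selected row is formatted and written as one CSV line -->
        <script connection-id="out">
            $rate_date,$rate
        </script>
    </query>
</etl>
```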

Let’s start with a quick intro to the Scriptella dataflow, which is based on a concept of rows and columns (which can be treated as variables). When a query is executed, it emits multiple rows, making them available to nested elements. As a consequence, a variable change is only visible to nested elements of the query. Here is an example to illustrate this:

<properties>
    userCount=0 <!-- Set an initial value for the variable -->
</properties>
<query connection-id="db">
    <!-- The query selects the number of records in the Users table; the variable userCount
         is set to the value of the COUNT(*) column.
         The change is visible ONLY to nested elements -->
    SELECT COUNT(*) as userCount from Users
    <script connection-id="log">
        Overridden value of userCount: $userCount
    </script>
</query>
<script connection-id="log">
    Out of scope/unmodified value of userCount: $userCount
</script>

At times it is more convenient to simply set a global variable, so that its value can be consumed elsewhere in the ETL file. In Scriptella 1.0 this was possible only with the help of workarounds:

Use System.setProperty and System.getProperty to share a variable between scripts.
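This workaround might look as follows; the connection ids, the property name "userCount" and the stored value are illustrative assumptions (a Janino connection is used here because plain Java makes the System calls straightforward):

```xml
<connection id="janino" driver="janino"/>
<connection id="log" driver="text"/>
<!-- One script stores the value as a JVM-wide system property... -->
<script connection-id="janino">
    System.setProperty("userCount", "42");
</script>
<!-- ...and another part of the ETL file reads it back and exposes it
     to its nested elements -->
<query connection-id="janino">
    set("sharedUserCount", System.getProperty("userCount"));
    next();
    <script connection-id="log">
        Shared value: $sharedUserCount
    </script>
</query>
```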

Another approach is similar to the technique used with anonymous inner classes: modify a single-element array declared as a final variable. The following example illustrates it:

<!-- The query defines a scoped context by declaring
     a globalVarArray available to nested elements.
     Since globalVarArray is an array, changes to its elements
     are immediately visible to all callers -->
<query connection-id="jexl">
    // Array with only one element, modifiable by nested scripts
    globalVarArray = [0];
    query.next();
    <query connection-id="db">
        SELECT COUNT(*) as userCount from Users
        <script connection-id="jexl">
            <!-- Store userCount in the global array -->
            globalVarArray[0] = userCount;
        </script>
        <!-- And now print the value of the global variable we've just set -->
        <script connection-id="log">
            Inner script: globalVar=${globalVarArray[0]}
        </script>
    </query>
    <script connection-id="log">
        Outer script: globalVar=${globalVarArray[0]}
    </script>
</query>
<script connection-id="log">
    Out of scope: globalVar=${globalVarArray[0]}
</script>

If you run the script, the following output is printed on the console:
Inner script: globalVar=USER_COUNT
Outer script: globalVar=USER_COUNT
Out of scope: globalVar=0

As you can see, it is possible to declare global variables in Scriptella, but this requires an additional query element and the use of arrays. Scriptella 1.1 introduces support for etl.globals – a global map of variables available to all ETL elements. The example above can be rewritten to leverage the new mechanism:
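The rewritten listing is missing from this copy of the post. The sketch below is reconstructed to be consistent with the console output quoted further down; the exact map-assignment syntax for etl.globals in JEXL is an assumption:

```xml
<query connection-id="db">
    SELECT COUNT(*) as userCount from Users
    <script connection-id="jexl">
        <!-- Store userCount in the global map; no extra query element
             or array workaround is needed -->
        etl.globals['globalVar'] = userCount;
    </script>
    <script connection-id="log">
        Inner script: etl.globals.globalVar=${etl.globals.globalVar}
    </script>
</query>
<script connection-id="log">
    Outer script: etl.globals.globalVar=${etl.globals.globalVar}
    globalVar=$globalVar (normal variable globalVar is not defined)
</script>
```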

The code became less verbose. Additionally, the “out of scope” script was removed, since global variables have no scope. The line globalVar=$globalVar (normal variable globalVar is not defined) was added to demonstrate that global variables do not affect normal variables; however, it is not recommended for a global variable to share a name with a normal variable, to avoid possible confusion.
And this time the output would be:

Inner script: etl.globals.globalVar=USER_COUNT
Outer script: etl.globals.globalVar=USER_COUNT
globalVar=$globalVar (normal variable globalVar is not defined)

Implementation note: as of now, the etl.globals map is not shared between scripts when called by the “scriptella” driver. Bug-12790 was logged to track this issue and will be resolved prior to the 1.1 release.

Update 1:
Thanks to Anji for pointing out that the example with the array initialization globalVarArray = [0] will not work in Scriptella 1.0 due to the lack of array instantiation support in JEXL 1.1. JavaScript can be used as an alternative, as explained in the FAQ entry, or you can use the Janino driver to achieve the same effect:

<etl>
    <connection id="janino" driver="janino"/>
    <connection id="log" driver="text"/>
    <!-- The query defines a scoped context by declaring
         a globalVarArray available to nested elements.
         Since globalVarArray is an array, changes to its elements
         are immediately visible to all callers -->
    <query connection-id="janino">
        // Array with only one element, modifiable by nested scripts
        set("globalVarArray", new int[1]);
        next();
        <script connection-id="janino">
            // Store the value in the global array
            ((int[])get("globalVarArray"))[0] = 22;
        </script>
        <!-- And now print the value of the global variable we've just set -->
        <script connection-id="log">
            Inner script: globalVar=${globalVarArray[0]}
        </script>
    </query>
</etl>

JDBC batching is a very important feature which allows sending multiple commands to the database in a single call. Scriptella batching is controlled by the statement.batchSize parameter, which specifies the number of statements to combine in a batch before sending it to the database.
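A connection-level declaration enabling batching might look like the following sketch (the driver, URL and batch size are illustrative; connection properties are given as the element's inner text):

```xml
<connection id="out" driver="oracle" url="jdbc:oracle:thin:@localhost:1521:ORCL" user="etl">
    <!-- Flush a batch to the database after every 100 statements -->
    statement.batchSize=100
</connection>
```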

Please note that batching behavior depends on the type of the statements processed: non-prepared SQL statements (statements without ? parameters) are processed in a single batch group, separate from parameterized prepared SQL statements. The grouping rules are the following:

Normal (non-prepared) statements are always grouped into a single batch per ETL script element.

Parameterized prepared statements use the SQL text as a group key, i.e. different statements go into different groups, which may sometimes introduce undesired behavior.

As a result, mixing parameterized (prepared) statements with normal statements in a single ETL element is NOT RECOMMENDED in batch mode, since the statements are processed in different batch groups and the results may be unexpected.

The following two examples show RECOMMENDED ways of batching.

Example 1. Extract->Load using PreparedStatement.setParameters/addBatch:
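The example listing itself is missing here, so the following is only a sketch of the recommended pattern: a single parameterized INSERT repeated for every row, so all executions fall into one batch group. The connection details, batch size, and table/column names are assumptions:

```xml
<etl>
    <connection id="in" driver="hsqldb" url="jdbc:hsqldb:file:src" user="sa"/>
    <connection id="out" driver="hsqldb" url="jdbc:hsqldb:file:dst" user="sa">
        statement.batchSize=50
    </connection>
    <query connection-id="in">
        SELECT id, name FROM Users
        <!-- One parameterized prepared statement: every execution goes into
             the same batch group, flushed after each 50 statements -->
        <script connection-id="out">
            INSERT INTO Users_Copy(id, name) VALUES (?id, ?name);
        </script>
    </query>
</etl>
```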

Queries are not supported in batch mode. Batches are typically intended for DB modifications; consider using a separate connection if you need to run queries.

Batch buffers (pending SQL commands) are flushed immediately before the ETL script commits, rather than after the ETL element completes. You may face this problem only when querying tables that are being updated, which is not recommended (see item 1).

The optimal batch size varies between databases and depends on the available JVM heap size, but in most cases it should not be less than 10.

Another important parameter, statement.fetchSize, gives the JDBC driver a hint about the number of rows that should be fetched from the database when more rows are needed for the result set. Examples:
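The original examples are not included in this copy; a minimal sketch of setting the hint on a connection might look like this (driver, URL and fetch size are illustrative):

```xml
<connection id="in" driver="oracle" url="jdbc:oracle:thin:@localhost:1521:ORCL" user="etl">
    <!-- Hint the driver to fetch 1000 rows per round-trip
         when iterating over a result set -->
    statement.fetchSize=1000
</connection>
```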