This month, expressor software is releasing the newest version of its data integration product, expressor 3.5. This release includes three significant enhancements to this powerful and adaptable product: an extensibility framework, integration with Melissa Data’s data quality tools, and operators to read from and write to Salesforce.com databases.

As proof of the extensibility framework’s usability, expressor implemented its Salesforce support as an extensibility library. In the future, as this framework is used to implement more functionality, you will be able to add new features to your expressor deployment without installing a newer version of the software.

So that I could experience the Salesforce functionality firsthand, expressor gave me access to the in-house Salesforce database used by its developers. This database, used by both engineering and quality control, includes a sampling of the data stored in the company’s primary Salesforce database. What I encountered was a very smooth, error-free procedure for extracting data from a Salesforce table and submitting it to an expressor dataflow for further processing.

As with all expressor data integration applications, the first task is to create a project that will contain all of the artifacts needed to develop the application. I installed a pre-release build of expressor Studio 3.5 and created a workspace named SFDC. As soon as the workspace opened, an informational message appeared notifying me that there were uninstalled extensions. It was a simple step to click the Enable All link to install the extension that contains the Salesforce.com functionality.

Alternatively, clicking the Manage Extensions link, or selecting the Studio > Manage Extensions… menu item would start the Manage Extensions wizard, through which I could have selectively installed, enabled, or disabled individual extensions. Note that a different complement of extensions can be enabled or disabled in each workspace.

Once I had installed and enabled the Salesforce extension, I created a project named ReadSFDC. The first step in creating my application was to define a connection artifact to the Salesforce installation. As with all expressor connection artifacts, this work was managed through a wizard. I simply started the wizard and selected Salesforce Connection from the menu.

In the first step, I gave the connection the name development_account; in the second step, I entered my Salesforce.com account name, account password, and security token, which you request from Salesforce once you have valid account credentials.

To confirm the validity of these entries, I clicked Test Connection in Studio’s ribbon bar and a confirmatory message was displayed. This is all that was involved in configuring an expressor connection artifact to a Salesforce installation.
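The connection wizard hides the mechanics, but it is worth knowing what those three values are for: when a Salesforce API login comes from an IP address outside the account’s trusted ranges, Salesforce expects the security token appended to the password. A minimal Python sketch of that rule (the helper function is hypothetical; the commented usage shows the third-party simple-salesforce package, which takes the same three values):

```python
# Hypothetical helper illustrating how Salesforce API logins combine
# the password and security token when the client IP is not trusted.

def api_login_password(password: str, security_token: str) -> str:
    """Salesforce's login call expects password + security token, concatenated."""
    return password + security_token

# With the third-party simple-salesforce package, the same three wizard
# entries map to named parameters (illustrative only; requires network access):
#
#   from simple_salesforce import Salesforce
#   sf = Salesforce(username="dev@example.com",
#                   password="s3cret",
#                   security_token="AbC123")
```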

Now that I had a connection, I was ready to create a schema artifact, which is a description of the structure of a data source external to the expressor application. In this case, the schema would describe the structure of the Contact table in the database. In expressor’s Salesforce database, this table has been extended and includes many columns in addition to the columns included in the default table provided in the Salesforce installation.

To create the schema, I started the schema wizard, selecting Salesforce Schema from the menu.

Using this wizard to create a schema against a Salesforce table is no different from using it to create a schema against a relational database table. You first select the connection artifact so the wizard has the information needed to connect to the target, then you select the table, or tables, for which you want to create a schema.

Click Next, accept the default name or enter a name for the schema, and click Finish. The schema is automatically generated and appears in the Schemas folder under the project.

This table has more than 100 columns, but for my testing I was only interested in two: the unique record identifier (expressor asked that I not select columns that would identify actual users), and the date on which that individual downloaded expressor Studio. Within my data integration application, I will use the download date to determine which version of the product the user downloaded.

When I first opened the schema, its default composite type, which describes how each data record is represented within the application, contained an attribute for every table column. I therefore added another composite type named StudioDownloadDate to the schema and deleted all attributes except the two my application needed. The ability to associate multiple composite types with a schema, allowing you to customize the composite type for reading, writing, or updating a data resource, is one of the powerful and useful features in expressor. Simply changing the composite type, something you can do while configuring an input or output operator, adapts the schema to multiple uses.
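Conceptually, a narrowed composite type acts as a projection: each full record coming off the table is reduced to just the attributes the dataflow needs. A minimal Python sketch of that idea (the custom field name Studio_Download_Date__c follows Salesforce’s naming convention for custom columns but is my invention, not expressor’s actual field):

```python
# Sketch: a narrowed composite type projects each full Contact record
# down to the two attributes the dataflow uses.
# "Studio_Download_Date__c" is a hypothetical custom-field name.

STUDIO_DOWNLOAD_DATE = ("Id", "Studio_Download_Date__c")

def apply_composite_type(record: dict, attributes: tuple) -> dict:
    """Keep only the attributes named by the composite type."""
    return {name: record[name] for name in attributes}
```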

Now I was ready to develop the dataflow, the graphical representation of the application. The first operator I used was the Read Salesforce input operator. On this operator’s Properties panel I simply selected the appropriate connection, schema, and composite type. However, since expressor 3 was first released at the beginning of 2011, I wanted to restrict my query against the Salesforce table to those users who downloaded during the current year; this too can be configured through the operator’s Properties panel.

On the Properties panel is a link that opens what expressor refers to as the Filter wizard. This wizard allows you to easily create a filter query using the Salesforce Object Query Language (SOQL). Using the drop-down controls I was able to select the column (or columns) to restrict the result set by, the value against which to evaluate the column, and the type of evaluation to perform. In this application, I wanted to select only the records in which the download date fell in 2011, the current year. Note how I was able to use one of the date literals that SOQL supports. If necessary, my filter expression could include multiple comparisons combined with the AND or OR operators.
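Behind the wizard’s drop-downs is an ordinary SOQL statement. A minimal Python sketch of the kind of query it generates, using THIS_YEAR, one of SOQL’s built-in date literals (the builder function, table, and custom column name are illustrative, not expressor’s internals):

```python
# Assemble a SOQL query like the one the Filter wizard generates.
# THIS_YEAR is a real SOQL date literal; the column name
# Studio_Download_Date__c is a hypothetical custom field.

def build_soql(table: str, columns: list, filter_column: str, literal: str) -> str:
    """Return a SELECT ... FROM ... WHERE ... SOQL string."""
    return (f"SELECT {', '.join(columns)} "
            f"FROM {table} "
            f"WHERE {filter_column} = {literal}")

query = build_soql("Contact",
                   ["Id", "Studio_Download_Date__c"],
                   "Studio_Download_Date__c",
                   "THIS_YEAR")
```

Note that SOQL date literals such as THIS_YEAR are unquoted keywords, not strings, which is why the builder does not wrap the literal in quotes.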

Developing the rest of the dataflow was straightforward. I sorted by download date, used a Transform operator to retrieve the product version for each download date from a lookup table I created within that operator, and used a Write File operator to save the application’s output.

Iterating through the lookup table to find the product release for each download date was a simple task in which I used the expressor Datascript iterator function ipairs to control a processing loop.
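expressor Datascript is derived from Lua, where ipairs walks an array-style table in index order. A Python analogue of that lookup loop, using enumerate to mirror the (index, row) pairs ipairs yields (the release dates and version labels below are illustrative, not expressor’s actual release history):

```python
# Python analogue of the Datascript ipairs loop: walk an ordered lookup
# table of (release_date, version) rows and keep the version whose
# release date is the latest one not after the download date.
# Dates and version labels are illustrative.

RELEASES = [
    ("2011-01-15", "3.0"),
    ("2011-04-01", "3.1"),
    ("2011-07-20", "3.2"),
]

def version_for(download_date: str) -> str:
    version = None
    # enumerate stands in for ipairs: it yields (index, row) in table order.
    for _i, (released, label) in enumerate(RELEASES, start=1):
        if released <= download_date:  # ISO dates compare correctly as strings
            version = label
    return version
```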

The application ran perfectly the first time. The Read Salesforce operator retrieved the desired subset of records from the source table, and my code in the Transform operator associated a product version with each download.

If you need to write to a Salesforce table, the expressor Write Salesforce operator will support your requirements. With this operator, you again must identify connection and schema artifacts and then select the type of database operation you want to perform: insert, update, upsert, or delete. Since a Salesforce table has many columns that are managed by the Salesforce application itself, this operator will warn you if you try to initialize one of these columns, allowing you to modify your code or schema to avoid a conflict.
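The conflict the Write Salesforce operator warns about involves Salesforce’s system-managed columns, fields such as Id, CreatedDate, and SystemModstamp that the platform populates itself. A minimal Python sketch of that kind of pre-write check (the function and the exact field set are my illustration of the idea, not expressor’s implementation):

```python
# Sketch of the kind of check the Write Salesforce operator performs:
# flag columns that Salesforce manages itself before attempting an insert.
# The field set below covers common Salesforce system columns and is
# illustrative, not exhaustive.

SYSTEM_MANAGED = {"Id", "CreatedDate", "CreatedById",
                  "LastModifiedDate", "LastModifiedById", "SystemModstamp"}

def conflicting_columns(record: dict) -> list:
    """Return, sorted, the system-managed columns a record tries to set."""
    return sorted(set(record) & SYSTEM_MANAGED)
```

A caller would warn (or drop the offending attributes from the composite type) whenever this returns a non-empty list.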

I was quite impressed by the ease of use and broad functionality of the expressor Salesforce operators, and by the ease with which the extensibility framework added this functionality to my Studio installation. As in the past, this newest release of expressor Studio adds significant value to the product, and I continue to be impressed by the quality and functionality of the expressor Data Integration Platform. If data integration is something you will be investigating, I urge you to download and try expressor.


About Pinal Dave

Pinal Dave is a Pluralsight Developer Evangelist. He has authored 11 SQL Server database books and 17 Pluralsight courses, and has written over 3200 articles on database technology on his blog at http://blog.sqlauthority.com. Along with 11+ years of hands-on experience, he holds a Master of Science degree and a number of certifications, including MCTS, MCDBA, and MCAD (.NET). His past work experience includes Technology Evangelist at Microsoft and Sr. Consultant at SolidQ. Follow @pinaldave
