Posts Tagged ‘SData’

Introduction

Often there is a need to synchronize data from an external source with Sage 300 data. For instance, with Sage CRM we want to keep the list of customers synchronized between the two systems. This way if you enter a new customer in one, it gets automatically added to the other. Similarly, if you update say an address in one, then it is updated in the other. Along the way there have been various techniques to accomplish this. In this blog post I’ll cover how this has been done in the past and present a new idea on how to do this now.

In the past there have been a number of constraints, such as supporting multiple databases like Pervasive.SQL, SQL Server, IBM DB2 and Oracle; but today Sage 300 only supports SQL Server. Similarly, some suggested approaches would be quite expensive to implement inside Sage 300. Tied closely to synchronization is the desire by some customers for database auditing, which we will also touch upon.

Sage CRM Synchronization

The first two-way data synchronization we did was the original Sage 300 to Sage CRM integration. It used sub-classed Views to capture changes to the data and then used the Sage CRM API to make the matching change in Sage CRM. Sage CRM did something similar and would write changes back to Sage 300 via one of its APIs.

The main problem with this integration technique is that it's fairly brittle. You can configure the integration to fail, warn or ignore when an error occurs in the other system. If you select fail, then both systems need to be running for anyone to use either; so if Sage CRM is offline, then so is Sage 300. If you select warn or ignore, then the record will be updated in one system and not the other. This puts the databases out of sync, and a manual full re-sync will need to be performed.

For the most part this system works pretty well, but it isn't ideal due to the trade-off of either requiring that both systems always be up, or having to run manual re-syncs every now and then. The integration is now built into the Sage 300 business logic, so sub-classed Views are no longer used.

The Sage Data Cloud

The intent of the Sage Data Cloud was to synchronize data with the cloud without requiring that the on-premises accounting system always be online. As a consequence, it couldn't use the same approach as the original Sage CRM integration. In the meantime, Sage CRM added support for vector clock synchronization via SData. The problem with SData synchronization was that it was too expensive to retrofit into all the accounting packages that needed to work with the Sage Data Cloud.

The approach the Sage Data Cloud connector took was to keep a table that mirrored the accounting data in a separate database. This table held just the key and a checksum for each record. The connector could tell what had changed by scanning the accounting database and re-computing the checksums; if a checksum didn't match, the record had been modified and needed syncing.

This approach didn’t require manual re-synchs or require both systems be online to work. However, it was expensive to keep scanning the database looking for changes, so they may not be reflected terribly quickly or would add unnecessary load to the database server.

What Does Synchronization Need?

The question is then what does a modern synchronization algorithm like vector clock sync require to operate efficiently? It requires the ability to ask the system what has changed since it last ran. This query has to be efficient and reliable.

You could do a CSQRY call and select records whose audit stamps are newer than our last clock tick (sync time). However, the audit stamp isn't indexed, so this query will be slow on larger tables. Further, it doesn't easily give you inserted or deleted records.

Another suggested approach would be to implement database auditing on the Sage 300 application's tables. Then you get a database audit feature and, if done right, you can use it to query for changed records and base a synchronization algorithm on it. However, this has never been done, since it's a fairly large job and the ROI was never deemed worthwhile.

Another approach, specific to SQL Server, would be to query the database transaction logs, which record everything that happened in the database. This has a couple of problems. Queries on the transaction logs aren't oriented around what changed since the last sync, so they are either slow or return too much information. Further, SQL Server manages these logs fairly aggressively: if your synchronization app was offline for too long, SQL Server would recycle the logs and the data wouldn't be available anymore. Plus, this would force everyone to manage logs, rather than just have them truncated on checkpoint.

SQL Server 2008 to the Rescue

Fortunately, SQL Server 2008 added some nice change tracking/auditing functionality that does what we need, and since Sage 300 only supports SQL Server we can use it. There is a very good article on MSDN about this functionality and how it applies to synchronization here. Basically the SQL Server team recognized that both data synchronization and auditing are important and quite time consuming to add at the application level.

Using this functionality is quite simple: you turn on change tracking for the database, and then turn on change tracking for each table whose changes you want to track.
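As a sketch (database name and retention settings are illustrative values; ARCUS is the A/R customers table), enabling it looks something like this:

```sql
-- Illustrative sketch: turn on change tracking at the database level,
-- then per table. The retention settings are example values.
ALTER DATABASE SAMINC
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 14 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.ARCUS
ENABLE CHANGE_TRACKING;
```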

Then there is a SQL table function that you can select from to get the changes. For instance, I updated a couple of records in ARCUS and then inserted a new one, and the result is shown below.
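In sketch form (field names are illustrative; the last-sync version number would be persisted by the synchronization application between runs):

```sql
-- Illustrative sketch: list the keys of ARCUS rows changed since the last sync.
-- SYS_CHANGE_OPERATION is I, U or D for insert, update or delete.
DECLARE @last_sync_version bigint = 0;

SELECT CT.IDCUST, CT.SYS_CHANGE_OPERATION, CT.SYS_CHANGE_VERSION
FROM CHANGETABLE(CHANGES dbo.ARCUS, @last_sync_version) AS CT;

-- Record the current version to use as @last_sync_version next time.
SELECT CHANGE_TRACKING_CURRENT_VERSION();
```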

This gives me minimal information, which is all I require for synchronization, since I really only need to know which records to synchronize and can then get the full information from the main database table.

If you want to use this to audit all the changes in your database, then there are more options you can set to give you more complete information on what happened.

Summary

If you are writing an application that needs to synchronize data with Sage 300 (either one way or two way), consider using these features of SQL Server since you can add them externally to Sage 300 without affecting the application.

Similarly, if you are writing a database logging/auditing application you might want to look at what Microsoft has been adding to SQL Server starting with version 2008.

Last week I introduced a sample of how to develop mobile apps for Sage 300 ERP using the Argos SDK. In that article I covered where to get the sample and how to get it working. This week, we’ll start to look at how it is put together and how it works.

Sign On

The first thing you need to do is sign-on or authenticate. There is a standard method of authentication built into SData which is explained on the SData website here. Usually you would want to create a sign-on dialog and then feed the results into the SData access layer. This is all done in the sample application.

Basically the file src\Application.js is responsible for orchestrating the running of the application. When it starts, it calls handleAuthentication(), which usually calls navigateToLoginView() to run the login screen. This is done in src\views\login.js, which displays the UI and gets the entered data. (We'll talk more about how UIs work below.) It then sets up the credentials data structure, calls authenticateUser to save these for the SData layer, and navigates to the initial screen.
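SData authentication is HTTP Basic, so at its core the credentials end up Base64-encoded into an Authorization header that the SData layer attaches to every request; a minimal sketch (the helper name here is mine, not from the sample):

```javascript
// Hedged sketch (helper name is illustrative): SData uses HTTP Basic
// authentication, so the user name and password are Base64-encoded into an
// Authorization header sent with every SData request.
function buildAuthHeader(userName, password) {
    var raw = userName + ':' + password;
    // btoa exists in browsers; Buffer is used so the sketch also runs in Node
    var encoded = (typeof btoa === 'function')
        ? btoa(raw)
        : Buffer.from(raw, 'utf8').toString('base64');
    return 'Basic ' + encoded;
}

// e.g. for the demo credentials used elsewhere in these posts:
var header = buildAuthHeader('ADMIN', 'ADMIN');
// header is 'Basic QURNSU46QURNSU4='
```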

Anatomy of a View

For Sage 300 ERP developers this is going to be confusing, because in the Sage 300 ERP world, business logic objects are called Views. But here in the Argos SDK world, Views are UI screens. Basically you provide a data structure (JavaScript object described in JSON notation) with all the fields you want and an HTML template on how you want them displayed, and then Argos has base classes to display these. The Argos SDK uses object oriented JavaScript, so it’s often worth going into the SDK and browsing the base classes to see what things are derived from. This gives you a good idea of what is done for you and what behavior you can override.

The most basic such View is src\views\VendorGroups. This is used as a finder when adding new Vendors. The code is:
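A hedged sketch of the shape of such a view definition (property values here are illustrative, not the exact sample source; a real Argos view is created with dojo's declare against a List base class, which supplies the rendering and SData request logic):

```javascript
// Hedged sketch of an Argos list-view definition. Only the configuration
// object is shown, so the shape of the JSON definition is visible.
var vendorGroupsView = {
    id: 'vendorgroup_list',
    titleText: 'Vendor Groups',
    // the SData feed this view reads from
    resourceKind: 'apVendorGroups',
    // fields requested via the SData select clause
    querySelect: ['GroupID', 'Description'],
    // simplate used to format and display each entry in the list
    itemTemplate: '<h3>{%: $.GroupID %}</h3><h4>{%: $.Description %}</h4>',
    // build an SData where clause from the text typed into the search box
    formatSearchQuery: function (searchQuery) {
        return 'GroupID like \'%' + searchQuery.toUpperCase() + '%\'';
    },
    // error handler invoked when the SData request fails
    onRequestDataFailure: function (response) {
        console.error('SData request failed', response);
    }
};
```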

This is basically the JSON definition for this View object. This one is fairly simple and bare-bones; the main points are to define the SData feed in the resourceKind: property, along with the query fields we need. Notice the simplate, which is used to format and display each entry in the list; simplates are described in the next section. Then there is an error handler and a function to perform searches. The rest is done in the base class for this View. For more complicated objects, you will need to override more functions and provide more input (like more details about fields for editing).

Simplate

Simplate is a small templating engine for JavaScript. We use the templates to dynamically combine HTML and data in our views.

The Simplate syntax is a mix of markup, tags and JavaScript code. These are the most common syntax items you’ll see:
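The token you'll see most often in view templates is {%: expression %}, which evaluates a JavaScript expression against the current data object ($) and writes the result into the HTML; the real engine also supports {%= ... %} for raw output and {% ... %} for arbitrary code. A toy renderer for just the common token (a sketch for illustration, not the actual Simplate implementation, which compiles templates into functions):

```javascript
// Toy renderer for the {%: $.field %} token only -- an illustrative sketch,
// not the real Simplate engine.
function renderSimple(template, data) {
    return template.replace(/\{%:\s*\$\.(\w+)\s*%\}/g, function (match, field) {
        return data[field] != null ? String(data[field]) : '';
    });
}

var rowTemplate = '<h3>{%: $.IDCUST %}</h3><h4>{%: $.NAMECUST %}</h4>';
var html = renderSimple(rowTemplate, { IDCUST: '1200', NAMECUST: 'Mr. Ronald Black' });
// html is '<h3>1200</h3><h4>Mr. Ronald Black</h4>'
```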

When developing you run into lots of bugs, so how do you solve them? The nice thing about JavaScript is that you just update your files, save them and hit refresh in the browser to run. But since JavaScript is interpreted and not compiled, you only find out about syntax errors when you run. If the syntax error is bad enough it can stop the whole program from running (extra or missing brackets are bad for this); simpler errors will just stop the program when it hits that line of code. Also beware that if you misspell a variable, JavaScript will just happily keep going using an undefined value, so be careful.

I like to run in both Firefox and Chrome. Firefox (with Firebug) is good at pointing out syntax errors, so they are easy to fix. Chrome has an excellent JavaScript source code debugger built in that is great for setting breakpoints and tracing through code. Another tool I really like is Fiddler which spies on all the SData server calls being made. From here you can look in depth at what is going on with the SData server.

Since the Argos SDK, along with the other libraries it uses, is all open-source JavaScript, you can examine any of the source code in the Argos SDK and debug into it to see what is happening there, as well as what is happening in your own code.

Also remember the SalesLogix and Sage 300 sample applications; chances are you can find an example of what you are trying to do in one of these programs.

Summary

The Argos SDK is a powerful mobile development platform that combines SData with the Dojo JavaScript framework to provide a productive way to quickly develop mobile applications for various Sage SData-enabled products.

I blogged about a number of SData improvements in the upcoming Sage 300 ERP 2012 release here. At that time I felt that the SData support project was complete, but I was wrong: the team was able to add another major feature in the last couple of sprints before code complete.

This feature is important because it introduces something our current View business logic layer doesn't handle well. It lays a foundation both for developing UIs more easily and for letting customers, partners and developers understand our data model much more easily. So let's introduce the feature in all its gory detail and then circle back to talk more about it at the end.

This article reproduces a lot of material from the DPP Wiki, mostly to make it a bit more accessible, since hopefully this is of interest beyond just DPP partners. This material is on how to create these referenced resources in Sage 300 ERP SData feeds. Notice that the SData resources are based on Sage 300 Views and that the extra information is added via XML files, so you get all the benefits of this feature with no additional programming. For more on how to make use of this feature, check out the SData website, specifically here.

Overview

Within the Resource View Mapping xml, you can define a section (called referencedResources):

For enabling you to link in read-only fields from other views. In Accpac 5.x, this was known as the “lookup fields” feature.

For enabling SData related resources.

Another way to look at this is that this is where you can define a foreign key relationship between the resource kind’s view and a different view.

Although the two areas are virtually identical we describe them here separately to reflect the two slightly different aims.

Lookup Fields

For example, the A/R Customers maintenance UI displays the description for the Terms Code field. This description is not a field in the A/R Customers view, and the A/R Payment Terms view is not composed with the A/R Customers view. Instead, there is a foreign key relationship between these two views: the Terms Code (CODETERM) field in the A/R Customers view maps to the primary key of the A/R Payment Terms view. Therefore, the referencedResources section can be added to the A/R Customers resource view mapping so that the Terms Description field (field name TEXTDESC in the A/R Payment Terms view) can be displayed in the A/R Customers resource. An example of how this would be represented in the resource view mappings of the two resources is shown below.

Note that if you already have a relationship defined between the views due to “view composition”, then you do not need to define this relationship in the referencedResources section. The Resource View Mapping xml already supports defining header/detail relationships using multi-level resources sections in the xml.

Configuration

These are the details of the sections of the resource file.

referencedResources Section

The referencedResources section must come between the Optional Fields (if present) and the IncludedFields or ExcludedFields, whichever is used.

The section can contain multiple referencedResource sections, each of which describes a resource that you want to link to the primary resource, in order to bring in additional field(s) from that resource into this one.

referencedResource Section

This section contains the information for an additional, referenced resource that will be linked to this resource. It has one attribute, name, which is the name of another SData resource (for which there is a separate resource view mapping). The name is the plural name of the resource as found in the classmap.xml file.

Within this section are 2 additional sections which describe the way in which this referenced resource is related to the primary resource:

referencedKeyFields

lookupFields

referencedKeyFields

This section contains a referencedKeyField item for each field in the defining key of the reference resource. It will normally be a field in the primary resource whose value will be used to “lookup” a value in the reference resource. Alternatively a fixed value might be necessary.

Note: any fields that you specify in this section will automatically be tagged as “autoPostback” – so that whenever their value is changed on the client, the change will be immediately submitted to the server so that the referenced fields can be recalculated.

The main information that you normally need to specify for this referencedKeyField is the viewFieldName, which is the name of that key field in the primary resource.

However if this reference actually uses a fixed value instead of the value of a field then use the fixedValue to specify that value.

The number of referencedKeyField items that you define will depend on the number of fields that are in the index that is being used by the reference resource. The index that is being used by a resource is defined by the definingIndex section of the resource definition. If this is not specified in the resource definition, then the index will be assumed to be the primary key (index 0).

The order of these items is important: they must match the order of the key field(s) in the reference resource’s index.

lookupFields

This section contains a lookupField item for each field in the reference resource whose value will be “looked up” (according to the key information specified in the referencedKeyFields section) in the referenced resource and will be displayed as a read-only field in the primary resource.

The main pieces of information that you will specify for this lookupField are the following:

viewFieldName: this is the name of the field in the reference resource.

name: this is the name of the field that will be used by the primary resource. There cannot be any spaces in this name and it must be unique within the primary resource.

A/R Customers Terms Description Example

This example demonstrates how you would define your A/R Terms Code and Customers resources so that the A/R Customers resource could include the “Terms Code Description” field from the A/R Terms Code resource in the Customers resource as a “lookup field”.
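A hedged sketch of such a mapping, using the section and field names described above (element spellings and attribute details may differ from the actual resource view mapping schema; see the DPP Wiki for the exact format):

```xml
<!-- Illustrative sketch only: section and field names follow the text above. -->
<referencedResources>
  <referencedResource name="arPaymentTerms">
    <referencedKeyFields>
      <!-- CODETERM in A/R Customers supplies the key of A/R Payment Terms -->
      <referencedKeyField viewFieldName="CODETERM"/>
    </referencedKeyFields>
    <lookupFields>
      <!-- TEXTDESC from A/R Payment Terms, surfaced as a read-only field -->
      <lookupField viewFieldName="TEXTDESC" name="termsDescription"/>
    </lookupFields>
  </referencedResource>
</referencedResources>
```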

Beware: WordPress changes dash-dash-greaterthan to long-dash-greaterthan which then messes up the ends of the comments. Also beware that it changes regular double quotes into 66 and 99 styles quotes.

The application of referenced resources for lookup fields should be considered carefully with performance implications in mind; application developers and QA testers should explore this thoroughly. For example, looking up a description field in another Sage 300 ERP View may imply a table scan on the corresponding view table.

SData related resources

In SData there is the concept of related resources: resource kinds that are shared between resources. For example, many A/R customer resources share the same A/R payment terms resource. The referenced resource definition describes how they are related. As in the “lookup fields” case, there is a foreign key relationship between the two views: the Terms Code (CODETERM) field in the A/R Customers view maps to the primary key of the A/R Payment Terms view. Therefore, the referencedResources section can be added to the A/R Customers resource view mapping to support including the A/R Payment Terms in the A/R Customers SData resource. An example of how this would be represented in the resource view mappings of the two resources is shown below.

Note that if you already have a relationship defined between the views due to “view composition”, then you do not need to define this relationship in the referencedResources section. The Resource View Mapping xml already supports defining header/detail relationships using multi-level resources sections in the xml.

Configuration

These are the details of the sections of the resource file.

referencedResources Section

The referencedResources section must come between the Optional Fields (if present) and the IncludedFields or ExcludedFields, whichever is used.

The section can contain multiple referencedResource sections, each of which describes a resource that you want to link to the primary resource.

referencedResource Section

This section contains the information for an additional, referenced resource that will be linked to this resource. It has three attributes:

name, which is the name of another SData resource (for which there is a separate resource view mapping). The name is the plural name of the resource as found in the classmap.xml file.

property, which is the property name that this will have in the referencing resource (e.g. “terms” in our example)

description, an optional description that can be used as the label for the property (e.g. “Payment terms”)

Within this section is the additional ‘referencedKeyFields’ section which describe the way in which this referenced resource is related to the primary resource.

referencedKeyFields

This section contains a referencedKeyField item for each field in the defining key of the referenced resource. It will normally be a field in the primary resource whose value will be used to “look up” a value in the reference resource. Alternatively a fixed value might be necessary.

Note: any fields that you specify in this section will automatically be tagged as “autoPostback” – so that whenever their value is changed on the client, the change will be immediately submitted to the server so that the referenced fields can be recalculated.

The main information that you normally need to specify for this referencedKeyField is the viewFieldName, which is the name of that key field in the primary resource.

However if this reference actually uses a fixed value instead of the value of a field then use the fixedValue to specify that value.

The number of referencedKeyField items that you define will depend on the number of fields that are in the index that is being used by the referenced resource. The index that is being used by a resource is defined by the definingIndex section of the resource definition. If this is not specified in the resource definition, then the index will be assumed to be the primary key (index 0).

The order of these items is important: they must match the order of the key field(s) in the referenced resource’s index.

A/R Customers Terms Related Example

This example demonstrates how you would define your A/R Terms Code and Customers resources so that the A/R Customers resource could include the A/R Terms Code resource in the Customers resource as a related resource.
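A hedged sketch along the lines described above, using the terms/Payment terms example (element spellings and attribute details may differ from the actual schema):

```xml
<!-- Illustrative sketch only: the related-resource form adds the property and
     description attributes and has no lookupFields section. -->
<referencedResources>
  <referencedResource name="arPaymentTerms"
                      property="terms"
                      description="Payment terms">
    <referencedKeyFields>
      <referencedKeyField viewFieldName="CODETERM"/>
    </referencedKeyFields>
  </referencedResource>
</referencedResources>
```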

Including related resources should be considered carefully with performance implications in mind; application developers and QA testers should explore this thoroughly. For example, looking up a related resource in another Sage 300 ERP View may imply a table scan on the corresponding view table.

Summary

Related resources might seem like a bit of an esoteric SData topic, but we are excited about it for a couple of reasons. This is the first time the Sage 300 ERP API has included explicit relational information about how the various tables are interrelated. This is a key enabler for making UI creation easier via SData, since UI tools can now automatically know how to create finders and find related information (automatic drill down). Here we see SData become much more than just a RESTful web services interface to Sage 300 ERP; we now see more and more API capabilities going beyond what we have in our COM, .Net and Java APIs.

I’ve previously blogged on the enhancements for the framework for creating custom SData feeds for applications here and here. In this posting I’m looking at enhancements to our core SData protocol support. We’ve been working hard to add more features and correct some inconsistencies and bugs.

The main purpose of this exercise is to make SData work better for integrators and to make sure the Sage 300 ERP SData feeds work well with other Sage tools like the Sage CRM dashboard and the Sage Argos Mobile Framework.

Global Schema

Generally SData features are mostly of interest to programmers. However some, like this one, enhance existing integrations between different products. Global schema is a mechanism to return all the SData metadata for a dataset (company) in a single call. In version 6.0A, you could only get the metadata for one SData resource per call. Rather esoteric. But having this enhances our integration to the Sage CRM SData dashboard. Previously, when you created an SData widget pointing to a Sage 300 ERP SData feed, you needed to specify the $schema URL for a specific feed, which includes the resource name in its path.
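Illustratively (the server name here is hypothetical), a per-feed schema URL names the resource, while the global schema URL drops it:

```
http://server/SDataServlet/sdata/sageERP/accpac/SAMINC/arCustomers/$schema   <- one resource's schema
http://server/SDataServlet/sdata/sageERP/accpac/SAMINC/$schema               <- global schema for the dataset
```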

Which means you don’t need to know the resource component of the URL. In Sage CRM it looks like this, first you give the URL to the global schema:

Then you get a list of SData resources to pick from in a more human readable form:

Previously you only got the feed you specified. Then you select a feed and hit next to choose the fields you want from that feed.

SData Validation Tool

Sage has written a tool that will validate the correctness of SData feeds. This tool is available here (you need to scroll down to near the bottom of the page). The intent of this tool is for anyone, whether internal or external to Sage, to be able to validate any REST web service for SData compliance against what is described in the spec at the SData website. This tool was around in the 6.0A days, but it needed work; back then 6.0A passed the feed validator, while with the new tool 6.0A has a lot of problems reported against it. With 2012, quite a bit of work went into making our feeds compliant, which means you can expect them to work as the specification states, and integrations with other SData-aware products and tools become much easier. This tool is continuously being updated and will probably auto-update itself as soon as you install it. Below is a screenshot; hopefully by release a few of the remaining errors will have been corrected.

Argos

Argos is a framework for creating mobile SData clients using HTML5, JavaScript and CSS. It was originally developed by the Sage SalesLogix group to create the mobile interface for the Sage SalesLogix Mobile product. However, since SalesLogix uses SData as its web services interface, this library was built entirely on SData. As a consequence it can be used with any product that supports SData.

As part of our Sage 300 ERP 2012 development we tested Argos on our SData feeds and produced a sample mobile application.

As part of this development we fixed a couple of bugs and made sure our SData support works well with the Argos SDK. I'll write a future blog posting with more details on the Argos SDK and how to write mobile applications for Sage 300 ERP. However, if you are interested in Argos, you can check it out now, since the whole project is open source and available on GitHub.

E-Tags

We finished implementing e-tags with this version. These allow proper multi-user control when you have multiple sources updating records. Basically, when you read a record, it returns an e-tag recording when the record was last modified. When you update the record, this e-tag is included with the update message, so the server can see whether the record has been modified by someone else since you read it. This detects the multi-user conflict. Sometimes the server will just merge the changes silently and everyone is happy; sometimes the server will need to send back the dreaded “Record has been Modified by Another User” error response.
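As a rough sketch of the exchange (paths and key values are illustrative; the exact e-tag transport is defined by the SData specification):

```
GET  /SDataServlet/sdata/sageERP/accpac/SAMINC/arCustomers('1200')
     -> response entry includes the record's current e-tag value

PUT  /SDataServlet/sdata/sageERP/accpac/SAMINC/arCustomers('1200')
If-Match: <e-tag from the earlier GET>
     -> accepted if the record is unchanged; otherwise the server reports
        the multi-user conflict ("Record has been Modified by Another User")
```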

Using Detail Fields in Select Clauses

In 6.0A, if you read an SData feed with header and detail components (like O/E Orders), then you got back the header fields and links to the details, even if you specified detail fields in a select clause. This meant that if you wanted both the header and the detail lines you needed to make two SData calls. Further, this was annoying because the format you got back when reading records was different from the format used to write them, so you would need to combine the separate header and detail results to do an update or insert. Now if you specify detail fields in the select clause you will get back all specified fields in the XML payload, typically a header with multiple details returned in the same call. This saves an SData call, and it's also much easier to deal with, since you now have a correct XML template to manipulate for future inserts and updates.
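For example (the resource and field names here are illustrative, not the exact accpac-contract names), a single call along these lines now returns the order header together with its detail lines:

```
.../sdata/sageERP/accpac/SAMINC/oeOrders?select=OrderNumber,CustomerNumber,OrderDetails/Item,OrderDetails/QuantityOrdered
```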

Define Your Own Contracts and Have Your Own Classmaps

In version 6.0A, the only contract we supported for SData feeds created by Sage 300 ERP was the accpac contract. Now in the classmap file you can specify the contract your feeds belong to. (This setting has always been in the classmap files; it just didn't work.) This means you can ensure any feeds you define yourself won't conflict with anyone else's.

Another problem in 6.0A was that to create your own feeds, you either needed to be an activated SDK application with your own program ID, or you needed to edit one of our existing classmap files, which was annoying since your changes could well be wiped out by a product update. Now you can copy your own classmap file into an existing application (like O/E); you just need to name it classmap_yourownname.xml and it will be added to the defined SData feeds.

Further, all the feeds that make up the accpac contract were oriented to the Orion UI work and weren't necessarily a good fit for people doing general integration work. So we are creating a new contract, containing the main master files and documents, that is oriented towards integration work and stateless operation.

Summary

SData continues to be a foundation technology that we are building on. Quite a lot of work has gone into improving our SData support in Sage 300 ERP for the forthcoming 2012 release. This will allow us to leverage many other technologies like Argos to accelerate development.

If you are interested in learning more about SData check out my learning SData videos which are located here and which I blogged about here.

A couple of weeks ago I blogged on Learning over the Web, where I mentioned that I really like the Khan Academy and their video method of training. I've now started experimenting with making Khan Academy style videos and have done three so far, as an introduction to SData. I plan to make more of these going forward. Once I have a larger set of videos on SData, I may try branching out to other topics. Below is a picture of Sal Khan working on such videos:

New Video Page

I’ve added a Video page to my blog which will provide links to all the videos I produce. To start with there are three videos, which aren’t very many. However I hope to make a new one every week or so and then if I can keep that up, after a year there will be fifty or so videos. The first three videos are:

The best way to learn something is by doing. So I recommend playing with SData and experimenting with the various items described in the video. To this end you can play with a locally installed version of Sage 300 ERP (or another Sage product) or you can access our demo server at http://sage300erpdemo.na.sage.com. The user id and password are ADMIN/ADMIN, make sure you enter them in upper case if prompted from the Browser or other client software. If you type the URL: http://sage300erpdemo.na.sage.com/SDataServlet/sdata/sageERP/accpac/SAMINC/arCustomersFinder into the Chrome browser and enter ADMIN/ADMIN for the userid/password then you should get back a large amount of XML containing the first 10 customer records in the SAMINC database. For information on how to perform other querying, see the third video.
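A couple of illustrative query variations to try against the same feed (standard SData query syntax; the field names here are assumptions about the feed, so adjust to what the $schema reports):

```
http://sage300erpdemo.na.sage.com/SDataServlet/sdata/sageERP/accpac/SAMINC/arCustomersFinder?where=IDCUST gt '5000'
http://sage300erpdemo.na.sage.com/SDataServlet/sdata/sageERP/accpac/SAMINC/arCustomersFinder?orderby=NAMECUST&count=5
```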

If you want to try these with a different Sage product, then you might need to run Fiddler to see the exact form of their SData URLs. Once you have this, you can be up and running. Fiddler is a very useful tool for spying on HTTP requests made from your computer. You can spy on any program or website to see what it is doing.

I find creating videos more time-consuming than writing, mostly because it's harder to jump around in videos and harder to edit them. I'm hoping I can get better at creating videos with more practice and time, partly by getting used to the process and learning by doing. It certainly takes some practice to use the writing tablet for drawing (hopefully my handwriting will improve), and at the same time I need to watch myself so I don't say “Um” so much. So I consider these first three videos the first three steps on a longer journey.

For producing the videos I pretty much copied what they use at Khan Academy. It's neat that you can create videos these days with very little equipment or post-production software. I used only open source or free software and a very inexpensive writing tablet. The items I used:

YouTube to post the videos to. Seemed the easiest and the URLs are easy to circulate.

SmoothDraw 3 for drawing. I start with a black rectangle 854×480 pixels (which is a preferred YouTube resolution that fits well on my monitor).

It took me a bit of trial and error to get things to work right. I tried a couple of free screen recording utilities, like CamStudio, which didn’t work for me: they either crashed or didn’t produce good results. Then in editing, for MovieMaker you need to change the project from 4:3 to 16:9 or it produces something that doesn’t work right on YouTube.

Generally, handling video files is a bit of a pain since they are so large. Uploading from work is OK. Uploading from home is very slow, I suspect because cable modem is optimized for downloading content rather than uploading it. Either that or Shaw decided that uploading videos is a no-no and throttled my connection.

I’m still undecided on whether I want to add vlogging to my blogging. This requires a camera, but web cams are cheap, and for that matter both my phone and camera take really good videos, certainly good enough for YouTube. When I’ve tried this in the past, I haven’t been happy with the results and found that much more video editing is required. But then again, hopefully with some practice I can get better over time.

Summary

I hope you find my new Video page useful. Hopefully over the coming month I’ll add quite a few videos and start to branch out to other topics.

We introduced SData support with Sage ERP Accpac 6.0A; however, the product as it shipped only defined a few SData feeds that it needed to support the new Web Portal, Data Snapshots, Inquiry Tool and Quotes to Orders features. But Sage 300’s support for SData is based on converting SData Web Service requests into View calls. So it is possible to expose any View (or collection of composed Views) as an SData feed.

In a future version of Sage 300 ERP we will expose all the relevant Views via SData, but in the meantime if you want to use SData with Sage 300, then you need to provide XML files to define the feeds you need.

All the feed definitions are XML files, which means you can read all the existing ones that come with Sage 300 using a normal text editor. Hence you can use the existing ones either as examples or templates for the definitions you need.

One thing to be careful of is that most of the fields in these XML files are case-sensitive, meaning they must match exactly or they won’t work. When things don’t work, it’s worth looking in the Tomcat\log folder at the relevant SDataServlet.log, as this will often point out errors encountered when parsing the XML files.

Class Map Files

The classmap files are a series of XML files located in the sub-folders under: C:\Program Files (x86)\Common Files\Sage\Sage Accpac\Tomcat6\portal\sageERP. The feeds for a given application are stored under the application’s program id and version id directory such as oe60a. Note that these need to be in a folder for an activated application to be read, but within an application you can define feeds that access Views in any application.

All the configuration XML files are loaded into memory by the SDataServlet on startup, so if you make any changes to these files, you need to restart the “Sage Accpac Tomcat6” service for your changes to take effect.

You can use these to define custom Java classes to process the SData requests. I’ve covered this a bit in other blog postings, but won’t go into it here, since this article only considers what can be done by editing XML files.

For each SData feed, the classmap specifies both the Java class that handles the feed and a detailed feed definition file, called a resourceMap file.
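As an illustration only (the exact element and attribute names vary and should be copied from one of the classmap files shipped with your version), a classmap entry ties the three pieces together roughly like this:

```xml
<!-- Illustrative sketch only: take the exact element and attribute
     names, and the fully qualified class name, from a shipped
     classmap file. The entry names the feed, the Java class that
     services it, and the resourceMap file with the detailed
     definition. -->
<classMap>
  <resource kind="currencyCodes"
            class="ViewResourceKind"
            resourceMap="currencyCodeViewMapping.xml"/>
</classMap>
```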

Example – Currency Codes

The currencyCodes resource is implemented by the Java class: ViewResourceKind and is defined by the resource map file: currencyCodeViewMapping.xml. ViewResourceKind is a system provided Java Class for generically converting SData requests into View calls. You can use this to expose most Views (that have data) as SData feeds.

The key point is the viewID, which maps this feed to the currency codes view CS0003. The URI of the feed is the plural name, namely currencyCodes. Then you can specify the list of fields you want included in the feed; you might specify a shorter list of fields to keep the size of the feed to a minimum. The includedFields section is optional; I tend to prefer using an excludedFields section to just list the fields I don’t want.

pluralName: plural name of the resource. If undefined, then pluralName = name + “s”. This will be the URI of the resource.

resources: Collection of sub-resource elements

kind: resource kind name of the resource. For the top resource of the resource tree, the resource name and the resource kind name should be the same. However, for a sub-resource, the resource name reflects the name of the property that refers to it in the parent, while the kind name is the name that it appears as at the top level of the schema.

includedFields: list of resourceViewFields that are to be included in the resource (by default all fields are included)

excludedFields: list of resourceViewFields that are not to be included in the resource

overridenFields: list of resourceViewFields that are to be overridden in the resource (usually done to change the SData name)

virtualFields: list of resourceViewFields that are to be added to the resource. Note: virtual fields require extending ViewResourceKind with a custom class that implements them.

lookupFields: a 6.1A feature that allows you to add fields looked up from another View, for example to get a description.
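Putting the elements above together, a minimal resourceMap for the currency codes example might look roughly like the sketch below. The attribute spellings and the field names are illustrative; use the shipped currencyCodeViewMapping.xml as the real template.

```xml
<!-- Sketch of a resourceMap definition (illustrative only).
     viewID maps the feed to the currency codes view CS0003;
     pluralName becomes the URI of the feed; includedFields is
     optional and trims the feed to just the fields listed. -->
<resource name="currencyCode"
          pluralName="currencyCodes"
          viewID="CS0003">
  <includedFields>
    <field name="CURID"/>
    <field name="CURNAME"/>
  </includedFields>
</resource>
```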

Summary

Hopefully this article gives an idea of how to set up additional SData feeds for Sage 300 ERP without requiring any programming.

I’ve blogged a few times about using SData to feed data into Sage products and about the general ideas behind SData. In this blog posting I want to look at some benefits to integrating into Sage products using SData and the general reasons behind doing so. A lot of this article is forward looking and so each Sage product is somewhere along this journey and no product has reached the end yet.

Imagine that you are an Independent Software Vendor (ISV) and that you have a great product that satisfies a useful specialized business need for many customers. However, it requires some data that is typically stored in an ERP system and generates some financial results that need to be entered into that ERP. Say you need to read all the Orders from the system and then you produce a number of G/L entries that need to be imported. Wouldn’t it be great if you could automate this process by being notified automatically whenever an Order is entered and being able to write the G/L entries directly into the ERP? However, you find that your customers are running all sorts of different ERP systems. As you look into doing these integrations, you find that you need to join an SDK program for each ERP, have someone learn and program in all these different ERPs’ SDKs, and program your integration in all sorts of languages from VB to C# to JavaScript to FoxPro. Then you have to maintain all these integrations from version to version as your product progresses and all these ERPs evolve. This is a lot of work. There must be an easier way.

You could take the attitude that you will provide an API and SDK for your product and then, since your product is so wonderful, all the ERP vendors will write an integration to your product using your SDK. After a few years of waiting for this to happen, you’ll probably give up on this.

Sage has many ERP systems and combined has a very large market share. However, all these ERPs have their own heritage and their own SDKs. How can Sage make it so this ISV can create their integration easily to a large set of Sage products using their own development tools and programming knowledge?

This is where SData comes in. SData is a REST-based Web Service interface. SData is based on Internet standards and easy to call from any programming system that supports Web Service calls. SData is a standard being implemented in all Sage Business Solutions. So how will this work in the scenario above?

Method One – Poll for Orders

In this case the ISV application will periodically poll for new Orders on the Orders SData feed, process them and then write any G/L entries to the G/L SData feed.

This method isn’t that different from using the ERP’s built-in API in whatever technology that is implemented, perhaps COM, .Net or Java. But at least now we are using only one API technology to access the various ERPs.
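A minimal sketch of the polling side, assuming the Orders feed returns a standard Atom feed (the sample payload and entry ids below are invented for illustration; a real poller would fetch the feed over HTTP and also write G/L entries back):

```python
# Sketch of the processing step in a polling integration: pull the
# entries out of an SData (Atom) payload so they can be compared
# against the ids already processed. Sample data is invented.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

def extract_entry_ids(feed_xml):
    """Return the atom:id of each entry in an SData feed payload."""
    root = ET.fromstring(feed_xml)
    return [entry.findtext("{%s}id" % ATOM)
            for entry in root.findall("{%s}entry" % ATOM)]

# Invented sample payload for illustration only.
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><id>orders(1)</id></entry>
  <entry><id>orders(2)</id></entry>
</feed>"""

new_ids = extract_entry_ids(sample)
print(new_ids)
```

On each polling cycle the integration would process only the entries whose ids it hasn’t seen before, then POST the resulting G/L entries to the G/L feed.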

Standard Contracts

Generally each ERP has its own heritage and hence its own database schema. Most Sage ERPs will expose their built-in native schema as their native SData schema. So although you can use SData to access a range of different ERPs, you still need to handle each one’s schema in the XML payloads sent and returned by the calls.

The solution for this is standard contracts. These are specifications for standard SData calls that define a common schema. It is then the job of each Sage ERP to transform their native schema into the schema of the standard contract as part of their job of offering standard contract feeds.

With standard contracts implemented, the ISV can then access multiple Sage ERPs using the same API technology, namely SData, and the same payload schemas, essentially making the integrations the same.

Method Two – Connected Services

Connected services refers to products, whether from Sage or from an ISV, that are located in the cloud and integrated with an on-premise installed ERP.

Currently there is a concern that on-premise customers don’t want to expose their ERP system to the Internet, meaning they don’t want to manage a firewall and worry about the security concerns associated with an Internet facing web server. This means they do not want to expose their ERP’s SData feeds to the Internet at large. So how does the ISV integrate if they can’t call the ERP’s SData feeds?

Although the cloud application can’t call the on-premise application, the on-premise application has no problem calling the cloud application. This means it’s up to the on-premise application to push all necessary data to the cloud application. Generally this is handled by a synchronization engine that synchronizes some ERP data up to the cloud. The SData specification contains a data synchronization specification for this purpose. The synchronization algorithm is very robust and will catch up if the Internet is down, or the cloud service is unavailable.
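The real SData synchronization specification is considerably more involved than this, but as a much-simplified sketch of the catch-up idea (all the names and data structures here are invented for illustration), the on-premise side remembers a high-water mark for what it has successfully pushed and resumes from there after an outage:

```python
# Simplified sketch of catch-up style synchronization (invented
# names; the actual SData sync spec is more elaborate).

def push_changes(changes, last_tick, send):
    """Push all changes newer than last_tick; return the new high-water mark.

    `send` may raise (Internet down, cloud service unavailable); in
    that case we keep the old mark and simply catch up on a later run.
    """
    tick = last_tick
    for change in sorted(changes, key=lambda c: c["tick"]):
        if change["tick"] <= tick:
            continue  # already synchronized on an earlier run
        try:
            send(change)
        except IOError:
            break  # stop here; resume from `tick` next time
        tick = change["tick"]
    return tick

# Illustration: push two queued changes to a (fake) cloud endpoint.
sent = []
changes = [{"tick": 1, "data": "A"}, {"tick": 2, "data": "B"}]
tick = push_changes(changes, 0, sent.append)
print(tick, sent)
```

Because the high-water mark only advances on a successful send, a run after a network outage naturally re-sends whatever was missed, which is what makes this style of synchronization robust.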

We could also provide notifications when things happen, but this is less robust: what happens when the Internet is down or the other application is unavailable (perhaps offline for maintenance)?

Notification Configuration

The above method for connected services assumes that SData synchronization will need to be configured. The core ERP will need to configure a list of SData services to synchronize with (hopefully down the road there will be many), then the ERP will need to ask the service which feeds it wants to synchronize.

Initially this configuration will probably be done by configuration screens in the ERP. However over time it will be nice to move this all to dynamic discovery type services. This will reduce error in setup and allow connected services to evolve without requiring setup changes in all their integrated ERP systems each time something new is added.

If the ERP and ISV solution are both installed together (either on-premise or in the cloud), then it would be nice to configure some direct notifications, basically call-outs from the application when various important things need to be done. This might be for things like requiring a calculation be performed or to notify that an order’s status has changed. These sorts of things will only work when both applications are live and connected, but there are many useful scenarios for this.

Summary

Initial SData implementations are either shipping or forthcoming across all the main Sage business products. We are starting to see integrations of the method one type emerging. We are actively developing connected services using the SData synchronization mechanisms. Then as we continue to roll out SData technology you should see any missing pieces start to fill in and greater uniformity emerge on how you integrate with all Sage products.

As Sage studies the market and determines useful profitable connected services, Sage can publish an SData contract that all our ERP systems will support. Then the chosen ISV can provide this service for all Sage products by supporting this SData contract. Often for connected services we need different providers in different parts of the world, and again it’s easy for Sage since for this one contract perhaps we have one provider for North America, another for Africa, another for Asia and another for Australia.