Our earlier blog entry provides more details on the technical approach, architecture and integration engineering. This new video provides an executive-level overview and showcases the complete solution capabilities presented at the Symposium itself.

Monday Nov 04, 2013

XML Component Dictionary capabilities are provided in conjunction with the CAM Editor toolset. These dictionaries accelerate the development of consistent XML information exchanges using standard sets of dictionary components.

The quick tutorials show the 'how to' of the basic capabilities, to jump-start your use of XML dictionaries with the CAM Editor.

Learn how to use the dictionary functions to create dictionaries by harvesting data model components from existing XSD schema, SQL database table schema, or simple Excel / Open Office spreadsheets containing tables of components.

Also included are tips and functions relating to NIEM exchange development and IEPD and EIEM techniques.

These videos should be viewed in conjunction with the overall concepts and techniques described in the companion video on the CAM Editor and Dictionaries overview. The approach is aligned with OASIS and Core Components Technical Specification (CCTS) standards for XML components and dictionaries.

Dictionary collections can be stored locally on the file system or local network, collaboratively on the web or in a cloud deployment, or shared and managed securely using the Oracle Enterprise Repository (OER) tool.

Also included are techniques relating to the use of the NIEM approach for developing XML exchange schema and IEPD packages. This includes generating reuse scores, wantlists, and cross-reference spreadsheets.

Included in the latest release of the CAM Editor is the ability to use the analyse dictionary tool to detect duplicate components, conflicting component definitions, missing component descriptions and so on. This helps ensure high-quality dictionary component specifications. Using the CAM Editor you can also create MindMap models and UML physical models of your dictionary component sets.

Monday Oct 21, 2013

Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design combined with menu selections and automatic form generation directly from XSD schema definitions.

The architecture used is extremely lightweight, portable, open and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems.

Two videos available on YouTube walk through the process, first at an introductory conceptual level and then as a deep dive into the programming needed using JDeveloper, Oracle BPM Composer and Oracle WLS (WebLogic Server), along with the CAM Editor and Open-XDX open source tools.

Combining Oracle BPM with these open source tools provides a comprehensive, simple and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can also be integrated using open data formats, with either XML or JSON, along with CRUD access via the Open-XDX Java component. The Open-XDX tool takes a code-free approach in which data mappings are configured as templates using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input), and the appropriate SQL statements are created to support the data access.

Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library. Again, minimal Java coding is needed to associate the XML source content with the named PDF fields.
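The post does not name the specific Java PDF form-filling library used; purely as a hedged sketch, here is roughly how that field association looks using Apache PDFBox's AcroForm API as one possibility (the file and field names are made up for illustration):

import java.io.File;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.interactive.form.PDAcroForm;

// Sketch only: PDFBox is one possible library, not necessarily the one
// used in this post; the file and field names are hypothetical.
public class FillForm {
    public static void main(String[] args) throws Exception {
        PDDocument doc = PDDocument.load(new File("report-form.pdf"));
        PDAcroForm form = doc.getDocumentCatalog().getAcroForm();
        // Associate content pulled from the XML template with a named PDF field
        form.getField("StudentName").setValue("value taken from the XML source");
        doc.save("report-filled.pdf");
        doc.close();
    }
}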

The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates. This dramatically simplifies development work as all the integration artifacts needed are created by the open source editor toolset.

The developer-level video is designed as a tutorial with segments, hands-on demonstrations and reviews. This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu driven, while Java coding is limited to configuring values and parameters and performing builds and deployments from JDeveloper and Oracle WLS.

Existing Oracle online training resources on Oracle BPM and WLS can be referenced for other routine delivery aspects such as user management and application deployment.

The main focus is integrating JSON handling alongside the existing XML capabilities, giving developers the ability to use either or both from a single set of infrastructure.

This provides JSON developers with the ability to quickly build visual data models, use robust XML content validation services and generate XSD schema and JAXB bindings, without having to do all those tasks by hand or know the nuances of complex XSD schema or XML handling.

For XML developers it provides a rapid ability to use JSON as an option in their information exchanges and web service integration for supporting mobile and web-based application needs.
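As a small hedged sketch of where those generated artifacts fit: once the editor has produced an XSD schema and JAXB bindings, a conforming document loads with standard JAXB code. The StudentDetails class name here is hypothetical, standing in for whatever classes get generated, and echoes the report example mentioned below:

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

// StudentDetails stands in for whatever class JAXB generates from the
// XSD; the file name is a placeholder.
public class LoadReport {
    public static void main(String[] args) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(StudentDetails.class);
        Unmarshaller u = ctx.createUnmarshaller();
        StudentDetails report = (StudentDetails) u.unmarshal(new File("student-details.xml"));
        System.out.println("Loaded report: " + report);
    }
}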

In addition to these new JSON capabilities, the existing functionality has significantly improved performance and capability. The CAMV validation engine now runs up to 20 times faster for large XML validation input and for templates containing setChoice() rules. For comparison, a 500+ rule validation template run against a large 15MB sample COBie CAD/CAM smart building XML export now completes in 19 seconds instead of over 9 minutes.

The drag-and-drop handling of dictionary components has been similarly improved. Large sets of components are now inserted in real time with low memory overhead, dramatically improving the user experience and the ability to quickly build information exchanges from XML dictionaries of predefined domain components. The video shows the Education domain being used to rapidly build a StudentDetails report with grades, achievements and student data.

For the Open-XDX open data API toolset we have added bi-directional support. This means that, using the same CAM template and the SQL drag-and-drop interface, you can design Update/Insert SQL database web services alongside the query services. Again the focus is on providing simple and rapid application development support. Example code and resources can be found at our GitHub site, while online demonstrations are available from the VerifyXML.org site.

Further enhancements include a new Dictionary Evaluation report. This tool analyzes the XML components in a dictionary and highlights design issues, omissions, duplicates and more that would be extremely tedious to detect by hand. This allows a development team to collaboratively improve the quality of their core components and their reuse across a project implementation.

Last but not least, we have improved XSD schema importing and exporting, resolving a range of complexity nuances not previously handled and thereby improving accuracy and compatibility with XSD schema.

Friday Apr 26, 2013

Background

Before there was either XML or JSON there was EDI. JSON is very reminiscent of EDI, both syntactically and conceptually, and so are the claims made back then as to why EDI would be sustained over XML. EDI was lightweight, human readable, fast to process and compact; it worked well with existing systems exchanges and interfaces and had a dedicated following of advocates. But EDI has significant flaws: it is brittle, difficult to extend, and has weak data typing, content validation and rendering support. Semantics in EDI are also very limited, relying on externally referenced specifications and local human knowledge that is notoriously difficult to align across implementations. Code list value sets and cross-field content validation rules were especially problematic for EDI.

Moving past these limitations, standards-setting organizations have adopted XML technologies as the primary toolset for defining information exchange specifications. Furthermore, there is an extensive family of XML technologies that supports the complete ecosystem of semantics, particularly the need for interoperability, security, and common meaning and rules. The diagram here illustrates this.

Figure 1 – Information Exchange Conceptual Components

Referencing this diagram, JSON is restricted to the Structure and Content capabilities. XML, on the other hand, provides the ability to handle rich exchanges where all the aspects shown can be managed. In today's challenging commercial and government information sharing world you must have the complete set of robust capabilities available.

The primary JSON use case

JSON is designed for web client interfaces to web services on the internet. Essentially it is serialized JavaScript objects, which makes it a strong fit for the native client-side scripting that all the major web browsers provide.

While XML does not fit that scenario as well, there are many equivalent solutions using different interfacing in the browser, such as Adobe Flash, Microsoft InfoPath, Oracle ADF, or open source solutions such as NetBeans forms, all of which use XML. One advantage of these is the 'write once, deploy anywhere' approach covering tablet, smartphone, or web browser.

XML and JSON Performance Analysis

The presumption that "fat" XML is slow and resource-demanding compared to JSON's lightweight payload does not hold up to testing. An experiment with 33 different documents and almost 1,200 tests on the most commonly used browsers and operating systems found the performance of the total user experience (transfer, parsing and querying of a document) to be nearly identical for both XML and JSON formats.

Clearly this shows that you should perform experiments, testing your own data and code with your own users and devices to determine real results. What "seems obvious" is not always true.

A selection of useful links to people's opinions and thoughts

We present here a selection of "what does the internet think?" resources to give context to the use of JSON and insights into processing and handling content in a web browser delivery context.

"We are conducting an experiment in XML and JSON performance in
browsers and would appreciate anyone with a couple minutes to spare
to visit this website and push one button.http://speedtest.xmlsh.org
(the results will be analysed and published - at this coming
Balisage 2013)

Summary and Conclusions

The number one thing to notice here is that you are reading this document, and it is being delivered and rendered to your computer screen using XML, RSS and XHTML, not JSON.

Back in the day when XML was brand new, Bill Gates held a press conference to announce that Microsoft would be adopting XML wholesale for use across its products and the Windows operating system. Today XML is ubiquitous and extensible (that is in its name). There is now a huge number of XML-based standards in a family of solutions that support all aspects of information exchange needs. In today's challenging world you cannot simply discount those as unnecessary.

When you look at information exchanges, the diagram provided in the introduction section above shows the complete ecosystem of components that you need for effective, consistent, trusted, predictable, reusable and extensible information flows. We can also see that JSON is missing key delivery control and semantic pieces, and thus JSON has a very limited mission profile. Within that profile, when fit-for-purpose, it can be effective, but as a general solution it does not meet all the extended requirements.

Clearly JSON has its niche following and will continue to serve its primary use case of web-based point-to-point client-server information exchanges. That is not necessarily a bad thing; having lightweight alternative solutions is perfectly acceptable for many content delivery circumstances.

People should not confuse business operational convenience with overall applicability - e.g. Twitter and FourSquare dropping XML and relying solely on JSON. Both of these services use simplistic formats entirely under their own control that are unlikely to change in the future. There are also competitive reasons: JSON's limited semantics can actually make it harder for competing sites to harvest, analyze, reuse and republish their content.

As a technology XML continues to improve, and its use is being better optimized and refined, with tooling support that is narrowing the gap in areas where JSON claims to have the technical edge today. Specifically we can point to Oracle's work on Open Data APIs using Open-XDX, which supports both XML and JSON outputs, and the accompanying CAM templates approach with NIEM, which enables content providers to rapidly build working web services and user form interfaces from SQL data stores.

In short, we can expect both XML and JSON to continue to fulfill information delivery needs going forward, but the differentiations are likely to blur. Neither one is going to displace the other in its core areas of use. Providing the capability to use and support both is not a significant burden and thus meets personal preferences and local project nuances.

To get a sense of all this through a brief real-time interactive example, you can try these two live demonstration service points.

This one uses XML when you click here. And this one does the same thing (it's actually the same Open-XDX service component) but returns JSON instead when you click here.

Addendum

JSON is much simpler than XML.
JSON has a much smaller grammar and maps more directly onto the
data structures.

Simplicity is deceptive. Syntactically, XML can easily be used as simply as JSON. But that simplicity comes at the price of ignoring many common, more robust information sharing needs in an extended network, rather than just point-to-point.

The mapping referenced here is for
objects within a JavaScript environment only. Outside of that
context this is not so. All major programming environments have
robust XML support.

Extensibility

XML is extensible because it is a
mark-up language

JSON is not extensible because it
does not need to be. JSON is not a document markup language, so it
is not necessary to define new tags or attributes to represent
data in it.

This is a naïve view. Things change constantly with new information sharing needs, particularly as more participants are added to exchanges and standards evolve. Only in limited cases, such as Twitter, do we see fixed formats.

Interoperability

XML is an interoperability
standard.

JSON has the same interoperability
potential as XML.

JSON clearly has significant
limitations and gaps with regard to information semantics and
reuse.

Openness

XML is an open standard

JSON is at least as open as XML,
perhaps more so because it is not in the center of
corporate/political standardization struggles.

This is a highly subjective statement. XML has proven to be universally adopted and implemented, not just in software but in firmware devices and communications systems. Note that the JSON work is no more immune from manipulation than anything else, as happened with JavaScript itself.

Human Readable

XML is human readable

JSON is much easier for humans to read than XML. It is easier to write, too. It is also easier for machines to read and write.

Again this is an entirely subjective statement. Markup is markup; there is no "easier" here. Machines have no notion of "easier". The notion of "easier to read" - and presumably to comprehend the meaning of - is notoriously hard to define.

Exchange Formats

XML can be used as an exchange
format to enable users to move their data between similar
applications

The same is true for JSON

Agreed.

However XML also has security and
other capabilities that are absent from JSON.

Structure

XML provides a structure to data
so that it is richer in information

The same is true for JSON.

However XML can provide deeper
structuring than JSON supports. It can also handle more extended
content types.

Processed

XML is easily processed because
the structure of the data is simple and standard.

JSON is processed more easily
because its structure is simpler.

Again this is entirely subjective. See the link provided in the links section on machine timing tests.

Code Re-invention

There is a wide range of reusable
software available to programmers to handle XML so they don't have
to re-invent code

JSON, being a simpler notation,
needs much less specialized software

JSON is mainly available in JavaScript and not in a wide range of programming environments. Further, it is not the simplicity of the syntax but the drastically reduced capabilities; hence JSON provides only very limited functionality.

XML separates the presentation of
data from the structure of that data.

XML requires translating the
structure of the data into a document structure.

JSON structures are based on
arrays and records.

This is true only in the context of data within web browser memory, whereas XML is the native format that underpins the spreadsheets, databases and array stores that JSON content must ultimately be persisted to and from!

A common exchange format

XML is a better document exchange
format. Use the right tool for the right job.

JSON is a better data exchange
format.

Again this is entirely subjective and no metrics are given here. What defines "better"? Clearly JSON is significantly less capable and restricted in its use cases. Therefore "your mileage may vary in actual use" would be an appropriate caution when trying to measure what is "better", where and how.

Data Views

XML
displays many views of one data

JSON does not provide any display
capabilities because it is not a document markup language.

XML has broader applicability: you can write once and use everywhere, while JSON can expect to be converted into XML for such extended uses.

Self-Describing Data

This is a key XML design
objective.

XML and JSON have this in common.

However XML has richer semantics
available than JSON.

Complete integration of all traditional databases and formats

(Statements about XML are
sometimes given to a bit of hyperbole.) XML documents can contain
any imaginable data type - from classical data like text and
numbers, or multimedia objects such as sounds, to active formats
like Java applets or ActiveX components.

JSON does not have a <![CDATA[]]> feature, so it is not well suited to act as a carrier of sounds or images or other large binary payloads. JSON is optimized for data.

Visual content is data! Ask the FBI analysts reviewing the recent Boston attacks. One could also say that JSON is limited to simple basic data content and lacks extended validation for things such as code values and date and number formatting.

Internationalization

XML and JSON both use Unicode.

XML and JSON both use Unicode.

However JSON has limitations in
its use of encoding and exchanges.

Open
and extensible

XML's one-of-a-kind open structure allows you to add other state-of-the-art elements when needed. You can always adapt your system to embrace industry-specific vocabularies.

Those vocabularies can be
automatically converted to JSON, making migration from XML to JSON
very straightforward.

Exactly: if you have XML it is trivial to generate JSON. The reverse, however, is not the case.

Readability

XML is easily readable by both
humans and machines

JSON is easier to read for both
humans and machines

This is an entirely subjective
statement. The better answer is that well written XML and JSON are
equivalent for human and machine handling.

Object-Oriented

XML is document-oriented.

JSON is data-oriented. JSON can be
mapped more easily to object-oriented systems.

The reverse is an issue, however: objects do not necessarily map easily to documents. Also, not all content is objects, which actually constrains the use model. XML, on the other hand, is well equipped for use as object-oriented content as well as documents.

Adoption

XML is being widely adopted by the computer industry

JSON is just beginning to become
known. Its simplicity and the ease of converting XML to JSON make
JSON ultimately more adoptable.

The use of JSON is limited to web
client-server scenarios. Within that domain it is popular.
Outside of that domain XML completely dominates.

Wednesday Mar 27, 2013

The focus for this release is improved collaboration support, including better dictionary generation, models, reports and spreadsheets, plus enhancements to the rules entry tools and rules processing. New for this release is support for Italian language localization.

The new XPath conditional rule entry wizard makes XPath rule definition significantly easier for cross-field validations and more. We have also improved the rule handling in the CAMV engine to be more consistent.

For collaboration, dictionary collections can now be located at a URL, on a file system, or stored in the Oracle Enterprise Repository (OER). Coupled with this are the new consistent dictionary collections and database connections manager tools for configuration management. There is also better generation of dictionaries from spreadsheets and a new spreadsheet-to-dictionary XSLT utility. Dictionary XML component generation has been improved too, adding a new Components section to itemize components in dictionaries, along with more consistent handling of dictionary content types, rules and annotations.

XSD schema importing and exporting now supports the use of Appinfo tags for application-specific detailing of exchange data relationships.

For models we have enhanced the MindMaps to include color coding of Added and Updated annotations, plus SQL DB mappings and choice items.

For reports we have added a new Export to XML option for the popular Tabular Report view. This exported XML is compatible with importing into an Excel spreadsheet or can be custom rendered using a stylesheet or XSLT transformation.

Several enhancements have been made to the CAMV validation engine, along with XSD schema generation and annotation handling. For Open-XDX SQL data integration we now have a nifty utility that can generate MySQL database tables from CSV text file data exports.

In summary, the new CAM Editor V2.4 provides the following improved functionality:

All new XPath rules entry Wizard tool

Significantly enhanced Dictionary generation

Collaboration support including Oracle Enterprise Repository (OER) and URL locations

Saturday Mar 02, 2013

Loading CSV text data into a MySQL database table is an art form. The text data can trip you up for a variety of reasons, from non-unique keys to missing data columns to invalid number or date formats.
I recently needed to load over 20 such tables from a SQL Server CSV text file dump. The same techniques would work for data from an Excel CSV text file too.

To automate the process as much as possible I wrote a quick XSLT utility (called converter-txt-2-sql.xsl) that reads in the CSV text file, examines the first line containing the table column field names, then analyses sample values by scanning all the input data lines, and generates a valid CREATE TABLE {name} ( {column(s)} ); SQL statement. In a second pass it builds the INSERT VALUES ({data}) statements for all the following data lines in the CSV.

It does a pretty good job, about 98% of what you need. You still need to do some manual editing of the generated CREATE TABLE SQL. Essentially it can only guess at the lengths for each column - so you may want to adjust those manually, along with setting the key field column name (it assumes the first column for that) and whether your data is nullable or unique, and so on. But those are quick edits once it has all the basics there for you.

It assumes that each line in the CSV text file input is one data record; so you cannot have linefeeds inside your data values, only one at the end of each line - a fair assumption most of the time.

Having got it working I was able to load up the twenty tables in less than an hour. There is still room to improve the XSLT logic to handle various edge conditions better, but for the time I invested in writing the XSLT it's at a fair level of maturity, awaiting the next project to see if it needs more refinement. It is also a nifty example of using XSLT to read in a text source file and output text (in this case SQL statements). Note: depending on your XSLT processor (I used Saxon) you may need to feed in a dummy XML file, e.g. <dummy/>, just to satisfy the processing engine.

I found it worked well for migrating SQL Server tables quickly into MySQL, working just from the raw CSV text export files from SQL Server that had been sent to me. It's not completely perfect, but it should suffice for proof-of-concept purposes and quick demonstrations, and you have to know what you are doing to resolve syntax and data integrity errors. However, I was able to load over 50,000 data records well enough.

Of course if you can get a live connection from MySQL to SQL Server then you can use the built-in migration tools MySQL has. This little XSLT utility is useful when you do not have that option. Or if people are using Excel spreadsheets with data tables and you want to convert those over to SQL tables.
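For readers who prefer to follow the logic in Java rather than XSLT, the same two-pass idea looks roughly like this (a minimal sketch, not the converter-txt-2-sql.xsl utility itself; it splits naively on commas and uses a placeholder column type where the real utility scans the data to guess lengths):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Minimal two-pass CSV-to-SQL sketch; the file and table names are
// placeholders, and commas inside quoted values are not handled.
public class CsvToSql {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Paths.get("export.csv"), StandardCharsets.UTF_8);
        String[] cols = lines.get(0).split(",");

        // Pass 1: CREATE TABLE from the header line; the real utility
        // scans all rows to guess sensible column types and lengths.
        StringBuilder sql = new StringBuilder("CREATE TABLE mytable (\n");
        for (int i = 0; i < cols.length; i++) {
            sql.append("  ").append(cols[i].trim()).append(" VARCHAR(255)")
               .append(i < cols.length - 1 ? ",\n" : "\n");
        }
        sql.append(");\n");

        // Pass 2: one INSERT per data line, escaping single quotes.
        for (String line : lines.subList(1, lines.size())) {
            sql.append("INSERT INTO mytable VALUES ('")
               .append(line.replace("'", "''").replace(",", "','"))
               .append("');\n");
        }
        System.out.print(sql);
    }
}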

Saturday Jan 12, 2013

An ongoing issue for XML transaction processing is UTF-8 character conformance. In an ideal world your computer would simply process your information content stream, store it and move on. XML engineers, however, have other ideas.

Content created in Microsoft Excel or Word, or in a web page application on a Windows desktop, uses the Windows-1252 character set by default; however, this content often ends up in XML document instances labelled as UTF-8 encoding.

A conforming XML parser such as Xerces will then kick out invalid byte sequence errors when attempting to process the content. It turns out the really simple answer is to change the encoding declaration in the XML prolog to say Windows-1252, e.g.

<?xml version="1.0" encoding="Windows-1252" standalone="yes"?>

and then retry. Of course, if you know you are using a different character encoding, substitute that for the Windows-1252 value here instead.

Now for automated batch processes you will need a simple piece of XSLT to switch / add the correct encoding.

You can find more tips and tricks on all this - plus links to XSLT tools to help - on the CAM Editor wiki page.
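If XSLT is not convenient for the batch fix-up, a few lines of Java can do the equivalent by decoding the file as Windows-1252 and rewriting it as genuine UTF-8 (a sketch; the file names are placeholders):

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Decode a mislabelled file as Windows-1252 and re-encode it as real
// UTF-8 so the prolog's encoding="UTF-8" declaration becomes truthful.
public class FixEncoding {
    public static void main(String[] args) throws Exception {
        byte[] raw = Files.readAllBytes(Paths.get("input.xml"));
        String text = new String(raw, Charset.forName("windows-1252"));
        text = text.replaceFirst("encoding=\"[^\"]*\"", "encoding=\"UTF-8\"");
        Files.write(Paths.get("output.xml"), text.getBytes(StandardCharsets.UTF_8));
    }
}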

Another issue is simply locating the offending characters inside an XML instance - for that you can use this handy command line grep statement:

grep --color='auto' -P -n "[\x80-\xFF]" file.xml

All this then allows you to diagnose potential character set conflicts and hopefully build smoothly functioning XML interfaces. For XML content validation you can of course use the CAMV validation engine - and you can find out more on that from this YouTube resource site, which shows a video on the topic (various NIEM training aspects are also included).

Monday Nov 05, 2012

Our new XML Validation Framework tutorial video is now available. See how to easily integrate code-free adaptive XML validation services into your web services using the Java CAMV validation engine.

CAMV allows you to build fault-tolerant content checking with XPath rules that can optionally use SQL data lookups. This can provide warnings as well as error conditions, tailoring your validation layer to exactly meet your business application needs.
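The CAMV API itself is not shown in this post; purely as a loudly hypothetical sketch, the errors-versus-warnings distinction might be consumed along these lines (every class and method name here is invented for illustration):

import java.io.File;

// Hypothetical names throughout -- the real CAMV API may differ; this
// only illustrates acting differently on errors versus warnings.
CAMValidator validator = new CAMValidator(new File("order-template.cam"));
ValidationResult result = validator.validate(new File("order.xml"));
for (ValidationIssue issue : result.getIssues()) {
    if (issue.isError()) {
        rejectMessage(issue);  // hard failure: block the transaction
    } else {
        logWarning(issue);     // soft failure: accept but flag for review
    }
}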

The video also covers developing test suites using Apache ANT scripting of validations. This allows a community to share sets of conformance checking tests and tools.

On the technical XML side, the video introduces XPath validation rules and illustrates the concepts of XML content and structure validation. CAM validation templates allow contextual, parameter-driven, dynamic validation services to be implemented, compared to using a static and brittle XSD schema approach.

The SQL table lookup and code list validation are discussed and examples presented.

Features are highlighted, along with a demonstration of the interactive generation of actual live XML data from a SQL data store and then validation processing, complete with error and warning detection.

The presentation provides a primer for developing web service XML validation and integrating it into a SOA approach, along with examples and resources. Alignment with the NIEM IEPD process for interoperable information exchanges is also discussed, along with NIEM rules services.

The CAMV engine is a high-performance, scalable Java component for rapidly implementing code-free validation services and methods. CAMV is a next-generation WYSIWYG approach that builds on older Schematron-based interpretive runtime tools and provides a simpler declarative metaphor for rule definition.

Thursday Oct 25, 2012

Learn how to build a working XML query/response system with SQL database access and XML components from an example NIEM schema and dictionary.

Software development practitioners, business analysts and managers will find the materials accessible and valuable in showing the decision-making processes that go into constructing a working XML exchange.

The 22-minute video available online shows how to build a fully working ULEXS-SR exchange using a vehicle license search example. Also included are aspects of NIEM training for assembling an IEPD schema with data models.

The materials are focused on practical implementers; after viewing the instruction material you can use the open source tools and apply them to your own SQL-to-XML use cases and information exchange projects.

All the SQL and XML code, editor tools, dictionary and instructions that accompany the tutorial video are also available for download, so you can try everything yourself.

Tuesday Oct 09, 2012

The perennial question is: how do you easily generate XML from SQL table content? The latest CAM Editor release tackles this head-on by providing a powerful yet simple toolset.

Firstly, you can visually browse your SQL tables and then drag and drop columns and tables into the XML structure editor. This gives you a code-free method of describing the transformation you require, so you do not need to know about the vagaries of XML and XSD schema syntax.

Secondly, you can map directly into existing industry domain XML exchange structures in the visual XML editor; again there is no need to wrestle with XSD schema, and you have WYSIWYG visual control over what your output will look like.

If you do not have a target XML structure and need to build one from scratch, the CAM Editor makes this simple. Switch the SQL viewer into designer mode, then take your existing SQL table and drag and drop it into the XML structure editor. The XML wizard tool will automatically take your SQL column names and definitions, create the equivalent XML for you, and insert the mappings.

Simply save the structure template, run the Open Data generator menu option, and your XML is built for you.

Sunday Oct 07, 2012

Creating actual working XML exchanges - loading data from data stores, generating XML, testing, integrating with web services, and then deployment and delivery - takes a lot of coding and effort. Then there is writing the documentation, models and schema, performing naming and design rule (NDR) checks, and packaging all of this together (such as for NIEM IEPD use).

What if there was a tool that helped you do all that easily and simply?

Welcome to the new Open-XDX and the CAM Editor!

Open-XDX uses code-free techniques in combination with CAM templates and visual drag and drop to rapidly design your XML exchange. Open-XDX will then automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug-and-play with your middleware stack, all with just a few lines of Java code (about 5 actually).
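Those few lines are not shown in the post; purely as a hedged illustration of the scale involved, the hook-up might look something like this (every Open-XDX class and method name below is hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;

// All Open-XDX names here are invented for illustration -- the real
// API may differ; only the JDBC calls are standard.
Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
OpenXDX xdx = new OpenXDX("VehicleQuery.cam");  // CAM template with SQL mappings
xdx.setParameter("licensePlate", plate);        // optional filter parameter
String xml = xdx.generateXML(conn);             // SQL is generated and run internally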

You can build either Query/Response or Publish/Subscribe services from existing data stores to XML literally in minutes. To see a demonstration of using Open-XDX with a MySQL data store and integrating with Oracle WebLogic Server, please see this short video - http://youtube.com/user/TheCameditor

There is also a Quick Guide available that provides more technical insights, along with a sample pack download of templates and SQL that you can try for yourself.

Tuesday Oct 02, 2012

Good to see that someone else has picked up on this. Of course, we have had this feature in the CAM Editor (http://www.cameditor.org) for over a year now - so it's good to see the mainstream spotting how useful this is as well.