I want to get a collection of different groups to describe their analysis pipelines in a standard way, to make it easier to see where people are doing the same thing and where they are doing something different for the same sort of analysis.

I think the sort of attributes this file would need for each step in a pipeline would be:

inputs, outputs, program, version, command line.

It would be good to have something which also states the order of the steps.

I know the SRA/ENA analysis XML allows for at least some of this, but that is quite heavyweight, so I am hoping for a custom format, or something using JSON syntax, so it is both human readable as well as allowing some programmatic parsing.

Before I specify something myself: is there an existing solution which provides most, if not all, of the functionality I want?

This isn't necessarily meant to be something someone could use to run the pipeline, but rather a description of the command lines used, so that if someone else wanted to install all the tools and rerun the process using their own pipeline infrastructure, they could, or so that you can simply compare how different the same steps are across pipelines. I hope something like this will also help me improve our README files for our analyses.

I am trying to find an example of the Galaxy JSON you mention, but I couldn't come up with the right search terms. Do you have an example?

Here is a good overview of using makefiles for bioinformatics analyses. A makefile is a way to write out a DAG of targets and dependencies.

Writing out a DAG in JSON is certainly doable (see the convoluted example below), but for lack of finding anything prepackaged, it has seemed to me like a lot more work: I would have to design my own structure of ordered targets, dependencies and parameters as a nested set of JSON objects and lists, and then write all the external code to validate the graph structure and its components, as well as process dependencies into target end products.

JSON is more readable, but it is also more verbose ("heavyweight") as a consequence. Making sure all the bits and pieces are in the pipeline document seems like a strong prerequisite. If you want to go this route, you might look into JSON Schema (http://json-schema.org) to design a "meta"-language, or schema, for your graph, which can help ensure that individual instances of a pipeline are correct before processing: you write the schema once, and then write JSON-formatted pipelines that validate against it.

Here is one very rough example of such a schema document, which defines inputs (sets of genomic intervals, essentially), operations applied on those sets to create outputs, and a vocabulary of properties and parameters that might be useful for staging and processing (datetime stamp, ID fields, descriptive metadata, etc.):
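A rough sketch of what such a schema might look like, written against JSON Schema draft-04. All of the field names here (`inputs`, `operations`, `target`, and so on) are illustrative choices, not any standard vocabulary, and the `format` enum is just a few common genomic file types:

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "pipeline",
  "type": "object",
  "required": ["id", "created", "inputs", "operations"],
  "properties": {
    "id":          { "type": "string" },
    "created":     { "type": "string", "format": "date-time" },
    "description": { "type": "string" },
    "inputs": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["name", "path"],
        "properties": {
          "name":   { "type": "string" },
          "path":   { "type": "string" },
          "format": { "enum": ["bed", "starch", "vcf"] }
        }
      }
    },
    "operations": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["target", "dependencies", "command"],
        "properties": {
          "target":       { "type": "string" },
          "dependencies": { "type": "array", "items": { "type": "string" } },
          "program":      { "type": "string" },
          "version":      { "type": "string" },
          "command":      { "type": "string" }
        }
      }
    }
  }
}
```

Note that each operation carries the attributes asked about in the question: inputs (dependencies), output (target), program, version and command line.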

The following is an example of a JSON-formatted instance of a processing pipeline, which would validate against this schema. The goal is to show a graph that would take transcription start sites, filter them for belonging to the CTCF factor, and then apply the equivalent of a bedmap operation against them and a list of promoter windows of interest on chromosome 16:
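A sketch of what such an instance might look like, assuming schema fields along the lines described above; the file names, versions and timestamp are all placeholders:

```json
{
  "id": "tss-ctcf-promoters-chr16",
  "created": "2014-01-01T00:00:00Z",
  "description": "Filter TSS for CTCF and map against chr16 promoter windows",
  "inputs": [
    { "name": "tss",             "path": "tss.bed",             "format": "bed" },
    { "name": "promoters_chr16", "path": "promoters.chr16.bed", "format": "bed" }
  ],
  "operations": [
    {
      "target": "tss_ctcf.bed",
      "dependencies": ["tss.bed"],
      "program": "awk",
      "version": "4.0.1",
      "command": "awk '$4 == \"CTCF\"' tss.bed > tss_ctcf.bed"
    },
    {
      "target": "answer.bed",
      "dependencies": ["promoters.chr16.bed", "tss_ctcf.bed"],
      "program": "bedmap",
      "version": "2.4.2",
      "command": "bedmap --echo --count promoters.chr16.bed tss_ctcf.bed > answer.bed"
    }
  ]
}
```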

It would be the job of whatever service parses this JSON request or payload to decide which genomic sets are inputs that already exist (dependencies) and which are targets yet to be made, requiring backend processing steps.

There are various libraries written to process JSON and validate JSON against a JSON Schema document. In Python, for instance:
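Here is a minimal sketch using the third-party `jsonschema` package (`pip install jsonschema`); the schema and request below are pared-down illustrations, not a fixed format:

```python
import jsonschema  # third-party: pip install jsonschema

# A pared-down schema: each operation must name its target, its
# dependencies, and the command line that builds the target.
schema = {
    "type": "object",
    "required": ["operations"],
    "properties": {
        "operations": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["target", "dependencies", "command"],
                "properties": {
                    "target": {"type": "string"},
                    "dependencies": {"type": "array", "items": {"type": "string"}},
                    "command": {"type": "string"},
                },
            },
        }
    },
}

# An example pipeline request to validate against the schema.
request = {
    "operations": [
        {
            "target": "tss_ctcf.bed",
            "dependencies": ["tss.bed"],
            "command": "awk '$4 == \"CTCF\"' tss.bed > tss_ctcf.bed",
        }
    ]
}

try:
    jsonschema.validate(request, schema)
    print("request validates")
except jsonschema.ValidationError as exc:
    print("invalid request:", exc.message)
```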

If the request doesn't validate, a ValidationError exception is thrown, with errors that point to the offending JSON object in the request. If the request does validate, that doesn't rule out problems with the schema itself, but it's a good start for testing and validation.

Maybe there is a suite of tools written that do all of this already, but I wasn't able to find one. Hopefully someone more knowledgeable will comment, or hopefully this post gives some ideas of what could potentially be done.

GNU Make seems to do a lot of the heavy lifting, and the tools to process a makefile are ubiquitous on UNIX-like systems such as Linux and OS X, so it is perhaps reinventing the wheel to translate this system into another language. I mean, all that JSON above can basically be reduced to something like:
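A makefile equivalent of the two-step pipeline might look like this (file names and tool flags are illustrative, as above; recipe lines must be indented with tabs):

```make
all: answer.bed

tss_ctcf.bed: tss.bed
	awk '$$4 == "CTCF"' tss.bed > tss_ctcf.bed

answer.bed: promoters.chr16.bed tss_ctcf.bed
	bedmap --echo --count promoters.chr16.bed tss_ctcf.bed > answer.bed
```

Note the doubled `$$` in the awk recipe, which escapes the dollar sign from Make's own variable expansion.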

Further, if any dependency changes (say, the set of transcription start sites), then only targets downstream of the changed dependency get remade, which is more efficient. This could be replicated with a JSON-based approach, but it requires yet more coding.