There is only one way to write files to the DataPower filesystem without going through an XML Management set-file request:

<dp:dump-nodes> allows storing a nodeSet under a specified filename in the "temporary:" folder.

This has one limitation: you cannot store arbitrary (binary) files that way.

Adding this functionality (writing binary files) to dump-nodes was what RFE 71699 was about.
The customer knew that storing base64-encoded binary data is possible, but wanted to be able to write raw binary data.

OK, an RFE (Request For Enhancement) asks for new functionality in a next major release. RFE 71699 was rejected because writing binary data to the temporary: folder is already possible with today's firmware. Yes, you need the latest 7.2.0.x firmware (for dp:gatewayscript() and GatewayScript readAsXML()), but 7.2.0 is definitely available earlier than a next major release would be. The RFE developer update describes in short how to do it with the 7.2.0 firmware; this blog posting will give you the details.

OK, first we have to write the (binary) data to a file in the "temporary:" folder with GatewayScript. This can be done via the "fs" (filesystem) API, e.g. by writing a Buffer. The tricky part is how an XSLT stylesheet can pass a binaryNode to GatewayScript. A binaryNode is handled in DataPower as "special" XML, and therefore you have to read the input via readAsXML(); the returned nodelist will contain just a single element for a binaryNode passed from XSLT, and that can easily be ".toBuffer()"ed. So this is the complete GatewayScript fs.write.js (click for download):

Now we only need to pass a binary node to the dp:gatewayscript() call in XSLT. Stylesheet fs.write.xsl (click for download) shows that there is no magic at all: just pass the binaryNode as "input" in dp:gatewayscript() [you have to store the "fs.write.js" above in the "local:" folder].

I wanted to be able to do the same or something similar [of course storing as a file and viewing via gimp or a browser is possible].

Back in 2011 I gave two WSTE webcasts on "Non-XML Data Processing in WebSphere DataPower SOA Appliances Stylesheets". The second webcast shows on slide 28 how I converted a bitmap image into textual output making use of Braille Patterns.

This is the conversion of snowman.pbm (.pbm is the portable bitmap format from the netpbm tools):

Typically only the top 2x3 dots of the 2x4 grid get used; as you can see above, I used all 2x4.

samples.txt[.pre.html] contains various sample outputs (shown below), which are also part of pbmtobraille.c's comments.

9x9.pbm is a really crazy parsing sample according to the .pbm spec, following this statement:
"Mr. Poskanzer cautions that programs that read this format should be as
lenient as possible, accepting anything that looks remotely like a pixmap."

This is the header section demonstrating basic use with pbmtext output, including negation of the generated output, as well as the help line listing the tool's features:

This is the top of the tail comment section, showing graphviz output produced by pbmtobraille:

And finally this is the bottom section, showing a bigger layout in the vertical direction (layout=TB, Top to Bottom, is the default):

The whole thread discussed that handcrafted FFDs are not supported and referred back to this 2012 posting on the options. It also listed the only Enhancement Request for FFD processing filed since 2007 (FFD PMRs were fixed, of course).

Further below I will show how easily binary data processing can be done with GatewayScript (available since the 7.0.0.0 firmware). But before that, let's summarize all DataPower Non-XML data processing options here in one place:

One comment on option 4: while this works without a DataGlue license (these days an XG45 without the DIM option), you have to "pay" the price in the form of added latency and memory consumption caused by the attachment processing that technique needs.

The basic GatewayScript data structure for processing binary data is the Buffer object.

For reading binary input we use the readAsBuffer() method; its documentation tries to steer people toward the Buffers object:

When contexts are small, use the readAsBuffer() function. Use the readAsBuffers() function when a context is large. The readAsBuffer() function requires a contiguous memory allocation. The readAsBuffer() function is faster but is more demanding on the memory system. The readAsBuffers() function does not require contiguous memory to populate the Buffers object.

Use of Buffers might be valid for some Non-XML processing, but when the application needs access to the whole input, I prefer Buffer.

The good news is that the first (workable) Non-XML sample program can be found in the readAsBuffer() documentation itself. It is a binary identity operation with error handling. Here you can see rAB.js:

Since the binary identity is not that interesting, let's now see the binary reverse operation from the "... without DataGlue license" posting. Adding 5 lines to rAB.js does the job. Here is reverse.js:

Now let's see what both do on the sample input from the "... without DataGlue license" posting:

The last question to be answered concerns the runtime of rAB.js and reverse.js on the 10MB input. That can be answered easily based on the ExtLatency logging target from the coproc2gatewayscript again blog posting:

...,AXF=137,AGS=908,......,AXF=134,AGS=147,...

So the reverse operation on 10MB of data (read binary data, reverse it, output the result to a context) took (908-137)=771msec.

There is a "front" XML-FW listening on port 6001, gzipping the Non-XML input and dispatching to the /gz XML-FW on port 6002 or the /gz-hash XML-FW on port 6003.

In order to gzip the Non-XML input, both rules have to set "gzip" as Output-Filter and switch Non-XML Processing to "on" (via the Objects screen).

The gz-hash service on port 6003 has a Non-XML Transform Action with output context "xwa" generating the hash. Next is a Results action that attaches the Non-XML input to the "xwa" context. Because the Non-XML input is the gzipped input from the "front" service, this does the right thing.

Last, a Results action returns the "xwa" context to OUTPUT (including the attachment, as MIME, see above).

Here is a combination of 4 screenshots of the whole gz-hash policy.

And this is stylesheet "hash.xsl", which hashes the Non-XML input data using the dp:hash-base64() DataPower extension function. I got confirmation that the hash computed by DataPower matches the hash computed for the same file by the Java backend application.

To be able to directly "read" the binary data logged (my "brain base64 decoder" not being that good), stylesheet log-binary.xsl logs the binary data hexadecimally encoded (a 50% message length increase compared to base64 encoding).

The XS40 as well as the new XG45 (without the DIM feature) do not have a "DataGlue" license and allow only very limited Non-XML processing capabilities. One Non-XML feature which is present is the "Convert Query Params to XML" action. It converts Non-XML CGI-encoded input (an HTTP POST of an HTML form, or URI parameters) into an equivalent XML message.

This is a demonstration of the complete sample application "binary-reverse" -- it just reverses any (binary) input data and returns it. This works on boxes without a DataGlue license like the XS40, but on other boxes as well. The 0x00 bytes at the beginning, in the middle, and at the end of the sample input file are only present "to make it more difficult" ...

This is sample output from the 2nd service in the export presented last Monday in Frankfurt. Here you can see how to convert binary input data to a base64- or hexbin-encoded representation for "normal" stylesheet processing, as well as how to return arbitrary binary data from a base64 string generated in a stylesheet:

Before I did the zip2html posting last December, I had a solution without the cool execution of an attached stylesheet.

I extracted all files needed and, because storing on the local filesystem was not possible without going through xml-mgmt, made use of a self-implemented "file cache". In contrast to normal backend response caching, in that scenario the files are available on the "client side", and caching these was not that easy.

But because I finally came up with a purely attachment-based solution for the zip2html tool, there was no need for the "file cache" anymore.

I had discussions with a Techsales colleague at that time; he told me that he needs to cache client data and asked for my cache.

So I separated out my "client request cache", reworked it, and post it here today for anybody who needs to cache client data on DataPower.

By default the document count is 5000 (which you may want to reduce) and the document cache size is 0 (disabled; you need to increase it).
The maximal size of a document cache is 161MB, but keep in mind that any configured document cache memory is lost for transactions.
I set document caching to fixed, with a time to live (TTL) of 59 seconds, for URLs matching "*cache*"; all other URLs are not cached.

Before going into the details, this is what "you get".
First we cache (POST) the document with content "test123" under URL .../cache/0001.
Then we retrieve it (GET) two times successfully.
The big image Screenshot.png then gets cached under URL .../cache/002.
And two more GET requests get it back from the cache.
The "word count" (wc) commands prove that the received and original sizes (273657 bytes) are identical:

And this is a screenshot of the "Status->XML Processing->Document Status" status provider after the above commands.
Here we can see that the small "test123" document gets cached as one document.
The big Screenshot.png gets cached as 3 parts (only the last part is less than 127,000 bytes in size).
And this is the concatenated complete compressed base64 string for retrieval under URL .../cache/002:

Again, before going into the details, find a 3.8.2.6 domain backup and its zip2html tool output attached here.

The three files stored in local:/// directory of that domain (shown further below) are available inlined in the "(all)" link of zip2html file!

Since HEAD is not really helpful for getting client data into the DataPower document cache, only GET remains.
So a caching service on DataPower needs to "send" the client data using GET to a helper service on the same box to do the caching.
Unfortunately the HTTP header size is limited, so even after compressing the client data first, it may not fit into a single request.

The solution I have implemented is this:

Client:

sends some (binary) data of arbitrary size to DataPower for caching under a specific URL

extract the 127,000 bytes received by a "part" GET request and "return" them, which stores them into the document cache

when receiving a "combine" GET request from the 1st service, read all 127,000-byte parts for that request from the
document cache and "return" their concatenation, which stores the whole compressed document into the document cache

So what the file cache does is store files from the client side into the DataPower document cache, circumventing the problem
that you cannot cache files via a POST request -- that's all. See the demonstrations above, and try it out yourself!

I did import and test the above attached backup on 3.8.1.20, 3.8.2.6, 4.0.1.4 and 4.0.2.5 firmwares.

(for me this is the first time that all rule "matches" are HTTP method matches (for GET or POST))