This does not sound more useful than 32bit BMP without compression, or true color, uncompressed TGA. Yes, it is simpler, but only marginally, and I lose the ability to easily work with it using common image editing tools.

I honestly don’t concur. Have you seen the format specifications for BMP and TGA?
Could you write a parser from scratch without having to look at the documentation? If yes, then I definitely owe you a beer.

Also, you definitely do not lose the ability to use common image editing tools. You can basically edit any farbfeld image in place using

$ TMP=`mktemp`; ff2png < image.ff > $TMP; gimp $TMP; rm $TMP;

However, farbfeld is not meant as a storage format, but for use within pipes. You can use it as a storage format, though; nobody stops you. If you use your tools (handling PNGs, for instance), you would just use

$ ff2png < image.ff | your_tool1 | png2ff > image_edited.ff

However, the point of farbfeld is not to replace existing formats, but to make it easier to modify images in a pipeline and make data-interchange between filters as simple as possible.
It should be easier for people to hack a filter themselves and use it on such a format. Please check out invert.c, which is described on the linked page.

When I find the time, I’ll start working on a set of filters which will be installed with farbfeld optionally.

I did a computer vision course at uni (many years ago) where we were doing simple blurring and edge detection. If I’d had farbfeld then, it would have allowed me to focus on the algorithms rather than on parsing the image format. (We were provided a .gif parser/writer, but it was horrendously buggy. In particular it had endianness issues, so it wouldn’t run on my personal machine, only on the lab’s Solaris boxes.)

Interesting anecdote! I heard similar things at my university in the CG courses. They program in Java and use some crazy PNG Java bindings. It obviously distracts from the task at hand.

It took me a long time to get the 16-bit PNG handling right in png2ff, and looking at a lot of code, every example had an issue here or there. Most graphics programs using libpng literally don’t handle 16-bit data (even GIMP up until 2.9!) or mistreat certain formats. If you want to start hacking on something, the format shouldn’t be in your way.

Admittedly, the power of farbfeld only really comes out when also using UNIX pipes and combining filters this way. In a normal academic course, with most people being used to Windows/OS X and unfamiliar with the shell, it might involve a steep learning curve.
However, the lack of pipes in the first place leads to bad design decisions later on, so I’d love to see the knowledge on piping taught more at universities. Instead of starting right off with programming, a basic course on UNIX would be much more useful. It takes a while to learn that, but when I first really saw the power of pipes, it felt like a second birth.

Fair enough, but today you’d just use Pillow and get on with the logic part, no?

In the age of widely-available open-source libraries for dozens of image formats and package managers that make it trivial to use any of them, I just don’t see making it easy to chain together pipelines of tools written in some language so obscure that it doesn’t have a libpng binding available in its dependency manager as an important use case.

The question is: Have the programming challenges in image processing really changed at the consumer level? We have content-aware scaling and some other nifty things, but from what I can tell, most photographers use the same filters and other things they have been using back in 2001.
It’s not because these filters are old ballast or something, it’s just that 99% of the people don’t need exquisite stuff for their image processing, so I don’t see any reason why you wouldn’t write an image filter in C today. It sure is not as much sugar candy as writing it in Java/Node/Ruby,…, but you at least can avoid performance holes early on.

The beauty of C is actually its ugliness, because you know what you’re dealing with. It’s like being an auto-mechanic, having oil and dirt on your hands and clothes. Working on your car without getting dirty has an academic value, but it somehow loses its character along the way, and the cleaner you get the more you just want to get back to your shed and work on some cars for real, with simple tools you understand and not just a cable you plug in a new car, running a digital diagnosis.

I don’t see any reason why you wouldn’t write an image filter in C today. It sure is not as much sugar candy as writing it in Java/Node/Ruby,…, but you at least can avoid performance holes early on.

This is incredibly backwards reasoning. Performance today is 1000x cheaper than it was in 2001. You would need a very strong justification to throw away all the advantages of a modern language for the sake of performance.

The beauty of C is actually its ugliness, because you know what you’re dealing with. It’s like being an auto-mechanic, having oil and dirt on your hands and clothes. Working on your car without getting dirty has an academic value, but it somehow loses its character along the way, and the cleaner you get the more you just want to get back to your shed and work on some cars for real, with simple tools you understand and not just a cable you plug in a new car, running a digital diagnosis.

People who want to get actual work done will drive a modern car. A vintage car is a hobby. If that’s what this file format is, please mark it as such so it doesn’t get in the way.

Libraries like Pillow just hide the complexity, they don’t remove it. If we don’t want software to keep getting clumsier and heavier every year, we have to find ways to deal with complexity instead of hiding it behind polished interfaces.
By this I don’t mean Pillow is bad or hard to use. It’s just a waste to really have to depend on this stuff.

Computers are still getting exponentially faster; every year we see new developments. However, somehow software always manages to fill up this new potential with just more cruft and other things. There is no reason for that, and it’s in my best interest to offer simple and concise solutions to those who care.

I have written a parser for 24/32bit BMP, with a quick look at the spec. It isn’t harder than farbfeld if you don’t need the full stuff.

I can open a BMP in GIMP directly, all my CLI tools support it, and so does my browser. I can easily construct a pipeline to apply filters or do any kind of transformation, either with ImageMagick, or GStreamer, or any of the other options. Are these frameworks big and bloated? Undoubtedly. But they already can do what I need, so how would a new intermediate format help?

I mean, with GStreamer, I can create a pipeline that decodes a PNG, applies filters to the RGB image, and saves it back to JPEG or something like this:

$ gst-launch-1.0 filesrc location=in.png ! pngdec ! videoconvert ! videobalance saturation=0.0 ! videoconvert ! jpegenc ! filesink location=out.jpg

This does not sound any more complicated than the farbfeld-based pipeline, and it supports existing formats. With proper tools, you won’t need to write your own filters, nor your own intermediate format. I’m pretty sure you can do similar things without GStreamer, too, but I haven’t had the need for that in a while.

Too complicated for my taste, but if you can work things out with it and are happy with the gstreamer-monolith, then feel free to do whatever you want. :)

My stance on this is that in a pipeline, the individual tools shouldn’t have to depend on a library to edit the data. If I quickly want to write a filter in C and explore the science behind image processing, even BMP is too much of a hurdle.
Or if I want to export scientific image data from a measuring device. The program interfacing with this device shouldn’t really depend on libpng or something.

GStreamer also breaks as soon as you have a problem that cannot be solved with the given filters. Writing GStreamer plugins is a pain in the ass, so it definitely is preferable to have a “dumb” format which you can even modify in C. But again, YMMV, your points about practicality are true and an important aspect, and I’m not denying the fact that farbfeld has essentially no adoption at the moment.

I’m not quite sure I understand why gst-launch would be complicated. It does pretty much what your pipeline does, except it uses ! instead of a pipe. From the point of view of using it, or understanding the intent, the complexity is about the same. It is much more complicated underneath, indeed. But as a user, I honestly don’t care.

As a developer, I may understand the aversion to complexity, and writing a GStreamer plugin is certainly more work than writing one that works on farbfeld data. But, having written a number of GStreamer plugins, writing a filter that works with farbfeld-like data is almost trivial. You need some boilerplate, but about 90% of that is automatable, so much so that templates and generators exist, so you can get down to coding the real filter in about 5 minutes. (I just tried, after not having written any gstreamer plugin in 10+ years.)

Furthermore, as a developer, I like to avoid writing code others already did, so if I am doing image processing, I’ll be looking at existing libraries, and rather extend those, even if the initial learning curve is slightly steeper, than implement every filter and transformation on my own.

As for BMP being too much, here’s how to parse a 24-bit BMP:

Read the header (static size for all the relevant info; a struct will do).

Check the depth, return error if not 24bit.

Seek to the image data.

Read an array of RGB values.

That takes about 100 lines of C or so, and compared to farbfeld, the additional complexity is a slightly more complex header, where you can ignore most of it anyway, having to check the depth, and seeking to the image data offset. This additional stuff amounts to about 11 extra lines for the header, 2 for returning an error if not 24 bit, and one to seek. Add one or two more for error checking on seek, and we’re looking at ~20-24 extra lines, including error handling. You won’t need to do any post-processing, you can use the array as-is, just like with farbfeld.
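A sketch of that parser, under the stated assumptions (uncompressed 24-bit data, the common bottom-up row order; the helper name load_bmp24 is made up). One detail worth noting is that BMP rows are padded to a multiple of 4 bytes and stored bottom-up, which costs a few extra lines:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* little-endian readers for the header bytes */
static uint32_t le32(const unsigned char *p) {
	return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}
static uint16_t le16(const unsigned char *p) {
	return (uint16_t)(p[0] | (p[1] << 8));
}

/*
 * Load an uncompressed 24-bit BMP into a top-down BGR buffer.
 * Returns NULL on error. The row stride is (*w * 3 + 3) & ~3,
 * because BMP pads every row to a multiple of 4 bytes.
 */
static unsigned char *load_bmp24(FILE *f, uint32_t *w, uint32_t *h) {
	unsigned char hdr[54]; /* 14-byte file header + 40-byte info header */
	unsigned char *px;
	uint32_t off, stride, y;

	if (fread(hdr, 1, sizeof(hdr), f) != sizeof(hdr) ||
	    hdr[0] != 'B' || hdr[1] != 'M')
		return NULL;
	off = le32(hdr + 10);                 /* offset of the pixel data    */
	*w  = le32(hdr + 18);
	*h  = le32(hdr + 22);                 /* assumed positive: bottom-up */
	if (le16(hdr + 28) != 24 || le32(hdr + 30) != 0)
		return NULL;                      /* not 24-bit, or compressed   */

	stride = (*w * 3 + 3) & ~3u;
	if (fseek(f, (long)off, SEEK_SET) != 0 || !(px = malloc(stride * *h)))
		return NULL;

	/* rows are stored bottom-up; flip them into top-down order */
	for (y = 0; y < *h; y++)
		if (fread(px + (*h - 1 - y) * stride, 1, stride, f) != stride) {
			free(px);
			return NULL;
		}
	return px; /* pixel (x,y) lives at px[y * stride + x * 3], as B,G,R */
}
```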

I’ll happily trade two dozen lines of C for being able to use an existing format that all my tools understand already.

With a line or two more, you can extract the original size and depth, and use it later in the pipeline. Slightly more complicated than the farbfeld pipeline, but works with existing tools, and still allows you to write the exact same filter in C. The difference is that it would get the width/height from the commandline, instead from the stream. Probably even easier this way…

That was also my first thought, but NetPBM is annoyingly fiddly to use directly, because of oddities stemming from once being designed to be hand-written and sent through plain-text email. Even in the binary version (really a “mostly” binary version) there are hold-overs like supporting comments and arbitrary whitespace in the header fields, and having the image size in the header given as base-10 ASCII numerals, also separated by arbitrary whitespace and comments. (What’s binary in the “P6” binary version of PPM is the actual image data, which comes as a big chunk following the header, not as ASCII numerals separated by whitespace, as was the case in the original “P3” version.)

It’s annoying enough that for the case where I want to pipe data into something else that is going to DIY process it without linking in some kind of image-processing library, I typically either just use raw headerless image data, with image size and color depth sent out-of-band or hardcoded, or I improvise something like the format here, a few bytes of parameters followed by the raw image data. So I could see myself using farbfeld, since it’s basically that, but with someone else having done all the work of writing a bunch of convenience utilities to convert data.

Because NetPBM is not simple enough. There are three subtypes of formats (Portable BitMap, Portable GrayMap and Portable PixMap) and in general you can only reach 16-bit depth using extensions, let alone alpha channels.
The fun thing about farbfeld is that even though the format is dead simple, it easily beats PNGs for most data (modulo photographs). My tests with NetPBM weren’t as successful, which is due to the fact that there’s lots of overhead in the plain-text representation (which has its uses). It also doesn’t scale well for normal images.

The idea of using palettes or grayscale is kind of redundant. Test for yourself: Take a 16-Bit Grayscale PNG, convert it to farbfeld and compress the farbfeld with bz2, then compare the sizes.

I added a question to the FAQ to deal with this topic, thanks for bringing it up! :)

I just looked at the NetPBM manpage:

Note that besides being names of formats, PBM, PGM, PPM, and PNM are also classes of programs. A PNM program can take PBM, PGM, or PPM input. That’s nothing special – a PPM program can too. But a PNM program can often produce multiple output formats as well, and a PNM program can see the difference between PBM, PGM, and PPM input and respond to each differently, whereas a PPM program sees everything as if it were PPM. This is discussed more in the description of the netpbm programs (1).

The comparison is not “NetPBM has many formats, farbfeld has only one”, it’s more like “a world with NetPBM has many formats, a world with NetPBM and farbfeld has many + 1”. The NetPBM P6 format has a different header and lacks an alpha channel, but otherwise seems like an exact match to farbfeld: 16-bit channels stored as 16-bit big-endian integers packed together. If somebody really needs an alpha channel (or more than 4 channels, or non-RGB channels), the P7 format is there, with an only-slightly-more-complex header.

It also doesn’t make much sense to boast about farbfeld + bzip2 producing smaller images than PNG; you’re essentially benchmarking bzip2 against gzip, and it’s no surprise that bzip2 wins. NetPBM P6 + zpaq would probably thrash farbfeld + bzip2, but that has nothing to do with the relative merits of the two image formats.

Palettization is effectively lossy compression, and as such the benefits depend very much on the input image, the palette quantizing algorithm and the dithering algorithm. Throwing away the right details can make an image that’s vastly more compressible, enough to overcome the implicit handicap of gzip vs. bzip2. Of course, you could also palettize farbfeld images, but bzip2 will be at a disadvantage since it has to deduce that there are only 256 (or fewer) unique 8-byte spans in the image, instead of having that knowledge hard-coded.

All that said, NetPBM is annoying because the header format is annoyingly tricky to read: it almost works with sscanf() but not quite. Farbfeld’s is much, much simpler. Whether that slight benefit is enough to make it worthwhile, I guess we’ll see.

The comparison with PNG is made because PNG is stuck with its compression algorithm, while farbfeld (and NetPBM) are not.

Palettization in farbfeld is implicit, not explicit. That is the most important point to make.

Btw: if one was always considering the current + 1 formats argument, there would never be progress.
Of course, if there are sane standards, use them. However, I think that the elephant in the room is that even though netpbm is so old, it has literally no traction and there are reasons for that.

One may overanalyze, but I think the reason is the dozens of subformats and the messy spec and docs.
When it turns out you only need 1 format instead of 9, why not go for it?

To put it in another perspective: If you compress farbfeld and any netpbm format using the same algorithm, would there be lots of differences on the netpbm side? It is to be expected that due to its nature, even the best netpbm format would at least be on par with farbfeld, but I’m sure you already know that.

Btw: if one was always considering the current + 1 formats argument, there would never be progress.

Not at all. New formats to offer new functionality (e.g. better compression, colourspace support, different bit-depth or HDR) are a great idea. New formats where the only rationale is “it’s simpler than the previous format” are rarely worth it though.

As Boojum already said, use your favourite compressor and be done with it. ;) Usually, you would not decompress and compress the data inside your filter (for instance written in C), but rather use UNIX pipelines:

png2ff < image.png | bzip2 > image.ff.bz2

We just let the bzip2(1) tool do the job for us…

libpng is one big mess. Most people I know just copy-paste code examples from the web. The fun thing here is that even after hours of research, I couldn’t find examples on how to read 16-bit PNGs properly. Given that libpng is also undergoing a huge API change, it generally is not fun to use.

In the end, I just dug myself through the documentation and wrote png2ff, and now it handles 16-Bit PNGs just fine.
Feel free to write yourself a small wrapper library or something; however, it only hides the real complexity of the image libraries, effectively making your program very slow in pipelines (but well, at least my code is short! /s).

Besides, for a pipeline, you would not want to convert to and from compressed image formats every time. Separating this into external tools is the only sane solution, also in regard to what the future might bring.

I don’t think you get the point. Using imagemagick, how exactly can I get to the raw pixel data in my program?
Let’s say I have an image x.png and want to invert the colours, or something more complex which I would need to program myself. What exactly should I use? What should I do?

All those image libraries are a pain in the ass to use, so they are not an option (e.g., if you suggested just using libpng in C, hell no!)

Unless you’re doing something exotic and specific to a particular image format, there’s no need to ever read or write the files directly because you can use ImageMagick or some other library. The ImageMagick command line tools are just convenient utilities implemented using the ImageMagick library. That library has bindings to dozens of languages, and most people already have it installed. It gives direct access to raw pixel data, as well as higher level functionality like drawing lines and shapes, filtering, etc.

I just don’t understand creating a new image format to avoid learning an API.

I think that there is value in having very simple, easy to parse image formats; less so for storage than for custom operations. You don’t necessarily want to have to pass raw image data, still less encode and decode from something like PNG at every junction in your pipeline. I take no position on farbfeld, but when working in video, where you do often create a lot of custom one-off manipulation tools, using something trivial to decode without having to worry about implicit typing is really helpful.

I actually wasn’t getting your point, now I do. Well, thing is, ImageMagick is a massive dependency, and their API is not simple enough for my taste. Use what you prefer, use ImageMagick. :) However, in my opinion it should be much simpler, and you should not “force” your users to install the heavyweight that ImageMagick is.

By “simple bitmap image” do you mean Windows BMP? That format is fairly easy to produce but not trivial to parse. It has options for colour depths, palettes, RLE compression, alpha channels and colour profiles.