I've got a simple solution: Don't use Jonathan's program. The rest of us will remain interested and supportive of his efforts.

This is not about Jonathan's program, if you did not understand; it is about the DNG converter and the claims that nothing is lost... you just do not know what else was lost by people who did not save their original raw files, what is being lost now by those same people, or what will be lost in the future. That's it.

No, I am not, and the sky is in fact falling, as usual. Please read what I am referring to:

...

Do you object? Does the DNG converter irreversibly strip data during conversion or not? Very simple question.

Citing bugs specific to converting the RAWs from a few particular camera models to DNGs does not mean the entire DNG concept is a bad idea or that the "no RAW data is lost when converting to DNG" principle is generally false. Given the hundreds of undocumented, proprietary RAW formats Adobe has had to reverse-engineer to get DNG to where it is now, what's surprising is that such glitches aren't far more common than they are.

I am a single individual, and do not have the time or inclination to learn how to properly read hundreds of different undocumented and proprietary RAW formats. DNG allows me to shift my focus as a programmer from dealing with RAW format hell to the actual core functionality of the program--correcting lens aberrations and distortions. If you have any realistic suggestions for alternative input file formats that will allow me to continue focusing on the actual program instead of properly parsing hundreds of different input file formats (which would probably put YOUR camera on the "not supported" list), I'm all ears. But if not, STFU and quit wasting my time and LL's bandwidth. DNG may not be a perfect solution, but IMO it's telling that the DNG denigrators have yet to offer a realistic alternative input file format...

And BTW, converting to DNG doesn't mean you need to erase or alter the original RAW. So if a particular version of DNG Converter doesn't convert said file properly, and the bug is fixed in a later version, why yes, the improper conversion IS reversible--simply re-convert the RAW with the new version of ACR or DNG converter. Very simple answer.

Citing bugs specific to converting the RAWs from a few particular camera models to DNGs

How do you know that it is only a few? Did you test the rest?

Quote from: Jonathan Wienke

does not mean the entire DNG concept is a bad idea

Communism is a nice idea too... theoretically.

Quote from: Jonathan Wienke

or that the "no RAW data is lost when converting to DNG" principle is generally false.

Well, the problem is that the implementation was flawed before and is flawed now, and still Adobe, along with some DNG fans, claims that conversion does not miss anything... while in real life DNG conversions are losing data, and you just do not know what else is lost, as it is closed source.

Quote from: Jonathan Wienke

Given the hundreds of undocumented, proprietary RAW formats Adobe has had to reverse-engineer to get DNG to where it is now, what's surprising is that such glitches aren't far more common than they are.

Well, that is one reason why people should stay away from buggy software like the DNG converter.

Quote from: Jonathan Wienke

I am a single individual, and do not have the time or inclination to learn how to properly read hundreds of different undocumented and proprietary RAW formats. DNG allows me to shift my focus as a programmer from dealing with RAW format hell to the actual core functionality of the program--correcting lens aberrations and distortions. If you have any realistic suggestions for alternative input file formats that will allow me to continue focusing on the actual program instead of properly parsing hundreds of different input file formats (which would probably put YOUR camera on the "not supported" list), I'm all ears. But if not, STFU and quit wasting my time and LL's bandwidth. DNG may not be a perfect solution, but IMO it's telling that the DNG denigrators have yet to offer a realistic alternative input file format...

And BTW, converting to DNG doesn't mean you need to erase or alter the original RAW. So if a particular version of DNG Converter doesn't convert said file properly, and the bug is fixed in a later version, why yes, the improper conversion IS reversible--simply re-convert the RAW with the new version of ACR or DNG converter. Very simple answer.

Well, you in fact just do not know whether the DNG converter converts a file properly or not, so you should never erase the original raw... not yesterday, not today, not tomorrow... which simply means that DNG is unsuitable for archiving unless you archive the original raw file as well.

It works perfectly for every camera I've tried: 4 Canon camera models, 4 or 5 Nikon models, a Hasselblad MFDB, and a couple of digicams.

Quote

problem is that the implementation was flawed before and is flawed now, and still Adobe, along with some DNG fans, claims that conversion does not miss anything... while in real life DNG conversions are losing data, and you just do not know what else is lost, as it is closed source.

You are full of crap. On the cameras I've tested, there is zero difference between converting the original RAW and converting a DNG; comparing converted images gives a pixel-for-pixel match. If RAW data were being lost, there would be a detectable difference somewhere. And DNG is not closed source; you can download all of the specifications, as well as the source code needed to read and write DNG files, for free from Adobe. If you have questions about what is happening to the data, you have the ability to look at the code and see exactly what it is doing to your images.

Quote

Well, that is one reason why people should stay away from buggy software like the DNG converter.

It works just fine for most of the cameras out there, or people wouldn't be using Adobe software.

Thanks, but no thanks. It's not as easy to integrate into external projects, and it would require me to update my software every time a new camera is released. With the freely downloadable DNG SDK, I only need to update the file parsing code when a new version of the DNG spec is released, which is far less frequent than the release of new cameras.

Quote

Well, you in fact just do not know whether the DNG converter converts a file properly or not, so you should never erase the original raw... not yesterday, not today, not tomorrow... which simply means that DNG is unsuitable for archiving unless you archive the original raw file as well.

Only for the few cameras that don't get converted properly. Whenever you do a file conversion or any sort of copying, you should always verify the copied/converted files are good before deleting the originals. It's not that hard to do. If you're really paranoid, you can step through the operation of the DNG SDK source code and verify with whatever level of detail you desire how correctly your RAWs are being converted. The fact is, I've tested numerous cameras from several different manufacturers, and had zero problems with DNG. BTW, the Library of Congress disagrees with you, and recommends DNG for long-term image archiving.
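As a sketch of that verification step (my own illustration, not part of Jonathan's program): render the original RAW and its DNG with identical converter settings, then confirm the two outputs are byte-for-byte identical before deleting anything. The file paths and the prior rendering step are assumed here.

```python
import hashlib

def files_match(path_a, path_b):
    """Compare two rendered outputs byte-for-byte via SHA-256."""
    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large image files don't need to fit in memory.
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()
    return digest(path_a) == digest(path_b)
```

If the two renders match, the conversion preserved everything the converter actually uses; if they don't, you still have the original RAW to fall back on.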

I'm done discussing this subject with you. Your arguments are based on fearmongering and falsehoods, and you don't have any constructive alternative input file format suggestion to offer. You are hereby added to my ignore list.

Getting back to the original focus of the thread, I've been working on the interpolation algorithms used to correct barrel/pincushion distortion and chromatic aberrations. I posted a demonstration program that allows you to open a JPEG, TIF, or BMP file and view it rescaled from 6-6400% of its original size. It's just a tech demo, so it doesn't have any of the following features:

File save capabilities

Color management

Batch processing

Deconvolution or blur removal of any kind (yet!)

Instant solution for world hunger

The program DOES do the following:

Opens an 8-bit-per-channel JPEG, BMP, or TIF file

Applies random distortion adjustment parameters to the image

Allows a zoom setting from 6.25-6400%

Displays the image at the selected zoom factor with a simple bilinearly-interpolated version for side-by-side comparison

You'll need the latest .Net framework on your machine for this to work.

Known issues:

Error on startup due to a missing database file. Click Continue and all should be well. The missing database will eventually be used to store PSF data.

Some very minor aliasing is sometimes visible at magnifications around 50%.

I'm looking for feedback on the quality of the interpolation. I've designed things to maximize sharpness and minimize aliasing, jaggies, and other artifacts. Please let me know how well you think I've achieved these goals, and why or why not. Thanks in advance!

I'm shifting focus to the actual deconvolution stuff now, so it will probably be a while before I post any more updates. But in the meantime, if anyone could post feedback on the interpolation, I'd appreciate it.


Hi Jon. Good luck with your program! It's quite ambitious but would be a very useful tool. I'd love to see you open-source the project and/or collaborate with other projects that have already addressed some of these challenges (hugin comes to mind).

I grabbed your program and ran it on my XP box. It appears to work properly and the interpolation looks good. I compared it to results from ImageMagick using the mitchell and lanczos filters. Honestly, it's hard to tell a difference. I'm generally of the opinion that box and bilinear filters are bad, but once you get past that level of complexity, you immediately enter the realm of (rapidly) diminishing returns, especially for a general-purpose filter.

The one major distinguishing factor is that your program was very slow. I'm running a Core 2 2.1 GHz machine, so not the fastest, but hardly a slouch. I tested with an 800x533 image and it took several seconds to interpolate. ImageMagick is *not* known for its speed, but it was significantly quicker on the same image. I'm assuming you're focusing on quality and not speed / optimization at this point, but I thought I would be remiss not to mention it.

I'm interested in which approach to deconvolution you're planning to implement. There's been some *very* interesting research on single-image blind deconvolution lately. I know you're not going that route from the previous discussion, but you may want to search for those papers if only because they're fascinating. Are you aiming for spatially-variant PSF estimation? You did mention a target with multiple point sources, but I wasn't sure if that was to help get a more robust single PSF estimate or if you wanted multiple estimates across the image. If the latter, are you thinking of region-based deconvolution or will you interpolate the PSF for fully continuous variation? Finally, there's the algorithm itself... RL? It seems to add too much ringing unless your PSF is *perfect*. Again, there's been some nice research in recent years (Siggraph, CVPR, etc.) on using natural priors, edge-preserving filters, and multiscale methods to improve the results, sometimes dramatically.

Yes, it is slow; the design is biased more toward quality than performance. That said, I've been working on ways to speed things up without compromising output quality. For upsizing, I'm using a cubic spline based algorithm, but it uses splines going vertically, horizontally, and crisscrossed diagonally to reduce the appearance of pixelation and jaggies. Part of the reason for this approach is to build something that should work well for Bayer interpolation, so that I have an alternative to the interpolation done by ACR. For downsizing, I'm using a weighted-averaging scheme tuned to maximize detail without crossing the line into aliasing. I have run across some instances where the jaggie suppression isn't working right on heavily sharpened images or text that isn't anti-aliased, so I'll probably chase that down and beat it into submission before shifting gears.

Deconvolution is based on an array of PSFs labeled by camera, lens, focal length, and distance from what I'm calling the "logical center" of the image. When using a non-shift lens, the logical center and the physical center of the image are the same. But when using a shift lens, the logical center moves away from the physical center of the image in the direction and amount of shift. Each PSF is a set of splines. Each spline is tagged with an angle (deviation from logical center), and points on the spline are tagged with a distance from the "master pixel" and a percentage of signal from the "master pixel" that is expected to spill over into a "blur pixel" at that angle/distance. PSF data is generated by analyzing an image of a target consisting of small white dots (or possibly small light sources) on a black background, arranged in a rectangular grid so that the distortion, CA and vignetting characteristics of the lens can be analysed as well as blur.
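As a sketch, the PSF database described above might be structured like this. All of the names here are my own, hypothetical shorthand for the description in the post: PSFs keyed by camera, lens, focal length, and distance from the logical center, each PSF holding angle-tagged splines of (distance, spill-fraction) points.

```python
from dataclasses import dataclass, field

@dataclass
class SpillPoint:
    distance: float   # pixels from the master pixel along the spline's angle
    fraction: float   # share of the master pixel's signal landing there

@dataclass
class PsfSpline:
    angle: float      # deviation from the logical center, in degrees
    points: list = field(default_factory=list)   # list[SpillPoint]

@dataclass
class Psf:
    camera: str
    lens: str
    focal_length: float
    center_distance: float   # distance from the image's logical center
    splines: list = field(default_factory=list)  # list[PsfSpline]
```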

Deconvolution is a two-step process:

Estimating the portion of a pixel's value that is true signal, rather than blur from elsewhere. This is done by scanning all the neighboring pixels within the PSF radius of the "master pixel", and using the PSFs to calculate a probable maximum signal value for the master pixel. For example, if a master pixel has a value of 10% of maximum and a large number of nearby pixels within the PSF radius have a value of 0 while the corresponding PSF values are non-zero, then it can safely be assumed that the master pixel's true value is zero, because if it had a non-zero signal value, some of that signal would have had to spill over into the neighboring pixels, giving them non-zero values. By comparing the neighboring pixel values to the corresponding PSF values, a maximum limit for the signal value of the master pixel can be calculated.

Transferring signal from the "blur pixels" to the "master pixel". Once the estimated signal value is established, the PSF can be used to calculate the amount of signal that spilled from the master pixel to each neighboring pixel within the PSF radius.
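A toy 1D sketch of those two steps (my own simplification, not Jonathan's code), using a made-up PSF in which 80% of a pixel's signal stays put and 10% spills to each side. For an isolated point source, the upper-bound estimate from step 1 already recovers the true signal, which matches the zero-neighbors example above; a real image would need the full step-2 redistribution.

```python
# Assumed toy PSF: offset -> fraction of a pixel's signal landing there.
PSF = {-1: 0.1, 0: 0.8, 1: 0.1}

def blur(signal):
    """Forward model: each pixel spills its signal into neighbors per the PSF."""
    out = [0.0] * len(signal)
    for i, s in enumerate(signal):
        for k, w in PSF.items():
            if 0 <= i + k < len(signal):
                out[i + k] += s * w
    return out

def estimate_signal(observed, i):
    """Step 1: bound the master pixel's true signal from its neighbors.
    Any signal at i must have spilled PSF[k]*signal into pixel i+k, so
    observed[i+k]/PSF[k] is an upper limit; take the tightest one."""
    bounds = [observed[i + k] / w
              for k, w in PSF.items()
              if 0 <= i + k < len(observed)]
    return min(bounds)

def deconvolve(observed):
    """Step 2: credit each master pixel with the signal that spilled away."""
    return [estimate_signal(observed, i) for i in range(len(observed))]
```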

During deconvolution, each master pixel has a custom PSF interpolated from the nearest PSFs stored in the database. Not necessarily EVERY pixel, but interpolation is done often enough to avoid any noticeable borders or changes in the image where deconvolution switches from one PSF to another.

I suppose there's already a name for this general algorithm, but right now I don't know what it is.

I compared it to the default bicubic interpolation in the main CS4 window.

At 50%, it was hard to have a preference ... the lack of color management made it harder, but I might give CS4 a small edge.

At 200%, again - hard to pick, but I would say here there was more of a clear edge to CS4.

At 400%, they are VERY different. I think I prefer yours - it looks a bit more natural and certainly less "pixely".

Hope that helps ... I could devise a more rigorous set of tests ... but if you have a test script in mind, I'd do some more ...

That is helpful. When downsizing, I'm trying to retain as much detail as possible without aliasing, and when upsizing, I'm trying to maintain maximum sharpness, detail, and contrast without causing halos or clipping, and to give heavily upsized areas a smooth, natural-looking, "out of focus" appearance without any obvious pixel-based artifacts. The goal is to be able to go all the way to 6400% without getting any "digital looking" artifacts. It's not quite there yet, but fairly close.

The interpolation has to be able to handle upsizing and downsizing simultaneously and seamlessly. With barrel distortion, pixels that are halfway between the center and corners need to be moved toward the center, so when correcting this, the middle of the image is being downsized (pixels packed more closely together) and the edge of the image is being upsized (pixels stretched farther apart).
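For illustration, here is a sketch of that simultaneous up/down-sizing using an assumed one-parameter radial model (not Jonathan's actual distortion model). The inverse map tells each output pixel where to sample in the distorted source; with k > 0 the sampling displacement grows with radius, so the center is effectively downsized and the edges upsized in one pass.

```python
def source_coords(x, y, cx, cy, k):
    """Inverse map for barrel correction: output pixel (x, y) -> sample
    position in the distorted source image. (cx, cy) is the distortion
    center; k is an assumed radial coefficient."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k * r2          # r' = r * (1 + k * r^2)
    return cx + dx * scale, cy + dy * scale
```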


You might be interested in Bart van der Wolf's investigations of downsampling methods if you weren't already aware:

Interesting stuff on IM's web page. What I'm doing for downsizing is a heavily modified box filter; if a pixel falls completely within the "box" it contributes fully to the box value, but if it intersects the edge of the box, then the pixel's value is split between the adjacent boxes. I'm doing a bit of weighting so that if a pixel is not perfectly centered on the edge of the box (which would evenly split the pixel value between boxes) the split gets exaggerated somewhat, so that a 60/40 split might get increased to ~70/30. By tuning the "exaggeration factor", you can significantly increase sharpness without causing too much aliasing, eliminating the need for a separate sharpening step.
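A 1D sketch of that modified box filter. The exact exaggeration rule is my assumption (push an uneven split further from 50/50 by a fixed factor, then clamp); the footprint-splitting part follows the description above.

```python
def downsize_1d(src, factor, exaggerate=1.0):
    """Downsize a row of pixels by `factor` (> 1). Each source pixel's
    footprint is split between the destination boxes it overlaps; uneven
    splits are exaggerated to boost apparent sharpness."""
    n_out = int(len(src) / factor)
    out = [0.0] * n_out
    weight = [0.0] * n_out
    for i, v in enumerate(src):
        lo, hi = i / factor, (i + 1) / factor   # footprint in dest coords
        b = int(lo)
        if int(hi) == b or b + 1 >= n_out:      # fully inside one box
            out[b] += v
            weight[b] += 1.0
        else:                                    # straddles a box boundary
            share = (b + 1 - lo) / (hi - lo)     # fraction in the left box
            # Exaggerate the split away from 50/50, clamped to [0, 1].
            share = min(1.0, max(0.0, 0.5 + (share - 0.5) * exaggerate))
            out[b] += v * share
            out[b + 1] += v * (1.0 - share)
            weight[b] += share
            weight[b + 1] += 1.0 - share
    return [o / w for o, w in zip(out, weight)]
```

With `exaggerate=1.0` this degenerates to a plain box filter; values above 1 sharpen the result at the cost of some aliasing risk, as described.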

For upsizing, I'm using a modified natural cubic spline function. Each pixel is a "knot" for splines running vertically, horizontally, and diagonally. To interpolate a pixel, I'm doing something similar to bilinear interpolation, except I'm blending the spline values from the 4 surrounding pixels instead of the pixel values themselves, and I'm blending the diagonal spline values as well as horizontal/vertical. I'm still fine-tuning the blending function to give the most natural "out-of-focus" look to heavy enlargement and the least jaggies and other obvious pixel-based artifacts.
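A 1D sketch of the every-pixel-is-a-knot idea. To keep the example short I've swapped in a Catmull-Rom cubic (a simpler interpolating spline than the natural cubic splines described, and without the diagonal blending), so this illustrates the knot scheme rather than Jonathan's actual blend.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic through knots p1..p2 at parameter t in [0, 1]."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)

def upsize_1d(src, factor):
    """Resample a row of pixels at `factor`x, each pixel acting as a knot."""
    n_out = int(len(src) * factor)
    out = []
    for j in range(n_out):
        x = j / factor
        i = int(x)
        t = x - i
        get = lambda k: src[min(max(k, 0), len(src) - 1)]  # clamp at edges
        out.append(catmull_rom(get(i - 1), get(i), get(i + 1), get(i + 2), t))
    return out
```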

It's nice to see that I'm not really "reinventing the wheel" all that much...

I uploaded a new version with minor tweaks to interpolation and major changes to the under-the-hood design to significantly reduce memory use and speed things up a bit. It's still not super-fast, but will handle much larger image files before running out of memory.


Hi Jonathan,

First of all, thanks for the initiative and for making the first trial available. I wanted to give your software a try with my torture test (http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm). Unfortunately, I ran into a problem with your 0_0_1_8 version: errors referring to .NET at startup, even though I have the latest versions installed (.NET Framework 1.1 and 3.5 SP1) and I have no complaints from other software (including Visual Studio). I can't find a reference to version 2 being installed any more; is that what your application depends on?

If you want to try and clear the issue, feel free to send me a PM so we don't clutter this thread.