
lawpoop writes "Russell Kirsch, inventor of the square pixel, goes back to the drawing board. In the 1950s, he was part of a team that developed the square pixel. '"Squares was the logical thing to do," Kirsch says. "Of course, the logical thing was not the only possibility, but we used squares. It was something very foolish that everyone in the world has been suffering from ever since." Now retired and living in Portland, Oregon, Kirsch recently set out to make amends. Inspired by the mosaic builders of antiquity who constructed scenes of stunning detail with bits of tile, Kirsch has written a program that turns the chunky, clunky squares of a digital image into a smoother picture made of variably shaped pixels.'"

(1) If you make the pixels sufficiently small, nobody will notice whether they are square or triangular or whatever, because people won't be able to see anything but a bright point of light.

(2) Not all pixels are square.

Those used by TV-compatible computers like the Atari 800, Commodore 64, or Amiga were rectangular (taller than wide) because of the analog NTSC standard (which doesn't use pixels at all, but has approximately 704x486 of analog resolution). These computers produced rectangular output to be consistent with that standard.

Actually, no, anti-aliasing leads to an image that more closely mimics the way we perceive the world (continuous analog signals, instead of digital samples). Here is a VERY good primer on Anti-Aliasing (with pictures!) http://www.povray.org/documentation/view/3.6.1/223/#s02_01_02_08_04 [povray.org]
Now, that isn't exactly how it works in every system, but the basics are there, and the best algorithms for the task are also presented. In case you didn't click, or don't care to read the whole thing, here are the basics:

Actually, anti-aliasing is nothing like blurring. True anti-aliasing is a projection from a higher sample rate to a lower one, combining multiple samples within the area of each single sample at the lower rate. While not as accurate as the higher-resolution image, it is significantly more accurate than simply selecting one sample from each area. Blurring would be selecting one sample within the area of each lower-rate sample and then averaging neighboring samples, which means you actually end up with less information than the un-blurred, un-anti-aliased image.
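To make the distinction concrete, here's a minimal numpy sketch (my own toy code, grayscale, image dimensions assumed to be multiples of the factor):

    import numpy as np

    def downsample_box(img, factor):
        # Anti-aliasing as described: average every high-rate sample
        # that falls inside each low-rate pixel.
        h, w = img.shape
        return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def decimate_then_blur(img, factor):
        # The "blurring" case: keep one sample per low-rate pixel,
        # then average each kept sample with its 8 neighbours.
        picked = img[::factor, ::factor]
        padded = np.pad(picked, 1, mode='edge')
        return sum(padded[dy:dy + picked.shape[0], dx:dx + picked.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0

The first function uses information from every high-rate sample; the second has already discarded most of them before it averages anything, which is the point above.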

But those are pretty easy to solve. The most complete solution is simply to increase display resolution past what the eye can perceive: with small enough pixels, no jaggies can be seen. We are working towards that; interconnect bandwidth and cost are the only remaining hurdles, and those are slowly going away. As a quite effective stopgap, anti-aliasing can be applied; it is very easy to do on modern GPUs at little cost.

Now, take a variable-size, variable-geometry pixel grid. Tell me how you process that, how you store it in memory, how you rasterize images to it. Sound like complex problems? They are, very complex. So solve all that, and in such a way that computers can process it in realtime with cheap hardware (if it is even possible). Then you get to tackle the REAL hard part: building a physical display that can show said pixels.

So, you can do all this, which I am unconvinced is possible, OR, we can simply work on making displays with more pixels. Get displays up in the 300-400PPI region and none of this is a problem anymore. While that will take more bandwidth than our current interconnects provide, engineering higher bandwidth interconnects is a well understood problem and there are a number of solutions (such as simply running more channels in parallel). It will also require working on ways to bring the cost of high density displays down but again, we've had a great deal of success with that. LCDs went from VGAish resolutions that were quite expensive and small to massive HD displays in about a decade.
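For what it's worth, the bandwidth claim is easy to put numbers on (my assumptions: a hypothetical 30-inch 16:9 panel at 300 PPI, 24 bits per pixel, 60 Hz, no blanking overhead):

    # hypothetical 30-inch 16:9 panel at 300 PPI
    diag = 30.0
    w = diag * 16 / (16 ** 2 + 9 ** 2) ** 0.5    # ~26.1 inches
    h = diag * 9 / (16 ** 2 + 9 ** 2) ** 0.5     # ~14.7 inches
    pixels = (w * 300) * (h * 300)               # ~34.6 megapixels
    gbps = pixels * 24 * 60 / 1e9                # 24 bpp at 60 Hz
    print(round(gbps, 1))                        # ~49.8 Gbit/s

That's several dual-link DVI channels' worth (~8 Gbit/s each), which is why "run more channels in parallel" is the obvious engineering answer.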

To me, it seems like we have the solution to the problem. This new solution sounds far, far more complex and likely impossible.

Would someone tell me how this happened? We were the fucking vanguard of displays in this country. The Sycraft-fu Mach3 was the display to own. Then the other guy came out with a 300 Pixel Per Inch display. Were we scared? Hell, no. Because we hit back with a little thing called the Mach3Turbo. That's 300 PPI and an aloe strip. For moisture. But you know what happened next? Shut up, I'm telling you what happened--the bastards went to 400 PPI. Now we're standing around with our cocks in our hands, selling 300 PPI and a strip. Moisture or no, suddenly we're the chumps. Well, fuck it. We're going to 500 Pixels Per Inch.

Sure, we could go to 400 PPI next, like the competition. That seems like the logical thing to do. After all, three worked out pretty well, and four is the next number after three. So let's play it safe. Let's make a thicker aloe strip and call it the Mach3SuperTurbo. Why innovate when we can follow? Oh, I know why: Because we're a business, that's why!

You think it's crazy? It is crazy. But I don't give a shit. From now on, we're the ones who have the edge in the PPI game. Are they the best a man can get? Fuck, no. Sycraft-fu is the best a man can get.

What part of this don't you understand? If 200 PPI is good, and 300 PPI is better, obviously 500 PPI would make us the best fucking display that ever existed. Comprende? We didn't claw our way to the top of the display game by clinging to the 200 PPI industry standard. We got here by taking chances. Well, 500 PPI is the biggest chance of all.

Here's the report from Engineering. Someone put it in the bathroom: I want to wipe my ass with it. They don't tell me what to invent--I tell them. And I'm telling them to stick 200 more PPI in there. I don't care how. Make the Pixels so thin they're invisible. Put some on the stand. I don't care if they have to cram the 500th pixel in diagonally to the other four hundred, just do it!

Bandwidth to the LCD isn't really a problem at all.
Making a large quantity of pixels, however, is.
The trick is that customers expect a perfect display - therefore, as you get more pixels, your yield drops, as a single dead line makes a display completely unsellable.
Then of course, there is existing software that expects a screen to be 72-96 dpi (e.g. LabVIEW, I HATE YOU WITH A PASSION). Smartphones don't have this problem because they were designed from the ground up for variable resolution and dpi displays.

Get displays up in the 300-400 PPI region and none of this is a problem anymore. It is a question of angle of view, not absolute pixel spacing. Specifying the pixels per inch is useless without also specifying the distance between the display and the eye. That being said, a 4000x4000-pixel display is about the point at which a human can no longer perceive individual pixels while simultaneously viewing the entire display. Higher resolutions are required if you want to be able to focus in on a small area of the display.
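The angular argument is easy to work out, assuming the usual ~1 arcminute figure for human visual acuity (the numbers below are mine):

    import math

    def ppi_limit(viewing_distance_inches, acuity_arcmin=1.0):
        # PPI at which one pixel subtends the eye's resolution limit
        pitch = viewing_distance_inches * math.tan(math.radians(acuity_arcmin / 60))
        return 1 / pitch

    print(round(ppi_limit(12)))   # phone held at 12 in: ~286 PPI
    print(round(ppi_limit(24)))   # desktop monitor at 24 in: ~143 PPI

So 300 PPI is about right for a handheld but overkill at desktop distances, which is exactly the angle-of-view point.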

And non-square pixels help the general public how? I'm having a hard time imagining a screen that could display non-square pixels, or rather pixels with varying shape, using today's technology. Or technology in the next five or so years.

Who says a display has to be raster-based? If you eliminate that constraint, then other image encoding schemes may be considered, including vectors, wavelets and, yes, variable-sized pixels (though how one defines a 'pixel' in this context is arbitrary).

Exactly. Besides, you have to have some kind of regular pixel on a physical display, so it has to be some geometric shape that meshes well with itself: squares, rectangles, triangles, or hexagons. Squares are the easiest. To overcome the blockiness, you just have to decrease the pixel size enough, and increase its density enough, so that the human eye can't perceive the individual pixels. Modern displays have pretty much achieved this.

It sounds like this guy's trying to invent variable-size pixels, but that doesn't make sense. Sure, you could come up with algorithms for dealing with them efficiently, but making a physical display that shows variable-size pixels is anything but trivial, and pointless since we can already make square pixels so small.

Pixel was completely misused in the article. He's working on an image scaling [wikipedia.org] algorithm for photos. That isn't to say it's not noteworthy, interesting, or important; it looks like it works great, and I'm not aware of anything that produces results that good on photos. There is the Hqx [wikipedia.org] family of filters, but those were designed for emulators and aren't meant to be used with more than 256 colors.

As I understand it, he proposes a system where each pixel (meaning in the image format, not on the physical display) can be subdivided into two areas, with different possible shapes (two rectangles on top of each other, two rectangles next to each other, or two triangles) and different relative sizes of the two shapes. The best way to subdivide is decided for each pixel, in a way that maximizes the contrast between the two areas.

Actually, I know of no color display that actually has square pixels. Even modern LCD displays use non-square pixels. If you combine three subpixels you get a square, but the subpixels can be set individually.

Storage requirements go up with the square: make the pixels half as tall/wide and you need four times as many.
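Concretely (uncompressed 24 bpp, my numbers): a 1920x1080 image is about 6.2 MB; halve the pixel pitch and the 3840x2160 version is about 24.9 MB, four times as much.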

So what? Storage space is plentiful, and increasing geometrically. You can buy a 2TB hard drive now for just over $100. More importantly, display pixel density is NOT increasing geometrically; in fact, it's just about stagnant (except on smartphones, but they've just about hit their peak too with Apple's new display). Most displays now are 1920x1080, and it's unlikely they'll go much higher, since the human eye can only perceive so much detail at typical viewing distances.

They'd probably be better overall than square pixels (as they'd be more uniform, whereas square pixels look great for straight vertical and horizontal lines, but look terrible for lines that don't match the layout of the pixels) after anti-aliasing.

However, fabricating LCD panels with hexagonal pixels would probably be a pain.

That memo basically makes the point that a pixel is an infinitesimally small point, a sample representative of the colour in that area. The sampling can be of any form; for example, it may have a Gaussian shape, and thus when displaying an image made of these pixels (samples) we should spread the colour of those points over the surrounding area in a suitable way (e.g. with a Gaussian).
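A 1-D toy sketch of that reconstruction idea (my own code, not from the memo): each sample becomes a Gaussian bump, and the displayed signal is their sum.

    import numpy as np

    def reconstruct(samples, upscale=8, sigma=0.5):
        # treat each pixel value as a point sample; spread it with a
        # Gaussian (sigma is in units of the original sample spacing)
        x = np.arange(len(samples) * upscale) / upscale
        centres = np.arange(len(samples))
        bumps = samples[:, None] * np.exp(
            -((x[None, :] - centres[:, None]) ** 2) / (2 * sigma ** 2))
        return bumps.sum(axis=0)

    edge = reconstruct(np.array([0.0, 0.0, 1.0, 1.0, 0.0]))

Displaying each sample as a flat little square is the same construction with a box function in place of the Gaussian, which is exactly what the memo objects to treating as fundamental.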

From the article: The pixels are still square; they're just cut into two pieces along a line through the pixel, and each piece has a color. (It sort of reminds me of S3TC.) The edge of a polygon would have one piece for the front and one for the back, and any other points along it would have one piece for each of two texture samples.

Actually, if I understand correctly, he made a higher-resolution image by turning "the chunky, clunky squares of a digital image into a smoother picture made of variably shaped pixels" -- with twice as many pixels.

Kirsch’s method assesses a square-pixel picture with masks that are 6 by 6 pixels each and looks for the best way to divide this larger pixel cleanly into two areas of the greatest contrast.

I think he has actually upscaled the images and used his technique to do the mythical CSI image enhancement.

Kirsch has also used the program to clean up an MRI scan of his head. The program may find a home in the medical community, he says, where it’s standard to feed images such as X-rays into a computer.

I don't get the impression at all that he's making smaller images. I get the impression that he's actually pulling detail out of the image with the jaggies, and getting a clearer image. The end result is actually more pixels, not less.

However, that's just my best interpretation -- I'm entirely willing to concede that I'm wrong. I'm actually trying to figure it out. :-P

It is entirely possible that he is ending up with the same number of variable pixels which gives more apparent detail, just with better alignment.

Paul Nipkow had filed a German patent application on his mechanical-scanning TV, or Elektrisches Teleskop, in 1884, in which he referred to Bildpunkte -- literally "picture points", but now universally translated as "pixels"... Alfred Dinsdale had written the very first English book on television in 1926, but instead of "picture element"...
His algorithm steps up pixel density by 36, fusses around with it to enhance contrast, then effectively halves it again (each pixel is now two fixed, if oddly shaped, blocks of solid color). I think this is more effective as an "enhance contrast" algorithm than a compression algorithm, since it seems you would still need to account for the increased resolution, though tracking each pixel as RGBA1/RGBA2/Shape/Rotation could streamline the bitmap data, as sketched below.
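A hypothetical sketch of that RGBA1/RGBA2/Shape/Rotation record (field names and widths are my guesses, not anything from the paper):

    from dataclasses import dataclass

    @dataclass
    class SplitPixel:
        rgba1: int     # 32-bit colour of region A
        rgba2: int     # 32-bit colour of region B
        shape: int     # 1 bit: triangular or rectangular split
        rotation: int  # 2 bits: one of four orientations

        def pack(self) -> int:
            # 67 bits total, vs. 32 for a plain RGBA pixel
            return ((self.rgba1 << 35) | (self.rgba2 << 3)
                    | (self.shape << 2) | self.rotation)

At 67 bits per split pixel versus 32 for a plain one, it only streamlines anything if each split pixel replaces more than two ordinary ones.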

IANA Mathematician, but thinking about it more: you wouldn't actually need the shape. If you split a square in half at a given angle, you will get the triangles and rectangles he seems to describe. One byte can store 0-180 in twelve 15-degree increments, leaving 4 values to alternately mimic the "pixel" above, to the left, up-and-to-the-left, or up-and-to-the-right. Then read them as 2-pixel pairs, where the 4 alternate values of subpixel 2 reference to the right, lower-right, down, and lower-left. Given a 36

He says that it was the logical thing to do, and it still is! Keeping the pixels square makes sense in almost every computer-science aspect I can imagine, and this guy has had 50 years of regret and came up with something that's comparatively very hard to implement in scanners, memory, screens, and software. Triangles and hexagons are two other ways he might have gone that are comparably simple, but squares are more intuitive. I think his contribution in the past was brilliant. He really should have no regrets.

This sounds like the ongoing debate between analog and digital audio. Everyone likes using images like these [ucview.com] during the debate, but given enough resolution (bits), digital audio gets arbitrarily close to its original analogue (electrical) source.

Square pixels are easy to manipulate to be sure, so are single-core CPUs easier to code against, but the world is not perfectly right-angled. I'm now waiting for my flying car; which will use nano-morphing, variable shaped pixel in-car & HUD displays, and all controlled by a hella-core processor system running Hellabuntu. The sky's the limit! Or something like that.

to fit a larger baby image on his 1957 computer? From what I get from the article, for him it is like a day has not passed since then and things like image compression were never invented. Oh, wait, that is what Alzheimer's does to you... I see... Anyway, we should thank him for the pixel (and for it being square and not something ridiculous that would give us problems for years to come...).

Are you honestly suggesting that, by having more pixels on the screen, the picture won't be blocky when they zoom to the pixel level?

No, I'm suggesting that by increasing the resolution the designer will be able to draw smaller shapes. This solves the designer's problem of "I need to be able to get a smaller shape in here but it's all too blocky."

We are not talking about increasing the resolution of the designer's screen. We are talking about increasing the resolution of the image. That won't increase the detail in that image, but it will increase the potential for adding smaller details. The designer will likely need to redo the image completely in the new resolution.

It is not really that far-fetched a feature to request. You could simply have different layers at different resolutions and it would already work; you could also go the next step and simply keep all brush strokes vectorized, or use some clever in-between. I wrote an app [berlios.de] that could do that once; not practical, as it wasn't optimized at all and thus got slower with increased image complexity, but fun to toy around with.

Okay. Minus five points from the "graphic artist" for not knowing how to resample the image. Plus one point for trying to improve her knowledge instead of suffering in silence.

Minus one point from you for not knowing something that's not directly in your field. Minus ten points from you for not even trying to help. Minus fifteen more points from you for being a jerkass about it on Slashdot.

The configuration of the variable-sized "pixels" depends on the image, so you're not going to get a new screen with more detail -- this is for storing images. From what I can tell, he's doing a basic form of an old and well-known compression technique (block-based coding: 8x8 blocks in JPEG, macroblocks in H.264 and others) and calling it a new form of pixel.

Can someone explain to me what this is about... The picture in the article looks pretty amazing -- wow, using these non-square pixels sure makes the face look clearer -- but on closer inspection, it's made up of square pixels which are in fact smaller than the pixels in the left image. Look at the top left of the picture, for example.

I'm sure if the picture on the left was resized to match the pixel density of the one on the right, it would look just as good.

1) A pixel isn't "invented" by anyone. A pixel is a concept as straightforward as the wheel, language, or adding numbers. It's not a question of which single person "invented" it; it's just a question of: once the technology is there, it WILL be used, no matter what.

2) What kind of screen are you going to use for that? Each pixel can have a different size and shape, so no fixed screen grid could display that. A square grid is the most uniform division of 2D space into units.

3) If this would have been about hexagonal pixels, I'd have found this cool.

4) At best, this is a new compression scheme for storing pictures - but certainly not a way to display them (see 2))

5) Non square pixels are not a new idea, see for example sensors of cameras.

Perhaps it is a clever algorithm, but the summary and article make it sound like he is re-inventing the pixel. I don't think that is correct. The example shown starts with a square-pixel image, and outputs another square-pixel image, just at a finer resolution with less blockiness.

While it does appear to work, it isn't clear to me how much information can be inferred correctly. Furthermore, cameras often don't use square sensors to begin with, so this isn't directly applicable to the raw image format.

The current problem is that on an LCD display, the red, green, and blue subpixels are adjacent to each other, not co-located. Coming up with a scheme to make all three colors appear to emanate from the exact same point would be a useful development.

We just got the first 300+ dpi screen in the iPhone 4, and obviously that will come to other devices. At that point, you don't care about the pixels anymore because you can't see them. What is onscreen does not appear to be made out of pixels; it appears to be made out of curves and lines. Most photographic prints are less than 300 dpi. Most people have never seen an image with higher resolution than an iPhone 4 screen. We don't talk about the shape of the pixels in a chemical photo print; we talk about the image itself.

Many image-enhancement techniques exist that do just this; this is not really new.
In fact, this shows that square pixels work just fine for transmitting the information, and that the image can be enhanced to a larger resolution by non-linear techniques that work better than simple [traditional] upsampling.

The printing industry has dealt with this sort of thing for a while. Read up on Stochastic Screening. Not quite the same, but it gives you a sense of the problem of mapping continuous-tone data onto a non-continuous space.

First, here's the actual paper [nist.gov], since it clarifies what exactly he's suggesting and doesn't seem to be linked anywhere in the article.

It's not a suggestion that we start using non-square pixels for displays or cameras or scanners or whatnot, though he's certainly not being very clear about anything, and the reporting on this is just making matters worse. What the paper proposes is a method where:

1) The image is split into 6x6 blocks.

2) For each block, you go over the four rotations of the two following two-section masks:

The triangular mask:

    ABBBBB
    AABBBB
    AAABBB
    AAAABB
    AAAAAB
    AAAAAA

The rectangular(ish) mask:

    BBBBBB
    BBBBBB
    BBBAAA
    AAAAAA
    AAAAAA
    AAAAAA

for a total of eight effective masks, and average the values under each section, resulting in two values, A and B.

3) For the mask and rotation that has the largest difference between A and B, you output the mask, the rotation, and the A and B values, resulting in 19 bits from a 6x6 (288-bit) block.
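Here's a rough numpy sketch of steps 1-3 as I read them (grayscale, my own code and naming; the paper may differ in details like quantization):

    import numpy as np

    def make_masks():
        # True = region A; four rotations of each base mask = 8 masks
        tri = np.tril(np.ones((6, 6), dtype=bool))   # triangular mask
        rect = np.zeros((6, 6), dtype=bool)
        rect.flat[15:] = True                        # rectangular(ish) mask
        return [np.rot90(m, k) for m in (tri, rect) for k in range(4)]

    MASKS = make_masks()

    def encode_block(block):
        # pick the mask maximising contrast between the two regions;
        # 3 bits of mask index + two 8-bit means = 19 bits per block
        i = max(range(8), key=lambda i: abs(block[MASKS[i]].mean()
                                            - block[~MASKS[i]].mean()))
        return i, block[MASKS[i]].mean(), block[~MASKS[i]].mean()

    def decode_block(i, a, b):
        return np.where(MASKS[i], a, b)

Even this toy version makes the complaint below visible: nothing in the selection step measures reconstruction error; it only measures contrast.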

Though he talks of non-square pixels and whatnot, it's really just a compression algorithm. A really stupid one. Basically it's a bad variation of vector quantization, with lots of baffling details. Why 6x6 blocks? Why those specific masks? Why are you maximizing contrast instead of minimizing error like any sane person would do, WHY? There's no rationale given for any of these choices, not theoretical, not empirical, not even subjective.

The same sort of rigor extends to his comparison, where instead of comparing his compression algorithm to, say, another compression algorithm, he compares it to the image apparently simply downscaled and then scaled back up -- and not even with a halfway decent resampling algorithm, but with nearest neighbour. Not to mention that the "non-square pixels" version has 2.375 times as many bits to work with. If he'd done a comparison to a reasonably modern compression algorithm like JPEG, the results would be much less favorable to him.
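(The 2.375 figure presumably falls out directly: 19 bits per 6x6 block, versus the 8 bits per block that an 8-bit image downscaled 6x in each direction effectively keeps; 19/8 = 2.375.)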

tl;dr Some old guy put together his My First Compression Algorithm kit and it's being treated like a revolution in graphics by ignorant reporters. Nothing to see here, move along.

I completely agree with you that everything in the paper seems to be pulled out of thin air... But I do see two reasons why his compression algorithm might be better than JPEG or other lossy codecs in some situations:

1) The decompression performs no arithmetic on the pixels, so you can perform gamma correction or color changes losslessly (as in a square-pixel image).

2) Aside from the choice of mask, the compression is entirely deterministic, which is a plus in scientific imaging: when you have a "triangular pixel" with value 200, you know that the average of that zone was exactly 200 (with JPEG, you can't know anything for sure, as the compressor may add artefacts or remove detail as it sees fit).

Why are you maximizing contrast instead of minimizing error like any sane person would do, WHY?

The algorithm he created looks a lot like HQX [wikipedia.org] which is used mostly to scale old video games. His algorithm seems generalized to work on high-color images while the HQX algorithms expect something closer to 16-color or 256-color images. HQX probably deals with dithering better.

Blatant as it may be, I've read the article three times now -- and Soilworker, you did well not to bother. I'm pretty sure the answer is not in there.

This doesn't seem to be about square pixels in terms of display technology (where hexagonal pixels may indeed be superior). It also doesn't seem to be about picture acquisition. On the face of it, it seems to be talking about mapping rudimentary shapes to pixels so that they conform to a most-likely contrast-matching scenario with regard to surrounding pixels. Which some other posters here already pointed out with posts about JPEG and the like -- but it's not really comparable to that either, not in technique and not in performance.

At best, as far as I can take away from it, it could be a different way to display an image when zoomed in / a technique that could be used when enlarging an image to provide greater apparent detail (although you wouldn't want to enlarge it - you'd want to store the masks found with the original image for display).

The results in the news blurb look pretty decent and if nothing else 'different' from other 'smart scaling' methods, so it's worth exploring. But what this has to do with square pixels as we're mostly familiar with them, I have no idea.

It is now apparent that for a comparable amount of information stored, a more complex algorithm (maybe even requiring N passes) could be employed to produce results that look better to the human eye [wired.com]. To me, this article seems to miss the beauty of keeping it simple and going with the square. I would also bet that all of his examples are done by starting from a square-pixel image. How would one scan an image in one pass with his newly suggested method? This might become a better standard, but I would wager it would make a lot of things computationally more expensive and make displaying the images more complex. Not to mention that manipulation of the image gets a bit trickier, and it probably throws a monkey wrench into a lot of our widely implemented compression technologies that already produce this sort of "creative blocks" of multiple pixels.

I'm not an expert in this field, and I find his further research neat and mildly innovative, but I would bet that when it comes down to weighing the practicality of implementation, squares will remain.

No, no... They didn't have color in the 1940s. Just look at the movies from back then...

Actually, you jest, but I remember the first time I saw footage from WWII that was in colour; I was stunned, because it was so vivid.

Yeah, I know there actually was color photography in the 1940s. I mean Wizard of Oz came out in the 1930s... I thought about whether to add a footnote indicating that the joke wasn't factually correct, but decided against it.

True. And you'll notice that all of the parts of the story that were filmed on Earth are in black and white. It's only after Dorothy drops through the wormhole and they start filming the parallel world called the Land of Oz that the film shows color.