Posted
by
timothy
on Monday March 14, 2005 @02:00PM
from the only-a-child-can-do-it dept.

Guspaz writes "Up until now, colorizing a video or image has been a painstaking and mostly manual task. However, researchers in Israel have come up with a new way of colorizing images just by making a few scribbles. The technique works on the premise that 'neighboring pixels in space-time that have similar intensities should have similar colors,' and also allows colorization of videos by 'marking' about one in ten frames."

Back when the Voyager probes were flying by planets, I recall reading about how the cameras worked. From what I remember, they actually captured images in black and white; they could detect much more "color" depth than color cameras could (or can?). The scientists would then process the pictures to colorize them: you identify one area whose color you know, and the algorithm maps the rest of the billion-odd shades of gray into a color mapping for people to view. So why not identify a given gray shade as the color red, so that anywhere you see that shade, it becomes red? Repeat for each part of the color spectrum, or have the algorithm adjust which red hue corresponds to which gray hue. It appears that is what the scribbles are doing, which is quite clever, and the algorithm doesn't have to work (guess) so much.
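The gray-shade-to-color mapping described above can be sketched as a simple lookup table. This is only an illustration of the idea, not the Voyager pipeline or the scribble algorithm: the anchor intensities and colors below are made-up calibration points, and everything between them is linearly interpolated.

```python
import numpy as np

def build_palette(anchors):
    """Build a 256-entry gray -> RGB lookup table from a few known
    (intensity, color) pairs, interpolating linearly between them."""
    levels = sorted(anchors)
    lut = np.zeros((256, 3))
    for channel in range(3):
        xs = levels
        ys = [anchors[g][channel] for g in levels]
        lut[:, channel] = np.interp(np.arange(256), xs, ys)
    return lut.astype(np.uint8)

def colorize(gray, lut):
    """Map every pixel of a grayscale image through the palette."""
    return lut[gray]

# Hypothetical calibration: say we "know" intensity 40 is dark red and
# intensity 200 is pale yellow; every other shade is interpolated.
anchors = {0: (0, 0, 0), 40: (120, 20, 20),
           200: (230, 220, 150), 255: (255, 255, 255)}
lut = build_palette(anchors)
gray = np.array([[40, 120], [200, 0]], dtype=np.uint8)
colored = colorize(gray, lut)  # shape (2, 2, 3)
```

The scribbles play the role of the anchor points here, except that the real algorithm also uses spatial position, not just raw intensity.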

The problem with doing this is that, for any given camera, there will be a band of RGB color combinations that produce the same luminosity, so a single camera does not provide enough information to produce a full-color image. It requires several cameras, each filtered to a different spectral range, to be able to produce a full-color image, unless you know in advance that your image is monochrome.

No kidding. Even if you are not doing colorization, the boundary detection algorithm he is using kicks ass over the "magic wand" tools in both Photoshop and GIMP. Perhaps it's the fact that it's doing several "magic wands" at once, and boundaries are determined by what matches best, rather than just "does this match well enough".

I'm not sure about that. I'm taking a wild guess, but I believe they simply start propagating outward from all the scribbles, so that as the growth fronts of two colors approach each other, the stronger one wins out. The wand tool in Photoshop starts only from the point you click (rather than the 10+ scribbles in this algorithm); there are no competing areas of propagation.
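That "competing fronts" guess can be sketched as a multi-source, Dijkstra-style propagation: every scribbled pixel seeds a front, stepping to a neighbor costs the intensity difference, and each pixel is claimed by whichever color reaches it most cheaply. This is only a plausible toy model of the guess above, not the paper's actual algorithm (which solves an optimization problem instead).

```python
import heapq

def propagate(gray, scribbles):
    """Grow all scribbles at once; a pixel belongs to the color
    that reaches it at the lowest cumulative intensity cost.

    gray:      2-D list of intensities
    scribbles: {(row, col): color_label}
    """
    h, w = len(gray), len(gray[0])
    owner = {}
    pq = [(0, rc, color) for rc, color in scribbles.items()]
    heapq.heapify(pq)
    while pq:
        cost, (r, c), color = heapq.heappop(pq)
        if (r, c) in owner:
            continue          # already claimed by a cheaper front
        owner[(r, c)] = color
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in owner:
                step = abs(gray[nr][nc] - gray[r][c])
                heapq.heappush(pq, (cost + step, (nr, nc), color))
    return owner

gray = [[10, 10, 200, 200],
        [10, 10, 200, 200]]
labels = propagate(gray, {(0, 0): "red", (0, 3): "blue"})
```

With one red scribble on the dark patch and one blue scribble on the bright patch, the fronts meet at the intensity edge, and each half ends up with its own color.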

Create a magic wand tool that requires multiple clicks on the various regions of the image and you'd have pretty good results.

This is an image processing technique known as segmentation, which is an active area of research. Combine it with "texture classification" and you can easily break a scene up into regions of distinct visual appearance.

In the Gimp, you can change the sensitivity. When you click on the Magic Wand ("fuzzy select" they call it), you will see a slider marked "Threshold". The larger this number, the more forgiving the fuzzy select is.
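The effect of that Threshold slider can be sketched as a flood fill that only grows into neighbors whose intensity is within the threshold of the clicked pixel. This is a rough approximation of how a fuzzy select behaves, not the GIMP's actual implementation (which also handles multiple channels and antialiasing).

```python
from collections import deque

def fuzzy_select(gray, seed, threshold):
    """Grow a selection from the clicked pixel into 4-connected
    neighbors whose intensity is within `threshold` of the seed."""
    h, w = len(gray), len(gray[0])
    base = gray[seed[0]][seed[1]]
    selected = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in selected
                    and abs(gray[nr][nc] - base) <= threshold):
                selected.add((nr, nc))
                queue.append((nr, nc))
    return selected

gray = [[10, 12, 90],
        [11, 13, 95]]
small = fuzzy_select(gray, (0, 0), 5)    # just the low-intensity patch
big   = fuzzy_select(gray, (0, 0), 100)  # everything
```

Raising the threshold lets the selection leak across weaker edges, which is exactly the "more forgiving" behavior described above.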

From looking at the before and after images, this technique looks pretty cool and will probably have applications for recoloring images that are already in color. For instance, the image where he recolors the fabric on the chair.

I agree, this does look cool, but there are already colorization tools and Photoshop filters out there - how does this stack up against them? The article says these are "tedious, time-consuming, and expensive tasks", but does not mention the speed of their own scribble method. Using older colorization methods may be preferable if time is an issue.

I'm painting the upstairs of our house and can't get inspired for anything.

Being younger, we went with "different" color schemes in our house: orange, grey, shit brown (it all really fits, too). But the upstairs is still *blah*, and I can't imagine any color there other than what's already there. I've got paint chips taped up here and there, and something like this would be nice to have at home (far off, I know).

Taking a picture then scribbling on it would be a nice way of previewing paints. In fact I suggest here (in the op

The obvious extension of this is to run software which can identify objects in film and track their motion. Then you apply colour to the 3D digital copy of the item, implant it back into the film and have the original shading information from the B&W film cast on top of the coloured, reimplanted, object.

This doesn't stand up, though, as colorized films have failed big time. They were a fad for a while, but it just petered out.

But I have a big problem with colorizing films anyway. Just because you CAN do something doesn't mean it should be done. The weak, weak argument that "the kids won't watch it" fell on its ass, as the kids didn't watch the colorized crap either! Not to mention that at the time colorizing was huge, music videos were all the rage... and about half of them were in B&W!

No one is saying this technique is going to be used solely for coloring video that the artist didn't want colored. But if you can use this technique for art's sake, why not?

Sky Captain and the World of Tomorrow was shot in B&W and then colorized, presumably to give it a "retro-futuristic" look. The device works rather well in the context of the film. I was halfway through the film before I realized what they must have done to give it that otherworldly look.

"Now it's even easier for corporations re-releasing films to completely destroy the original beauty of a film by adding unnatural and unnecessary color!
Coming soon, new dubbing techniques will allow easy substitution of the original actors' voices and dialogue with trite teen-angst to appeal to younger generations."

Oh come off it you load of flamebait. Nobody is threatening your precious black and white films. If people want to watch the originals, they will. If they want to watch a new colorized vers

Ruin them? :) A lot of the appeal of older B&W movies is the fact that they aren't in color. You get a much broader range of contrast when something is shot on B&W film than from a color image that has been desaturated.

If you meant older color movies which have degraded, then I agree. This seems like a very useful technique for restoring the original vibrancy of colors to films whose media hasn't stood the test of time.

I don't necessarily believe that old black and white movies are good BECAUSE they are black and white. Granted, a lot of "colorized" movies look like crap, and I'll also grant you that a lot of black and white movies are good. I think that the correlation between the two is probably imagined. Colorizing a good movie doesn't necessarily lessen the movie, and can add considerably to it I'd say. The act of adding color (if done well), by itself, is not going to ruin the movie, in my opinion. Adding color,

Well, to be fair it's very possible the director might have made different cinematographic choices if he'd had color to work with.

On the other hand, in cases where the director is DEAD... chances are he either doesn't care anymore or has bigger issues to worry about.

From a historical perspective I do think it's a shame if the original versions are lost or suppressed, but otherwise the result of colorization or anything else deserves to be judged on its own merits. Not whether it might offend dead people.

I suppose, then, you wouldn't mind if Michael Jackson decided to add new backing vocals and some kickin' breaks to the original Beatles collection, and re-released them? He bought the rights, after all.

I meant black and white movies. I'm not saying we should colorize them and then burn the originals though, which is how you appear to be interpreting it. I'm saying that providing the option of watching an old movie in color may revitalize many old movies and television shows, allowing them to be rehashed and rerun for more corporate profit.

You don't think hollywood would want to re-release older movies or television shows to a younger generation to milk the franchise one more time? The content has already been made, it just needs to be brought up to speed to be appealing to the youth of today.

I thought the usual formula of bringing old movies up to speed and *trying* to appeal to the youth of today was to add Ben Affleck and re-shoot. Of course, shoot Ben Affleck and re-add might be preferable to most.

I wouldn't care to watch a classic old movie with added color. Doubtless, in many cases the director would have used color if he could have, but the movie he

I read about it earlier this week. It looks cool, and the colorization looks nice, but with so few samples it's hard to tell whether it will work on a wide range of inputs, or only on ones with contrast ratios like those shown.

This work is very similar to some work presented at last year's SIGGRAPH using graph cut optimization, titled Interactive Digital Photomontage [washington.edu], by some researchers at the University of Washington. This stuff is really cool and has applications outside of just re-coloring black and white. For example, compositors in the film industry adjust the color composition of scenes that were filmed during the day to look like they were filmed at night. Sometimes they just need to tweak the color because the art director isn't happy with it. Other times it's because they introduced CG elements into live action scenes and they don't quite match. If they can tweak those colors interactively, without authoring masks, it is faster than re-rendering the scene, and that saves money.

This stuff is really cool and has applications outside of just re-coloring black and white.

Industry applications are interesting, but nothing new -- the industry has been using this kind of technique for a long time, even when it was more labor-intensive, because it can afford to.

The REAL impact of this technology will come when you see it migrate into new versions of iPhoto and Photoshop Elements. In Photoshop, recoloring a part of a photo is relatively easy, but it still involves a mildly complicated process of

I'm not sure if this was intended to sound like some groundbreaking new technique, but it really isn't. I am a master's Electrical Engineering student and am currently taking an Image Processing class. Using neighboring pixels to reconstruct an image is a VERY VERY common task; in fact, it's almost the only way to do it. How else are you supposed to guess the colors (or what a pixel is *supposed* to be) without knowing what's around it? It's obvious that the highest correlation will be between the near

I think the "breakthrough" in this process is that it works over a series of frames automatically, rather than requiring each frame to be manipulated. It was my uneducated understanding that colorization tended to be a frame-by-frame process.

If this can cut the work down to a tenth of normal, it becomes plausible for the general public. While I'm no budding Spielberg, I know a lot of people who might want to touch up the color quality of their wedding videos.

The site is slashdotted so I cannot read it, but I wonder if something akin to this could be used for compressing motion video. For example, the intensity is encoded with current techniques, but instead of the color being encoded at a lower resolution, only a very small number of colored points are encoded. Then during decoding, the decoder uses an error function, the intensity, and the time domain of previous and future frames to 'fill' the colors out.

Take a grayscale image and you already have the brightness (value) for each pixel. All you do is add the hue component based on the color "scribbled" by the user, and stop filling with that color when you hit something that marks a definitive boundary.

It's not that simple. The problem is: how do you define a "definitive boundary"? And even if you do what you suggested, you are likely to get rather flat-shaded images. What these folks do is actually solve an optimization problem, making use of the fact that neighboring pixels of similar intensity should have similar color. It is kind of like a fancy flood-fill algorithm, but applied to a new area, and that's what makes it novel!
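The optimization idea can be sketched as a linear system: each unconstrained pixel's chroma value should equal a weighted average of its neighbors' values, with weights large where intensities are similar, and scribbled pixels held fixed. This is only a minimal sketch of that principle; the weight function (a Gaussian on intensity difference), the `sigma` parameter, and the dense solver below are my assumptions, and the paper's actual weights and solver differ.

```python
import numpy as np

def colorize_channel(gray, scribbles, sigma=10.0):
    """Solve for one chroma channel: U(i) = sum_j w_ij * U(j) for
    unconstrained pixels, U(i) = scribble value otherwise.
    Dense solve; fine for tiny images only."""
    h, w = gray.shape
    n = h * w
    idx = lambda r, c: r * w + c
    A = np.zeros((n, n))
    b = np.zeros(n)
    for r in range(h):
        for c in range(w):
            i = idx(r, c)
            if (r, c) in scribbles:        # hard constraint from a scribble
                A[i, i] = 1.0
                b[i] = scribbles[(r, c)]
                continue
            nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < h and 0 <= c + dc < w]
            wts = np.array([np.exp(-(gray[r, c] - gray[nr, nc]) ** 2 / (2 * sigma ** 2))
                            for nr, nc in nbrs])
            wts /= wts.sum()
            A[i, i] = 1.0
            for (nr, nc), wt in zip(nbrs, wts):
                A[i, idx(nr, nc)] = -wt    # U(i) - sum w*U(nbr) = 0
    return np.linalg.solve(A, b).reshape(h, w)

gray = np.array([[10., 10., 200., 200.],
                 [10., 10., 200., 200.]])
# scribble chroma 1.0 on the dark region, 0.0 on the bright one
u = colorize_channel(gray, {(0, 0): 1.0, (0, 3): 0.0})
```

Because the intensity edge makes the cross-edge weights tiny, the chroma stays near 1.0 over the whole dark region and near 0.0 over the bright one, which is exactly the "flood fill that knows where to stop" behavior, and it produces smooth gradients instead of flat shading.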

You b/w film purists. If all you can see is a threat to your bizarre, luddite idea of what film should be, you need to get your heads checked, or at least you need to listen to your inner geek. Stop using these folks' achievement as an opportunity for chest-thumping.

The idea that one could color-correct video with a few strokes from MSPaint is staggering. Imagine if one could do this to color video, in real time... you could color-highlight an object and the computer could follow it without sensors or other pre-implanted devices, and that's not even a particularly original idea. This is awesome technology with applications probably well beyond what we see here.

You b/w film purists. If all you can see is a threat to your bizarre, luddite idea of what film should be, you need to get your heads checked, or at least you need to listen to your inner geek.

I see NOTHING in the original post that mentions black & white films. It might be implied, but even then I don't see any mention of how black & white is better than color, or of how colorizing an old film bastardizes that work. Some people like to see films in their ori

If you load up the page [nyud.net] and watch the very last video, you can see a slight artifact in the reflection of the hanging stuffed animal: little spots of orange color, like you would get out of the AirCan tool in MSPaint.

I have a feeling this could be corrected with another scribble or two. Really a stunning piece of work. Very cool.

I dabble more in the area of 3d art, but at times I've lifted a pencil and come up with some decent b&w sketches. Pencil shading is easy but sometimes getting the colours just right is more difficult than you might think.

When I look at a lot of B&W webcomics, I can see that they'd look better in colour (especially the ones where the artist occasionally does vibrant colour cells but usually doesn't have time). This could change that, though... want to see what your character would look like with

Very cool. For what it's worth, and you might already realize this, I believe you can scribble white on areas you don't want re-colored. So it'd be easier to change just your shirt color and the birds in the 2nd photo.

I could see this working best as a "realtime" colour filter, especially if you're using a pen or something similar. Scratch near a border and view the result... if it goes a bit beyond where you want it, scratch on the other side of the border; if it's not quite enough, lengthen your scratch.

I wonder how much CPU power is required. If you could do this in real time, or close to it, it would be quite awesome, but having to make your scratches, click "apply filter", and then wait 30 seconds would not be nearly as useful or efficient.

For example, on Farscape, given Virginia Hey's problems with makeup and contact lenses... heck, any of these humanoids-with-funny-skin-color shows would benefit from not having to put in the hours upon hours of makeup. Instead, we'd see hours upon hours of post-production...

In the days when colorized videos of black-and-white films were common, I watched a few. The so-called "colorization" had some very serious problems, and I wonder whether this new method addresses them.

The problems tended to be in the background. They probably thought people's attention would stay on the foreground, but like many things in film, I think you notice them subconsciously. Either the background is out of focus, in which case there are no sharp edges for the colorization to work with, or it contains a basically infinite quantity of detail as it recedes farther and farther away. Either way, it was extremely common to see uncolored areas in the background.

It was fairly common to see black-and-white paintings hanging on walls, for example. The walls would be some fairly uniform wash of plausible wall color, but nobody was going to take the time to handcolor the paintings hanging on them.

A similar problem concerned scenes with machinery in them, or anything with lots of complex, detailed motion (so that successive frames weren't similar). Thus, you'd see black-and-white printing presses operating in a colorized newspaper building...

In addition, the fact that the colorized faces, for example, were a uniformly colored wash, rather than varying in color as well as brightness, created a subtle kind of phoniness. To me, the result was the conveyance of a sort of emotional coldness. The colorized movies looked colored, but they didn't feel colored.

The exact opposite of the kind of lift you couldn't help feeling in the fifties when you saw a Technicolor spectacular, in the days when "Technicolor" meant that, by golly, you were watching genuine dye-imbibition prints made from real color separations. Sweet as candy, but irresistible. (The effect does come through in the best DVD restorations.)

" The technique works on the premise that 'neighboring pixels in space-time that have similar intensities should have similar colors"

Interestingly, the retina exploits that same property of natural scenes to compress images. This correlation between luminance and color is an opportunity to throw out redundant information. The eye multiplexes color and luminance information over a single channel, transmitting luminance while discarding color at high spatial frequencies, and transmitting color while discarding luminance at low spatial frequencies. First reported by C.R. Ingling, color/luminance multiplexing is an inherent property of the linear color-opponent center-surround receptive field. For a good explication of the subject, see:

Abstract: Analysis of the simple-opponent r-g receptive field of the X-channel shows that it is tuned to both high and low temporal frequencies, high and low spatial frequencies, and that its spectral sensitivity is both chromatic and achromatic.

Some folks seem to be excited (or angry) about the possibility of coloring B&W movies with this technique. Forget realistic coloring; this looks amazing for artistic recoloring.

Go take a look at the "recoloring examples" in the coral cache. Also look at what a slashdotter did [slashdot.org] with the code. Photographers, designers and painters could do neat things with a filter like this in Gimp...

No. Fill just goes until it meets a boundary; this colorization is a lot smarter than that. It appears to detect boundaries from sudden changes in the color temperature of the pixels, so it can make an educated guess about how far to color and when to stop. You can then refine this by putting in more than one input for the colors you want to change. This effect is really quite amazing. Scroll down and look at the GIF video of the birthday party. JUST AMAZING.

Intensity actually takes up most of the bandwidth of an MPEG stream, because human eyesight notices changes in luminance more than changes in chroma. The chroma channels are compressed *extremely* heavily compared to the luma channel, and are actually stored at a lower resolution.
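The "lower resolution" part refers to chroma subsampling. A rough sketch of the common 4:2:0 scheme (my own illustration, not actual codec code): luma stays at full resolution while each 2x2 block of the chroma planes is averaged down to a single sample, a 4x reduction in chroma data before any entropy coding even happens.

```python
import numpy as np

def subsample_420(y, cb, cr):
    """4:2:0-style subsampling: full-resolution luma (Y), chroma
    planes (Cb, Cr) averaged down by 2 in each dimension."""
    def down2(plane):
        h, w = plane.shape
        # Average each 2x2 block into one sample.
        return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, down2(cb), down2(cr)

y  = np.arange(16.0).reshape(4, 4)   # toy 4x4 luma plane
cb = np.full((4, 4), 128.0)          # flat chroma planes
cr = np.full((4, 4), 64.0)
y2, cb2, cr2 = subsample_420(y, cb, cr)
```

For a 4x4 frame this keeps 16 luma samples but only 4 samples per chroma plane, which is why the poster above is right that the luma channel dominates the bitrate.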

And I'm guessing that none of the three of us knows all that much about the details of video compression. The "Moving Picture Experts Group", or whoever, are probably smart people. They already compress video by only noting changes between frames and things like that. I'm going to go out on a limb and suggest that their methods may already sort of make use of this technique.

Except that grouping pixels in fours is not necessarily a good approach. For one thing, it may carry far more color information than is really necessary; for another, two adjacent pixels may have significantly different colors. Carefully designed "scribbles" of color could very well take up less space and give better quality.

I'm sure there are good reasons for the JPEG/MPEG method, and I'd be a bit surprised if the groups in question hadn't thought of this possibility, but I still think it should theoretically gi

If you count audio, then you might as well count it twice for stereo and claim that video is 5-dimensional. Or even more, if you have surround sound. Might as well multiply that by two, because most people have two eyes. Oh, don't forget the two ears. What are we up to now? I lost count.