Posted by ScuttleMonkey on Friday May 15, 2009 @04:17PM
from the onto-dewarping-brains-next dept.

Hugh Pickens writes "Patent 7,508,978, awarded to Google, shows how the company has already managed to scan more than 7 million books. Google's system uses two cameras and infrared light to automatically correct for the curvature of pages in a book. By constructing a 3D model of each page and then 'de-warping' it afterward, Google can present flat-looking pages online without having to slice books up or mash them onto a flatbed scanner. Stephen Shankland writes that the 'sophistication of the technology illustrates that would-be competitors who want to feature their own digitized libraries won't have a trivial time catching up to Google.' First, a book is placed on a flat surface, while above it, an infrared projector displays a special mazelike pattern onto the pages. Next, two infrared cameras photograph the infrared pattern from different perspectives. 'The images can be stereoscopically combined, using known stereoscopic techniques, to obtain a three-dimensional mapping of the pattern,' according to the patent. 'The pattern falls on the surface of (the) book, causing the three-dimensional mapping of the pattern to correspond to the three-dimensional surface of the page of the book.'"
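The stereoscopic step the patent summary describes (two cameras photographing one projected pattern) boils down to triangulating corresponding pattern points. Here is a toy sketch of that step; the focal length, baseline, and pixel coordinates are all made up for illustration and are not from the patent:

```python
import numpy as np

# Assumed rectified stereo pair: focal length f in pixels, baseline B in
# metres between the two infrared cameras. Both numbers are invented.
f, B = 1200.0, 0.05

# Columns at which the same three pattern features appear in each camera.
x_left = np.array([640.0, 650.0, 660.0])
x_right = np.array([560.0, 572.0, 586.0])

# Depth from disparity: Z = f * B / (x_left - x_right). One depth per
# pattern point; dense correspondences would give the full page surface.
disparity = x_left - x_right
Z = f * B / disparity
print(Z.round(3))  # nearer parts of the page show larger disparity
```

With the pattern giving you unambiguous correspondences, the per-point depths recover the 3D surface the maze-like pattern landed on.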

It's just wide tolerances. The whole UPC-scanning system was designed so that the output from the light return sensor could be read directly (ignoring some minor gain control/etc.) as a digital data stream, with the clock rate determined by the horizontal scan rate. There's no reason to do distortion correction because it's not reading an image in the first place, it's just reading a series of high/low signal returns as serial data.
I'm sure you could build a more complicated system that does 2-D or 3-D imaging and distortion correction, but it's way more work than is necessary to read a linear UPC.

I hate patents as much as anyone else, but:
1) This isn't so obvious, and requires some fairly complex math
2) It is pretty complex (in the way it functions), enough that I would actually consider this patent-worthy.

But, there is some "prior art" of such functions in the visible range for scanning bodies IIRC.

I believe this was meant to be funny, and I shall accept incoming whooshes of air with joy. Have at you.

note: I still hate patents though. I can't see why they would benefit from patenting this method...

I hate patents as much as anyone else, but:
1) This isn't so obvious, and requires some fairly complex math
2) It is pretty complex (in the way it functions), enough that I would actually consider this patent-worthy.

I would add that at least this patent is not solely a software patent; it has a hardware component.

You jest, but this technique *has* been around for years. I remember when digital cameras first became available there was a product that could perform a 3D scan by projecting a pattern onto the object and using an offset picture. I think the pattern came on a slide - that's how long ago it was! Here's a whole wikipedia page about the scanning technique: http://en.wikipedia.org/wiki/Structured_Light_3D_Scanner [wikipedia.org]

Anyway after reading the patent abstract, it isn't about the 3D scanning at all, it appears to be about an algorithm to find the fold once you've already got the point cloud. I would have thought that was fairly trivial. A possible approach would be to take the radon transform of the height map and find the smallest value that's roughly in the middle.

It certainly is mathematics, and it's not that hard to understand either. Basically, it is the mathematical equivalent of what a hard-field tomograph does.

Consider a function of two values and consider those values to be 2D coordinates. Consider also that the function is zero outside of a defined area.

Now consider that there are an infinite number of infinitely long straight lines passing through that area, and that each can be defined by two parameters: an angle, and an offset from the origin in the direction perpendicular to the line.

Along each of those lines an integral can be calculated. Those integrals form the radon transform of the function (with each integral being identified by the two parameters).

Not really that complicated; the trickiest bit is probably deciding how best to approximate the line integrals from your limited number of data points.
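The line-integral definition above is straightforward to approximate numerically: rotate the sampled function once per angle and sum along one axis, so each column sum stands in for one line integral. The sketch below also tries the fold-finding idea suggested upthread, on a made-up open-book height map (a valley running down the centre); none of this is from the patent:

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles):
    """Naive radon transform: rotate once per angle, then sum columns.

    Each column sum approximates the integral along one vertical line of
    the rotated image, i.e. one (angle, offset) parameter pair."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# Made-up open-book height map: pages rise away from a fold at the centre.
x = np.linspace(-1.0, 1.0, 101)
page = np.tile(np.abs(x) ** 0.5, (101, 1))   # valley (fold) at column 50

angles = np.arange(-10, 11)                  # search near-vertical lines only
sinogram = radon(page, angles)
a_idx, off = np.unravel_index(np.argmin(sinogram), sinogram.shape)
print(angles[a_idx], off)                    # fold line: angle 0, column 50
```

The smallest projection value picks out the line the fold lies along, since integrating along the fold crosses only low points of the surface.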

I almost feel bad. I know what a radon transform is and I've taken a class on inverse problems.

My point was just that the common view of what is mathematics is rather anemic and quick to give engineering credit to relatively simple ideas. I suspect that the patent office has similar fallacious thinking.

The Russians (IIRC) had a cute trick too. A tiny spy cam with two lights pointing down at the page. When the two dots were joined, the camera was at the right distance and the spy got a quality image of the page.

Why? Just as you said, they already have anti-copy paper. If you don't want someone to be able to copy your book, simply print using that (of course, that will cause your costs to skyrocket). It's not as if the IR block would prevent the copy, it'd just mean the copy looks like crap (thus potentially impacting your image as a publisher).

Then you just do phase-locked detection. In the IR, with current cheap detectors, you can modulate in the kHz range without any problem. I wouldn't be surprised if they do that now. In my lab we look for changes in an IR signal that are about 10^8 times smaller than the background IR radiation. It's not a hard problem to solve.

Two things. First, they could just use something else to accomplish the same thing. If you can read it, something else can as well. It may not be as fast, and it may take some time and money to develop and optimize, but that amount of time and money is probably pretty trivial to Google.

Second, Google doesn't care about any book that can do that at this time. They are going after old works currently, which aren't being produced by anyone anyway, so nothing they are going after right now is going to be affected by it.

How long before some particularly vengeful luddite publisher starts printing on treated paper stock that has an IR visible pattern, calculated to confuse these scanners, printed on it?

Before one does it? Not long. Before any significant amount of product is produced using it? Probably forever, on cost and particularly cost/benefit grounds.
Besides, if the protected product were particularly interesting to those wanting to scan it, they could almost certainly modify the scan system to accommodate it.

I have to hope that any publisher hip enough to read Slashdot for tech advice(rather than relying on glossy advertisements from "security" vendors in the latest issue of Monetizing The Everloving Fuck Out of Your Precious, Precious IP magazine) wouldn't do anything that stupid. I wouldn't bet on it, though.

With respect to the foolishness over "copy protection" it is interesting to consider the possible application of the old line "the worse, the better." [wikipedia.org] The idea is that, in order for a bad situation to change, it must get worse, so that the cost of tolerating it becomes unbearably high. As long as DRM and anti-copy paper, and macrovision and all the others cause relatively limited customer displeasure and support calls, there will be little incentive to change, and things will remain as they are. If you can drive the content guys to ever more intrusive measures, things might actually get bad enough to spur a blowback.

I can't find proof in a quick search, but I do remember others posting responses here recently (possibly Anonymous Cowards) to people mentioning the 20% time with things like (paraphrase) "that will be useful for Google". In other words, the implication (or at least my inference) was that while they are technically "non-Google", the intent was that eventually they would be Google projects or the projects would be killed off.

I think the strange appearance of the hands is due to the hand moving while being scanned. I remember, in high school, moving my hand inside a scanner while it was being scanned, causing all sorts of fun distortions: wavy fingers, extremely long fingers, etc.

There are scanners that flip pages themselves like this one:
http://www.youtube.com/watch?v=UyB5c3S4vzc&feature=related [youtube.com]
but I've seen somewhere (can't remember where though) a video of a scanner that was faster and didn't use vacuum to flip pages. It was quite a lot less noisy.

Pages lie different from the front to the back of the book, and books are bound differently. So you can't use a generic model and expect it to be accurate in most cases.

I actually think this is really cool because it seems to account for any scenario, including folded pages, I would assume. Although, I suppose that in extreme bends it might not be perfect, but certainly they just need to ensure that pages are adequately flat. It automates the entire process.

I wonder if they've built an automated page-turning mechanism; I would assume they have. Just drop in a book and let the machine go to town on it.

Building 3D computer models by stereoscopic analysis of projected light patterns is at least twenty years old. In fact, the summary mentions that they use an established technique.

As for your second comment... that's kind of my point. Since the technique is not new, the equipment is not new, what did google do that was new? Perhaps there is some actual invention in the process somewhere; but I don't have enough faith in the patent process to unquestioningly ASSUME that there is.

...that Google licenses this to scanner manufacturers and we see this at a consumer level at some point in the future? I know I'd pay good money for a book scanner that doesn't need to have a 'book edge' (which you already have to pay through the nose for)...

This is not about the imager per se. It is about the way they take images and post-process them afterwards. Basically, they take three pictures, one in visible light and two in infrared, and then use the two in infrared to create a stereoscopic image and correct the image in visible light so it is not warped. From the patent, it does look like the imager is a camera, and not a scanner, since the description talks about a book resting on a platform with cameras above it. I do notice the patent makes no mention of...
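To make the "correct the image so it is not warped" step concrete, here is a 1-D sketch of the idea; this is my own toy construction, not the patent's algorithm. Once the 3-D model gives you a height profile across the page, the true printed coordinate is arc length along the surface, so resampling the captured row at equal arc-length steps flattens it:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)        # camera pixel coordinate across page
h = 0.15 * np.sin(np.pi * x)          # hypothetical 1-D page height profile

# Arc length along the curved surface (trapezoidal integration of
# sqrt(1 + h'(x)^2) dx).
slope = np.gradient(h, x)
ds = np.sqrt(1.0 + slope ** 2)
arc = np.concatenate(([0.0],
                      np.cumsum((ds[1:] + ds[:-1]) / 2.0) * (x[1] - x[0])))

# The camera sees, at pixel x[i], the ink printed at arc position arc[i].
captured = np.sin(20 * arc)           # stand-in for one row of the image

# De-warp: resample the captured row at equal steps of arc length.
flat = np.linspace(0.0, arc[-1], x.size)
dewarped = np.interp(flat, arc, captured)
# dewarped now closely matches the evenly printed pattern sin(20 * flat)
```

The raw captured row is distorted wherever the page slopes, while the resampled one reads as if the page were pressed flat. The real system does this in 2-D over the full reconstructed surface.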

I don't see why this is such a showstopper for other book scanning projects. Right off the top of my head I can think of three methods of dewarping book scans that have nothing to do with Google's methods. While Google's method is definitely quite interesting and seems like a great solution, it is by no means whatsoever the only way of accomplishing this.

De-warping sounds useful, but there are problems that it probably won't solve --

Like the operator who scans a book page with his/her fingers or hand stuck between the page and the scanner glass. For example, the dreaded 'New York Hand' or its fingers can be seen occupying the place of part of the text or figures on many pages of books scanned for Google Books from the New York Public Library. On some pages, the impression of the fingers is clear enough to show the rings worn by the Hand that was doing the scanning. :(

Google should return to the open source community a decent OCR app and engine. Tesseract+OCRopus are just too little, and it's already too late.

Windows already has decent OCR abilities; any HP scanner comes with decent image-to-document software. It's a shame that Google, which has been built upon open source and has maybe the best OCR technology in the world, hasn't returned a competitive and free OCR solution for Linux.

Obviously it was worthy enough to be issued; but I don't know how worthy it is in the broader sense.

Notably, for instance, there has been a fair bit of interest, for some years, in using digital cameras in concert with projectors, either for automatic keystone/distortion correction, for projectors that aren't perfectly aligned with the projection surface, or for automatic coordination of multiple projectors illuminating the same surface, without laborious manual tiling adjustment. This is, in essence, an equivalent problem (inferring a surface's geometry based on pictures of a known image projected upon it).

The IEEE has held "Projector-Camera systems" workshops since 2003 [procams.org], and somebody was obviously working on it before that. I'm not saying that Google's patent falls into asshole troll territory or anything; but the notion of doing surface geometry inference based on known image projection isn't nearly as novel as it might seem.

This may be a projector thing, but they are doing some physical manipulation, so it would be pretty much appropriate for it to be patented. The whole thing is physically transformative. Meanwhile, if someone made their own version using something different, it too would be patentable as an improvement patent, which is how the patent system is supposed to work.

To be clear, I'm saying the system as a whole should be patentable (infrared), but not the software used to decode it.

It really bothers me that neither Rock Band nor Guitar Hero can auto-calibrate the audio lag using the microphone. There's absolutely no reason I can see that they can't "listen" for the calibration beeps with the mic to get a perfect reading.
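For what it's worth, the measurement the parent is asking for is essentially one cross-correlation. A toy sketch with a synthetic, noise-free capture (a real implementation would have to cope with room noise and the speaker's response):

```python
import numpy as np

fs = 44_100                                   # audio sample rate
t = np.arange(0.0, 0.05, 1.0 / fs)
beep = np.sin(2 * np.pi * 1000 * t) * np.hanning(t.size)  # known calibration beep

true_lag_ms = 35                              # made-up end-to-end latency
delay = int(fs * true_lag_ms / 1000)
mic = np.zeros(delay + beep.size)
mic[delay:] = beep                            # what the microphone captures

# Cross-correlate the capture against the known beep; the peak offset is
# the round-trip latency in samples.
corr = np.correlate(mic, beep, mode="full")
lag_samples = int(np.argmax(corr)) - (beep.size - 1)
print(lag_samples * 1000 / fs)                # close to 35 ms
```

The game could play the beep, record through the mic, run this once, and set the audio offset automatically instead of making the player tap along.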

This trick has been used for 20 years in astronomy. You shine a really powerful laser of known metrics into the sky and measure the atmospheric distortion suffered by the beam.

Then you take those numbers and calculate what it would take to even out the beam, and you feed THAT set of numbers to a telescope with adaptive optics which will then correct for the atmospheric distortion. Bingo, suddenly your telescope is able to take sharp images without having the air screw it up.

The technique is very effective and results in ground-based telescopes that rival anything the Hubble can do. Plus they are easier to fix.

I want to say this is called Guidestar but I am not sure.

Anyway the similarity to Google's process is simply that you shine a light or image of known value on something unknown and look at how the image now deviates from what you expect. A little math and suddenly you know exactly the shape of the unknown object. Brilliant.

It's simply called adaptive optics (AO). In AO, a guidestar is a natural isolated point-like star that is close to your science object (what you are trying to look at). If a laser is used to excite the sodium layer to create an artificial reference, it's called a "laser guidestar".

Anyway, this "trick" is completely different from adaptive optics in both the mathematics and implementation.

I was involved in evaluating rare books back around the turn of the century.

I can personally attest that representatives of online book search companies were attempting to buy up one of a kind pieces for destructive scanning.

There was one dealer in possession of a somewhat flawed, but well examined Shakespeare folio that had to put the kabosh on a reputation making deal because he found out the buyer was going to slice the piece out of its binding for scanning.

I turned down a much smaller offer on a much less significant, but still very cool, two hundred year old angler's guide (with hand colored plates and original binding) for the same reason.

Quality scans without destruction can only help raise the profile of rare books and the value they offer society - not simply for their content, but as tangible examples of the evolution of the art of communication.

Really? Structured light to find 3D geometry is old hat... the optical and signal processing part of book scanning seem pretty easy, making the mechanical part for page flipping robust seems a lot harder to me.

I kind of doubt the patent will stop any competitors. It should be trivial to achieve the same result with dozens of different methods.

I'm kind of surprised they used that method, in fact. There should have been several that allowed them to scan the books without even requiring each page to fully flip open and lay flat. With so many books to scan, speed must have been important.

I guess this method worked because the device was so cheap that they could just make a lot of scanners.

I am willing to bet that they do that with cheap books (ones they buy), but not with expensive ones (ones they borrow). One certainly can't remove the spines of books in libraries or other collections.

Keep in mind, the majority of the books they are scanning are old, out-of-print and copyright expired texts. They aren't something you can pop over to Amazon and order another one of. So the bulk ARE old and/or valuable.

Only if Google refused to license it. Google isn't Microsoft or Intel; I doubt they'd go that route.

In fact, since Google has paid for the innovation of this tech, including the R&D for it, patenting it and then allowing companies to license it reduces the barrier since companies that couldn't have paid for the research now have the technique available to them.

Cough, you don't have to. I can copy your book all god damn day long and have not violated your rights or the copyright code.
The moment I try to distribute the copies, then it's a copyright violation.

Be sure to check out the exclusive rights in copyrighted works [cornell.edu] before making blanket assertions on what is and is not legal under copyright law. The exclusive rights granted by copyright include both reproduction and distribution. There are lots of exceptions to these exclusive rights, but an interpretation that completely eviscerates the exclusive right to reproduce a work is not supported by the Copyright Act.