I'm still enjoying the phrase "largest gigapixel photograph". I'm not sure how it compares in size to all the regular gigapixel photographs. But no doubt it's much bigger than the smallest gigapixel photograph.

In other news, a ton of bricks actually does weigh more than a ton of feathers.

Oddly, if you fire up Google Earth you can zoom into Dresden town centre.
If you look very very carefully you can see a half-dressed student standing on a roof with a camera made from glitter glue, Pringle cans and Lego.

Large-resolution image taken with an 8x10 camera. A large format film camera (100+ year-old technology) can squeak out very high resolutions. Arguments abound as to the megapixel equivalent of film, but if a 35mm camera is about 20 megapixels, then by my calculations an 8x10 camera is about one regular old-fashioned gigapixel of resolution.
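The calculation is just area scaling. A quick sketch (the 20 MP figure for 35mm is the assumption from the comment above, not a settled number):

```python
# Back-of-the-envelope: scale 35mm film's assumed megapixel equivalent
# up to 8x10 large format, on the premise that resolution scales with area.
frame_35mm = 36 * 24                    # mm^2, standard 35mm frame
frame_8x10 = (8 * 25.4) * (10 * 25.4)   # mm^2, 8x10 inches converted to mm
area_ratio = frame_8x10 / frame_35mm
equiv_mp = 20 * area_ratio              # 20 MP is the assumed 35mm equivalent
print(f"area ratio: {area_ratio:.1f}x, ~{equiv_mp / 1000:.1f} gigapixels")
```

That comes out around 60x the area of a 35mm frame, i.e. roughly 1.2 gigapixels, which is where "about one gigapixel" comes from.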

One day I was sitting in the hills under the Hollywood sign. I took shots for a 360 degree view, and stitched them up at home. That was a trick, considering the resolution of the pictures, and the speed and memory available on PC's at the time. I never bothered to calculate out the megapixel size, but it would have been pretty big. I would have never considered it to be a huge shot. I considered it an interesting collage.

They actually captured the same two people twice! There's a grassy patch near the lower right of the image that contains two bright red flags. Zoom in on those, then pan up and to the right to the sidewalk. There's a column with ads on it and some people walking to the left and right of that. Two of them are clearly doubled. I hope they get to see themselves.

I don't care about the headline or the record. I think it's a neat image in its own right.

No, it hasn't. I dare bet large chunks of the oceans and poles aren't quite as detailed as some of the cities.

You probably can still find a big chunk detailed enough to beat the 26 gigapixel record, but the resolution of any chunk should be measured by the lowest resolution part and should be without any missing parts.

Increasingly large megapixel photos are an interesting thing though, but to me they are only interesting if the focus is small. Imagine you are a woman looking at dress photos online. The photos have such amazing detail that you can zoom in and see the weave of the fabric itself, the details of the patterns. Then imagine you're looking at a mate's car photos. You can zoom in and read the badges etc. On a photo of a forest you can zoom in and check out a bee landing on an interesting flower.

For whatever reason, "DSLR" has come to mean "a camera with a large, high-quality sensor". Frustrates the hell out of me too. One can most certainly attach a pound of prisms, mirrors, and mechanical levers to the sensor out of a cameraphone and have a "DSLR". At least Panasonic managed to buck the trend and make a large-sensor camera with interchangeable lenses and only digital preview.

If you look at the hills on the left side, the big white buildings are the Infineon fabs, and the now bankrupt Qimonda fab. Also, you can almost tell which part of Dresden was destroyed from the WWII bombings by the types of buildings that are there now. The apartment box-like buildings were built during the communist times after the war.

You can't take it in all at once on your puny computer, but there is a single full resolution image file somewhere that could conceivably be printed at 100% if there were a printer in existence that could print 30 meter wide rolls (or whatever it would take) or a monitor with enough pixels to display it. So, lacking those means of displaying the image, the next best thing is a zoomable image such as shown here.
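For a sense of how big that print would be, here's a rough estimate at 300 dpi. The aspect ratio is an assumption for illustration; the real panorama is much wider than it is tall:

```python
import math

# Rough print size for a 26-gigapixel image at 300 dpi.
# The 4:1 aspect ratio is assumed purely for illustration.
pixels = 26e9
dpi = 300
aspect = 4
height_px = math.sqrt(pixels / aspect)
width_px = height_px * aspect
width_m = width_px / dpi * 0.0254    # inches -> meters
height_m = height_px / dpi * 0.0254
print(f"~{width_m:.0f} m wide by ~{height_m:.0f} m tall at 300 dpi")
```

Under those assumptions you get a print on the order of 27 m wide, which is why something like a 30-meter roll would be needed.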

The size is dominated by the transistors; the photo-diode shares the same feature size as the transistors, since it's manufactured under the same process.

Moore's law applies.

I have printed out that last post of mine and am chewing on the paper as I type this. Interesting to note, though, is these [luminous-landscape.com] two [luminous-landscape.com] articles discussing the upper limits of pixel count due to diffraction. Looks like we're not gonna see a 26 GP camera after all, even with Moore's Law applying.

There is a 111 MP single sensor camera that just got installed on a telescope. There's not a whole lot of point though. It's easier, cheaper and more reliable to create a multichip camera like the 1.4 GP camera installed on one of the telescopes in Hawaii. It's still one camera though, and takes the whole 1.4 GP in one shot.

Also note that the denser pixels get, the less surface area you'll require for 26 GP... though I suspect there is a fundamental limit: the photo-diode will need to be larger than some function of the wavelength.
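One concrete version of that limit is diffraction: the lens can't resolve features smaller than the Airy disk, whose diameter is about 2.44 × wavelength × f-number. A quick sketch (wavelength and aperture are example values, not from the article):

```python
# Diffraction sets a floor on useful pixel size: the Airy disk diameter
# is roughly 2.44 * wavelength * f-number.
wavelength_um = 0.55   # green light, ~550 nm (assumed)
f_number = 8           # a typical landscape aperture (assumed)
airy_um = 2.44 * wavelength_um * f_number
print(f"Airy disk ~{airy_um:.1f} um across")
```

At f/8 in green light the Airy disk is around 10.7 µm, so pixels much smaller than that stop buying extra detail, which is the argument in the luminous-landscape articles above.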

Presumably, since these multiple sensor cameras are currently used in telescopes, they've either worked out how to deal with sensor edges or it isn't important. There need not necessarily be a gap at the edge of a sensor anyway. Just because it technically came off a different wafer doesn't mean you couldn't line the things up closely enough that it wouldn't matter.

Denser pixels are going to need less surface area... provided the surface area of your sensor doesn't change. Astronomical cameras don't exact...

Not in astronomy. Film and digital sensors respond to light in different ways. Digital sensors are MUCH more sensitive than film is, but much of that sensitivity is unusable in a regular camera because digital sensors also experience much higher levels of noise than film does.

So if you're shooting regular landscape, portrait, whatever, you might well be right. But in astronomy that extra sensitivity actually buys you something.

Most astronomical pictures you see are the result of long exposures, from seconds to hours.

What you are talking about is reciprocity failure, where the film stops responding in a linear fashion to increases in exposure. However, if for some strange reason I wanted to make a wall-sized print of a city, the first thing I would grab is a view camera and a few loaded film holders.

It would be an impressive achievement to note the largest picture taken at one time with a camera.
However, stitching together 1655 photos doesn't seem nearly as interesting a feat.
If that qualifies as a record, then just what "total resolution" does a global satellite view like Google Maps have?

Stitching many images to form one big picture is challenging in many ways. First you need the camera and lens to capture enough detail: with a 400mm lens, it took a 21MP camera to get that much data, and if you've ever tried to shoot a crisp 21MP picture at 400mm, you know that even just one of these 1655 photos is an achievement. Then you need the hardware to shoot these pictures in quick succession: the photoshoot took them three hours. During that time, the sun moves, shadows move, and the color of the sky changes.
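For a sense of scale, here's a rough estimate of how many frames such a sweep needs. The overlap fraction and vertical coverage are my assumptions, not figures from the shoot, so it won't land exactly on 1655:

```python
import math

# Rough frame count for a wide panorama with a 400mm lens on a
# full-frame (36x24mm) body. Overlap and vertical sweep are assumed.
sensor_w, sensor_h, focal = 36.0, 24.0, 400.0   # mm
hfov = math.degrees(2 * math.atan(sensor_w / (2 * focal)))  # ~5.2 deg
vfov = math.degrees(2 * math.atan(sensor_h / (2 * focal)))  # ~3.4 deg
overlap = 0.3                                   # assumed stitching overlap
cols = math.ceil(360 / (hfov * (1 - overlap)))  # full 360-degree sweep
rows = math.ceil(50 / (vfov * (1 - overlap)))   # assumed ~50 deg vertical
print(f"{cols} columns x {rows} rows ~ {cols * rows} frames")
```

Even with generous assumptions you land in the low thousands of frames, which makes the three-hour shoot (and the moving light) unavoidable.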

Due to the way these images are created, they don't work at all for even moderately dynamic views; they're always full of artifacts from the changing light, they usually look quite dull when zoomed out, and the interesting bits are lost in a vast desert of pointless detail.

Pointless detail?

Detail was precisely the point of the image.

Further, simply because you have no immediate use for this detail does not mean it's pointless, and certainly not a desert. It's all still there when you zoom back in.

The detail on the facade of a building does not cease to exist just because you get in your car and drive a mile away.

This is an attempt to record that. To have the naked eye view and the telescopic view in one set of images.

The practical applications of this seem rich, if we can just get past our little self centered world view that suggests just because you can not experience every level of detail simultaneously, that, therefore none of it is warranted.

On the contrary. I found something of interest just about anywhere I zoomed in. Odd to see so many American flags in a German city. The rotting roofs of abandoned buildings. The man taking a leak under the bridge, the high percentage of missing hub caps, the number of replicated people appearing multiple times (proving the shots were taken from left to right), the time difference on the clocks.

But the girl riding her bike on the bridge has gained a twin, and a couple cars in the parking lot seem to have lost their rear ends.

Serial imaging leads to anomalies. Simultaneous imaging would be more impressive. I'm not as concerned that there are multiple sensors involved as I am that the same sensor was used serially and with enough of an interval that a person could plod along on a bike for 200 feet.

It's not just photographs. This is a problem generally whenever a medium is applied to a primarily technical aim (e.g. breaking a record) vs. an aesthetic one. The best example of this I've witnessed was during my freshman year of college, when a music department Prof. had the class listen to the first public recording of tape loop reverb. IIRC, it came out of MIT. The recording was performed on the recorder (the woodwind instrument) by the then-current department chair.

Doesn't using a building as a camera pretty much limit your choice of subject material? There just might be a reason why portable cameras were invented! (Unless, of course, you can get a large number of people to pose in front of the building.)

I remember as a child, the sun shining through a hole in the garage door onto the freezer door created a camera obscura, although it really only displayed a silhouette of the trees. And of course, the image was upside down.

In a related note, I once accidentally converted my friend's dorm room into a pinhole camera. We were lying there watching TV, and after we turned the TV off I was staring at the wall when I realized that the weird colors moving on the wall were a projection of what was going on outside, just upside down. (Small hole in the blanket over the window, as it turns out.)

Points for building such a huge camera obscura. Points off for whatever issues made a photograph that huge so incredibly noisy. I suspect the exposure was too short since a pinhole camera of that size would have to be open for weeks to gather enough light to cover the canvas. Possibly light leakage from unintentional pinholes was also a problem. A photo that large should be unbelievably rich in detail or there's no point in making it, except to photograph the process of making a photograph that large.

Ditto that. I read the first few sentences without a problem, until I hit the part where they talk about pixels (picture elements). I couldn't figure out why the grammar and parentheses were that screwed up... until I accidentally moused over a sentence and got a Google pop-up asking me to improve the translation. Only then did I realize I was looking at the Google Translate version of the actual German page.

It is a pity they picked the afternoon to shoot this photo. As a result the most beautiful part of the city, the historic center, is in deep shadow. With so much work put into this, one would think image aesthetics would also be a consideration besides just technological accomplishment.

No, somebody moved their car while they were taking the pictures. When you stitch together "picture with car" and "picture with no car", that's what you get.

-- Invisibility cloak?!? Ha! I'll believe it when I see it!

Pretty sure your sig has the answer. They turned off the cloak to park, so nobody opened a door on them. That, or they're turning it on as they back out; it's hard to tell with this few pixels per meter.

This functionality and resolution is easy to get and can be obtained from a normal single photo, not 1655.
All you need is a standard "enhance" filter found in any movie or TV show worth its salt. You zoom in, everything is blurry, enhance, it gets clear again, and repeat ad nauseam, or at least until the scientists in your audience are nauseated.

Not 26 gigabytes, 26 gigapixels. The tiles for the image are probably a few gigabytes total, but you will only be served the tiles for the magnification level and area you are looking at, cf. Google/Bing Maps.
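The tile arithmetic is easy to sketch. The image dimensions below are assumed for illustration (just chosen to multiply out to roughly 26 GP); the scheme is the standard 256-pixel tile pyramid used by map sites:

```python
import math

# A 26-gigapixel image served as a pyramid of 256x256 tiles:
# the viewer only fetches the tiles covering the visible area.
# Dimensions are assumed; the real panorama is wider than tall.
width_px, height_px = 297_500, 87_500   # ~26 gigapixels
tile = 256
levels = math.ceil(math.log2(max(width_px, height_px) / tile)) + 1
full_res_tiles = math.ceil(width_px / tile) * math.ceil(height_px / tile)
total_tiles = round(full_res_tiles * 4 / 3)  # geometric series over levels
print(f"{levels} zoom levels, ~{full_res_tiles} tiles at full resolution, "
      f"~{total_tiles} tiles total")
```

Each zoom level halves both dimensions, so the whole pyramid is only about a third larger than the full-resolution layer alone, and any one screenful needs just a handful of tiles.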