Where other Canon cameras tend to come apart in modules (you can take off the back, or take off the front, etc.), the 6D was a bit more interconnected. To get the back off required removing the sides and a bit of the bottom, for example. A bit of a pain for the exploring types, but I would imagine it also gives more structural support.

The body is basically plastic, but like most modern plastics it’s thick and solid. Never a thought that a screw was going to strip out during disassembly. Anyway, after a bit the back was off, and it looks, from the inside, pretty similar to all the other Canon backs.

Augmented reality startup Magic Leap is raising upwards of $1 billion in new venture capital funding, according to a Delaware regulatory filing unearthed by CB Insights. It would be Series D stock sold at $27 per share, which is a 17.2% bump from Series C shares issued in the summer of 2016.

Bottom line: Magic Leap still hasn't come out with a commercial product, having repeatedly missed expected release dates. But investors must still like what they see down in Ft. Lauderdale, given that they keep plugging in more money at increased valuations.

Digging in: Multiple sources tell Axios that the deal is closed, although we do not know exactly how much was raised. The Delaware filing is only a stock authorization, which means the Florida-based company may actually raise less. Bloomberg had reported last month that Magic Leap was raising $500 million at around a $5.5 billion pre-money valuation, with new investors expected to include Singapore's Temasek Holdings. One source suggests the final numbers should be close to what Bloomberg reported.

Philippe J DEWOST's insight:

This amazing company is not located in Silicon Valley and has not released any product in its first 7 years of existence, yet it just raised north of $1Bn in a Series D round.

After roughly two years of court battles, the groundbreaking lawsuit asking a U.S. federal court to declare Naruto—a free-living crested macaque—the copyright owner of the internationally famous “monkey selfie” photographs has been settled.

PETA; photographer David Slater; his company, Wildlife Personalities, Ltd.; and self-publishing platform Blurb, Inc., have reached a settlement of the “monkey selfie” litigation. As a part of the arrangement, Slater has agreed to donate 25 percent of any future revenue derived from using or selling the monkey selfies to charities that protect the habitat of Naruto and other crested macaques in Indonesia.

Philippe J DEWOST's insight:

The picture was taken in 2011 by "Naruto," a wild macaque that pressed the shutter on Slater’s camera. Did the primate own the rights to the picture? In a tentative opinion issued last year, a district judge said there was "no indication" that the U.S. Copyright Act extended to animals. Slater, meanwhile, argued that the UK copyright license he obtained for the picture should be valid worldwide.

Sadly, we will never know what "Naruto" thinks of human copyright laws...

If ever there was a sport that required rapid fire photography, Formula One racing is it. Which makes what photographer Joshua Paul does even more fascinating, because instead of using top-of-the-range cameras to capture the fast-paced sport, Paul chooses to take his shots using a 104-year-old Graflex 4×5 view camera.

The photographer clearly has an incredible eye for detail, because unlike modern cameras that can take as many as 20 frames per second, his 1913 Graflex can only take 20 pictures in total. Because of this, every shot he takes has to be carefully thought about first, and this is clearly evident in this beautiful series of photographs.

Philippe J DEWOST's insight:

Some centennial imaging hardware is still in the game when put in proper hands...

Remember the internet kerfuffle that was 'the dress'? Well, there's another optical illusion that's puzzling the internet right now. Behold: the red strawberries that aren't really red. Or, more specifically, the image of the strawberries contains no 'red pixels.'

The important distinction to make here is that there is red information in the image (and, crucially, the relationships between colors are preserved). But despite what your eyes might be telling you, there are no pixels that appear at either end of the 'H' axis of the HSV color model, i.e. there is no pixel that, in isolation, would be considered to be red; hence: no 'red pixels' in the image.
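To make that claim concrete, here is a minimal sketch (not from the original article) of how one could check the image for red hues: convert each pixel to HSV and count pixels whose hue falls near either end of the hue circle. The filename and the hue/saturation thresholds are placeholder assumptions.

```python
# Minimal sketch: count pixels whose hue sits in the "red" band of the HSV wheel.
# Filename and thresholds are illustrative assumptions, not values from the article.
from PIL import Image

img = Image.open("strawberries.jpg").convert("HSV")  # hypothetical local copy

HUE_BAND = 30   # Pillow hue runs 0-255; within ~30 of 0 or 255 counts as "red-ish"
MIN_SAT = 50    # ignore near-grey pixels
MIN_VAL = 50    # ignore near-black pixels

red_pixels = sum(
    1
    for h, s, v in img.getdata()
    if (h <= HUE_BAND or h >= 255 - HUE_BAND) and s >= MIN_SAT and v >= MIN_VAL
)
print(f"pixels with a red hue and non-trivial saturation: {red_pixels}")
```

On the illusion image, a check along these lines should report essentially zero such pixels, even though the berries look unmistakably red.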

So it's not that your brain is being tricked into inventing the red information, it's that your brain knows how much emphasis to give this red information, so that colors that it would see as cyan or grey in other contexts are interpreted as red here.

As was the case with 'the dress,' it all relates to a concept called color constancy, which relates to the human brain's ability to perceive objects as the same color under different lighting (though in this case there are unambiguous visual cues to what the 'correct' answer is).

Philippe J DEWOST's insight:

No "red pixel" has ever been added to this image and they are not "invented" by your brain either. Funny enough, our human white balancing capabilities go way beyond cameras "auto white balance" mode...

Intel inked a deal to acquire Mobileye, which the chipmaker’s chief Brian Krzanich said enables it to “accelerate the future of autonomous driving with improved performance in a cloud-to-car solution at a lower cost for automakers”.

Mobileye offers technology covering computer vision and machine learning, data analysis, localisation and mapping for advanced driver assistance systems and autonomous driving. The deal is said to fit with Intel’s strategy to “invest in data-intensive market opportunities that build on the company’s strengths in computing and connectivity from the cloud, through the network, to the device”.

A combined Intel and Mobileye automated driving unit will be based in Israel and headed by Amnon Shashua, co-founder, chairman and CTO of the acquired company. This, Intel said, “will support both companies’ existing production programmes and build upon relationships with car makers, suppliers and semiconductor partners to develop advanced driving assist, highly-autonomous and fully autonomous driving programmes”.

Philippe J DEWOST's insight:

Intel is going mobile (again), and this time in the car.

The target (Israel's Mobileye) was a Tesla partner until September 2016, when the two broke up "ugly".

In terms of volumes, according to Statista, some 77.73 million automobiles are expected to be sold in 2017, and global car sales are expected to exceed 100 million units by 2020: depending on the growth of the autonomous vehicle segment, it will still be a fraction of the (lost) smartphone market, even if the price points are expected to be somewhat different...

The bet and race are now between vertical integration and layering the market. Any clue who might win?

Upsampling techniques to create larger versions of low-resolution images have been around for a long time – at least as long as TV detectives have been asking computers to 'enhance' images. Common linear methods fill in new pixels using simple, fixed combinations of nearby existing pixel values, but fail to increase image detail. Engineers at Google's research lab have now created a new way of upsampling images that achieves noticeably better results than existing methods.

RAISR (Rapid and Accurate Image Super-Resolution) uses machine learning to train an algorithm on pairs of images, one low-resolution, the other with a high pixel count. RAISR creates filters that, when applied to each pixel of a low-resolution image, recreate image detail comparable to the original. Filters are trained according to edge features found in specific small areas of images, including edge direction, edge strength and how directional the edge is. The training process with a database of 10,000 image pairs takes approximately an hour.
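As a rough illustration of the idea, here is a simplified sketch (not Google's implementation): each output pixel's local gradients are hashed into a bucket by edge angle, strength and coherence, and a filter learned for that bucket is applied on top of a cheap base upscale. The bucket thresholds and the filter bank are placeholders; in RAISR the filters come from the low-/high-resolution training pairs.

```python
# Simplified RAISR-style inference sketch: hash each pixel's neighbourhood by
# edge angle / strength / coherence, then apply a per-bucket learned filter.
import numpy as np
from scipy import ndimage

def hash_key(gx, gy, angle_bins=8, strength_thresh=0.1, coherence_thresh=0.5):
    # Structure tensor of the patch gradients gives the dominant edge
    # orientation (angle), its magnitude (strength) and how directional it is
    # (coherence) -- the three features the filters are bucketed on.
    gxx, gxy, gyy = (gx * gx).sum(), (gx * gy).sum(), (gy * gy).sum()
    tr, det = gxx + gyy, gxx * gyy - gxy * gxy
    disc = max(tr * tr / 4 - det, 0.0) ** 0.5
    l1, l2 = max(tr / 2 + disc, 0.0), max(tr / 2 - disc, 0.0)
    angle = np.arctan2(2 * gxy, gxx - gyy) / 2
    coherence = (np.sqrt(l1) - np.sqrt(l2)) / (np.sqrt(l1) + np.sqrt(l2) + 1e-8)
    a = int(((angle + np.pi / 2) / np.pi) * angle_bins) % angle_bins
    return a, int(np.sqrt(l1) > strength_thresh), int(coherence > coherence_thresh)

def raisr_like_upscale(lowres, filters, scale=2, patch=5):
    # Cheap base upscale first; the learned filters then re-sharpen each pixel.
    up = ndimage.zoom(lowres.astype(float), scale, order=3)
    gy, gx = np.gradient(up)
    out = up.copy()
    r = patch // 2
    for y in range(r, up.shape[0] - r):
        for x in range(r, up.shape[1] - r):
            key = hash_key(gx[y - r:y + r + 1, x - r:x + r + 1],
                           gy[y - r:y + r + 1, x - r:x + r + 1])
            if key in filters:  # filters[key]: a patch-sized kernel learned offline
                win = up[y - r:y + r + 1, x - r:x + r + 1]
                out[y, x] = float((win * filters[key]).sum())
    return out
```

The actual filters would be fit offline, bucket by bucket, as a regression from cheaply upscaled patches to the corresponding true high-resolution pixels.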

Giroptic, creator of the standalone ‘360cam’, today announced the launch of the iO 360 camera which attaches to any Lightning-enabled iPhone or iPad.

The Giroptic iO 360 camera for iPhone/iPad is available starting today for $250. With two counter-facing lenses, the camera enables Apple devices to capture full 360-degree photo and video spheres. The camera currently supports 360-degree livestreaming via YouTube, and support for 360 Facebook Live is planned.

Specs include two 195 degree lenses with an aperture of F/1.8, onboard stereo microphone and a rechargeable battery. Video is captured at 1920×960 resolution at 30 FPS and is stitched in real time with no post-processing needed. Photos are shot at a higher 3840×1920 resolution.

Our latest passion project is now live. “Transient” is a compilation of Phantom Flex 4K slow motion and medium format timelapse. Possibly the largest collection of 4K 1000fps lightning footage in the world is now on display in an action-packed 3 minute short. During the Summer of 2017 we spent 30 days chasing storms and …

Philippe J DEWOST's insight:

How many young photographers have dreamt of capturing light in its transience? How many finally captured one? Here is a trove, and we can see them develop in slow motion, as the initial shooting involved a 4K camera taking 1,000 frames per second.

One thing that Google left unannounced during its Pixel 2 launch event on October 4th is being revealed today: it’s called the Pixel Visual Core, and it is Google’s first custom system-on-a-chip (SoC) for consumer products. You can think of it as a very scaled-down and simplified, purpose-built version of Qualcomm’s Snapdragon, Samsung’s Exynos, or Apple’s A series chips. The purpose in this case? Accelerating the HDR+ camera magic that makes Pixel photos so uniquely superior to everything else on the mobile market. Google plans to use the Pixel Visual Core to make image processing on its smartphones much smoother and faster, but not only that: the Mountain View company also plans to use it to open up HDR+ to third-party camera apps.

The coolest aspect of the Pixel Visual Core might be that it’s already in Google’s devices. The Pixel 2 and Pixel 2 XL both have it built in, but lying dormant until activation at some point “over the coming months.” It’s highly likely that Google didn’t have time to finish optimizing the implementation of its brand-new hardware, so instead of yanking it out of the new Pixels, it decided to ship the phones as they are and then flip the Visual Core activation switch when the software becomes ready. In that way, it’s a rather delightful bonus for new Pixel buyers. The Pixel 2 devices are already much faster at processing HDR shots than the original Pixel, and when the Pixel Visual Core is live, they’ll be even faster and more efficient.

Philippe J DEWOST's insight:

Google"s Pixel Visual Core and its 8 Image Processing Units unveil a counterintuitive hardware approach to High Dynamic Range processing until you understand the design principles of their HDR approach. #HardwareIsNotDead

Researchers have unveiled a new photography technique called computational zoom that allows photographers to manipulate the composition of their images after they've been taken, and to create what are described as "physically unattainable" photos. The researchers from the University of California, Santa Barbara and tech company Nvidia have detailed the findings in a paper, as spotted by DPReview.

In order to achieve computational zoom, photographers have to take a stack of photos that retain the same focal length, but with the camera edging slightly closer and closer to the subject. An algorithm and the computational zoom system then spit out a 3D rendering of the scene with multiple views based on the photo stack. All of that information is then “used to synthesize multi-perspective images which have novel compositions through a user interface” — meaning photographers can then manipulate and change a photo’s composition using the software in real time.

The researchers say the multi-perspective camera model can generate compositions that are not physically attainable, and can extend a photographer’s control over factors such as the relative size of objects at different depths and the sense of depth of the picture. So the final image isn’t technically one photo, but an amalgamation of many. The team hopes to make the technology available to photographers in the form of software plug-ins, reports DPReview.

Last month, the Camera & Imaging Products Association (CIPA) released its 2016 report detailing yearly trends in camera shipments. Using that data, photographer Sven Skafisk has created a graph that makes it easy to visualize the data, namely the major growth in smartphone sales over the past few years and the apparent impact it has had on dedicated camera sales.

The chart shows smartphone sales achieving a big spike around 2010, the same time range in which dedicated camera sales reached their peak. Each following year has brought substantial growth in smartphone sales and significant decreases in dedicated camera sales, particularly in the compact digital camera category.

Per the CIPA report, total digital camera shipments last year fell by 31.7% from the previous year. The report cites multiple factors affecting digital camera sales overall, with smartphones proving the biggest factor affecting the sales of digital cameras with built-in lenses. The Association's 2017 outlook includes a forecast that compact digital cameras will see another 16.7-percent year-on-year sales decrease this year.

Skafisk's graph below shows the massive divide between smartphone sales and camera sales - be prepared to do some scrolling.

Rendering the reference for this zoom took more than 10 gigabytes of RAM.

The minimum iteration number of the last keyframe is 153,619,576.

Coordinates:
Re: -1.74995768370609350360221450607069970727110579726252077930242837820286008082972804887218672784431700831100544507655659531379747541999999995
Im: 0.00000000000000000278793706563379402178294753790944364927085054500163081379043930650189386849765202169477470552201325772332454726999999995
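For context on why this is so memory-hungry: at coordinates with this many digits, ordinary double precision is far too coarse, so the renderer has to iterate z → z² + c in arbitrary-precision arithmetic for at least one reference point. Below is a minimal sketch of that reference iteration using Python's mpmath; the precision setting and iteration cap are assumptions, and real deep-zoom renderers add perturbation tricks on top, so this is not the tool used for the video.

```python
# Sketch only: count Mandelbrot iterations z -> z^2 + c at the deep-zoom centre above.
# mp.dps and max_iter are illustrative choices, not the video's actual settings.
from mpmath import mp, mpf, mpc

mp.dps = 160  # decimal digits of working precision; must exceed the coordinate length

c = mpc(mpf("-1.74995768370609350360221450607069970727110579726252077930242837820286008082972804887218672784431700831100544507655659531379747541999999995"),
        mpf("0.00000000000000000278793706563379402178294753790944364927085054500163081379043930650189386849765202169477470552201325772332454726999999995"))

def escape_iterations(c, max_iter=1_000_000, bailout=2):
    z = mpc(0, 0)
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > bailout:   # escaped, so this point leaves the set after n steps
            return n
    return max_iter            # still bounded after max_iter steps

print(escape_iterations(c, max_iter=10_000))
```

With a minimum of 153,619,576 iterations per keyframe, the high-precision reference orbit alone is plausibly where those gigabytes of RAM go.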

Ever-shifting camera tech company Lytro has raised major cash to continue development and deployment of its cinema-level camera systems. Perhaps the company’s core technology, “light field photography” that captures rich depth data, will be put to better use there than it was in the ill-fated consumer offerings.

“We believe we have the opportunity to be the company that defines the production pipeline, technologies and quality standards for an entire next generation of content,” wrote CEO Jason Rosenthal in a blog post.

Just what constitutes that next generation is rather up in the air right now, but Lytro feels sure that 360-degree 3D video will be a major part of it. That’s the reason it created its Immerge capture system — and then totally re-engineered it from a spherical lens setup to a planar one.

.../...

The $60M round was led by Blue Pool Capital, with participation from EDBI, Foxconn, Huayi Brothers and Barry Sternlicht. “We believe that Asia in general and China in particular represent hugely important markets for VR and cinematic content over the next five years,” Rosenthal said in a statement.

It’s a hell of a lot of money, more even than the $50M round the company raised to develop its original consumer camera — which flopped. Its Illum follow-up camera, aimed at more serious photographers, also flopped. Both were technologically innovative but expensive, and their use cases were questionable.

Philippe J DEWOST's insight:

Light Field camera design startup Lytro is still not dead after 5 years and several pivots, as it gobbles up $60M in additional funding.

When Mathieu Stern came to us with his 3D printed photographic lens project, we were immediately excited by the idea. Not only is Mathieu one of those photography enthusiasts well known on the web (he launched, for example, an original and poetic web series, “The Weird Lenses Challenge”), but our professional proximity to the world of art, design, and images that we work with daily at Fabulous naturally brought us closer.

Moreover, as an application design office specialized in additive manufacturing, it is always a pleasure for us to embark on a new challenge. The question posed by Mathieu is identical to the one our customers pose daily: how to use the competitive advantages of 3D printing to design and manufacture a new product that is innovative, cheap, and above all performs well.

Philippe J DEWOST's insight:

3D Printed Photo Lenses may seem a conceptual bizarrerie. Yet they exist and even seem to deliver ...

If you wanted to see in the dark, you could do worse than follow the example of moths, which have of course made something of a specialty of it. That, at least, is what NASA researchers did when designing a powerful new camera that will capture the faintest features in the galaxy.

This biomimetic “bolometer detector array,” as the instrument is called, is part of the Stratospheric Observatory for Infrared Astronomy, or SOFIA. This ongoing mission uses a custom 747 to fly at high altitudes, where observations can be made of infrared radiation that would otherwise be blocked by the water in our atmosphere.

But even the infrared that does make it here is pretty faint, so you want to capture every photon you can get. The team, led by Christine Jhabvala and Ed Wollack at NASA’s Goddard Space Flight Center, originally looked into carbon nanotubes, which have many desirable properties. But they didn’t quite fit the bill.

The eye of the common moth, however — or at least, a design inspired by it — did what the latest nanomaterial didn’t. Insects’ eyes are of course very different from our own, essentially composed of hundreds or thousands of tiny lenses that each refract an image onto their own dedicated photosensitive surface at the base of a column, or ommatidium. In moths, this surface is also covered in microscopic, tapered columns or spikes. These have the effect of preventing light from bouncing back out, ensuring a greater proportion of it is detected.

The team replicated this design in silicon, and you can see the result at top. Each spike is carefully engineered to reflect light downwards and retain it, making the High-Resolution Airborne Wideband Camera-plus, or HAWC+, one of the most sensitive instruments out there. Funnily enough, the sensor they modified was originally called the backshort under-grid sensor — BUGS.

Philippe J DEWOST's insight:

Insects have become the next inspiration for pushing camera sensitivity limits.

You may think you’re no good at seeing in the dark, but your eyes are actually incredibly sensitive. In fact, according to a new study, the human eye is so sensitive it can detect even a single photon of light!

The study was conducted by a team at Rockefeller University who used an innovative (and complicated) technique to reliably fire a single high energy photon directly at participants’ retinas. For their part, the participants just had to tell them when they saw something and rate how confident they were about the sighting.

The results were surprising to say the least. We’re talking about the smallest particle of light, and the results from the study show that people were able to accurately determine when a photon was fired 51.6% of the time (60% when they were very confident)—a statistically significant percentage that couldn’t possibly result from subjects guessing their way through it. What’s more, subjects were more likely to detect a second photon if it was fired less than 10 seconds after the first.
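To see why an apparently small edge like 51.6% can be statistically meaningful, here is a quick back-of-the-envelope check; the trial count below is a hypothetical figure for illustration, not the study's actual number.

```python
# With enough trials, 51.6% correct becomes extremely unlikely to come from pure guessing.
# n_trials is a hypothetical illustration; only the 51.6% figure comes from the article.
from scipy.stats import binomtest

n_trials = 30_000
n_correct = round(0.516 * n_trials)

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"probability of doing this well by guessing alone: {result.pvalue:.1e}")
```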

Philippe J DEWOST's insight:

About “the absolute limits of human vision” paper released on nature.com. At imsense Ltd. we called this eye-fidelity™...

A few months ago, we shared a chart showing how sales in the camera market have changed between 1947 and 2014. The data shows that after a large spike in the late 2000s, sales of dedicated cameras have been shrinking by double-digit figures in each of the following years. Mix in data for smartphone sales, and the chart can shed some more light on the state of the industry.

Philippe J DEWOST's insight:

This chart is eye-opening, even if not so surprising when you realize that the billionth iPhone is expected before the end of 2016. Feeling blessed and glad to have been a participant (twice: with Realeyes3D and then imsense) in such a massive rise.
