
We all know what Computer-Generated Imagery (CGI) is nowadays. It’s almost impossible to get away from it in any television show or movie. It’s gotten so good that sometimes it can be difficult to tell the difference between the real world and the computer-generated world when they are mixed together on-screen. Of course, it wasn’t always like this. This 1982 clip from BBC’s Tomorrow’s World shows what the wonders of CGI were capable of in a simpler time.

In the earliest days of CGI, digital computers weren’t even really a thing. [John Whitney] was an American animator and is widely considered to be the father of computer animation. In the 1940s, he and his brother [James] started to experiment with what they called “abstract animation”. They pieced together old analog computers and servos to make their own devices that were capable of controlling the motion of lights and lit objects. While this process may be a far cry from the CGI of today, it is still animation performed by a computer. One of [Whitney’s] best-known works is the opening title sequence to [Alfred Hitchcock’s] 1958 film, Vertigo.

Later, in 1973, Westworld became the very first feature film to use CGI. The film was a science fiction western-thriller about amusement park robots that become evil. The studio wanted footage of the robots’ “computer vision” but they would need an expert to get the job done right. They ultimately hired [John Whitney’s] son, [John Whitney Jr], to lead the project. The process first required color separating each frame of the 70mm film because [John Jr] did not have a color scanner. He then used a computer to digitally modify each image to create what we would now recognize as a “pixelated” effect. The computer processing took approximately eight hours for every ten seconds of footage.

If you know anything about how films are made then you have probably heard about the “green screen” before. The technique is also known as chroma key compositing, and it’s generally used to merge two images or videos together based on color hues. Usually you see an actor filmed in front of a green background. Using video editing software, the editor can then replace that specific green color with another video clip. This makes it look like the actor is in a completely different environment.

It’s no surprise that with computers, this is a very simple task. Any basic video editing software will include a chroma key function, but have you ever wondered how this was accomplished before computers made it so simple? [Tom Scott] posted a video to explain exactly that.
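If you want to see just how simple the digital version is, here’s a rough NumPy sketch of the idea. The green test below is our own crude rule, not anything from [Tom]’s video; real keyers work in a hue-based color space and soften the mask edges:

```python
import numpy as np

def chroma_key(foreground, background, threshold=100):
    """Replace green-screen pixels of foreground with background.

    foreground, background: uint8 arrays of shape (H, W, 3).
    A pixel counts as green screen when its green channel exceeds
    both red and blue by more than `threshold` (a crude rule, for
    illustration only).
    """
    fg = foreground.astype(int)  # avoid uint8 wraparound in subtraction
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    mask = (g - r > threshold) & (g - b > threshold)
    out = foreground.copy()
    out[mask] = background[mask]  # swap in the background where it's green
    return out
```

Run that over every frame of a clip and you have the effect that took the pre-computer studios an optical printer and days of work.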

In the early days of film, the studio could film the actor against an entirely black background. Then, they would copy the film over and over using higher and higher contrasts until they ended up with a black background and a white silhouette of the actor. This film could be used as a matte. Working with an optical printer, the studio could then perform a double exposure to combine film of a background with the film of the actor. You can imagine that this was a much more cumbersome process than making a few mouse clicks.
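That contrast bootstrapping works because every duplication onto high-contrast stock pushes each tone further toward pure black or pure white. A toy numerical model of the effect (the S-curve below is an arbitrary stand-in for real film stock):

```python
def high_contrast_copy(v):
    # One duplication, modeled as an S-curve on a 0..1 tone scale:
    # tones below 0.5 get darker, tones above 0.5 get lighter.
    return 3 * v**2 - 2 * v**3

def make_matte(frame, copies=10):
    # Copy the film over and over; mid-grays collapse toward 0 or 1,
    # leaving a black background and a white silhouette.
    for _ in range(copies):
        frame = [high_contrast_copy(v) for v in frame]
    return frame
```

After enough passes every pixel is effectively binary, which is exactly what a matte needs to be.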

For the green screen effect, studios could actually use specialized optical filters. They could apply one filter that would ignore a specific wavelength of the color green. Then they could film the actor using that filter. The resulting matte could then be combined with the footage of the actor and the background film using the optical printer. It’s very similar to the older style with the black background.

Electronic analog video has some other interesting tricks to perform the same basic effect. [Tom] explains that the analog signal contained information about the various colors that needed to be displayed on the screen. Electronic circuits were built that could watch for a specific color (green) and replace the signal with one from the background video. Studios even went so far as to record both the actor and a model simultaneously, using two cameras that were mechanically linked together to make the same movements. The signals could then be run through this special circuit and the combined image recorded in real time.

There are a few other examples in the video, and the effects that [Tom] uses to describe these old techniques go a long way to help understand the concepts. It’s crazy to think of how complicated this process can be, when nowadays we can do it in minutes with the computers we already have in our homes.

Developing film at home is most certainly a nearly forgotten art nowadays, but there are still a few very dedicated people who care enough to put in the time and study that this craft demands. [Jan] is one of the exceptional ones. He’s developing 35mm film with Lego (Dutch, Google translate).

For the build, [Jan] is using the Lego RCX 1.0, the first gen of the Lego Mindstorms, released in the late 90s. According to eBay, this is a significantly cheaper option for programmable Lego. The Lego film developer consisted of multiple tanks of chemicals. The film was loaded on a reel, suspended from a Lego gantry, and dunked into each tank for a specific amount of time.
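The control problem here is really just a timed sequence: lower the reel into a tank, wait, lift it out, move on. Here’s a rough sketch of that logic; the tank names and times are placeholders, not [Jan]’s actual recipe:

```python
# Hypothetical develop/stop/fix sequence for a dunk-style processor.
# Chemistry and times below are placeholders, not [Jan]'s recipe.
SEQUENCE = [("developer", 300), ("stop bath", 30),
            ("fixer", 240), ("wash", 600)]

def run_cycle(lower, raise_, wait, sequence=SEQUENCE):
    """Drive the gantry through each tank in order.

    lower/raise_ move the reel (motor commands on real hardware);
    wait(seconds) is time.sleep in real life. Passing them in as
    callbacks keeps the sequencing logic testable without hardware.
    """
    for tank, seconds in sequence:
        lower(tank)
        wait(seconds)
        raise_(tank)
```

On the RCX the same idea would be a handful of motor-on, sleep, motor-off commands, which is exactly the kind of job the first-gen Mindstorms brick was built for.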

A second revision of the hardware (translate) was designed, with the film loaded into a rotating cylinder. A series of chemicals would then be pumped into this unit with the hope of reducing the amount of chemicals required. This system was eventually built using the wiper fluid pump from a car. Apparently, the system worked well, judging from the pictures developed with this system. Whether it was easy or efficient is another matter entirely.

You can check out a video of the first revision of the Lego film developing system below.

We preempt this week’s Hacklet to bring you an important announcement.

Hackaday.io got some major upgrades this week. Have you checked out The Feed lately? The Feed has been tweaked, tuned, and optimized to show you activity on your projects, and from the hackers and projects you follow.

We’ve also rolled out Lists! Lists give you quick links to some of .io’s most exciting projects. The lists are curated by Hackaday staff. We’re just getting started on this feature, so there are only a few categories so far. Expect to see more in the coming days.

Have a suggestion for a list category? Want to see a new feature? Let us know!

Now back to your regularly scheduled Hacklet.

There are plenty of cameras on Hackaday.io, from complex machine vision systems to pinhole cameras. We’re concentrating on the cameras whose primary mission is to create an image. It might be for art, for social documentation, or just a snapshot with friends.

[theschlem] starts us off with Pinstax, a 3D Printed Instant Pinhole Camera. [theschlem] is using a commercial instant film camera back (the back for a cheap Diana F+) and 3D printing his own pinhole and shutter. He’s run into some trouble as Fuji’s instant film is fast, like ISO 800 fast. Three stops of neutral density have come to the rescue in the form of an ND8 filter. Pinstax’s pinhole is currently 0.30mm in diameter. That translates to just about f/167. Nice!
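The numbers check out: an f-number is just focal length divided by aperture diameter, and every factor of two in an ND filter costs one stop. A quick sanity check (the ~50 mm focal length is inferred from the stated f/167, not quoted from the project):

```python
import math

pinhole_diameter_mm = 0.30
f_number = 167                                    # as stated for Pinstax
focal_length_mm = f_number * pinhole_diameter_mm  # ~50 mm, inferred

nd_factor = 8                                     # ND8 filter cuts light 8x
stops = math.log2(nd_factor)                      # 3 stops of density
effective_iso = 800 / nd_factor                   # ISO 800 film acts like ISO 100
```

So the ND8 effectively tames that fast Fuji film down to a much more pinhole-friendly ISO 100.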

Next up is [Jimmy C Alzen] and his Large Format Camera. Like many large format professional cameras, [Jimmy’s] camera is designed around a mechanically scanned linear sensor. In this case, a TAOS TSL1412S. An Arduino Due runs the show, converting the analog output from the sensor to digital values, stepping the motor, and displaying images in progress on an LCD. Similar to other mechanically scanned cameras, this is no speed demon. Images in full sunlight take 2 minutes. Low light images can take up to an hour to acquire.
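Mechanically scanned capture boils down to a very small loop: read the whole sensor line, save it as one column of the final image, step the motor, repeat. A hedged sketch of that loop, with `read_line` and `step_motor` standing in for the Due’s ADC reads and stepper driver (these names are ours, not [Jimmy]’s):

```python
def scan_image(read_line, step_motor, columns):
    """Assemble a 2D image from a 1D line sensor swept across the scene.

    read_line() -> list of pixel values at the current sensor position.
    step_motor() advances the sensor by one column. The per-column
    exposure time is why full images take minutes, not milliseconds.
    """
    image = []
    for _ in range(columns):
        image.append(read_line())
        step_motor()
    # Transpose so each sensor reading becomes one column of the image.
    return [list(row) for row in zip(*image)]
```

Everything else in the build, the ADC conversion, the LCD preview, is bookkeeping around this one loop.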

[Jason’s] Democracycam aims to use open source hardware to document protests – even if the camera is confiscated. A Raspberry Pi, Pi Cam module, and a 2.8″ LCD touchscreen make up the bulk of the camera’s hardware. Snapping an image saves it to the SD card and uses forban to upload the images to any local peers. The code is in Python, and easy to work with. [Jason] hopes to add a “panic mode” which causes the camera to constantly take and upload images – just in case the owner can’t.
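That panic mode is essentially a capture-and-upload loop that runs until someone physically stops the camera. A sketch of how it might look, where `capture` and `upload` are stand-ins for the Pi camera snapshot and the forban hand-off, not [Jason]’s actual code:

```python
import itertools
import time

def panic_mode(capture, upload, interval=2.0, sleep=time.sleep, shots=None):
    """Continuously capture images and push them to local peers.

    capture() -> path of a freshly saved image; upload(path) hands it
    to the peer-sync layer. `shots` bounds the loop for testing;
    None means run until the camera is physically stopped.
    """
    counter = itertools.count() if shots is None else range(shots)
    for _ in counter:
        path = capture()
        upload(path)
        sleep(interval)
```

Keeping the capture and upload steps as callbacks means the loop itself never has to care whether the peer is still reachable when the camera gets grabbed.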

The venerable Raspberry Pi also helps out in [Kimondo’s] Digital Holga 120d. [Kimondo] fit a Raspberry Pi model A, and a Pi camera, into a Holga 120D case. He used the Slice of Pi prototype board to add a GPIO shutter release button, a 4-position mode switch, and an optocoupler for a remote release. [Kimondo] even added a filter ring so he can replicate all those Instagram-terrific filters in hardware. All he needs to add is a LiPo battery cell or two, a voltage regulator, and a micro USB socket for a fully portable solution.

Finally, we have [LeoM’s] OpenReflex rework. OpenReflex is an open source 3D printed Single Lens Reflex (SLR) 35mm film camera. Ok, not every part is 3D printed. You still need a lens, a ground glass screen, and some other assorted parts. OpenReflex avoids the use of a pentaprism by utilizing a top screen, similar to many classic twin lens reflex cameras. OpenReflex is pretty good now, but [Leo] is working to make it easier to build and use. We may just have to break out those rolls of Kodachrome we’ve been saving for a sunny day.

That’s it for this week’s Hacklet! Until next week keep that film rolling and those solid state image sensors acquiring. We’ll keep bringing you the best of Hackaday.io!

It’s hard to beat this vintage reel for learning about how vacuum tube amplifiers work. It was put together by the US Army in 1963 (if we’re reading the MCMLXIII in the title slide correctly). If you have a basic understanding of electronics you’ll appreciate at least the first half of the video, but even the most learned of radio enthusiasts will find something of interest as they make their way through the 30-minute presentation.

The instruction begins with a description of how a carbon microphone works, how that is fed to a transformer, and then into the amplifier. The first stage of the tube amp is a voltage amplifier and you’ll get a very thorough demo of the input voltage swing and how that affects the output. We really like that the reel discusses getting data from the tube manual, but also shows how to measure cut-off and saturation voltage for yourself. From there it’s off to the races with the different tube applications used to make class A, B, and C amplifiers. This quickly moves on to a discussion of the pros and cons of each amplifier type. See for yourself after the jump.
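The class A/B/C distinction the reel works up to boils down to conduction angle: how much of each input cycle the tube spends actually conducting. A quick numerical illustration with made-up bias values in arbitrary units (nothing here is from the film itself):

```python
import math

def conduction_angle_deg(bias, amplitude, cutoff=0.0, samples=3600):
    """Degrees of each input cycle during which the tube conducts.

    The tube conducts whenever the grid signal (DC bias plus a sine
    drive) stays above its cutoff voltage; units are arbitrary.
    """
    conducting = sum(
        1 for i in range(samples)
        if bias + amplitude * math.sin(2 * math.pi * i / samples) > cutoff
    )
    return 360.0 * conducting / samples

def amplifier_class(angle):
    # Idealized textbook boundaries.
    if angle >= 359.9:
        return "A"    # conducts for the full cycle
    if angle > 180.0:
        return "AB"   # more than half the cycle, but not all of it
    if angle >= 179.9:
        return "B"    # exactly half the cycle
    return "C"        # less than half the cycle
```

Biasing well above cutoff gives the full 360 degrees of class A; biasing right at cutoff gives the 180 degrees of class B; biasing below cutoff gives the short, efficient pulses of class C that the reel covers last.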

We’re sure that this title makes some readers itch because there are still a number of well-respected directors who insist on shooting with film rather than digital, but the subject of this week’s Retrotechtacular shows a portion of the movie industry that has surely been relegated to life-support in the past few decades. Photo finishing, once the stronghold of chemical processes used by all to develop their photographs, has become virtually non-existent. This is the story of how film and photo finishing drove cinema for much of its life.

The reels seen above are negative and positive film. The negative film goes in the camera and captures the images. After developing and fixing the negative film, the process is repeated. Light shines through the fixed negative in order to expose a fresh reel of film. That film is finished and fixed to create the reel which can be used in a projector. This simple process is covered near the beginning of the clip found below. The 1940 presentation moves on to discuss the in-depth chemistry techniques used in the process. But you’re really in for a treat starting about half-way through when the old manual methods are shown, which have been replaced by the “modern laboratory”. We love those huge analog dials! The video concludes by showing the true industrialization of the film developing process.
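Tonally, each printing pass is just an inversion: the bright parts of the scene leave dense (dark) areas on the negative, which block light and print bright again on the positive. As a one-line model with tone values normalized to 0–1 (real film response curves are far from linear):

```python
def contact_print(strip):
    # Light passing through a dense (dark) area of the negative exposes
    # the fresh stock less, so every tone flips about the midpoint.
    return [1.0 - v for v in strip]
```

Invert once to get the negative, invert again to get the projectable positive, which is exactly the two-pass process the film describes.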

We’re running out of Retrotechtacular topics. If you know of something that might be worth a feature please send in a tip!

It’s surprising how often a brilliant idea is overlooked until years after the fact. In this case the concept was seen publicly within ten years, but the inventor’s brilliance has only been appreciated again after 110 years. It’s a color movie filmed around 1901 or 1902, but it sounds like the reel wasn’t shown in its full color grandeur until 2012, when the National Media Museum in the UK started looking into the history of one particular film.

The story is well told by the curators in this video which is also embedded after the break. The reel has been in their collection for years. It’s black and white film that’s labeled as color. It just needed a clever and curious team to put three frames together with the help of color filters. It seems that [Edward Turner] patented a process in 1899 which used red, green, and blue filters to capture consecutive frames of film. The patent description helped researchers recombine those frames, again using filters, to produce full color images like the one seen above.
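Digitally, reversing [Turner]’s process amounts to stacking three successive black and white frames as the red, green, and blue channels of one image. A minimal NumPy sketch; frame registration, which was the hard part for the restoration team since the subject moved between the three exposures, is ignored here:

```python
import numpy as np

def combine_turner_frames(red_frame, green_frame, blue_frame):
    """Stack three B&W frames (shot through R, G, B filters) into color.

    Each input is a 2D grayscale array; the output is an (H, W, 3)
    color image. Real restoration also has to align the frames.
    """
    return np.stack([red_frame, green_frame, blue_frame], axis=-1)
```

Three monochrome exposures in, one full-color frame out, which is the whole trick behind a color movie shot a decade before color film stock existed.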

The press release on the project shares a bit more information, like how they determined the age of the film using genealogical research and the fact that [Turner] himself died in 1904. The process didn’t die with him, but actually evolved and was exhibited publicly in 1909. This, however, is the oldest known color movie ever found.