5 reasons for 60 fps

It's easy to state that a higher number is better. It's also easy to get sucked into (albeit interesting) theories and, at other times, non-researched half-truths when debating film projection technology and frame rates. But there are very simple, very reasonable arguments not only for high frame rate movies, but for establishing 60 fps as the standard. Here they are.

60 fps videos are both smooth and sharp

The only reason you don't run out of movie theaters with heavy eye strain is that filmmakers use motion blur to make the pictures look smoother than they actually are. Motion blur happens when I record your moving hand and, instead of capturing a single moment in time (which would produce a sharp image of your hand), I capture a little more, so the final recorded picture includes part of the blurred movement. And that, when combined into 24 images per second, makes the motion look quite a lot smoother.

Without Motion Blur

With Motion Blur

Sounds great? Well, not so fast. Motion blur has a huge drawback: it comes at the expense of detail. Try focusing on stuff that moves on screen to get a sharp look: won't work. In fact, even with added motion blur, movies at 24 fps are still too choppy for fast, smooth camera pans – it only works for slow pans (which is why there's a 7 second rule for filmmakers – I shit you not).

If you now, like me, think you'd prefer sharp, detailed and smooth images without stupid filmmaking limitations, you're in luck: At 60 fps, all of these limitations go away. Motion blur is drastically reduced, movies are crystal sharp even with fast motion, and stupid rules no longer exist for filmmakers. Want to watch a cooking tutorial and follow the chopping board? Want to watch the strobe lights in a recorded concert? How about debris flying around in an explosion? With 60 fps, you can.

60 fps is the only format that syncs with all of your screens

Every device you own that has a screen – with the exception of TVs and some gaming monitors – has a fixed refresh rate of 60 Hz. That includes your tablets, cellphones and computer screens. Basically everything you watch YouTube on.

What does that mean? It means that your screen draws a new image every 1/60 of a second, regardless of what you feed it. If you're a close observer, you'll notice that 60 isn't evenly divisible by 24. That's a pretty significant problem, because running a 24 fps movie on a 60 Hz screen forces the screen to draw some frames more often than others, in an irregular way that is extremely noticeable. A camera pan that was smooth in a movie theater will thus look super choppy on your screen.

Simplified view on 24 fps running at 50 Hz.
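To see how irregular this actually gets on a plain 60 Hz screen, here's a tiny back-of-the-envelope sketch (plain JavaScript, purely illustrative – real players use smarter pulldown logic) that computes how long each of the 24 source frames stays on screen:

```js
// For each of the 60 screen refreshes in one second, figure out which of the
// 24 source frames would be on screen if we naively show the newest frame
// that is already available at that point in time.
const REFRESH_RATE = 60; // Hz
const FRAME_RATE = 24;   // fps

const shownFrames = [];
for (let refresh = 0; refresh < REFRESH_RATE; refresh++) {
  const time = refresh / REFRESH_RATE;              // seconds since start
  shownFrames.push(Math.floor(time * FRAME_RATE));  // index of the source frame
}

// Count how many refreshes each source frame stays on screen.
const repeats = {};
shownFrames.forEach(f => { repeats[f] = (repeats[f] || 0) + 1; });

console.log(Object.values(repeats).join(','));
// → 3,2,3,2,3,2,... – frames alternate between being shown for 3 and 2
// refreshes, which is exactly the uneven cadence your eye picks up in pans.
```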

You can fix this problem in two ways: You either replace all of your screens with more intelligent ones that can adapt their refresh rate, like TVs do (good luck with that – outside of some gaming monitors, you're out of luck), or you urge people to produce content made for, well, every screen their content is consumed on. As it turns out, the only two formats that qualify are 30 fps and 60 fps (not 29.97 and 59.94 – those two are shit and remnants of NTSC). 30 fps isn't much smoother than 24, so make it 60. 60 fps on 60 Hz screens is a combination made in heaven.

60 fps greatly reduces LCD motion blur

Ever heard someone bitch about LCD screens, saying they're crap because they're so blurry? Well, it's partially true. Good old CRT TVs, to this day, produce the sharpest home entertainment image. But there's an important detail: The screen isn't blurry, it only looks blurry. The blur happens in your eye, and it's caused by persistent LCD frames in combination with – you guessed it – low frame rates. Read all about it here.

4K looks better at 60 fps

A lot of producers have been asking me about compatibility between 4K / Ultra HD content and high frame rates. In fact, the two are not only compatible, but highly recommended together. 4K movies at regular 24 fps produce ultra detailed imagery, vastly superior to 1080p on large screens – as long as they don't move, that is. Due to the heavy amount of motion blur in moving scenes at low frame rates, you immediately lose the gained resolution detail. At 60 fps, that motion blur problem is largely eliminated, and 4K will look like it's supposed to look – extremely sharp and full of detail.

60 fps makes you part of the scene

Having covered all of its technological advantages, we're now down to the look of HFR movies. Many people have been unhappy with the look of Peter Jackson's Hobbit, the first commercial HFR movie trilogy. In large part, the reason is simple conditioning, but there's another important factor at play: Cineasts complained that the movies felt too real, making them feel like they were standing on the New Zealand set rather than inside the scene itself. And it's true – while I have tremendous respect for Peter Jackson and his fearless push into the future, The Hobbit might not have been the best use case. At least until we develop better ways of re-establishing the dream-like quality of 24p-motion-blurry movies at higher frame rates.

But for now, where does being part of the scene become a huge advantage? How about

Frame Rate as creative choice

As much as there are people who will tell you to always use 24fps for a cinematic look, the truth is, the definition of what makes a film look cinematic varies – and maybe you're aiming for something different! Frame rate is as much a creative choice as color, contrast and shutter speed. Let's take a look at the most important frame rates and their use cases.

12fps (or less)

12fps is the MVP of frame rates: Below 10-12fps, your brain likely won’t perceive the scene as motion. You should use 12fps or less if you’re aiming for a stop motion look, like this one:

Use if you’re trying to create a whimsical, dreamy look.

24fps

While 12 is the minimum viable frame rate, 24 is the minimum acceptable one. For no particularly good reason, it's still today's standard for motion pictures. 24fps is barely enough to create acceptable motion, while at the same time your brain subconsciously understands that what you are looking at is not real. The sad truth is that we're bound by habit – we've been watching 24fps for almost a hundred years, so we automatically associate 24fps with high profile motion pictures.

24fps is therefore a good choice if you're aiming for the nostalgic "film look" moviegoers are used to, and is especially relevant if all you care about is your narrative. Movies at 24fps are shot with a wider shutter angle, so backgrounds are naturally more blurred and less distracting. This highlights the foreground action.

Use if you care exclusively about your narrative, or are creating a world that is supposed to look unreal and magical, and all you care about is projection on movie projectors (and selected TVs).

30fps

Use 30fps as a substitute for 24fps for any online video, and video that needs to look great on computer screens, phones and tablets. Those devices cannot adapt their refresh rate to the frame rate, resulting in heavy judder/stuttering for 24fps videos. Those screens usually refresh 60 times a second (my advice: Don’t ever use 24fps. It’s not worth it. We live in a world where most media is consumed on devices that have a fixed refresh rate of 60Hz).

48fps

Use for the same reasons as 60fps (see below), but only if all you care about is projection on movie projectors, for the same reason mentioned above under 30fps: you can't evenly divide 60 by 48, resulting in stuttering.

60fps

When in doubt, use 60fps.

60fps produces crystal clear imagery with smooth motion, all while syncing with the refresh rate of all modern viewing devices. At 24fps, filmmakers have to choose between a crisp, detailed picture that stutters and a smoother picture that is blurry. They also limit how fast camera pans and movements can be filmed. With 60fps, you don't have to follow these rules.

It's true that 60fps looks more "real" than 24, so a fantastical, magical setting might not be ideal. And due to the lack of motion blur, the eye will wander around the screen in scenes that have everything in focus (greater depth of field). Of course, this can easily be solved: Just shoot with a shallower depth of field (producing a much nicer, more natural blur than motion blur)!

Summary

There's no one true frame rate for all your needs. Art is as much about limitation as it is about breaking barriers. But 60 is truly the easiest to work with, perfect for the digital age, and buttery smooth. When in doubt, choose 60.

API Simulator feat. Service Worker

API Simulator allows you to set up any number of static routes (URLs) on the host it runs on, so you can test against a static JSON response. Might come in handy when building prototypes or sample apps:

API Simulator was built in ~12 hours as part of an internal Service Worker hackathon, is nothing revolutionary and is definitely not ready for production (it has only been tested in Chrome Canary) – but hey, you won't use it for production code anyway ;)
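To give you an idea of the core trick – this is a stripped-down sketch of the concept, not the actual API Simulator code, and the route table here is hard-coded purely for illustration – the Service Worker simply intercepts fetches and answers known routes with a canned JSON response:

```js
// sw.js – minimal sketch: answer known routes with a static JSON response,
// let everything else fall through to the network.
const routes = {
  '/api/user': { name: 'Paul', role: 'admin' },
  '/api/todos': [{ id: 1, title: 'Ship it', done: false }]
};

self.addEventListener('fetch', event => {
  const path = new URL(event.request.url).pathname;
  if (routes[path]) {
    event.respondWith(
      new Response(JSON.stringify(routes[path]), {
        headers: { 'Content-Type': 'application/json' }
      })
    );
  }
  // If we don't call respondWith(), the request goes to the network as usual.
});
```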

More interesting than the outcome might be the path that got me there. The tech being used is a combination of Service Worker, MessageChannel and Promises. Here are some interesting gotchas, in no particular order:

1. Make sure caching of the SW file is disabled by your development server, or the SW will be cached for 24 hours.

2. Open chrome://serviceworker-internals/ and tick the checkbox next to "Opens the DevTools window for ServiceWorker on start for debugging."

3. After making changes to the SW, reload your current client (the tab connected to it). This should bring up a new, second DevTools window with a new instance of the service worker, saying it has been installed (but it's not running yet!).

4. Close all other tabs that could interact with the currently running worker.

5. Shift-Reload the current tab you're in, resulting in the SW being ignored / not used.

6. Reload the tab a second time, this time normally.

7. The new version of the service worker should now take over, and the old version should die.

8. ???

9. Profit

OK OK, step 8 wasn’t needed, but everything else is brutal reality today. There are good, complicated reasons, but that doesn’t change the fact that it is a pain in the butt. Unless, of course, you’re using DevTools’ Workspaces (mapping your local filesystem to the running worker), in which case the workflow is dramatically simplified:

1. Make change to SW

2. Profit

No, really. When you make a change to the SW now (whether in your external editor or directly in the DevTools window), DevTools will find out about said changes, hot-patch the currently live Service Worker with the changes (reporting “Recompilation and update succeeded.” in the console), with no need for you to reload the client or the worker. It’s magic.

Sending events through postMessage from the client to the worker is pretty straightforward, but the other way around is not. In my case, I wanted the Service Worker to store all routes in its own IndexedDB database, and let the clients consume and manage that data. But from the worker itself, you don't have a client object to reply back to, so what do you do?

You use a MessageChannel. A MessageChannel is basically a tin can telephone. You create an instance of it on the client, and that instance holds two ports (two tin cans). One port (port2) is sent to the worker in a normal message, which establishes the link between the two; the other port (port1) stays with the client. Both ports have a postMessage function and accept an "onmessage" handler, so now that each side holds a tin can, both can communicate freely. Problem solved.
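In code, the tin can telephone looks roughly like this (a simplified sketch – the real thing routes different message types and handles errors, and the 'getRoutes' action is just a made-up example):

```js
// On the client (the page):
const channel = new MessageChannel();

// port1 stays with us – this is where the worker's replies arrive.
channel.port1.onmessage = event => {
  console.log('Reply from the Service Worker:', event.data);
};

// port2 travels to the worker as a transferable, alongside a normal message.
navigator.serviceWorker.controller.postMessage(
  { action: 'getRoutes' },
  [channel.port2]
);

// In the Service Worker:
self.addEventListener('message', event => {
  const replyPort = event.ports[0]; // the tin can we were handed
  if (event.data.action === 'getRoutes') {
    replyPort.postMessage({ routes: [] }); // goes straight back to that client
  }
});
```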

IndexedDB is crucial pain

IndexedDB has the most convoluted crap API I've seen in a while, and you wouldn't imagine the things I've seen. It's worse than the drag & drop API. After a few hours with it, my anger turned to disbelief – I mean, what were they thinking? How could this ever end up in browsers? Why is there no war? Have I missed something?

All I needed was simple key-value storage. LocalStorage, you say? Meep. Try again. There's no localStorage in Service Worker. Having never worked with IndexedDB before, I naively thought "surely they thought about how to solve the most simple use case in a satisfactory manner", so I expected something like localStorage.setItem, just async. You know, like asyncStorage.setItem().then(..). What was two lines of code with localStorage became a hundred lines with IndexedDB. Seriously.
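For the curious, here's roughly the kind of promise-based wrapper I ended up wishing existed out of the box – a minimal sketch with no error handling beyond rejection, no versioning and no cleanup, and the database/store names are arbitrary:

```js
// Minimal promise-based key-value store on top of IndexedDB.
function openStore() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('keyval-db', 1);
    request.onupgradeneeded = () => request.result.createObjectStore('keyval');
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

function setItem(key, value) {
  return openStore().then(db => new Promise((resolve, reject) => {
    const tx = db.transaction('keyval', 'readwrite');
    tx.objectStore('keyval').put(value, key);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  }));
}

function getItem(key) {
  return openStore().then(db => new Promise((resolve, reject) => {
    const request = db.transaction('keyval').objectStore('keyval').get(key);
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  }));
}

// Usage – the API I wished existed out of the box:
// setItem('route:/api/user', { response: { name: 'Paul' } })
//   .then(() => getItem('route:/api/user'))
//   .then(console.log);
```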

To those who argue that it was built as a "well performing, underlying platform feature to build upon": you are the problem. It's unfortunate that I have to remind you that you are building a product for web developers. If your product is not usable, you failed.

Ending on a high note

Service Worker shows that it is possible to create an API that is both highly flexible and low level, yet very user friendly and effective. With the exception of needing to mess with IndexedDB, working with Service Worker has been surprisingly fun and empowering, even in its early state of implementation.

Ubisoft propaganda

Ubisoft made headlines when suggesting 30 fps in games "feels better for players", after announcing their decision to cap Assassin's Creed Unity at 30 fps (and a resolution of 900p!). Here's what Nicolas Guerin, Unity's World Level Design Director, had to say about the matter:

“At Ubisoft for a long time we wanted to push 60 fps. I don’t think it was a good idea because you don’t gain that much from 60 fps and it doesn’t look like the real thing. It’s a bit like The Hobbit movie, it looked really weird.

“And in other games it’s the same – like the Rachet and Clank series [where it was dropped]. So I think collectively in the video game industry we’re dropping that standard because it’s hard to achieve, it’s twice as hard as 30fps, and its not really that great in terms of rendering quality of the picture and the image.”

Shortly after, a story on Reddit came up that suggests that in addition, Microsoft and Sony are pressuring Ubisoft to cap their games at 30 fps on PC. Now take the Reddit thread with a healthy grain of salt, but if all of it is true, it shows a very alarming industry trend.

Ubisoft is using the Hobbit and other games as a vehicle to drop high frame rates, greatly overgeneralizing the issues of working with high frame rates and doing more harm than good (actually, no good at all). This gives them more room to breathe in development, at the expense of… well, the progress of mankind into a brighter and smoother future.

Don't let them get away with it. For too long, people have been sharing the myth that our eyes can't see beyond ~25 frames per second. If you want a brighter, smoother future for you and your children in both movie theaters and on game consoles, you can help by sharing debunking articles like the one I wrote, The Illusion of Motion, or this simpler, generalized answer to the Ubisoft propaganda.

Thanks!

The web is built to last

Native platform providers boast about the large new feature sets they add to their OSes every year. But when comparing themselves directly to the web platform, they conveniently omit the simple fact that they can only do so because they lack standardization. Corporate platforms come and go, and rarely last longer than a decade.

The web is built for the long haul. Through a painstakingly difficult standardization effort at the W3C, the brightest minds in the world are making sure that the platform you build upon is rock solid, consistent and well supported.

This is not a direct advantage to users, and not even a direct advantage to developers today, but it has to be a consideration for every strategically thinking CTO on the planet (unless all you aim for is a quick sale of your company). If you're directing an infrastructure and R&D effort involving hundreds of engineers, all building shared technology, you'd better make sure it rests on a solid foundation.

If you build a web app today, it will run in browsers 10 years from now. Good luck trying the same with your favorite mobile OS (excluding Firefox OS).

Rate this app

Instead of following the suggestion in my sketch, please click "Yes" whenever you see this dialog and give the app a negative App Store rating stating the same reasons. Thanks!

Bigger is not better

One common misconception I hear over and over is that the popularity of a product is an indication of its quality. I've met (and worked with) many people who were convinced that a game, for instance, had to be popular to be "good". By that logic, McDonald's is the best restaurant in the world.

It's just not true, on so many levels. Popular products don't even necessarily make more money! Designing your product around popularity can be very dangerous: many products require a viral social effect to take off, or "whales" – a single big spender subsidizing hundreds of thousands of free consumers – or simply huge numbers of users, because the only monetization in place is advertising. "Too big to fail" is a myth, and rightly so.

I’m much more excited about products that make money through their first customer and scale up organically. They too become popular if they’re great, but popularity is not their main asset. They don’t require large scaling operations upfront and are generally much more maintainable.

Please don’t build the next McDonalds. Thanks!

The Illusion of Motion

Like to watch, rather than read? Watch my recorded talk from SmashingConf 2014: http://vimeo.com/108331968

Introduction

You may have heard the term frames per second (fps), and that 60 fps is a really good target for everything animated. But most console games run at 30 fps, and motion pictures generally run at 24 fps, so why should you go all the way to 60 fps?

Frames… per second?

The early days of filmmaking

A production scene from the 1950 Hollywood film Julius Caesar starring Charlton Heston. Via Wikipedia.

When the first filmmakers started to record motion pictures, many discoveries weren’t made scientifically, but by trial and error. The first cameras and projectors were hand controlled, and in the early days analog film was very expensive – so expensive that when directors recorded motion on camera, they used the lowest acceptable frame rate for portrayed motion in order to conserve film. That threshold usually hovered at around 16 fps to 24 fps.

When sound was added to the physical film (as an audio track next to the film) and played back along with the video at the same pace, hand-controlled playback suddenly became a problem. It turns out that humans can deal with a variable frame rate, but not with a variable sound rate (where both tempo and pitch change), so filmmakers had to settle on a steady rate for both. That rate was 24 fps and, almost a hundred years later, it remains the standard for motion pictures. (In television, frame rates had to be modified slightly due to the way CRT TVs sync with the AC power frequency.)

The human eye vs. frames

But if 24 fps is barely acceptable for motion pictures, then what is the optimal frame rate? This is a trick question, as there is none.

Motion perception is the process of inferring the speed and direction of elements in a scene based on visual, vestibular and proprioceptive inputs. Although this process appears straightforward to most observers, it has proven to be a difficult problem from a computational perspective, and extraordinarily difficult to explain in terms of neural processing. – Wikipedia

The eye is not a camera. It does not see motion as a series of frames. Instead, it perceives a continuous stream of information rather than a set of discrete images. Why, then, do frames work at all?

Two important phenomena explain why we see motion when looking at quickly rotated images: persistence of vision and the phi phenomenon.

Most filmmakers think persistence of vision is the sole reason, which isn't true. Persistence of vision – observed, but never scientifically proven – is the phenomenon by which an afterimage seemingly persists on the retina for approximately 40 milliseconds (ms). It explains why we don't see black flicker in movie theaters or (usually) on CRTs, but not why we perceive motion.

The Phi Phenomenon in action. Notice that even though nothing is moving, it still feels that way?

The phi phenomenon, on the other hand, is the true reason we perceive motion when being shown individual images. It’s the optical illusion of perceiving continuous motion between separate objects viewed rapidly in succession.

Our brain is very good at helping us fake it – not perfect, but good enough. Using a series of still frames to simulate motion creates perceptual artifacts, depending largely on the frame rate. So no frame rate will ever be optimal, but we can get pretty close.

Common frame rates, from poor to perfect

To get a better idea of the absolute scale of frame rate quality, here’s an overview chart. Keep in mind that because the eye is complex and doesn’t see individual frames, none of this is hard science, merely observations by various people over time.

60 fps – The sweet spot; most people won't perceive much smoother images above 60 fps.

∞ fps – To date, science hasn't proven or observed our theoretical limit.

Note: Even though 60 fps is observed to be a good number for smooth animations, that's not all there is to a great picture. Contrast and sharpness can still be improved beyond that number. As an example of how sensitive our eyes are to changes in brightness, there have been scientific studies showing that test subjects can perceive a single white frame among a thousand black frames. If you want to go deeper, here are a few resources.

Demo time: How does 24 fps compare to 60 fps?

HFR: Rewiring your brain with the help of a Hobbit

“The Hobbit” was the first popular motion picture shot at twice the standard frame rate, 48 fps, called high frame rate or HFR. Unfortunately not everyone was happy about the new look. There were multiple reasons for this, but the biggest one by far was the so-called soap opera effect.

Most people's brains have been trained to assume that 24 fps equals expensive movies, while 50-60 half frames (interlaced TV signals) remind us of TV productions and destroy the "film look". A similar effect is created when enabling motion interpolation on your TV for 24p (progressive) material, which many viewers dislike (even though modern algorithms are usually pretty good at rendering smooth motion without artifacts, a common reason naysayers give for dismissing the feature).

Even though higher frame rates are measurably better (by making motion less jerky and fighting motion blur), there’s no easy answer on how to make them feel better. It requires retraining your brain, and while some viewers reported that everything was fine after ten minutes into “The Hobbit”, others have sworn off HFR entirely.

Cameras vs. CGI: The story of motion blur

But if 24 fps is supposedly barely tolerable, why have you never walked out of a cinema, complaining that the picture was too choppy? It turns out that video cameras have a natural feature – or bug, depending on your definition – that CGI (including CSS animation!) is missing: motion blur.

Once you see it, the lack of motion blur in video games and in software is painfully obvious. Dmitri Shuralyov has created a nifty WebGL demo that simulates motion blur. Move your mouse around quickly to see the effect.

Motion blur, as defined at Wikipedia, is

…the apparent streaking of rapidly moving objects in a still image or a sequence of images such as a movie or animation. It results when the image being recorded changes during the recording of a single frame, either due to rapid movement or long exposure.

Motion blur “cheats” by portraying a lot of motion in a single frame at the expense of detail. This is the reason a movie displayed at 24 fps looks relatively acceptable compared to a video game displayed at 24 fps.

But how is motion blur created in the first place? In the words of E&S, pioneers in using 60 fps for their mega dome screens:

When you shoot film at 24 fps, the camera only sees and records a portion of the motion in front of the lens, and the shutter closes between each exposure, allowing the camera to reposition the film for the next frame. This means that the shutter is closed between frames as long as it is open. With fast motion and action in front of the camera, the frame rate is actually too slow to keep up, so the imagery ends up being blurred in each frame (because of the exposure time).

Classical movie cameras use a so-called rotary disc shutter to do the job of capturing motion blur. By rotating the disc, you open the shutter for a controlled amount of time at a certain angle and, depending on that angle, you change the exposure time on the film. If the exposure time is short, less motion is captured on the frame, resulting in less motion blur; if the exposure time is long, more motion is captured on the frame, resulting in more motion blur.

The rotary disc shutter in action. Via Wikipedia.
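The relationship between shutter angle, frame rate and exposure time is simple enough to write down – a quick helper using the standard shutter-angle formula:

```js
// Exposure time per frame = (shutter angle / 360°) × (1 / frame rate)
function exposureTime(shutterAngleDegrees, fps) {
  return (shutterAngleDegrees / 360) / fps; // in seconds
}

console.log(exposureTime(180, 24)); // ≈ 0.0208 s – the classic "1/48th of a second" film look
console.log(exposureTime(90, 24));  // ≈ 0.0104 s – tighter shutter: crisper frames, choppier motion
```

Half the angle means half the captured motion – which is exactly the detail-vs.-smoothness trade-off described above.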

If motion blur is such a good thing, why would a movie maker want to get rid of it? Well, by adding motion blur, you lose detail; by getting rid of it, you lose smoothness. So when directors want to shoot scenes that require a lot of detail, such as explosions with tiny particles flying through the air or complicated action scenes, they often choose a tight shutter that reduces blur and creates a crisp, stop motion-like look.

Motion Blur capture visualized. Via Wikipedia.

So why don’t we just add it?

Sadly, even though motion blur would make lower frame rates in games and on web sites much more acceptable, adding it is often simply too expensive. To recreate perfect motion blur, you would need to capture four times the number of frames of an object in motion, and then do temporal filtering or anti-aliasing (there is a great explanation by Hugo Elias here). If making a 24 fps source more acceptable requires you to render at 96 fps, you might as well just bump up the frame rate in the first place, so it’s often not an option for live content. Exceptions are video games that know the trajectory of moving objects in advance and can approximate motion blur, as well as declarative animation systems such as CSS Animations, and of course CGI films like Pixar’s.
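If you want to play with the idea, here's a toy canvas sketch that fakes motion blur by drawing several sub-frame positions of a moving box into every displayed frame. It's purely illustrative – real engines use velocity buffers and proper temporal filtering – and it assumes a <canvas> element exists on the page:

```js
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const SUBFRAMES = 4; // sample the motion 4× per displayed frame

let x = 0;
function draw() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = 'rgba(0, 0, 0, 0.25)'; // each sub-sample contributes 1/4 opacity
  for (let i = 0; i < SUBFRAMES; i++) {
    // Positions the box would have occupied during this frame's "exposure".
    ctx.fillRect(x + i * 2, 50, 40, 40);
  }
  x = (x + SUBFRAMES * 2) % canvas.width; // move 8px per displayed frame
  requestAnimationFrame(draw);
}
requestAnimationFrame(draw);
```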

60 Hz != 60 fps: Refresh rates and why they matter

Note: Hertz (Hz) is usually used when talking about refresh rates, while frames per second (fps) is an established term for frame-based animation. To not confuse the two, we’ll use Hz for refresh rates and fps for frame rates.

If you've ever wondered why Blu-Ray playback is so poor on your laptop, it's often because the refresh rate is not evenly divisible by the frame rate (DVDs, on the other hand, are converted before they arrive in your drive). Yes, the refresh rate and frame rate are not the same thing. Per Wikipedia, "[..] the refresh rate includes the repeated drawing of identical frames, while frame rate measures how often a video source can feed an entire frame of new data to a display." So the frame rate describes the number of individual frames shown on screen per second, while the refresh rate describes the number of times per second the image on screen is refreshed or updated.

In the best case, refresh rate and frame rate are in perfect sync, but there are good reasons to run the refresh rate at three times the frame rate in certain scenarios, depending on the projection system being used.

A new problem with every display

Movie projectors

Many people think that movie projectors work by rolling a film past a light source. But if that were the case, we would only see a continuous blurry image. Instead, as with capturing the film in the first place, a shutter is used to project separate frames. After a frame is shown, the shutter is closed and no light is let through while the film moves, after which the shutter is opened to show the next frame, and the process repeats.

That isn’t the whole picture, though. Sure, this process would show you a movie, but the flicker caused by the screen being black half the time would drive you crazy. This “black out” between the frames is what would destroy the illusion. To compensate for this issue, movie projectors actually close the shutter twice or even three times during a single frame.

Of course, this seems completely counter-intuitive – why would adding more flicker feel like less flicker? The answer is that it reduces the "black out" period, which has a disproportionate effect on the vision system. The flicker fusion threshold (closely related to persistence of vision) describes the effect that these black out periods have. At roughly 45 Hz, "black out" periods need to be less than ~60% of the frame time, which is why the double shutter method for movies works. Above 60 Hz, the "black out" period can be over 90% of the frame time (which is what displays like CRTs need). The full concept is subtly more complicated, but as a rule of thumb, here's how to prevent the perception of flicker:

Use a display type that has no "black out" between frames and therefore doesn't flicker – that is, one that always keeps a frame on display

Have constant, non-variable black phases that modulate at less than 16 ms

Flickery CRTs

CRT monitors and TVs work by shooting electrons onto a fluorescent screen containing low persistence phosphor. How low is the persistence? So low that you never actually see a full image! Instead, while the electron scan is running, the lit-up phosphor loses its intensity in less than 50 microseconds – that’s 0.05 milliseconds! By comparison, a full frame on your Android or iPhone is shown for 16.67ms.

The refresh scan, captured at a 1/3000 second exposure. From Wikipedia.

So the whole reason that CRTs work in the first place is persistence of vision. Because of the long black phase between light samples, CRTs are often perceived to flicker – especially in PAL, which operates at 50 Hz, versus NTSC, which operates at 60 Hz, right where the flicker fusion threshold kicks in.

To make matters even more complicated, the eye doesn’t perceive flicker equally in every corner. In fact, peripheral vision, though much blurrier than direct vision, is more sensitive to brightness and has a significantly faster response time. This was likely very useful in the caveman days for detecting wild animals leaping out from the side to eat you, but it causes plenty of headaches when watching movies on a CRT up close or from an odd angle.

Blurry LCDs

Liquid Crystal Displays (LCDs), categorized as a sample-and-hold type display, are pretty amazing because they don’t have any black phases in the first place. The current image just stays up until the display is given a new image.

Let me repeat that: There is no refresh-induced flicker with LCDs, no matter how low the refresh rate.

But now you’re thinking, “Wait – I’ve been shopping for TVs recently and every manufacturer promoted the hell out of a better refresh rate!” And while a large part of it is surely marketing, higher refresh rates with LCDs do solve a problem – just not the one you’re thinking of.

Eye-induced motion blur

LCD manufacturers implement higher and higher refresh rates because of display or eye-induced motion blur. That’s right; not only can a camera record motion blur, but your eyes can as well! Before explaining how this happens, here are two mind blowing demos that help you experience the effect (click the image).

In this first experiment, focusing your vision onto the unmoving flying alien at the top allows you to clearly see the white lines, while focusing on the moving alien at the bottom magically makes the white lines disappear. In the words of the Blur Busters website, “Your eye tracking causes the vertical lines in each refresh to be blurred into thicker lines, filling the black gaps. Short-persistence displays (such as CRT or LightBoost) eliminate this motion blur, so this motion test looks different on those displays.”

In fact, the effect of our eyes tracking certain objects can’t ever be fully prevented, and is often such a big problem with movies and film productions that there are people whose whole job is to predict what the eye will be tracking in a scene and to make sure that there is nothing else to disturb it.

In the second experiment, the folks at Blur Busters try to recreate the effect of an LCD display vs. short-persistence displays by simply inserting black frames between display frames and, amazingly, it works.
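The principle is easy to mimic yourself. Here's a crude sketch that paints black on every other vsync tick – a real low-persistence display does this in hardware, and on a normal LCD you obviously sacrifice half your brightness and half your frames (again assuming a <canvas> element on the page):

```js
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
let showScene = true;
let x = 0;

function tick() {
  ctx.fillStyle = '#000';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  if (showScene) {
    // Normal frame: draw the moving object.
    ctx.fillStyle = '#fff';
    ctx.fillRect(x, 50, 40, 40);
    x = (x + 8) % canvas.width;
  }
  // Off frame: the canvas stays black – the "black out" phase.
  showScene = !showScene;
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```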

As illustrated earlier, motion blur can either be a blessing or a curse – it sacrifices sharpness for smoothness, and the blurring added by your eyes is never desirable. So why is motion blur such a big issue with LCDs compared to CRTs that do not have such issues? Here’s an explanation of what happens if a frame that has been captured in a short amount of time is held on screen longer than expected.

When a pixel is addressed, it is loaded with a value and stays at that light output value until it is next addressed. From an image portrayal point of view, this is the wrong thing to do. The sample of the original scene is only valid for an instant in time. After that instant, the objects in the scene will have moved to different places. It is not valid to try to hold the images of the objects at a fixed position until the next sample comes along that portrays the object as having instantly jumped to a completely different place.

And, his conclusion:

Your eye tracking will be trying to smoothly follow the movement of the object of interest and the display will be holding it in a fixed position for the whole frame. The result will inevitably be a blurred image of the moving object.

Yikes! So what you want to do is flash a sample onto the retina and then let your eye, in combination with your brain, do the motion interpolation.

Extra: So how much does our brain interpolate, really?

Nobody knows for sure, but it's clear that there are plenty of areas where your brain helps to create the final image you perceive. Take this wicked blind spot test as an example: it turns out there's a blind spot right where the optic nerve head sits on the retina – a spot that should appear black but gets filled in with interpolated information by our brain.

Frames and screen refreshes are not mix and match!

As mentioned earlier, there are problems when the refresh rate and frame rate are not in sync; that is, when the refresh rate is not evenly divisible by the frame rate.

Problem: Screen tearing

What happens when your movie or app begins to draw a new frame to the screen, and the screen is in the middle of a refresh? It literally tears the frame apart (see it on video).

Here’s what happens behind the scenes. Your CPU/GPU does some processing to compose a frame, then submits it to a buffer that must wait for a monitor to trigger a refresh through the driver stack. The monitor then reads the pending frame and starts to display it (you need double buffering here so that there is always one image being presented and one being composited). Tearing happens when the buffer that’s currently being drawn by the monitor from top to bottom gets swapped by the graphics card with the next frame pending consumption. The result is that the top half of your screen is from frame A (the frame that is drawn too early before the refresh), while the bottom half is from frame B (the frame that was drawn before frame A).

Note: To be precise, screen tearing can occur even when both refresh rate and frame rate match! They need to match both phase and frequency.

Solution: Vsync

Screen tearing can be eliminated through Vsync, short for vertical synchronization. It’s a feature of either hardware or software that ensures that tearing doesn’t happen – that your software can only draw a new frame when the previous refresh is complete. Vsync throttles the consume-vs.-display frequency of the above process so that the image being presented doesn’t change in the middle of the screen.

Thus, if the new frame isn’t ready to be drawn in the next screen refresh, the screen simply recycles the previous frame and draws it again. This, unfortunately, leads to the next problem.

New problem: Jitter

Even though our frames are now at least not torn, the playback is still far from smooth. This time, the reason is an issue that is so problematic that every industry has been giving it new names: judder, jitter, stutter, jank, or hitching. Let’s settle on “jitter”.

Jitter happens when an animation is played at a different frame rate than the rate at which it was captured (or is supposed to play at). Often this means jitter happens when the playback rate is unsteady or variable rather than fixed (as most content is recorded at fixed rates). Unfortunately, this is exactly what happens when trying to display, for example, 24 fps on a screen with 60 refreshes per second. Because 60 cannot be evenly divided by 24, some frames must be shown for three refreshes while others get only two (when not utilizing more advanced conversions), disrupting smooth effects such as camera pans.

In games and websites with lots of animation, this is even more apparent. Many can’t keep their animation at a constant, divisible frame rate. Instead, they have high variability due to reasons such as separate graphic layers running independently of each other, processing user input, and so on. This might shock you, but an animation that is capped at 30 fps looks much, much better than the same animation varying between 40 fps and 50 fps.

Fighting jitter

During conversion: Telecine

Telecine describes the process of converting motion picture film to video. Expensive professional converters such as those used by TV stations do this mostly through a process called motion vector steering that can create very convincing new fill frames, but two other methods are still common.

Speed up

When trying to convert from 24 fps to a PAL signal at 25 fps (e.g., TV or video in the UK), a common practice is to simply speed up the original video by about 4%, playing the 24 fps material back at 25 fps. So if you've ever wondered why "Ghostbusters" in Europe is a couple of minutes shorter, that's why. While this method often works surprisingly well for video, it's terrible for audio. How bad can a ~4% speed-up realistically be without an additional pitch correction, you ask? Almost a half-tone bad.
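How bad exactly? The math is a one-liner (pitch shift in semitones for a given speed-up factor):

```js
// A speed-up by a factor r shifts pitch by 12 × log2(r) semitones.
const speedup = 25 / 24; // 24 fps film played back at 25 fps
const semitones = 12 * Math.log2(speedup);
console.log(semitones.toFixed(2)); // ≈ 0.71 – noticeably sharp, almost a full half-tone
```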

Take this real example of a major fail. When Warner released the extended Blu-Ray collection of Lord of the Rings in Germany, they reused an already PAL-corrected sound master for the German dub, which had been sped up for PAL and then pitched down to correct the change. But because Blu-Ray is 24 fps, they had to convert it back, so they slowed it down again. Of course, it's a bad idea to do such a two-fold conversion anyway, as it is a lossy process, but even worse: when slowing it down to match the Blu-Ray video, they forgot to change the pitch back, so every actor in the movie suddenly sounded super depressing, speaking a half-tone lower. Yes, this actually happened, and yes, there was fan outrage, lots of tears, lots of bad copies, and lots of money wasted on a large media recall.

The moral of the story: speed change is not a great idea.

Pulldown

Converting movie material to NTSC, the US standard for television, isn’t as simple as speeding up the movie, because changing 24 fps to 29.97 fps would mean a 24.875% speed up. Unless you really love chipmunks, this may be not the best option.

Instead, a process called 3:2 pulldown was invented (among others), which became the most popular way of conversion. It’s the process of taking 4 original frames and converting them to 10 interlaced half-frames, or 5 full frames. Here’s a picture describing the process.

3:2 Pulldown in action. From Wikipedia.

On an interlaced screen (i.e. a CRT), the video fields in the middle are shown in tandem, each of them interlaced, and so are made up of only every second row of pixels. The original frame, A, is split into two half frames that are both shown on screen. The next frame, B, is also split but the odd video field is shown twice, so it’s distributed across 3 half frames and, in total, we arrive at 10 distributed half frames for the 4 original full frames.

This works fairly well when portrayed on an interlaced screen (such as a CRT TV) at roughly 60 video fields per second (practically half frames, with every odd or even row blank), as you never see two of them together at once. But it can look terrible on displays that don't support half frames and must composite them back into 30 full frames, as in the row at the far right of the picture. This is because every 3rd and 4th frame is stitched together from two different original frames, resulting in what I call a "Frankenframe". This looks especially bad with fast motion, when the difference between the two original frames is significant.
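Expressed in code, the cadence looks like this – a simplified sketch that ignores field parity (real telecine alternates odd and even fields) but shows where the Frankenframes come from:

```js
// 3:2 pulldown: 4 film frames (A, B, C, D) become 10 video fields.
function pulldown(frames) {
  const fields = [];
  frames.forEach((frame, i) => {
    const copies = i % 2 === 0 ? 2 : 3; // frames alternate between 2 and 3 fields
    for (let c = 0; c < copies; c++) fields.push(frame);
  });
  return fields;
}

const fields = pulldown(['A', 'B', 'C', 'D']);
console.log(fields.join(' ')); // A A B B B C C D D D

// Re-assembling those fields into 5 full frames pairs them up again:
console.log(
  Array.from({ length: 5 }, (_, i) => fields.slice(i * 2, i * 2 + 2).join('+'))
);
// → [ 'A+A', 'B+B', 'B+C', 'C+D', 'D+D' ]
// The 3rd and 4th frames mix two different source frames: the "Frankenframes".
```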

So pulldown sounds nifty, but it isn’t a general solution either. Then what is? Is there really no holy grail? It turns out there is, and the solution is deceptively simple!

During display: G-Sync, Freesync and capping

Much better than trying to work around a fixed refresh rate is, of course, a variable refresh rate that is always in sync with the frame rate, and that’s exactly what Nvidia’s G-Sync technology and AMD’s Freesync do. G-Sync is a module built into monitors that allows them to synchronize to the output of the GPU instead of synchronizing the GPU to the monitor, while Freesync achieves the same without a module. It’s truly groundbreaking and eliminates the need for telecine, and it makes anything with a variable frame rate, such as games and web animations, look so much smoother.

Unfortunately, both G-Sync and Freesync are still fairly new technologies and not yet widely deployed on consumer devices, so if you're a developer doing animations on websites or apps and can't afford the full 60 fps, your best bet is to cap the frame rate at a value the refresh rate is evenly divisible by – in almost every case, that cap is 30 fps.
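On the web, that cap is easy to implement on top of requestAnimationFrame, which fires in sync with the display's refresh. A minimal sketch – most frameworks and game loops offer this as a setting, and render() here is just a placeholder for your own drawing code:

```js
// Render on every other vsync callback: on a 60 Hz display this yields a
// steady 30 fps, which looks far better than a rate that wobbles around.
let frameCount = 0;

function loop() {
  frameCount++;
  if (frameCount % 2 === 0) {
    render(); // your actual update & drawing step
  }
  requestAnimationFrame(loop);
}

function render() {
  // ...update and draw your scene here...
}

requestAnimationFrame(loop);
```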

Conclusion & actionable follow-ups

So how do we achieve a decent balance among our desired effects – minimal motion blur, minimal flickering, constant frame rates, great portrayal of motion, and great compatibility with all displays – without taxing the screen and GPUs too much? Yes, super high frame rates could reduce motion blur further, but at a great cost. The answer is clear and, after reading this article, you should know what it is: 60 fps.

Now that you are wiser, go do your best at running all of your animated content at 60 fps.

a) If you’re a web developer

Head over to jankfree.org, where members of the Chrome team are collecting the best resources on how to get all of your apps and animations silky smooth. If you only have time for one article, make it Paul Lewis’s excellent runtime performance checklist.

b) If you’re an Android developer

Check out Best Practices for Performance in our official Android Training pages, where we summarize the most important factors, bottlenecks, and optimization tricks for you.

c) If you work in the film industry

Record all of your content at 60 fps or, even better, at 120 fps, so you can scale down to 60 fps, 30 fps and 24 fps when needed (sadly, to also support PAL’s 50 fps and 25 fps, you’ll need to drive it up to 600 fps). Display all your content at 60 fps and don’t apologize for the soap opera effect. This revolution will take time, but it will work.
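That 600 isn't arbitrary – it's simply the smallest rate that all the common targets divide into evenly, as a quick calculation shows:

```js
const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));
const lcm = (a, b) => (a * b) / gcd(a, b);

console.log([24, 25, 30, 50, 60].reduce(lcm)); // → 600
```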

d) For everyone else

Demand 60 fps whenever you see moving images on the screen, and when someone asks why, direct them to this article.

Important: If this article influenced you and your business decisions in a positive way, I would love to hear from you.

Let’s all work together for a silky smooth future!

Getting thrown out at the wrong station

You know that system where children are forced to exit at a different subway station than adults? You don't? Well, let me explain it to you.

More and more cities are testing a subway system where adults can exit normally at every station, but children are required to use a different door that only opens at a dedicated, child-safe and friendly station. This "catch-all" solution is easier to implement than making all stations kid-friendly. After all, kids don't really know where they're going anyway – they're just boarding the subway to experience the ride – so it's fine to throw them out at the last station.

Wait – what?

Ooooh. Sorry, I sneaked a few analogies into the sadly accurate description above. Replace children with smartphones, adults with desktop computers, and cities with companies.

When Lufthansa* sent me a promotional email and I tapped the link on my phone, shouldn't I be thankful that Lufthansa forwarded me to their mobile homepage instead of showing me the desktop-optimized promo page? Wait, I shouldn't? You're telling me that sounds like a total crap idea? That you'd rather zoom in and see the relevant page than be sent to a completely irrelevant one? Mhh yes, you make a point. But wouldn't that be expensive – like fixing all the stations in the analogy above?

Actually, no. Forwarding from a frickin’ desktop link to a frickin’ mobile link is not expensive, and not even difficult. If you have no mobile page to forward to, the problem will fix itself over time. Not in your then-bankrupt-because-of-ignoring-mobile company’s favor of course.

* Not trying to pick on Lufthansa alone. This is a widespread issue with thousands of providers.

Open source project ideas

Attention: Unicorns are as real as the advice in this article. As much as I hate to add an obvious disclaimer, I would hate it even more if actual juniors followed this! Now go and enjoy me breaking out of my usual serious role and have a little fun :)

So you're a junior web developer who just started their career and you want to get your hands dirty with real code, but you're clueless about where to start and what to build. I've been there, my young padawan – worry not! I've got you covered. The following is a list of highly innovative concepts that the open source world has been waiting for. Without further ado, here it goes:

A twitter client. It's too bad that nobody has come up with an alternative to the boring Twitter website yet. I think the time is ripe for a disruptive new app that uses the web's full abilities. Think horizontal scrolling, parallax, 3D transforms. A custom, personalized font and tweet design for each of your Tweeple. Replacing hash tags with rich imagery. You get the idea.

A Dialog library. Dialogs are extremely useful for all kinds of UIs, but in 2014, it's almost unbelievable that nobody came up with an alternative to alerts, confirms and popups (although there's nothing wrong with popups, of course!). Why not always use popups, you say? Well, with popups, it's up to the OS to decide how to style and animate them, completely breaking the look and feel of your carefully crafted interactive story-telling experience. If you decide to take a shot, one important note: Any respectable dialog plugin needs to fully support endlessly nested dialogs, a very common and valid use case for them.

A Lightbox plugin. Some people say a lightbox is just another way to style a dialog, but haters gonna hate. Lightboxes are almost the complete opposite in many ways, err, ways that don't immediately jump to the top of my mind but that surely exist. Sure, there might already be one or two jQuery-based lightbox plugins, but we're still missing them for Angular, React, Polymer – hell, even jQuery UI has no lightbox yet! It's up to you to fix this.

A Gradient Generator. Wouldn't it be nice to be able to forget about all the crazy syntax required to do gradients in CSS and just create them visually? Somebody should really build a tool for this. And best of all, you could combine your work on this one with another component that the editor will need – a colorpicker for the web.

A presentation framework. In 2014, you have no chance of delivering a great conference talk if you don't show real samples and code for at least half of it, or even better, live coding. It's a way more innovative style of delivering a talk and the audience will appreciate you keepin' it real as a coder. But Powerpoint and Keynote don't allow embedding live code and iframes with your web apps, so wouldn't it be neat to be able to do presentations in the browser? Sure, you might lose the ability to go full screen properly (i.e. with a presenter screen), but who needs presenter notes anyway.

A CSS transition effect library. Since the CSS animation syntax, as well as CSS transitions, are sadly extremely verbose and hard to understand, it would be super handy if somebody could create a library of beautiful transitions and animations that I can use on my pages. If you decide to build one, here's one killer feature request: Add a "random transition mode" that picks a random transition every time the user interacts with the same element, thus making the app less predictable = less boring.

A todo list. Arguably the most complex project in this list, it's also the most rewarding. While the engineering of a todo list requires highly complex algorithms and definitely isn't recommended for anyone without a CS degree, it has the highest potential of taking off big time. It makes for a wonderful portfolio project, showing future employers and investors that you're into deep problem solving and don't shy away from big ideas.

Update with new ideas from the community (especially thanks to @tobie) for the slightly more senior developers:

Now I'm sure I'm missing a lot of bright ideas (if you come up with one independently, share it with me on Twitter and I'll maybe add it!), but starting with any one of these, you'll gain valuable real-world application development practice and do your part to make the web more complete. Working on something that others sorely need and have wished for will make you feel all warm and fuzzy inside. And best of all: Once you've coded your first version and uploaded it to Github, others will take it from there and create pull requests that polish that raw diamond for you – without you having to worry about maintenance! – allowing you to move on to the next project. The beauty of open source.