People who are colour blind can find watching TV frustrating because they are unable to distinguish between certain colours, such as red and green, or black and red.
This is particularly problematic for sports fans, who may struggle to tell rival teams apart. Indeed, with around 10% of the global male population affected by CVD (colour vision deficiency), Eyeteq, a unique colour correction solution, offers rights-holders an opportunity to differentiate content that has potentially cost them billions of dollars in investment. Eyeteq delivers both a socially responsible, accessible solution and a better product for all.

Over the last 10 years a technological transformation has changed the lives of the world’s population, and two developments in particular are having a profound impact on human visual behaviour patterns:

The global proliferation of smartphones, tablets and laptops in the home

The replacement of incandescent light bulbs, first with compact fluorescent tubes, promoted as more energy efficient, and more recently with LED technology

Both of the above have a bearing on digital screen technology, which has also advanced dramatically thanks to changing lifestyle behaviour patterns, 24x7 internet access and growing demand for consumer electronics for leisure and business usage.

Approximately 10% of men are colour deficient, meaning they have trouble distinguishing some colour pairs (reds from greens, pinks from greys, etc.) that are clearly distinct for the rest of the population [1]. Colour deficiency is an issue of both accessibility and preference. Spectral Edge has developed the Eyeteq image processing algorithm, which simultaneously makes visual information more accessible and, for the colour-deficient observer, makes images dramatically more preferred.
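Eyeteq itself is proprietary, but the broader family of techniques it relates to, often called daltonization, can be sketched in a few lines: simulate the deficient observer's view, measure the information lost, and redistribute it into channels the viewer can still discriminate. The simulation matrix, `strength` parameter, and redistribution rule below are illustrative assumptions, not Spectral Edge's method:

```python
import numpy as np

# Hypothetical daltonization sketch -- NOT the Eyeteq algorithm.
# We simulate a protanope's view with an approximate linear RGB
# transform, then shift the detail lost in the red channel into the
# blue channel, where a protanope retains discrimination.

# Simplified protanopia simulation matrix (linear RGB, approximate).
PROTAN_SIM = np.array([
    [0.567, 0.433, 0.0],
    [0.558, 0.442, 0.0],
    [0.0,   0.242, 0.758],
])

def daltonize(rgb, strength=0.7):
    """Redistribute colour detail invisible to a protanope.

    rgb: float array of shape (..., 3) with values in [0, 1].
    """
    simulated = rgb @ PROTAN_SIM.T      # what a protanope would see
    error = rgb - simulated             # information that was lost
    shift = np.zeros_like(rgb)
    shift[..., 2] = error[..., 0] * strength   # push red loss into blue
    return np.clip(rgb + shift, 0.0, 1.0)

# A saturated red pixel gains a blue component, so it no longer
# collapses onto nearby greens for the protanopic viewer.
red = np.array([[1.0, 0.0, 0.0]])
out = daltonize(red)
```

A production system such as Eyeteq must also preserve the image for normal-sighted viewers; this toy makes no attempt at that balance.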

Next-generation TVs and broadcasts will feature High Dynamic Range (HDR) capabilities as standard, with a greater contrast ratio that makes content more vivid and lifelike for a more engaging and immersive viewing experience.
It is taking time, however, for HDR UHD broadcasts to be widely rolled out over cable, terrestrial or satellite networks, due to bandwidth availability and infrastructure investment. By enhancing the perceptual quality of live TV content, Vividteq provides a path to next-generation TV viewing experiences without a CPE (customer premises equipment) upgrade. Vividteq also transforms the viewing experience on live, unproduced content such as conference feeds and music festivals.

Although it is possible to adjust the blue light settings on most digital displays, the trade-off is poorer image quality overall, with details such as picture sharpness or skin tone lost.
It is now possible, however, to overcome the blue light versus image quality dilemma using Nighteq, a novel night-mode solution developed by Spectral Edge for use on all digital displays.
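For contrast, here is a minimal sketch of the naive blue-light reduction most displays offer today, which illustrates the trade-off described above: simply attenuating the blue channel shifts the white point, so whites and skin tones take on a yellow cast. This is the conventional approach, not Nighteq, whose method is proprietary:

```python
import numpy as np

# Naive night-mode filter -- the conventional blue-light reduction,
# NOT Spectral Edge's Nighteq algorithm. Attenuating blue cuts
# short-wavelength output but visibly distorts neutral colours.

def naive_night_mode(rgb, blue_scale=0.5):
    """Attenuate the blue channel of an (..., 3) RGB image in [0, 1]."""
    out = rgb.copy()
    out[..., 2] *= blue_scale
    return out

# Pure white is no longer neutral after filtering: the result is
# visibly yellow, which is exactly the image-quality loss the naive
# approach trades for reduced blue light.
white = np.array([[1.0, 1.0, 1.0]])
filtered = naive_night_mode(white)
```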

Although most surveillance cameras feature RGB and infrared (IR) sensors as standard, they can only capture either RGB or IR data at a time, not both, which limits their ability to acquire detailed information in low or poor light situations.
Using breakthrough image fusion capabilities and leveraging understanding of the visible and invisible spectra, we've developed a powerful solution that can fuse visible and near-infrared (NIR) footage in real time, producing the highest quality images expected from high-grade security systems.
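The exact fusion method is not described here, so the sketch below uses a generic, assumed approach: inject NIR luminance detail into the RGB frame while preserving its colour ratios. Function names and the `nir_weight` parameter are illustrative only:

```python
import numpy as np

# Hypothetical visible/NIR fusion by luminance injection -- an
# assumed, generic scheme, not Spectral Edge's algorithm. The NIR
# frame contributes brightness detail (valuable in low light) while
# the RGB frame supplies the colour.

def fuse_rgb_nir(rgb, nir, nir_weight=0.5):
    """Fuse an (..., 3) RGB image with a same-sized NIR image in [0, 1]."""
    luma = rgb.mean(axis=-1, keepdims=True)        # crude luminance
    fused_luma = (1 - nir_weight) * luma + nir_weight * nir[..., None]
    # Rescale the colour channels to carry the fused luminance,
    # guarding against division by zero in black regions.
    scale = fused_luma / np.maximum(luma, 1e-6)
    return np.clip(rgb * scale, 0.0, 1.0)

# A dark but colourful pixel gains brightness from a strong NIR
# signal while its hue (channel ratios) is preserved.
rgb = np.array([[0.1, 0.05, 0.02]])
nir = np.array([0.9])
fused = fuse_rgb_nir(rgb, nir)
```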

IEE: Spectral Edge, a spin-off from the University of East Anglia, has developed an image processing technology called ‘Eyeteq’. It is claimed that Eyeteq significantly improves the visual distinctiveness of colours in images for people with colour blindness, without adversely impacting on the experiences of people who are not colour blind. Indeed, evidence reported by Spectral Edge based on static images, suggests that viewers without colour blindness might even prefer images processed with Eyeteq.
Spectral Edge is further developing the Eyeteq product with a view to embed or add the technology to digital television sets. Independent evidence was required to demonstrate how colour-blind and non-colour blind people might experience this product in a television environment.

A video demonstration of how Eyeteq transforms the picture for the "deutan" category of colour-blind people. Eyeteq has been applied to one half of the screen only for comparison purposes; on a real set-top box the whole screen would be transformed:

There are many applications where multiple images are fused to form a single summary greyscale or colour output, including computational photography (e.g. RGB-NIR), diffusion tensor imaging (medical), and remote sensing. Often, and intuitively, image fusion is carried out in the derivative domain. Here, a new composite fused derivative is found that best accounts for the detail across all images and then the resulting gradient field is reintegrated. However… (to read more, download the document)
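The abstract above can be illustrated with a toy 1-D analogue, assuming the simplest possible composite rule: at each position keep the derivative of largest magnitude across the inputs, then reintegrate by cumulative summation. The paper's actual method operates on 2-D gradient fields and is considerably more sophisticated; this sketch shows only the principle:

```python
import numpy as np

# Toy 1-D derivative-domain fusion -- an illustrative assumption,
# not the paper's method. The strongest derivative at each position
# forms the composite, which is then reintegrated by a running sum.

def fuse_1d(signals):
    """Fuse same-length 1-D signals via their strongest derivatives."""
    grads = np.diff(signals, axis=1)            # per-signal derivatives
    idx = np.argmax(np.abs(grads), axis=0)      # strongest detail wins
    fused_grad = grads[idx, np.arange(grads.shape[1])]
    start = signals.mean(axis=0)[0]             # anchor the DC offset
    return np.concatenate([[start], start + np.cumsum(fused_grad)])

# One input has an edge the other misses entirely; the fused result
# retains that edge because fusion happens in the derivative domain.
a = np.array([0.0, 0.0, 1.0, 1.0])   # sharp edge at position 2
b = np.array([0.0, 0.0, 0.0, 0.0])   # flat: the edge is invisible
fused = fuse_1d(np.stack([a, b]))
```

In 2-D the reintegration step is non-trivial because a composite gradient field is generally not integrable, which is where methods like the one described above do their real work.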
