
Arduino and the TSL230R: Photographic Conversions

In the previous post on using the Taos TSL230R with the Arduino, I covered the basic operations of the chip, and some essential conversions for going from radiometric to photometric representation of its data. In this post, we’ll expand on that knowledge to calculate exposure times and apertures using the Exposure Value system and produce much more accurate lux calculations using multiple wavelengths of light. After reading both of these tutorials, you should have enough information to create your own photographic light meter using a few simple components.

GETTING STARTED

Before we go any further, we’ll bring forward some code from the previous example and set this as our starting point:
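Since the original snippet isn’t reproduced here, below is a hedged reconstruction of the part-one starting point. The pin assignment and the frequency-to-irradiance scale factor are assumptions – they depend on how you wired the chip and which sensitivity/divider settings you chose, so check them against the first tutorial and the datasheet:

```cpp
#include <math.h>

// Reconstruction (sketch) of the part-1 starting point. The TSL230R outputs
// a square wave whose frequency scales with irradiance; we count pulses via
// an interrupt and convert the 1-second count to uW/cm2.

volatile unsigned long pulse_cnt = 0;  // bumped by the interrupt handler

void add_pulse() {                     // attachInterrupt(0, add_pulse, RISING);
  pulse_cnt++;
}

unsigned long get_frequency() {
  // read and reset the count accumulated over a 1-second window in loop()
  unsigned long freq = pulse_cnt;
  pulse_cnt = 0;
  return freq;
}

float calc_uwatt_cm2(unsigned long freq) {
  // ASSUMPTION: roughly 1 kHz per uW/cm2 at 100x sensitivity with no output
  // divider -- verify against the datasheet curve for your own settings.
  return (float) freq / 1000.0;
}
```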

It is important to understand the different ways we can meter light for photographic purposes. In reflective metering the light that is measured is that which is reflected off the subject – that is, you point your meter at the subject being photographed. In incident metering the light that is measured is that which is incident upon the subject – that is, you stand where your subject is and point your meter at the light source.

In all of our calculations below, we will be operating from the perspective of reflected-light metering. The formulas vary slightly for incident metering, and one should take that into account when designing a project. Please note as well that during testing you should be metering reflected light – you will not get an accurate reading by pointing a light directly at the sensor. Instead, use a sheet of white paper or similar and measure the light reflecting off it.

EXPOSURE VALUE CALCULATIONS

Exposure Values are used in photography to represent different combinations of shutter speed and aperture that result in the same exposure. Normally, EV is calculated regardless of actual light – it’s a means of expressing a set of camera settings (speed and aperture). However, if we follow the APEX System we find that we can relate EV to a combination of brightness value (Bv) and film/sensor sensitivity (Sv).

In the Additive APEX System, the exposure calculation is defined as Ev = Av + Tv = Bv + Sv. For an in-depth explanation of how to calculate this, please refer to the article linked above. (Or, just read the code below!) It should be fairly obvious that, in such a calculation, if we know any three values, we can determine the fourth.

Our first task will be to calculate the exposure time, in seconds, given the following information: aperture, ISO, and lux. For this, and all calculations, we are going to need the light meter calibration constant (K) – we’ll use 14, the standard constant for reflected-light meters made by Pentax (because I’m a Pentax guy). We will also need to know the relationship between the ASA arithmetic speed value and the ASA speed value, which is approximately 0.3.

So, let’s first create a function that will give us the EV for our combination of illuminance and film speed:
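The original function isn’t reproduced here, so this is a hedged sketch of the Bv + Sv formulation described above, using K = 14 and the 0.3 ASA constant from the text. Treat the exact constants and scaling as assumptions to be validated against a known meter:

```cpp
#include <math.h>

// Sketch: Ev = Bv + Sv, with K = 14 (Pentax reflected-meter constant)
// and the ~0.3 ASA arithmetic/speed-value relationship from the text.
float calc_ev(float lux, int iso) {
  float bv = log(lux / 14.0) / log(2.0);         // brightness value from lux and K
  float sv = log((float) iso * 0.3) / log(2.0);  // speed value from ISO
  return bv + sv;
}
```

A sanity check on the behavior: doubling either the light level or the film speed should raise the result by exactly one stop.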

Note that in the above code we use log(x) / log(2) everywhere. This is because the Arduino environment does not support log2() by default, and by the change-of-base rule, the logarithm of x in any base is the natural logarithm of x divided by the natural logarithm of the base. So, we get log2(x) by dividing ln(x) by ln(2).
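That change-of-base identity can be wrapped in a tiny helper if you prefer (a sketch; the name l2 is ours):

```cpp
#include <math.h>

// base-2 logarithm via change of base, since log2() isn't available by default
float l2(float x) {
  return log(x) / log(2.0);
}
```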

Now that we have the relevant EV, to get the exposure time, in seconds, we need to take our Ev calculated from two of the required three values (Bv, Sv, Av) and compare it to the third to get the new value out.
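A hedged reconstruction of that step: with Av = 2·log2(f-number), we take Tv = Ev − Av and return 2^Tv, the value you divide one second by:

```cpp
#include <math.h>

// Sketch: exposure divisor from Ev and aperture.
// Av = log2(N^2) = 2 * log2(N); Tv = Ev - Av; we return 2^Tv,
// so a return of 256 means 1/256th of a second.
float calc_exp_tm(float ev, float aperture) {
  float av = 2.0 * (log(aperture) / log(2.0));
  float tv = ev - av;
  return pow(2.0, tv);
}
```

For example, at Ev 12 and f/4 (Av = 4), Tv = 8, so the function returns 256 – i.e., 1/256 s.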

Note that this function returns a floating point number that we would divide 1 second by to get the final exposure time. I.e., it would return 10 for 1/10th second, 2 for 0.5″, and 0.05 for 20″. The following function will make it easy to convert this time into milliseconds for direct exposure control:
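A sketch of that conversion, assuming the divisor-style value described above (note the integer cast truncates, so pick test values accordingly):

```cpp
// Convert the exposure divisor (e.g. 10 for 1/10 s) to milliseconds.
// Values below 1.0 (long exposures) come out as large ms counts.
unsigned long calc_exp_ms(float exp_tm) {
  return (unsigned long) (1000.0 / exp_tm);
}
```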

Now we have functions that will return the EV equivalent given our brightness and speed values, extrapolate exposure time from EV and aperture, and convert exposure time to ms (for the purposes of controlling a camera, etc.).

It should be fairly obvious at this point how to extrapolate other values from combinations of three factors, but here’s another function that calculates aperture given the combination of Bv, Sv, and Tv (i.e., you know the exposure time but not the aperture):
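A hedged sketch: rearranging Ev = Av + Tv gives Av = Ev − Tv, and the f-number is then 2^(Av/2). Here the exposure time is passed in the same divisor style as earlier (256 for 1/256 s), so taking log2 of it recovers Tv:

```cpp
#include <math.h>

// Sketch: f-number from Ev and the exposure divisor (e.g. 256 for 1/256 s).
float calc_aperture(float ev, float exp_tm) {
  float tv = log(exp_tm) / log(2.0);  // Tv from the divisor
  float av = ev - tv;                 // Av = Ev - Tv
  return pow(2.0, av / 2.0);          // N = sqrt(2^Av)
}
```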

So, there you have it – you now know (well, we hope you do!) how to calculate meaningful photographic data from the illuminance (lux) we were able to calculate in our previous tutorial.

ACCURATE MEASUREMENT OF MULTIPLE WAVELENGTHS

Ok, so it’s time to get a little more complicated. Up to this point, we’ve only been calculating the relative intensity of a single wavelength of light. It’s pretty rare that we photograph a subject illuminated by a laser or well-tuned diode that produces only a single wavelength. More than likely, we’re photographing in daylight, with a flash, or under hot lights in the studio. To accurately measure these sources, we have to take into account the fact that they are made up of different wavelengths of light, and that each wavelength is more or less luminously efficient (i.e., visible to our eyes).

As pointed out previously, we are only going to deal with photopic vision – that is, the way our eyes work in ‘bright’ environments. Our eyes change behavior when the light level drops very low (think of a dark room at night), and scotopic vision kicks in. We rarely photograph in levels this dark, so we won’t cover this type of vision.

V() is the standard luminosity function we discussed in the earlier post, and J() is the power spectral density function for the given wavelength. The hard part here is the power spectral density function – we either have to calculate for a blackbody radiator at a particular temperature using Planck’s Law (which is fairly difficult on the Arduino, and is also highly inaccurate for light bulbs and the like), or we have to figure out this data empirically. To measure this information for a given light source, we’d have to use a spectrum analyzer (which we can build with a TSL230R, but that’s a topic for another day) on that light source. Fortunately for us, the CIE provides a couple of tables that give us the PSD values for two illuminants: ‘A’, which corresponds roughly to an incandescent lightbulb, and D65, which corresponds roughly to mid-day sunlight.

We’ll need to create three arrays to perform this calculation: one that holds the wavelengths of light we’re willing to calculate for (as we have no interest in integrating from 0 to infinity), one that holds the luminous efficiency function value for each of those wavelengths, and one that gives us the power spectral density for each wavelength we want to account for.
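Since the original tables and function aren’t reproduced here, below is a hedged sketch using a coarse seven-point sample (the original uses 18 entries). The V(λ) and Illuminant-A SPD values are approximate CIE figures, and the final SCALE constant is a placeholder – the SPD table is relative, so the result must be calibrated against a known source:

```cpp
#include <math.h>

#define NUM_BANDS 7

// Coarse, approximate sample of the CIE tables -- use the real tables,
// at whatever resolution memory allows, in practice.
int   wavelengths[NUM_BANDS] = {  400,    450,   500,   550,    600,    650,    700 };
float v_lambda[NUM_BANDS]    = { 0.0004,  0.038, 0.323, 0.995,  0.631,  0.107,  0.0041 };
float ilA_spd[NUM_BANDS]     = { 14.71,  33.09, 59.86, 92.91, 129.04, 165.03, 198.26 };

float calc_lux_gauss(float uw_cm2) {
  // trapezoidal integration of V(lambda) * SPD(lambda) across our bands
  float sum = 0.0;
  for (int i = 1; i < NUM_BANDS; i++) {
    int   step = wavelengths[i] - wavelengths[i - 1];
    float avg  = (v_lambda[i] * ilA_spd[i] + v_lambda[i - 1] * ilA_spd[i - 1]) / 2.0;
    sum += avg * (float) step;
  }
  // SCALE is a placeholder calibration constant (the SPD values are relative),
  // folded in alongside the 683 lm/W peak efficacy -- calibrate empirically.
  const float SCALE = 1.0e-6;
  return uw_cm2 * sum * 683.0 * SCALE;
}
```

Note the result stays proportional to the radiometric reading; only the weighting between wavelengths changes with the illuminant chosen.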

If your background is in anything but calculus or photometry, it can be a bit of a beast to get your head around, so take some time to thoroughly read and understand the above function and tables before using them. Effectively, we are determining the efficiency function by jumping between two points on our wavelength graph and estimating the efficiency per nm of wavelength for each nm between our current and last point. We sum these up, and then multiply by the efficiency constant of 683 (lm/W at 555nm) to determine the lux, given the radiometric energy observed for a particular multiple-wavelength light source. Simply replace any calls to calc_lux_single() with a call to calc_lux_gauss().

That wraps it up for this tutorial on photographic conversions. You now have enough information to build a basic photographic light meter using an Arduino and a TSL230R light sensor. In the next tutorial on this subject, we’ll work on increasing accuracy by calculating for the frequency response curve of the sensor itself, and adjusting for temperature impacts on the dark frequency of the chip.

So if you wanted to see just how much of a specific wavelength, say 1000nm, there was on a sunny day, I would change
int wavelengths[18] = { 1000 };
float v_lambda[18] = { xxx };
float ilA_spd[18] = { yyy };

David – well, sort of. It really does depend on which specific question you are asking. If you really want to know how much actual power is represented in a specific wavelength, you would need a prism to split the light into multiple wavelengths and measure each one independently.

However, if you just wanted to know how much of the visible light (lux) you’re reading is likely to be represented within this specific wavelength, you would just simplify the formula:

Also, mind you that 1000nm is largely not visible to our eyes. Therefore there is no associated V(lambda) value for it, and the SPD tables only go up to 830nm. Lux is a measure of how bright light appears _to our eyes_. You would need to get below 830nm to make a useful lux calculation.

I have enjoyed your TSL230R tutorial tremendously. You indicated there would be a third tutorial in the series related to increasing response and accuracy. Has that been written up yet?
If so, I don’t seem to be able to find it. Thanks for sharing your work.
don

Hi Don – I haven’t gotten to the third tutorial yet, primarily because I haven’t found many issues with temperature response curves in the 50F-85F range. Surely, it does have an impact, but for the photographic-oriented uses I have applied it to thus far, I almost can’t make the case to use up more memory with code to increase accuracy in the 0.01-0.05% range.

As I get more done on my OpenMoco project, I’ll free some time up again for the TSL230R and knock out the last bits of voodoo around this chip. =)

Thanks for the quick response. I do some photography but am a computer guy (programming in C mostly). Recently I have done some pinhole stuff including building my own cameras. I have glanced at your OpenMoco project and will try to look at it more carefully as I have time.
Don

Your TSL230R tutorial helps me a lot! I’m a landscape photographer using a large format camera. I’ve always wanted to develop a system which can help me simplify the process, and this is just right for the main light meter function! Thank you so much, and I’m really looking forward to the third part of the tutorial.

One thing I notice is that in the code for calculating exp_tm, there is a line: float exp_log = pow(2, exp_tm); but I think it should be float exp_log = 1/pow(2, exp_tm);

The reason I don’t divide by one there is because later operations work more easily when dealing with this value rather than the actual value in fractions of a second. While it’s not entirely obvious from this tutorial (and I should’ve done a better job of pointing it out), I use that value later for displaying a “natural” exposure time (1/x, x″, etc.). If you look at the code in the LightRails Dynamic External Exposure Control ( https://roamingdrone.wordpress.com/2009/03/25/lightrails-dynamic-external-exposure-control-for-time-lapse/ ), you’ll see this:
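The snippet from that post isn’t reproduced here; the idea, sketched (this is not the exact LightRails code, and it formats into a buffer with snprintf for illustration where the Arduino code would use Serial.print):

```cpp
#include <stdio.h>

// Sketch: format the 2^Tv divisor as a "natural" exposure time string.
// A divisor >= 1 reads as a fraction (e.g. "1/125"); below 1, as whole
// seconds in the x" convention (e.g. "2.0\"").
void format_exposure(float exp_tm, char *buf, unsigned int len) {
  if (exp_tm >= 1.0)
    snprintf(buf, len, "1/%d", (int) (exp_tm + 0.5));
  else
    snprintf(buf, len, "%.1f\"", 1.0 / exp_tm);
}
```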

Hello. Great project. I borrowed a few ideas from it for my own project (a TTL meter for an old camera). I’m trying to get it as small as possible, so I’m using a simple 8-bit IC instead of the Arduino, and I need to reduce the floating point calculations to a minimum.
I have a question regarding the calc_lux_gauss() function: uw_cm2 doesn’t seem to depend on the wavelength… so it’s a constant in the integration. Couldn’t you calculate the value of the integral off-chip, implement it as a constant, then multiply it at the end with 683 * uw_cm2? Or am I missing something?

Ady, it would appear that you could do that – yes. In fact, that would not only take a lot less memory, but it would also be heaps faster and more accurate. =) Even the Arduino doesn’t have an FPU, so these calculations are a gamble.

The TAOS chip itself has a spectral response with a strong peak in the near IR, and it’s not at all flat throughout the visible region. I’m surprised that you get reasonable results using this chip without some form of hardware IR filtration. I have used the TAOS chip to make several light meters of my own design and each time they required IR filtration to be even moderately useful for tungsten light sources or even strong IR reflectors such as grass. Do you incorporate any spectrum-correcting filters in your hardware?

Yes, the TSL230R’s sensitivity peaks in the IR-A range (around 780nm). I have not needed to add IR filtration, however; as you’ll note, the gaussian function leaves a great deal of energy on the floor. The conversion then finds, through the CIE V(lambda) table, that even though the near-IR range accounts for a great deal of the available power, it has little visibility to our eyes in photopic vision. Thus, in the conversion we find that although 720nm represents a lot of the power, the efficiency is at a low level (around 1/10th of 1%). Obviously, the read values should be reduced slightly according to the wavelength at which you’re calculating and how the chip responds to that frequency, but that would have to be accounted for during the gaussian function (not before or after), and I doubt it would amount to more than a moderate reduction in the overall lux reading. For incandescent light sources, I have found it to be very close to the spot meter in my camera when using a 15% gray target. (When rounded out to near 1/x exposure values, it usually hits the same target.) Mind you, there are critical flaws in the first part of the tutorial series which need to be accounted for, and which can largely be assumed to have been absorbed elsewhere in the formula (more than likely in the gaussian function, either as precision errors due to the lack of an FPU, or because of a flaw in the scale of the given numbers).

The means used were validated as “near-enough” to a known lux source (i.e. a 1 lux light reads ~1 lux, etc.) and through comparison to an in-camera meter. Doesn’t necessarily mean that everything’s _right_, but that it’s darned close.

If any spectrum filtering hardware is used, those spectrums filtered out MUST be removed from the tables used in the Gaussian function.

The design was for the standard “meter off a 15% gray card for 50% exposure.” To get pure accuracy, one must even then account for the spectral sensitivity of the recording media. Ilford PAN-F 50 is more sensitive to IR light than the CMOS in my K10D (which has an IR cut filter).

A long way of saying: this tutorial is not designed to give you a “perfect” light meter, but instead to introduce one to the core concepts and conversions, to build upon later as the need arises.

The determination was in fact empirical. Alas, of all the data that the CIE provides on the website I used as a source, the illuminant data tables are very poorly documented, and I tried various factors before arriving at a lux reading I predicted given the radiometric intensity and type of the source. (Referencing “rule-of-thumb” tables for a given source type, etc.)

I would definitely like to know if I made any errors in the calculation.

You don’t have any significant errors (except the stuff in the first part, which is mentioned there). After all, it works :).
The values from CIE are in fact relative spectral power densities – they are referenced to the spectral power density at 560nm. All the books say it is a more convenient representation. We don’t have the absolute value for 560nm, like the lm/W response of the human eye, but we don’t actually need it.
How do we translate the single value we get from the light sensor (W*m-2) to a lux reading? Just as you did, we scale it. For the scaling we could, as you did, integrate the dot product of the SPD and the eye response (or film sensitivity, maybe?), so that we get a single factor and multiply it with our sensor reading (you did it in a more complicated way than necessary, calculating the integral every time you need it). We can use different SPDs for different conditions – daylight has less relative infrared content compared to incandescent, so we would get a smaller multiplication factor from the dot product for incandescent lights.
In the integration, we should actually take the sensor’s relative sensitivity into account too – for example, multiply the relative SPD of the source with the relative sensitivity of the sensor (nm for nm). But what would this mean for the final scaling factor for the sensor reading? It just gets multiplied by another factor (the integral of the sensor sensitivity).
What we have at the end is just a scaling factor to convert W*m-2 to lux, and it is a relative one (because of the relative quantities). So we still have to calibrate it (you did it too, with the 1-meg factor… which should have also smoothed out any errors made in previous steps).
My proposition is to skip any integral calculations and just calculate the whole scaling factor at calibration time (maybe several calibrations for different SPD sources to choose from at runtime, using a known-good light meter with a similar field of view).
I will build a similar project in the very near future, but I have to finish some other stuff first… I will report as soon as possible. I think I will not use the TSL230, but a simple photodiode or transistor with a logarithmic amplifier attached to it, just to be different. Or maybe I’ll get the TSL230 – I will decide in a couple of weeks.
And how did I get interested in this? I got an old camera without a light meter :). Thank you for the inspiration :).

Hi,
and thanks for this extraordinary tutorial. I can’t follow it completely, as I’m not very familiar with optics, so my question might be naive: I’m wondering whether this sensor could also be used to measure the color temperature of light, in order to make an assumption about the type of light source (e.g., sunlight vs. a light bulb)?
As I assume that it’s not possible, would you know of any sensor that would allow this?
Cheers,
andre

Thanks for the great tutorial!
Like many others, I am taking inspiration of your work.
However, I have some questions about low-light accuracy. I am getting good results in normal light (over 100-150 µW/cm2).
In low light, my EV results are way too low. I am not sure whether the error is in my testing layout, the software calculation, or a chip limitation.
Do you have any advice for this issue?
Do you have other time lapse video using your intervalometer to compare work?
Cheers!
T

This is an awesome tutorial! Near the end the author asserts the TSL230R could be used to build a spectrum analyzer. According to Wikipedia, “A spectrum analyzer measures the magnitude of an input signal versus frequency within the full frequency range of the instrument.” According to the TSL230R datasheet, it generates a single output frequency whose rate is determined by the photodiode response curve and the overall number of photons (at each given wavelength) striking the surface of the photodiode. I can understand how this could work if a single stream of photons was being measured, if the TSL230R was filtered, or if something else could be used as a reference. But… any clues as to how this could be done otherwise?