Tag Archives: light

What is color, and what does it mean for an object to have a specific color? Well, color comes from the fact that light can have different sizes, from the way objects reflect that light, and from the way our eyes detect it.

Light is made up of these tiny packets of energy, photons, which travel as waves that can move through air or space. And there’s a distance between the peaks of the waves, the same way there would be for waves in water, which is the size of the light. Light can have a whole range of different sizes, so the microwaves that you use to cook food or the radio waves that carry sound through the air are both different sizes of light. But there’s a special range of light, the visible range, which contains the sizes of light that our eyes can detect.

So in the visible range, we have shorter lengths of light, which our eyes see as more blue, and longer lengths of light, which our eyes see as more red. In between, you have the full rainbow, which has all the colors we can see. The sun shines light on us with the whole range of sizes, but different objects will reflect different sizes or colors back at us. So an orange is absorbing most visible light but reflecting orange light, and then our eye detects that light and our brain tells us it’s orange.

But we need special cells in our eyes to detect color. Most people have three kinds of color-detecting cells, called cones, that pick up blue, green, or yellow light. From these three colors, our brain puts together the rest of the rainbow, like an artist does when mixing paint. People who have fewer or more kinds of cones will perceive color differently, maybe being color-blind or seeing even more colors than average, even though the light itself is the same!

I’ve been writing more about light recently, so I wanted to cover a basic question that most people first ask as children: why is the sky blue? We can tell that the blue color of the sky is related to sunlight, because at night, we can see out to the black of space and the stars. We also know it’s related to the atmosphere, because in photos from places like the Moon which have no atmosphere, the sky is black even when the Sun is up. So what’s going on?

When light from the Sun reaches Earth, its photons have a combination of wavelengths (or energies), and we call the sum of all of those the solar spectrum. Some of these wavelengths of light are absorbed by particles in the atmosphere, but others are scattered, which means that the photons in question are deflected to a new direction. Scattering of light happens all around us, because the electromagnetic wave nature of photons makes them very sensitive to variations in the medium through which they travel. Other than absorption of light, scattering is the main phenomenon that affects color.

There are a few different types of scattering. We talked about one type recently, when discussing metamaterials and structural color: light can be scattered by objects that are a size similar to the wavelength of that light. That is called Mie scattering, and it’s why clouds appear solid even though they are mostly empty. The clouds are formed of tiny droplets, around the size of the visible wavelengths of light, and when these droplets scatter white light, the clouds themselves appear diffuse and white. Milk also appears white because it has proteins and fat in tiny droplets, suspended in water, which scatter white light.

However, even objects much smaller than the wavelength of light can induce scattering. The oxygen and nitrogen molecules in the atmosphere can also act as scatterers, in what’s called Rayleigh scattering (or sometimes the Tyndall effect). In Rayleigh scattering, the molecules respond to the electromagnetic field that the photon carries. A molecule can be polarized, meaning the positive and negative charges in the molecule are pulled in opposite directions, and the polarized molecule then interacts with the light by scattering it. But the polarizability of individual molecules depends on the wavelength of the incoming light, meaning that some wavelengths scatter more strongly than others. When Rayleigh worked out the mathematical form of this dependence, in 1871, he found that the scattering was inversely proportional to the fourth power of the wavelength of light, which means that blue light (which has a smaller wavelength) scatters much more strongly than red light (which has a larger wavelength).
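That fourth-power dependence is strong enough to check with a one-line calculation. Here’s a minimal Python sketch, using 450 nm for blue and 650 nm for red (representative values of my choosing, not exact spectral boundaries):

```python
# Relative Rayleigh scattering strength for two visible wavelengths,
# using the 1/wavelength^4 dependence described above.

def rayleigh_ratio(lambda_short_nm, lambda_long_nm):
    """Ratio of scattering intensity: shorter wavelength vs. longer."""
    return (lambda_long_nm / lambda_short_nm) ** 4

# 450 nm (blue) vs. 650 nm (red): blue scatters roughly 4x more strongly
ratio = rayleigh_ratio(450, 650)
print(f"Blue light scatters about {ratio:.1f}x more strongly than red")
```

A factor of about four isn’t everything (the shape of the solar spectrum and our eyes matter too, as discussed below), but it’s the core of why the scattered skylight looks blue.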

Thus, we see the Sun as somewhat yellow, because only the longer wavelength light in red and yellow travels directly to us. The shorter wavelength blue light is scattered away into the sky, and comes to our eyes on a very circuitous and scattered route that makes it look like the blue light is coming from the sky itself. At sunset, the sun appears even redder because of the increased amount of atmosphere that the light has travelled through, scattering away even more blue light. And, when there is pollution in the air, the sun can appear redder because there are more scattering centers that scatter away the blue light.

Of course, the fact that blue light scatters more is only half the story. If that were all there is to it, we’d see the sky as a deep violet, because that’s the shortest wavelength of light that our eyes can see. But even though we can see the violet in a rainbow, our eyes are actually much less sensitive to it than they are to blue light. Our eyes perceive color using special neurons called cones, and of the three types of cones, only one can detect blue and violet light. But the blue cone’s response to light peaks at around 450 nm, which is right in the middle of the blue part of the spectrum. So we see the sky as blue because it is the shortest wavelength that we’re capable of detecting in bulk. Different particles in the air can change the color of the sky, but so would different ways of sensing color. So Rayleigh scattering determines which light is scattered, and our visual system determines which of that light we see best: sky blue.

Remember plasmas, the phase of matter where atoms are ripped apart into electrons and nuclei? Plasmas are primed for strong electromagnetic interactions with the world around them, because of all the loose charged particles. They can be used to etch down into surfaces and catalyze chemical reactions, though the ions in a plasma won’t necessarily react with every form of matter they come across. And you can actually use an electromagnetic field on its own to contain a plasma, because of the plasma’s sensitivity to electromagnetic force. The most common design for a fusion reactor, the tokamak, uses a doughnut-shaped field to contain a plasma.

That’s how plasmas work at the macroscale, but it’s the individual charged ions in the plasma which react to electromagnetic force. Their interactions sum to a larger observable phenomenon, which emerges from nanoscale interactions. But interestingly, the collective interactions of these ions can actually be approximated as discrete entities, called quasiparticles. We’ve talked about quasiparticles before, when we talked about holes, which are quasiparticles defined by the absence of an electron. But in plasmas, the collective motion of the ions can also be considered as a quasiparticle, called a plasmon. Each individual ion is responding to its local electromagnetic field, but the plasmon is what we see at a larger scale when everything appears to be responding in unison. A plasmon isn’t actually a particle, just a convenient way to think about collective phenomena.

Plasmons can be excited by an external electromagnetic stimulus, such as light. And actually, anyone who has looked up at a stained glass window has witnessed plasmonic absorption of light! Adding small amounts of an impurity like gold to glass results in a mixture of phases, with tiny gold nanoparticles effectively suspended in the amorphous silica that makes up glass. Gold, like many metals, has a high electron density, and the electrons effectively comprise a plasma within each nanoparticle. When light shines through the colored glass, some wavelengths are plasmonically absorbed and others pass through. Adding a different metal to the glass can change the color, and so can different preparations of the glass that modify the size of the included nanoparticles. So all the colors in the window shown below are due to differing nanoparticles that plasmonically absorb light as it passes through!

Now you might ask, what determines which wavelengths of light pass through and which don’t? In the case of stained glass, it has to do with the size of the nanoparticles and the metal. But more generally, plasmas have a characteristic frequency at which they oscillate most easily, called the plasma frequency. The plasma frequency depends on several fundamental physical constants, including the mass and charge of an electron, but notably it also depends on the density of electrons in the plasma. For nanoparticles, the size of the particle also affects the response frequency. The practical upshot of the plasma frequency, though, is that if incident light has a frequency higher than the plasma frequency, the electrons in the plasma can’t respond fast enough to couple to the light, and it passes through the material. So the material properties that dictate the plasma frequency also determine whether light will be absorbed or transmitted.

For most metals that aren’t nanoscale, the plasma frequency is somewhere in the ultraviolet range of the electromagnetic spectrum. Thus, incident visible light is reflected by the free electron plasma in the metal, right at the surface of the material. And that’s why metals appear shiny!
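As a rough check on that claim, here’s a minimal Python sketch of the plasma frequency formula, omega_p = sqrt(n e^2 / (epsilon_0 m_e)), assuming a free-electron density of about 5.9 × 10^28 per cubic meter (a standard textbook value, roughly that of gold):

```python
import math

# Rough check that a typical metal's plasma frequency lands in the
# ultraviolet, using omega_p = sqrt(n * e^2 / (eps0 * m_e)).

e = 1.602e-19       # electron charge, C
m_e = 9.109e-31     # electron mass, kg
eps0 = 8.854e-12    # vacuum permittivity, F/m
c = 2.998e8         # speed of light, m/s

n = 5.9e28          # free-electron density, m^-3 (assumed, ~gold)

omega_p = math.sqrt(n * e**2 / (eps0 * m_e))   # angular frequency, rad/s
lambda_p = 2 * math.pi * c / omega_p           # corresponding wavelength, m

# Well below the ~400 nm edge of visible light, i.e. in the ultraviolet
print(f"Plasma wavelength: {lambda_p * 1e9:.0f} nm")
```

Any visible photon, with its longer wavelength and lower frequency, oscillates slowly enough for the electron plasma to respond, so it gets reflected, which is the shininess described above.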

The cool thing about nanoscience is that the size of a material can determine its material properties, which happens in part because energy levels are affected by size at small enough length scales. But another factor can be how the size of incident waves, such as light, compares to the size of the material. Imagine green light, with its 530 nm wavelength, striking something that’s less than 100 nm in size. Does the wave nature of the light have an effect? Or, perhaps more intriguingly, imagine green light striking a surface with blobs spaced 530 nm apart. What happens when the blob spacing is similar to the wavelength of the light?

This question is at the heart of the field of metamaterials, which are materials designed with periodic structure to create properties not found in nature. These properties come from the interaction of feature size with the wave nature of light or other natural phenomena. The periodic structure could be alternating one material with another, or even interleaving different shapes. For example, the split-ring resonator shown below can be repeated in an array to create a metamaterial.

The yellow parts are metal, patterned in almost but not quite complete rings, with one ring contained inside the other. In a split-ring resonator, any magnetic field passing through the rings induces rotating currents in the metal, which themselves induce an opposing field. In 2006, an array of many small split-ring resonators, spaced microns apart, was used to create an invisibility cloak that bends microwave radiation around the cloaked object. Microwaves were used because their wavelength is considerably longer than that of visible light, but researchers are working on smaller split-ring resonators and other methods to cloak objects from visible light.

While there’s no naturally occurring metamaterial that cloaks objects from visible light, I should mention that there are things you’ve probably seen in nature where nanoscale features manipulate light. Butterfly wings, bird feathers, beetle wing-cases, nacreous shells, and even some plants and berries have structural color. A surface with structural color, like the peacock feathers below, has small periodic features that selectively reflect certain wavelengths of light. (This is different from a pigmented surface, which selectively absorbs light.) In a way, when we tune metamaterial properties, we’re following in nature’s footsteps!

Metamaterials can also be developed to control sound waves. Because sound is a compressive wave travelling through various media, like air, a metamaterial with a periodically changing density can redirect sound waves or even block the transmission of sound at certain wavelengths (frequencies). Conversely, materials can be made which preferentially allow some frequencies of sound through, like a filter for the sound you want to hear. This is useful for tuning the sonic landscape, both in casual and industrial settings.

Seismic waves are even larger in wavelength, but as we see every time a severe earthquake strikes an inhabited area, the control of seismic waves might be a great societal good. The same principles that guide researchers in designing materials to redirect sonic waves are being examined to see if seismic wave reflectors might be able to shield human settlements from quake damage in the future.

Metamaterials, which come in an astounding diversity of forms, use periodicity to manipulate light, sound, and even seismic activity! And it all comes from the fact that so many natural phenomena are waves, with characteristic wavelengths and thus a sensitivity to periodic structures at that scale.

Recently something unusual happened: I had an idea that was illustrated and published in Wired. They have a gallery of hybrid animals up, including drawings made by students in the CSU Monterey Bay Science Illustration Program, and my contribution was bioluminescent starlings. I personally think that watching a murmuration of glowing starlings flocking would be amazing. But how does bioluminescence work exactly?

Bioluminescence is light emission from a living creature. How does that happen? Remember that light is a form of energy, and if a particle undergoes a transition from one energy level to another, the difference in energy has to go somewhere and may be emitted as light. Much of the light we get from the sun comes from atomic energy level transitions that happen inside it. But the same thing can also occur in more complex chemical reactions: excess energy can be used to create a new compound, or heat up the reactants, but it may also be emitted as light. (Whether or not this happens depends on the mechanism of the chemical reactions and, as usual, on energy minimization.)

So bioluminescence occurs when a chemical reaction inside a living organism emits light. It’s actually relatively common in deep-sea creatures, which don’t have much other light around. But it’s also seen closer to shore in bioluminescent algae, and on dry land with fireflies. What these creatures have in common is that they produce luciferin, a class of pigments that can be oxidized to produce light, and luciferase, an enzyme that catalyzes the reaction. These creatures can then use bioluminescence to communicate with other creatures, whether for camouflage, luring prey, or attracting mates.

Some plants show bioluminescence too, though there are many competing theories on whether they gain some evolutionary advantage from it or not. But there are also many researchers working to introduce bioluminescence into plants and animals, by adding the genes that create luciferin and luciferase, or by adjusting their expression. Self-lighting could help with imaging, but making more things bioluminescent has both a practical and an aesthetic appeal.

We already know the basics of light: it’s electromagnetic energy, carried through space as a wave, in discrete packets called photons. But photons come in a variety of energies, and different energy photons can be used for different real-world applications. The energy of a photon determines, among other things, how quickly the electromagnetic wave oscillates. Higher energy photons oscillate more quickly than lower energy photons, so we say that high-energy photons have a higher frequency.

This frequency isn’t related to the speed that the photons travel, though. They can oscillate more or fewer times over a given distance, but still traverse that distance in the same amount of time. And as we know, the speed of light is given by Maxwell’s Equations for electromagnetism, and is constant regardless of reference frame! But another way to look at frequency is by considering the wavelength of light. Picture two photons which are traveling through space, at the same speed, but with one oscillating faster than the other. Thus one photon is high-frequency and one is low-frequency. While traversing the same distance, the high-frequency photon will oscillate more times than the low-frequency photon, so the distance covered by each cycle is smaller. We call this distance for a single cycle the wavelength, and it’s inversely proportional to the frequency. Long-wavelength photons are low-frequency, and short-wavelength photons are high-frequency. Overall, the range of photon frequencies is called the electromagnetic spectrum.
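That inverse relationship is just wavelength = c / frequency. Here’s a quick sketch with a couple of example photons (the specific frequencies are my picks for illustration):

```python
# Wavelength and frequency are tied together by the constant speed of
# light: lambda = c / f. Higher frequency means shorter wavelength.

c = 2.998e8  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz):
    """Wavelength in meters for a photon of the given frequency."""
    return c / frequency_hz

examples = {
    "FM radio (100 MHz)": 100e6,
    "green light (~540 THz)": 540e12,
}
for name, f in examples.items():
    print(f"{name}: wavelength = {wavelength_m(f):.3e} m")
```

The radio photon’s wavelength comes out at about 3 meters, while the green photon’s is around half a micron, which is why the visible range is usually quoted in nanometers.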

On Earth, photons come from an external source, often the sun, and are reflected off various objects in the world. The photons of a specific color may be absorbed, and thus not seen by an observer, which will make the absorbing object look like the other non-absorbed colors. If there are many absorbed photons or few photons to begin with, an object may just look dark. Our eyes contain molecules capable of detecting photons in the wavelength range 400-700 nanometers and passing that signal to our brains, so this is called the visible wavelength range of the electromagnetic spectrum. But it’s the interaction of photons with the world around us, and then with the sensing apparatus in our eyes, that determines what we see. Other creatures that have sensors for different frequencies of light, or that have more or fewer than three types of cones, may perceive the color and shape of things to be totally different. And the visible spectrum is only a small slice of the total range of photon frequencies, as you can see in the image below!

Photons that are slightly lower energy than the visible range are called infrared, and our skin and other warm objects emit photons in the infrared. Night-vision goggles often work using infrared photons, and some kinds of snakes can see infrared. Even lower energy photons have a lot of practical uses: microwave photons can be used to heat material, and radio waves are photons with such low energy that they’re useful for long-range communication! Long wavelength photons are difficult to absorb or alter, so they’re also really useful for astronomy, for example to observe distant planets and stars.

The sun emits photons in the visible range, but it also emits a lot of photons with a slightly higher energy, called ultraviolet or UV. Sunscreen blocks UV photons because they carry enough energy to damage the molecules in biological tissue, and that damage is what we experience as sunburn! At even higher frequencies, x-rays are a type of photon widely used in biomedical imaging, because they can penetrate tissue and show a basic map of a person’s bones and organs without surgery. And very high energy gamma rays are photons which result from processes in the nuclei of atoms, and which can pass through most material. I’ll talk a bit more about x-rays and gamma rays soon, as part of a larger discussion of radiation.
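Since each photon carries energy E = hf = hc/wavelength, we can put rough numbers on just how different these parts of the spectrum are. A quick Python sketch; the wavelengths are representative picks, not exact band edges:

```python
# Photon energy across the spectrum: E = h*c / wavelength, converted to
# electronvolts so the enormous range is easy to read.

h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy in eV of a photon with the given wavelength."""
    return h * c / wavelength_m / eV

for name, lam in [("radio (1 m)", 1.0),
                  ("visible (550 nm)", 550e-9),
                  ("UV (300 nm)", 300e-9),
                  ("x-ray (0.1 nm)", 0.1e-9)]:
    print(f"{name}: {photon_energy_ev(lam):.2e} eV")
```

The spread covers some ten orders of magnitude: a radio photon carries around a microelectronvolt, visible light a couple of eV (enough to trigger the molecules in our retinas), and an x-ray photon thousands of eV, which is why the high-energy end can penetrate or damage tissue.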

There is a lot more to light than visible light, and the various parts of the electromagnetic spectrum are used in many applications. Each wavelength gives us different information about the world, and we can use technology to extend the view that we’re biologically capable of to include x-rays, infrared, and many other parts of the electromagnetic spectrum!

One of the strangest developments in modern physics came gradually, in the 19th century, as scientists learned more and more about the interactions between light and matter. In this post we’ll cover a few of the early experiments and their implications for both technology and our understanding of what light really is.

The first piece of the puzzle came when Becquerel found that shining a light on some materials caused a current to flow through the material. This is called the photovoltaic effect, because the incident photons create a voltage difference that drives the current. At the nanoscale, the photons are like packets of energy which can be absorbed by the many electrons in the material. In a semiconductor, some electrons can be moved this way from the valence band to the conduction band. Thus electrons that were previously immobile because they had no available energy states to jump to now have many states to choose from, and can use these to traverse the material! This effect is the basis of most solid state imaging devices, like those found in your digital camera (and trust me, we will delve further into that technology soon!).
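The valence-to-conduction jump has simple energy bookkeeping: a photon can promote an electron across the band gap only if it carries at least the gap energy. A minimal sketch, assuming silicon’s standard band gap of about 1.1 eV (the test wavelengths below are just illustrative):

```python
# A photon can excite an electron across a semiconductor's band gap only
# if its energy h*c/wavelength meets or exceeds the gap energy.

h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def can_absorb(wavelength_m, gap_ev=1.1):
    """True if this photon can promote an electron across the gap (~silicon)."""
    return h * c / wavelength_m / eV >= gap_ev

print(can_absorb(550e-9))   # visible green (~2.3 eV): absorbed
print(can_absorb(2000e-9))  # far infrared (~0.6 eV): passes through
```

This is why a silicon camera sensor responds to visible light but is blind to long-wavelength infrared: those photons simply don’t carry enough energy to move any electrons.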

But as it turns out, if you use photons that have a high enough energy, you can not only bump electrons out of their energy levels, you can cause them to leave the material entirely! This is called the photoelectric effect, and in some senses it seems like a natural extension of the photovoltaic effect: another consequence of light’s electromagnetic interaction with charged particles.

But actually, there is a very interesting scientific consequence hidden in the experimental details of the photoelectric effect. Imagine shining a light on a material and observing the emitted electrons. You can change the light source in various ways, for example by changing the color or the brightness. Blue light has a shorter wavelength than red light, and thus more energy per photon; changing only the color changes the energy of each ejected electron, but not how many are ejected. (And if you tune the energy below a threshold, no electrons are ejected at all, no matter how bright the light.) But if you change the intensity of the light, you find that brighter light causes more electrons to be ejected, each with the same energy as before. This matters because at the time, light was thought of purely as a wave in space, an electromagnetic oscillation that could move through the world in certain ways. A more intense wave carries more energy, and was expected to eject more energetic electrons, just as higher energy waves in the ocean cause more erosion of the rocks and sand on the shore. But the experiment disproves that idea: the energy of each ejected electron is set by the color of the light, while the brightness only sets how many photons arrive, and thus how many electrons are ejected. So the photoelectric effect proves that light is quantized: while it has wave characteristics, it also has particle characteristics and breaks down into discrete packets, each carrying an energy set by its frequency.
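Einstein’s explanation of this experiment fits in one line: each ejected electron carries kinetic energy KE = hf − φ, where φ is the material’s work function, the minimum energy needed to free an electron. Here’s a minimal Python sketch; the 2.3 eV work function is an assumed, roughly sodium-like value for illustration:

```python
# Einstein's photoelectric relation: KE = h*f - phi. Below the threshold
# frequency (h*f < phi), no electrons are ejected at all.

h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def electron_ke_ev(wavelength_m, work_function_ev=2.3):
    """Kinetic energy (eV) of an ejected electron, or None if none are ejected."""
    ke = h * c / wavelength_m / eV - work_function_ev
    return ke if ke > 0 else None

print(electron_ke_ev(400e-9))  # violet light: electrons ejected
print(electron_ke_ev(700e-9))  # red light: below threshold, no emission
```

Note what intensity does and doesn’t do here: a brighter lamp means more photons per second, hence more electrons per second, but the energy of each electron depends only on the wavelength.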

The photoelectric effect is used in a few very sensitive photodetectors, but is not as common technologically as the photovoltaic effect. But there are a few weird consequences of the photoelectric effect, especially in space. Photons from the sun can excite electrons from space stations and shuttles, and since the electrons then flow away and aren’t replenished, this can give the sun-facing side a positive charge. The photoelectric effect also causes dust on the moon to charge, to the point that some dust is electrostatically repelled far from the surface. So even though the moon has no significant gas atmosphere as we have on earth, it has clouds of charged dust that fluctuate in electromagnetic patterns across its face.