The Basics

Geoffrey Fowler at the Washington Post recently published an article observing that phone battery life is getting worse. I enjoyed my conversations with Geoffrey as he researched the topic. But why is phone battery life getting worse? Why are batteries not keeping up with the new crop of smartphones?

Like so many things in life, it is all about energy balance. Our doctors tell us that we need to balance our calories: the calories we eat versus the calories we expend through exercise. And so the smartphone needs to balance the energy stored in its battery against the energy it spends in use. So I distill this to two simple questions on energy demand and supply:

Why is the energy demand growing with increased use of our smartphones?

Why can’t we have a bigger battery to supply our growing energy needs in a smartphone?

So let’s tackle the first question by examining the sources that drive energy consumption in a smartphone. There are three parts in your smartphone that are energy hogs:

Your screen….ok, I am sure you all know that;

Your processor….some of you probably know that too;

Your radios. Not your FM radio! By radios I mean the cellular connection, WiFi, Bluetooth, GPS….anything that communicates with the outside world using radio waves.

Energy consumption for each of these parts depends on the nature of the hardware and on you, the user; specifically, the length of time you spend on the device.

The energy used by a screen is quite large, even with the new OLED screens. Screens are getting larger numbers of pixels, and each pixel consumes energy. More pixels mean more energy. Every time you turn the screen on, that is more energy the battery has to supply. And it adds up rapidly.

If you follow various chatrooms, you probably know that “screen time”, meaning the total amount of available battery time with your screen on, is about 6 hours, give or take, regardless of what the smartphone maker advertises about all-day use or more.

Next is the processor. That piece of hardware used to be a major energy hog but, fortunately, the new generation of processors from Qualcomm, Apple and Samsung has become quite efficient. How much more efficient? About twice as efficient as the generations from a few years back. All good news, right? Well, not quite.

You see, processors have become efficient indeed, but now they are running a lot more frequently than they ever did. Think about an SUV parked in the garage versus a Honda Civic used for Ubering. Which one uses more energy?

A few years ago, we used our smartphones for texting and emailing….now, we stream videos. So while these processors are efficient, they are being taxed by video and social media. Net net, they are consuming more energy from the battery. How can you tell? Watch how hot your smartphone becomes when you stream videos or shoot 4K movies on your device. That’s your processor getting hot.

Let’s talk now about radios. That’s a growing problem for the battery, so much so that carriers like AT&T and Verizon in the US, or DoCoMo in Japan, are really worried about it.

On one hand, carriers love that you use more and more data…that’s how they make money. But data use means your cellular connection is on, a lot more than before.

But wait, you say, isn’t a 5G cellular connection better than LTE? Think of 5G as adding more lanes to the internet superhighway compared to LTE. It means more cars, a lot more cars, will use the highway. It means more energy will be consumed. And the battery needs to supply this energy.

The FCC is now auctioning a new range of frequencies between 24 GHz and 47 GHz for the future 5G spectrum. By comparison, LTE runs at frequencies between 0.5 GHz and 2 GHz. Why is this important? Energy use goes up with frequency. So by moving to the new 5G frequencies, energy consumption will grow with them, worsening the burden on the battery. In other words, the future will tax the battery even more!

Now we can tackle the second question: Why can’t the device manufacturer put a bigger battery in the smartphone?

It is simple: bigger battery capacity means a physically bigger battery. Batteries are improving so slowly that the only way to give users more battery capacity is to make the device larger or thicker. The recent iPhone XS, XR and XS Max show a clear trend toward larger devices that can hold larger batteries.

Will that be enough for the future? Not really. Smartphone sizes can’t get much bigger. At screen sizes of 6 inches or greater, they are already too large to hold in one hand. They may get a little thicker, but not by much. Our human hands determine the optimal physical form for a smartphone.

So what gives? I don’t know yet, but most likely, our behavior and expectations. It is quite likely that users will charge their smartphones more frequently, perhaps twice a day instead of once. Some users might be happy with fewer pixels in their devices. Others may turn off Facebook and other social media apps.

Regardless of how we adapt to the future of smartphones, the battery will continue to be the weakest link, and the one most in need of innovation.

For the average reader, electrochemical impedance spectroscopy, often abbreviated as EIS, is more than a mouthful. Understanding its utility can be relegated to the category of unresolved mysteries. Today’s post will shed some light, and a little intuitive thinking, on this powerful method.

The reader’s first question might be “why are you talking about EIS in a battery blog?” The answer is simple. EIS is the foremost standard tool in laboratories around the world to measure electrochemical processes and reactions. Electrochemistry, one of the most extensive branches of chemistry, is the study of chemical reactions that have an inherent relationship to electricity, i.e., they can either generate electricity or be influenced by it. Yes, you guessed right, batteries are a prime example of electrochemistry. Another practical example of electrochemistry put to good use: the gold plating on your necklace or bracelet.

What does the name EIS imply? Electrochemical impedance is scientific jargon that refers to the electrical resistance of the device under study, in this case, the lithium-ion battery. In its most elemental form, impedance is voltage divided by current. For electrical engineers, it represents components such as resistors or capacitors. For other scientists, it represents the resistance the device exhibits against the flow of electricity.

Spectroscopy is the branch of science that deals with how a property changes with frequency. Hence, EIS is the methodology and science that seeks to understand how impedance measurements change with frequency, and more particularly, how these changes are intimately tied to the underlying chemical reactions.

Why frequency? Frequency adds a lot more information about the nature of the chemical process that is taking place. In science, frequency plays a very important role. Take for example the difference between blue and red light. They are both made of photons, but differ in frequency. Medical MRI imaging depends on the frequency of the oscillation of the hydrogen atoms in our bodies. Distinguishing between different broadcast stations on the radio dial operates on similar principles. In other words, we use frequency to uniquely identify chemical or physical processes.

With this long introduction, let’s dive a little deeper into EIS as related to a lithium-ion battery. If you were to measure the impedance of a standard electrical resistor component, the kind you may find inside your smartphone, you would measure exactly the same impedance value whether you apply a low voltage or a high voltage, and whether you measure at low frequency or high frequency. In other words, for this resistor component, the value is independent of voltage (also known as bias) and frequency. Resistors are consequently easy components to understand.

That is NOT the case for a battery. Change the voltage or frequency and you will get a different value. In other words, the battery can look like a resistor in some circumstances, or like a capacitor in others, or some complex combinations of both. When we change the voltage of the battery, it now operates at a different “state of charge,” in other words, it will have a different amount of electrical charge stored in it. As I described in this earlier post on fuel-gauges, the terminal voltage of the battery is a direct proxy of the amount of electrical charge stored in the battery, which is the state of charge (or the percentage of battery remaining).

In contrast, changing the frequency relates to different electrochemical processes that occur inside the battery. Such electrochemical processes could relate to the diffusion of the electrical charge (in this case, the lithium ions) from one electrode to the other. One can imagine that the ions have to travel a certain distance and insert themselves in the “Swiss-cheese” matrix of the material. So intuitively, this feels like a slow process, and it is. It takes several seconds to even minutes for the lithium ion to go through this diffusion process — meaning that diffusion of ions is characterized by a low-frequency signature. A distinctly different electrochemical process is how lithium ions and electrons interact right at the surface of the electrode. This interaction involves electrons and ions over very short distances. Intuitively, one can see that this can be a very fast reaction, usually on the order of microseconds. Hence its signature contains high frequency signals.
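A quick way to build intuition for these timescales is to convert each process’s time constant into a characteristic frequency, f = 1/(2πτ). This is a minimal sketch; the time constants below are illustrative assumptions consistent with the ranges mentioned above, not measured values:

```python
import math

def characteristic_frequency(tau_seconds):
    """Map a process time constant tau to its characteristic frequency f = 1/(2*pi*tau)."""
    return 1.0 / (2.0 * math.pi * tau_seconds)

# Illustrative (assumed) time constants for the two processes described above
processes = {
    "ion diffusion through the electrode": 10.0,       # seconds -> low frequency
    "charge transfer at the electrode surface": 1e-5,  # ~10 microseconds -> high frequency
}

for name, tau in processes.items():
    f = characteristic_frequency(tau)
    print(f"{name}: tau = {tau} s -> f ~ {f:.3g} Hz")
```

A ten-second diffusion process thus shows up as a signature well below 1 Hz, while a microsecond-scale surface reaction shows up in the kHz range and above.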

All of this goes to say that the impedance value at a particular frequency is a “unique signature” of the underlying electrochemical process of interest to our study. And that is what makes EIS such a powerful tool. A trained scientist can read the EIS measurement as a map of the various electrochemical processes and reactions taking place inside the battery, without cutting it open or damaging it. It also provides tremendous insight into what can go wrong inside the battery. Not all electrochemical processes are desirable. For example, the underlying process that causes lithium metal plating is highly undesirable and can be readily measured using its unique EIS signature.

So how is the measurement made? In the laboratory, the oft-expensive and bulky instrument applies a small electrical current at a well-defined frequency to the battery, then measures the voltage. Divide the voltage by the current and you have the impedance at that frequency. For example, apply 1 mA of current at a frequency of 100 Hz and you might measure 0.5 mV. Hence the impedance is 0.5 mV / 1 mA = 0.5 ohms at 100 Hz. This, of course, does not take into account the complex value of the impedance, but it is a simple illustration of the concept. “Complex” numbers are mathematical tools to represent values that have both real and imaginary components. Don’t worry if you don’t understand them fully; the key thing is that an impedance measurement has two values to represent it.
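The worked example above can be reproduced in a few lines. The phase angle argument is an assumption added purely to show how the complex (real plus imaginary) representation comes out; the magnitudes mirror the 1 mA / 0.5 mV example:

```python
import cmath
import math

def impedance(v_amplitude, i_amplitude, phase_deg=0.0):
    """Return the complex impedance Z = V / I, where the measured voltage
    leads (or lags) the applied current by phase_deg degrees."""
    v = cmath.rect(v_amplitude, math.radians(phase_deg))  # voltage phasor
    return v / i_amplitude

# The example from the text: 1 mA applied at 100 Hz, 0.5 mV measured
z = impedance(v_amplitude=0.5e-3, i_amplitude=1e-3)
print(f"|Z| = {abs(z):.2f} ohm at 100 Hz")
```

With a zero phase angle the battery looks purely resistive at that frequency; a non-zero phase splits the same 0.5 ohm magnitude into real and imaginary parts.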

A full EIS chart shows, by convention, the imaginary component of the impedance (vertical axis) versus its real value (horizontal axis). The far left of the chart shows the measurements made at high frequencies, in particular highlighting what happens in the metal conductors inside the battery as well as at the surfaces of the electrodes. As we follow the purple dots and move toward the right, the frequency of the signature gradually decreases, highlighting a different set of electrochemical processes, in particular what happens at the insulating interface between the electrode and the electrolyte (also known as the SEI layer). Ultimately, at the far right of the chart, the frequency is low and is unique to the diffusion effects of the lithium ions.
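To see how such a chart arises, here is a minimal sketch of the complex impedance of a simplified equivalent circuit: a series resistor (standing in for the metal conductors) plus a charge-transfer resistance in parallel with a double-layer capacitance (standing in for the electrode surface). It deliberately omits the low-frequency diffusion tail, and all component values are assumptions for illustration:

```python
import math

def randles_impedance(freq_hz, r_series=0.05, r_ct=0.10, c_dl=1.0):
    """Complex impedance of a simplified Randles-type circuit:
    Z = R_series + R_ct / (1 + j*omega*R_ct*C_dl).
    Component values (ohms, farads) are illustrative assumptions."""
    omega = 2 * math.pi * freq_hz
    z_parallel = r_ct / (1 + 1j * omega * r_ct * c_dl)
    return r_series + z_parallel

for f in (1e4, 1.59, 1e-3):  # high, intermediate, low frequency
    z = randles_impedance(f)
    print(f"{f:>8} Hz: Re(Z) = {z.real:.3f} ohm, -Im(Z) = {-z.imag:.4f} ohm")
```

At high frequency only the series (conductor) resistance survives, at the far left of the chart; at low frequency the charge-transfer resistance adds to it, moving the point to the right, just as the text describes.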

An EIS tool is present in every electrochemistry laboratory around the world. Young graduates in this discipline spend countless hours operating it. It is not a small instrument…it fits on a desk, may weigh several pounds, and costs several thousand dollars. Now imagine what the world would look like if an EIS tool could somehow fit inside each and every smartphone!

State-of-the-art lithium-ion batteries, whether used in smartphones or electric vehicles, all rely on the same fundamental cell structure: two opposing electrodes with an intermediate insulating separator layer, with lithium ions shuffling between the two electrodes.

The positive electrode during charging, usually called the cathode, consists of a multi-metal oxide alloy material. Lithium cobalt oxide, or LCO, is by far the most common for consumer electronic applications. NCM, short for lithium nickel-cobalt-manganese oxide and also known as NMC, is gradually replacing other materials in energy storage and electric vehicle applications. LCO and NCM share a great property: they store lithium ions within their material matrix. Think of a porous Swiss cheese: the lithium ions insert themselves between the atomic layers.

In contrast, the anode, or negative electrode during charging, is almost universally made of carbon graphite. Carbon historically was and continues to be the material of choice. It has a large capacity to store lithium ions within its crystalline matrix, much like the metal oxide cathode.

So how do manufacturers increase energy density? In some respects, the math is simple. In practice, it gets tricky.

Energy density equals total energy stored divided by volume. The total stored energy is dictated by the amount of active material, i.e., the available amount of metal oxide alloy and graphite that can physically store the lithium ions (i.e., the electric charge). So battery manufacturers resort to all types of design tricks to reduce the volume of inactive material, for example, reducing the thickness of the separator and metal connectors. Of course, there are limits, with safety topping the list. To a large extent, this is what battery manufacturers did for the past 20 years, amounting to roughly a 5% annual increase in energy density.
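The arithmetic is simple enough to sketch. All numbers below are assumed for illustration (a roughly phone-sized cell), not specifications of any particular battery:

```python
# Energy density = total stored energy / volume, plus the effect of
# compounding a 5% annual improvement. All values are assumptions.
capacity_ah = 3.0    # assumed cell capacity, amp-hours
voltage_v = 3.8      # assumed average discharge voltage
volume_l = 0.016     # assumed cell volume (16 cubic centimeters)

energy_wh = capacity_ah * voltage_v
density_wh_per_l = energy_wh / volume_l
print(f"Energy: {energy_wh:.1f} Wh, density: {density_wh_per_l:.0f} Wh/L")

growth = 1.05 ** 20  # 5% per year, compounded over 20 years
print(f"Cumulative improvement over 20 years: {growth:.2f}x")
```

Even a steady 5% per year compounds to only about 2.7x over two decades, which is why battery progress feels so slow next to the rest of the electronics industry.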

But once this extra volume of inactive material is reduced to its bare minimum, increasing energy density gets tricky and challenging. This is the difficult wall that the battery industry is facing now. So what is next?

There are two potential paths forward:

1. Find a way to pack more ions (i.e., more electric charge) within the electrodes. This is the topic of much research to develop new materials capable of such a feat. But any such breakthrough is still several years away from commercial deployment, which leaves the second option….

2. Increase the voltage. Since energy equals charge multiplied by voltage, increasing the voltage also raises the amount of energy (remember that energy and charge are related but not interchangeable). This is the object of today’s post.

The battery industry raised the voltage a few years back from a maximum of 4.2 V to the present-day value of 4.35 V. This was responsible for adding approximately 4 to 5% to the energy density. A new crop of batteries is now beginning to operate at 4.4 V, adding an additional 4 to 5% to the energy density. But that does not come without some serious challenges. What are they?
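The arithmetic behind these percentages can be sketched quickly. Since energy equals charge times voltage, the ratio of voltages alone gives a lower bound on the energy gain; in practice, charging to a higher cutoff also extracts a bit more charge from the electrodes, which is how the total gain reaches the 4 to 5% quoted above:

```python
# Energy = charge x voltage. Holding charge fixed, the voltage ratio
# gives the energy gain from the voltage increase alone.
def energy_gain_pct(v_old, v_new):
    """Percent energy gain from raising the maximum voltage, at fixed charge."""
    return (v_new / v_old - 1) * 100

print(f"4.20 V -> 4.35 V: +{energy_gain_pct(4.20, 4.35):.1f}% from voltage alone")
print(f"4.35 V -> 4.40 V: +{energy_gain_pct(4.35, 4.40):.1f}% from voltage alone")
```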

First, there is the electrolyte. It is a gel-like solvent that permeates the inside of the battery. For lack of a better analogy, if ions are like fish, then the electrolyte is like water. It is the medium within which the lithium ions travel between the two electrodes. As the voltage rises, it subjects the electrolyte to increasingly higher electric fields, causing its early degradation and breakdown. So we are now seeing a new generation of electrolytes that can in principle withstand the higher voltage, although we see in our lab testing that some of these electrolyte formulations are responsible for worse cycle-life performance. This is a first example of the compromises that battery designers are battling.

Second, there is the structural integrity of the cathode. Let’s take LCO as an example. If we peer a little closer into the cathode material (see the figure below), we find a crystal structure with layers made of cobalt and oxygen atoms. When the battery is fully discharged, the lithium ions occupy the vacant space between these ordered layers. In fact, there is a fixed proportion of lithium ions to cobalt and oxygen atoms: one lithium ion for every cobalt atom and two oxygen atoms.

Figure courtesy of VESTA (Visualization for Electronic and Structural Analysis).

As the battery is charged, the lithium ions leave the cathode for the anode, vacating some of the space between the ordered layers of the LCO cathode. But not all the lithium ions can leave; if too many of them leave, the crystal structure of the cathode collapses and the material changes its properties. This is not good. So only about half of the lithium ions are “permitted” to leave during charging. This “permission” is determined by, you guessed it, the voltage. Right around 4.5 V, the LCO crystal structure begins to deteriorate, so one can easily see that at 4.4 V, the battery is already getting too close to the cliff.
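To put a number on that “half the lithium” rule: the theoretical specific capacity of LCO follows directly from Faraday’s constant and the molar mass of LiCoO2, and halving it gives roughly the practical limit. A quick sketch:

```python
# Theoretical specific capacity of LiCoO2, and the practical value when
# only about half the lithium ions may safely leave the crystal.
F = 96485        # Faraday constant, coulombs per mole of charge
M_LCO = 97.87    # molar mass of LiCoO2, g/mol (Li 6.94 + Co 58.93 + 2 x O 16.00)

# mAh/g = (C/mol) / (g/mol) / 3600 (s/h) * 1000 (mA/A) = F / (3.6 * M)
full_capacity = F / (3.6 * M_LCO)      # if every lithium ion could leave
usable_capacity = 0.5 * full_capacity  # roughly half are "permitted" to leave

print(f"Theoretical: {full_capacity:.0f} mAh/g, usable: ~{usable_capacity:.0f} mAh/g")
```

This is why LCO data sheets quote capacities near 140 mAh/g rather than the theoretical 274 mAh/g: the other half of the lithium must stay behind to hold the crystal together.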

Lastly, there is lithium plating. High energy-density cells push the limits of the design and tolerances in order to reduce the amount of material that does not participate in storage. One of the unintended consequences is an “imbalance” between the amounts of cathode and anode materials. This creates an “excess” of lithium ions that then deposit as lithium metal, hence plating.

These three challenges illustrate the increasing difficulties that battery manufacturers must overcome to continue pushing the limits of energy density. As they make progress, however, compromises become the norm. Cycle life is often shortened. Long gone are the days of 1,000+ cycles without intelligent adaptive controls. Fast charging becomes questionable. In some cases, safety may be in doubt. And the underlying R&D effort costs a lot of money, with expenses that are stretching the financial limits of battery manufacturers without the promise of immediate financial returns in a market that demands performance at the lowest possible price.

It is great to be a battery scientist with plenty of great problems to work on…but then again, maybe not.

Sleep is an essential function of life. Tissue in living creatures regenerates during deep sleep. We humans get very cranky with sleep deprivation. And cranky we do get when our battery is depleted because we did not give our mobile device sufficient “sleep time.”

I explained in a prior post the power needs in a smartphone, including the display, the radio functions…etc. If all these functions are constantly operating, the battery in a smartphone would last at most a couple of hours. So the key to having a smartphone battery last all day is having down time. So by now, you have hopefully noticed how the industry uses “sleep” terminology to describe these periods of time when the smartphone is nominally not active.

So what happens deep inside the mobile device during these periods of inactivity, often referred to as standby time? Sleep. That’s right. Not only sleep, but also deep sleep. This is the state of the electronic components, such as the processor and graphics chips, when they reduce their power demand. If we are not watching a video or the screen is actually turned off, there is no need for the graphics processor to be running. So the chip’s major functions are turned off, and the chip is put in a state of low power during which it draws very little from the battery. Bingo, sleep equals more battery life available to you when you need it.

Two key questions come to mind: When and how does the device go to sleep? And when and how does it wake up?

One primary function of the operating system (OS) is to decide when to go to sleep; this is the function of iOS for Apple devices, and Android OS for Android-based devices. The OS monitors the activity of the user, you, then makes some decisions. For example, if the OS detects that the smartphone has been lying on your desk for some considerable time and the screen has been off, then it will command the electronics to reduce their power demand and go to sleep.

This is similar to what happens in a car with a driver. You, the driver, get to decide all the time when to turn the engine off, put it in idle, or press the gas pedal. Each of these conditions changes the amount of fuel you draw from the fuel tank. In a smartphone, the OS is akin to the driver; the electronics replace the engine; and the battery is like the fuel tank. You get the picture. While this is colloquially referred to as managing the battery, in reality you are managing the “engine” and the power it consumes. This is why some drivers get better mileage (mpg) than others. It is really about power management and has very little to do with true battery management. Battery management is when one addresses the battery itself, for example how to charge it and how to maintain its health…etc.

The degree of sleep varies substantially and determines how much overall power is being used. Some electronic parts may be sleeping while others are fully awake and active. For example, let’s say you are traveling and your device is set to airplane mode, but you are playing your favorite game. The OS will make sure that the radio chips, that is the LTE radio, the WiFi, the GPS chip, and any chip that has a wireless signal associated with it, go to deep sleep. But your processor and graphics chips will be running. With the radios off, your battery will last you the entire flight while playing Angry Birds.

The degree of sleep determines how much total power is being drawn from the battery, and hence, whether your standby time is a few hours or a lot more. A smart OS needs to awaken just the right number of electronic components for just the right amount of time. Anything more than that is a waste of battery, and loss of battery life. The battery is a precious resource and needs to be conserved when not needed.
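A toy calculation makes the point. The per-component current draws below are illustrative assumptions, not measurements of any specific phone, but the orders of magnitude show why sleep is the difference between hours and days of battery life:

```python
# Rough battery-life estimate from assumed per-component current draws.
battery_mah = 3000   # assumed battery capacity

draw_ma = {
    "screen (on)": 300,
    "processor (streaming video)": 500,
    "cellular radio (active)": 250,
}
active_ma = sum(draw_ma.values())
sleep_ma = 10        # deep sleep: nearly everything powered down

print(f"Everything on: ~{battery_mah / active_ma:.1f} hours")
print(f"Deep sleep:    ~{battery_mah / sleep_ma:.0f} hours")
```

With everything running, the battery is gone in under three hours; asleep, the same battery lasts for days. The OS’s job is to spend as much time as possible near the second number.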

Both iOS and Android have gotten much smarter over the past few years in making these decisions. Earlier versions of Android lacked the proper intelligence to optimize battery usage. Android Marshmallow introduced a new feature called Doze that adds more intelligence to this decision-making process. Nextbit recently announced yet more intelligence to be layered on top of Android. This intelligence revolves around understanding user behavior and accurately estimating which parts need to be sleeping, without impacting the overall responsiveness of the device.

The next question is: who gets to wake up the chips that are sleeping? This is where things get tricky. In a car, you, the driver, get to make decisions on how to run the engine. But imagine for a moment that the front passenger also gets to press the gas pedal. You can immediately see how this is a recipe for chaos. In a smartphone, every app gets to access the electronics and arbitrarily wake up whatever was sleeping. An overzealous app developer might have his app ping the GPS location chip constantly, which guarantees that this chip never goes to sleep, causing rapid loss of battery life. Early versions of the Facebook and Twitter apps were guilty of constantly pinging the radio chips to refresh social data in the background, even when you put your device down and thought it was inactive. iOS and Android offer the user the ability to limit what these apps can do in the background; you can restrict their background refresh or limit their access to your GPS location. But many users do not take advantage of these power-saving measures. If you haven’t done so, do yourself a favor and restrict background refresh on your device, and you will gain a few extra hours of battery life. You can find a few additional tidbits in this earlier post.

App designers have gotten somewhat more disciplined about power usage, but not entirely. Too many apps are still poorly written, or intentionally ignore the limited power available. Just as in a camp where many share the water, it takes one inconsiderate individual to ruin the experience. It takes one rogue app to ruin the battery experience in a smartphone. And when that happens, the user often blames the battery, not the rogue app. It’s like the campers blaming the camp’s water tank instead of the inconsiderate camper. Enforcement of power usage is improving with every iteration of the operating systems, but the reality is that enforcement is not an easy task. There is no escaping the fact that the user experience is best improved by increasing the battery capacity (i.e., a bigger battery) and using faster charging. Managing a limited resource is essential, but nothing makes the user happier than making that resource more abundant….and that, ladies and gentlemen, is what true battery management does. If power management is about making the engine more efficient, then battery management is about making the fuel tank bigger and better.

I will jump ahead in this post to discuss the merits of different lithium-ion chemistries and their suitability to energy storage systems (ESS) applications. Naturally, this assumes that lithium-ion batteries in general are among the best suited technologies for ESS. Some might take issue with this point — and there are some merits for such a discussion that I shall leave to a future post.

A lithium-ion battery is made of two electrodes, the anode and the cathode, and it is the choice of cathode material that determines several of its key electrical attributes, in particular energy density, safety, longevity (cycle life) and cost. The most commonly used cathode materials are lithium cobalt oxide (known as LCO), lithium nickel cobalt aluminum oxide (NCA), lithium nickel manganese cobalt oxide (NCM), lithium iron phosphate (LFP) and lithium manganese nickel oxide (LMNO).

LCO is by far the most common, being the choice for consumer devices from smartphones to PCs. It is widely manufactured across Asian battery factories and the supply chain is very pervasive…as a result, and despite the use of cobalt (an expensive material), it bears the lowest cost per unit of energy, with consumer batteries priced near $0.50/Ah, or equivalently, $130/kWh. LCO offers very good energy density and a cycle life often ranging between 500 and 1,500 cycles. From a materials standpoint, LCO can potentially catch fire or explode, especially if the battery is improperly designed or operated. That was the primary reason for the battery recalls that were frequent some 10 years ago. Proper battery design and safety electronics have greatly improved the situation and made LCO batteries far safer.

NCA came to prominence with Tesla’s use of Panasonic 18650 cells in the Model S (and the earlier Roadster). It has exceptional energy density, which translates directly to more miles of driving per charge. But NCA has a limited cycle life, often less than 500 cycles. Historically, NCA was expensive because of its use of cobalt and its limited manufacturing volume. This is rapidly changing with Tesla’s growing volume and the Gigafactory coming online in 2017. It is widely rumored that Tesla’s cost is at or near the figures for LCO, i.e., near $100/kWh at the cell level. It remains to be seen whether Panasonic will replicate these costs for the general market.

NCM sits between LCO and NCA. It has good energy density, better cycle life than NCA (in the range of 1,000 to 2,000 cycles), and is considered inherently less prone to safety hazards than LCO. Its historical usage was in power tools, but it has recently become a serious candidate material for automotive applications. In principle, NCM cathodes should be less expensive to manufacture owing to their use of manganese, quite an inexpensive material. The two Korean conglomerates, LG Chem and Samsung SDI, are major advocates and manufacturers of NCM-based batteries.

One of the oldest cathode materials in use is LMNO, sometimes referred to as LMO. The Nissan Leaf battery uses LMNO cathodes. It is safe and reliable, with long cycle life, and is relatively inexpensive to manufacture. But it suffers from low energy density, especially relative to NCA. If you ever wondered why the Tesla has a far better driving range than the Leaf, the choice of cathode materials is an important part of your answer. LMNO is not widely used outside of Japan.

Finally, we come to lithium iron phosphate, or LFP. Initially invented in North America in the 1990s, it has developed a strong manufacturing base in China, with the Chinese government extending significant economic incentives to make China a manufacturing powerhouse for LFP batteries. LFP has exceptional cycle life, often exceeding 3,000 cycles, and is considered very safe. A major shortcoming of LFP is its reduced energy density: about one third that of LCO, NCA or NCM. In principle, it should be inexpensive to manufacture; after all, iron and phosphorus are two inexpensive materials. But reality suggests otherwise: the lower energy density requires the use of two or three times as many cells to build a battery pack with the same capacity as LCO or NCA. As a result, LFP-based battery packs today cost 2 to 3x more than equivalent LCO-based packs.

By now, you are probably scratching your head and asking: so which one wins? And that is precisely the conundrum for energy storage and, to some extent, electric vehicles. Let’s drill deeper.

Energy storage applications pose a few key requirements on the battery: 1) it should last 10 years with daily charge and discharge, or in other words, it has a cycle-life specification of 3,500 cycles or more; 2) it has to be immensely cost-effective, measured both in its upfront capital cost and in its cost of ownership, in other words, the total cost of owning and operating it over its 10-year life; and 3) it has to be safe.
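The first two requirements combine into a simple figure of merit: dollars per kWh actually delivered over the battery’s life. A quick sketch, with the pack price and round-trip efficiency assumed purely for illustration:

```python
# The 10-year daily-cycling requirement, and a cost-of-ownership figure
# of merit. Price and efficiency below are assumptions, not quotes.
years = 10
cycles_needed = years * 365              # one full charge/discharge per day
print(f"Cycles required: {cycles_needed}")

pack_cost_per_kwh = 300.0                # assumed installed cost, $/kWh
round_trip_efficiency = 0.90             # assumed

# Each kWh of capacity delivers (cycles x efficiency) kWh over its life
delivered_kwh_per_kwh = cycles_needed * round_trip_efficiency
cost_per_delivered_kwh = pack_cost_per_kwh / delivered_kwh_per_kwh
print(f"Cost per delivered kWh: ${cost_per_delivered_kwh:.3f}")
```

Ten years of daily cycling is 3,650 cycles, which is exactly why the specification lands at 3,500 cycles or more, and why a chemistry that fades after 500 cycles is disqualified no matter how energy-dense it is.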

The first and third requirements are straightforward: they make LFP and NCM the favorites. LFP inherently has long cycle life, and NCM, if charged only to about 80% of its maximum capacity, can also offer a very long cycle life. So if you wondered why Tesla quietly dropped its 10-kWh PowerWall product, it is because it is made with NCA cathodes and cannot meet the very long cycle-life requirement of daily charging.

The second requirement gets tricky. Right now, neither LFP nor NCM is sufficiently inexpensive to make a very compelling economic case to operators of energy storage systems (ESS), setting government incentives aside. So the question boils down to which one will have the steeper cost-reduction curve over time. Such a question naturally creates two camps of followers, each arguing their respective case.

Notice that high energy density does not factor into these requirements, at least not directly. Unlike consumer devices or electric vehicles, ESS seldom have a volume or weight restriction and thus, in principle, can accommodate batteries with lower energy density. The problem, however, is that batteries with lower energy density do not necessarily come at a lower cost per unit of energy. It actually costs more to manufacture a 3 Ah battery using LFP than it does using NCA. This makes energy density a critical factor in the math. Lower energy density means more batteries are needed to assemble a bigger battery pack, and thus more cost. For now, in the battle between LFP and NCM, the jury is still out, though my personal opinion is that NCM, by virtue of its higher energy density, has an advantage. On the other hand, China’s uninhibited support for LFP can potentially tip the scale. More later.

Before I adjourn, I would like to rebut an oft-made statement by some builders of ESS: that they are “battery agnostic.” To them, batteries are a commodity that can be easily interchanged among vendors and suppliers, much like commodity components in a consumer electronic product. I hope the reader gleans from this post the great number of subtleties and complexities involved in choosing the proper battery for an ESS. The notion of battery-agnostic in this space is utterly misplaced and only points to the illiteracy of the engineers building these ESS. If the battery fires on the 787 Dreamliner can permanently remind us of one lesson, it should be to never underestimate the consequences of neglecting the complexities of the battery. They can be very severe and immensely costly. Battery-agnostic is battery-illiterate.


About the author

Nadim Maluf

I am a consumer. I am an engineer. I innovate. I am inspired by others. I am a student. I am a teacher. I am a CEO. I admire great people who make great products. And I love it best when I make a difference in the lives of others.