The Basics

The beginning of any complex or challenging endeavor is always the hardest part.

Not all of us wake up and jump out of bed ready for the day. Some of us, like me, need a little extra energy to transition out of sleep and into the day. Once I've had a cup of coffee, my energy level jumps and I'm good for the rest of the day.

Chemical reactions work in much the same way. They need their coffee too.

Whether you use chemistry in your everyday work or have tried your best not to think about it since school, the ideas behind activation energy are simple and useful outside of chemistry. Understanding the principle, for example, can help you get kids to eat their vegetables, motivate yourself and others, and overcome inertia.

How Activation Energy Works in Chemistry

Chemical reactions need a certain amount of energy to begin working. Activation energy is the minimum energy required to cause a reaction to occur.

To understand activation energy, we must first think about how a chemical reaction occurs.

Anyone who has ever lit a fire will have an intuitive understanding of the process, even if they have not connected it to chemistry.

Most of us have a general feel for the heat necessary to start flames. We know that putting a single match to a large log will not be sufficient and a flame thrower would be excessive. We also know that damp or dense materials will require more heat than dry ones. The imprecise amount of energy we know we need to start a fire is representative of the activation energy.

For a reaction to occur, existing bonds must break and new ones form. A reaction will only proceed if the products are more stable than the reactants. In a fire, we convert the carbon in wood into CO2, a more stable form of carbon, so the reaction proceeds and produces heat in the process. In this example, the activation energy is the initial heat required to get the fire started; our effort and spent matches represent it.

We can think of activation energy as the energy barrier between the minima (the lowest-energy states) of the reactants and products in a chemical reaction.

The Arrhenius Equation

Svante Arrhenius, a Swedish scientist, established the existence of activation energy in 1889.

Arrhenius developed his eponymous equation to describe the correlation between temperature and reaction rate.

The Arrhenius Equation is crucial for calculating the rates of chemical reactions, and, importantly, the quantity of energy necessary to start them.

The Arrhenius equation is k = A·e^(−Ea/(RT)). Here k is the reaction rate coefficient (the rate constant of the reaction), A is the frequency factor (how often molecules collide), R is the universal gas constant (in units of energy per temperature increment per mole), T is the absolute temperature (usually measured in kelvins), and Ea is the activation energy.

It is not necessary to know the value of A to calculate Ea, which can be determined from how the rate coefficient varies with temperature. Like many equations, the Arrhenius equation can be rearranged to solve for different values, and it is used in many branches of chemistry.
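As a rough illustration (the numbers below are made up, not from any experiment), taking the ratio of the Arrhenius equation at two temperatures cancels A and gives the two-point form ln(k2/k1) = −(Ea/R)·(1/T2 − 1/T1), which we can solve for Ea:

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def activation_energy(k1, T1, k2, T2):
    """Estimate Ea (in J/mol) from two rate coefficients measured at two
    absolute temperatures, using the two-point form of the Arrhenius
    equation: ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1)."""
    return -R * math.log(k2 / k1) / (1 / T2 - 1 / T1)

# Hypothetical measurements: the rate coefficient doubles between 298 K and 308 K.
Ea = activation_energy(k1=1.0, T1=298.0, k2=2.0, T2=308.0)
print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol")  # roughly 53 kJ/mol
```

This is why the text notes that A need not be known: it divides out when comparing two measurements.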

Why Activation Energy Matters

Understanding the energy necessary for a reaction to occur gives us control over our surroundings.

Returning to the example of fire, our intuitive knowledge of activation energy keeps us safe. Many chemical reactions have high activation energy requirements, so they do not proceed without an additional input. We all know that a book on a desk is flammable, but will not combust without heat application. At room temperature, we need not see the book as a fire hazard. If we light a candle on the desk, we know to move the book away.

If chemical reactions did not have reliable activation energy requirements, we would live in a dangerous world.

Catalysts

Chemical reactions which require substantial amounts of energy can be difficult to control.

Increasing temperature is not always a viable source of energy due to costs, safety issues or simple impracticality. Chemical reactions which occur within our bodies, for example, cannot use high temperatures as a source of activation energy. Consequently, it is sometimes necessary to reduce the activation energy required.

Speeding up a reaction by lowering the activation energy required is called catalysis. This is done with an additional substance known as a catalyst, which is generally not consumed in the reaction. In principle, only a tiny amount of catalyst is needed to catalyse a reaction.

Catalysts work by providing an alternative pathway with lower activation energy requirements. Consequently, more of the particles have sufficient energy to react. Catalysts are used in industrial scale reactions to lower costs.
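To get a feel for how powerful a lower-energy pathway is, here is a small sketch with made-up numbers. It compares the exponential factor of the Arrhenius equation with and without a hypothetical catalyst; halving the activation energy at room temperature multiplies the rate by a factor of hundreds of millions:

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)
T = 298.0  # room temperature, K

def arrhenius_factor(Ea):
    """The exp(-Ea/(R*T)) term of the Arrhenius equation: roughly, the
    fraction of collisions energetic enough to react."""
    return math.exp(-Ea / (R * T))

uncatalysed = arrhenius_factor(100_000)  # hypothetical pathway at 100 kJ/mol
catalysed = arrhenius_factor(50_000)     # catalyst offers a pathway at 50 kJ/mol

print(f"rate speed-up: {catalysed / uncatalysed:.2e}")
```

Because the activation energy sits inside an exponential, even modest reductions produce enormous speed-ups, which is why industry invests so heavily in catalysts.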

Returning to the fire example, we know that attempting to light a large log with a match is rarely effective. Adding some paper provides an easier pathway for the fire to take hold (though, unlike a true catalyst, the paper is consumed). Firestarters work the same way.

Within our bodies, enzymes serve as catalysts in vital reactions (such as building DNA).

How We Can Apply the Concept of Activation Energy to Our Lives

“Energy can have two dimensions. One is motivated, going somewhere, a goal somewhere, this moment is only a means and the goal is going to be the dimension of activity, goal oriented-then everything is a means, somehow it has to be done and you have to reach the goal, then you will relax. But for this type of energy, the goal never comes because this type of energy goes on changing every present moment into a means for something else, into the future. The goal always remains on the horizon. You go on running, but the distance remains the same.

No, there is another dimension of energy: that dimension is unmotivated celebration. The goal is here, now; the goal is not somewhere else. In fact, you are the goal. In fact, there is no other fulfillment than that of this moment–consider the lilies. When you are the goal and when the goal is not in the future, when there is nothing to be achieved, rather you are just celebrating it, then you have already achieved it, it is there. This is relaxation, unmotivated energy.”
— Osho, Tantra

***

Although activation energy is a scientific concept, we can use it as a practical mental model.

Returning to the morning coffee example, many of the things we do each day depend upon an initial push.

Take the example of a class of students assigned an essay as coursework. Each student requires a different sort of activation energy to get started. For one student, it might be hearing that a friend has already finished hers. For another, it might be blocking social media and turning off their phone. A third might need a few cans of Red Bull and an impending deadline; a fourth, an interesting article on the topic that provides a spark of inspiration. Whatever its form, the act of writing an essay requires an initial push of energy.

Getting kids to eat their vegetables can be a difficult process. In this case, incentives can act as a catalyst. “You can't have your dessert until you eat your vegetables” is not only a psychological play on incentives; it also takes less energy than constantly fighting with them to eat their vegetables. Once kids eat one carrot, they generally eat another, and another. While they still want dessert, you won't have to remind them each time, and in the process you'll save a lot of energy.

The concept of activation energy can also apply to making drastic life changes. Anyone who has ever done something dramatic and difficult (such as quitting an addiction, leaving an abusive relationship, quitting a long term job or making crucial lifestyle changes) knows that it is necessary to reach a breaking point first. The bigger and more challenging an action is, the more activation energy we require to do it.

Our coffee drinker might require little activation energy (a cup or two) to begin their day if they are well rested. Meanwhile, it will take a whole lot more coffee for them to get going if they slept badly and have a dull day to get through.

Conclusion

Understanding and using the concept of activation energy in our lives does not require a degree in chemistry. While the concept as used by scientists is complex, we can make use of the basic idea.

It is no coincidence that many of the most useful mental models in our latticework originate in science. There is something quite poetic about the way human behavior mirrors what occurs at a microscopic level.

In his piece in 2014’s Edge collection This Idea Must Die: Scientific Theories That Are Blocking Progress, dinosaur paleontologist Scott Sampson writes that science needs to “subjectify” nature. By “subjectify”, he essentially means to see ourselves connected with nature, and therefore care about it the same way we do the people with whom we are connected.

That's not the current approach. He argues: “One of the most prevalent ideas in science is that nature consists of objects. Of course, the very practice of science is grounded in objectivity. We objectify nature so that we can measure it, test it, and study it, with the ultimate goal of unraveling its secrets. Doing so typically requires reducing natural phenomena to their component parts.”

But this approach is ultimately failing us.

Why? Because much of our unsustainable behavior can be traced to a broken relationship with nature, a perspective that treats the nonhuman world as a realm of mindless, unfeeling objects. Sustainability will almost certainly depend upon developing mutually enhancing relations between humans and nonhuman nature.

This isn't a new plea, though. Over 200 years ago, the famous naturalist Alexander von Humboldt (1769–1859) was facing the same challenges.

Fascinated by scientific instruments, measurements and observations, he was driven by a sense of wonder as well. Of course nature had to be measured and analyzed, but he also believed that a great part of our response to the natural world should be based on the senses and emotions.

Humboldt was a rock star scientist who ignored conventional boundaries in his exploration of nature. Humboldt's desire to know and understand the world led him to investigate discoveries in all scientific disciplines, and to see the interwoven patterns embedded in this knowledge — mental models anyone?

If nature was a web of life, he couldn’t look at it just as a botanist, a geologist or a zoologist. He required information about everything from everywhere.

Humboldt grew up in a world where science was dry, nature mechanical, and man an aloof and separate chronicler of what was before him. Not only did Humboldt have a new vision of what our understanding of nature could be, but he put humans in the middle of it.

Humboldt’s Essay on the Geography of Plants promoted an entirely different understanding of nature. Instead of only looking at an organism, … Humboldt now presented relationships between plants, climate and geography. Plants were grouped into zones and regions rather than taxonomic units. … He gave western science a new lens through which to view the natural world.

Revolutionary for his time, Humboldt rejected the Cartesian ideas of animals as mechanical objects. He also argued passionately against the growing approach in the sciences that put man atop and separate from the rest of the natural world. Promoting a concept of unity in nature, Humboldt saw nature as a “reflection of the whole … an organism in which the parts only worked in relation to each other.”

Furthermore, he believed that “poetry was necessary to comprehend the mysteries of the natural world.”

Wulf paints one of Humboldt’s greatest achievements as his ability and desire to make science available to everyone. No one before him had “combined exact observation with a ‘painterly description of the landscape’”.

By contrast, Humboldt took his readers into the crowded streets of Caracas, across the dusty plains of the Llanos and deep into the rainforest along the Orinoco. As he described a continent that few British had ever seen, Humboldt captured their imagination. His words were so evocative, the Edinburgh Review wrote, that ‘you partake in his dangers; you share his fears, his success and his disappointment.'

In a time when travel was precarious, expensive and unavailable to most people, Humboldt brought his experiences to anyone who could read or listen.

On 3 November 1827, … Humboldt began a series of sixty-one lectures at the university. These proved so popular that he added another sixteen at Berlin’s music hall from 6 December. For six months he delivered lectures several days a week. Hundreds of people attended each talk, which Humboldt presented without reading from his notes. It was lively, exhilarating and utterly new. By not charging any entry fee, Humboldt democratized science: his packed audiences ranged from the royal family to coachmen, from students to servants, from scholars to bricklayers – and half of those attending were women. Berlin had never seen anything like it.

The subjectification of nature is about seeing nature, experiencing it. Humboldt was a master of bringing people to worlds they couldn’t visit, allowing them to feel a part of it. In doing so, he wanted to force humanity to see itself in nature. If we were all part of the giant web, then we all had a responsibility to understand it.

When he listed the three ways in which the human species was affecting the climate, he named deforestation, ruthless irrigation and, perhaps most prophetically, the ‘great masses of steam and gas’ produced in the industrial centres. No one but Humboldt had looked at the relationship between humankind and nature like this before.

Cosmos was unlike any previous book about nature. Humboldt took his readers on a journey from outer space to earth, and then from the surface of the planet into its inner core. He discussed comets, the Milky Way and the solar system as well as terrestrial magnetism, volcanoes and the snow line of mountains. He wrote about the migration of the human species, about plants and animals and the microscopic organisms that live in stagnant water or on the weathered surface of rocks. Where others insisted that nature was stripped of its magic as humankind penetrated into its deepest secrets, Humboldt believed exactly the opposite. How could this be, Humboldt asked, in a world in which the coloured rays of an aurora ‘unite in a quivering sea flame’, creating a sight so otherworldly ‘the splendour of which no description can reach’? Knowledge, he said, could never ‘kill the creative force of imagination’ – instead it brought excitement, astonishment and wondrousness.

This is the ultimate subjectivity of nature. Being inspired by its beauty to try and understand how it works. Humboldt had respect for nature, for the wonders it contained, but also as the system in which we ourselves are an inseparable part.

Wulf concludes that Humboldt

…was one of the last polymaths, and died at a time when scientific disciplines were hardening into tightly fenced and more specialized fields. Consequently his more holistic approach – a scientific method that included art, history, poetry and politics alongside hard data – has fallen out of favour.

Maybe this is where the subjectivity of nature has gone. But we can learn from Humboldt the value of bringing it back.

In a world where we tend to draw a sharp line between the sciences and the arts, between the subjective and the objective, Humboldt’s insight that we can only truly understand nature by using our imagination makes him a visionary.

The Basics

Occam’s razor (also known as the ‘law of parsimony’) is a problem-solving principle which serves as a useful mental model. A philosophical razor is a tool used to eliminate improbable options in a given situation, of which Occam’s is the best-known example.

Occam’s razor can be summarized as such:

Among competing hypotheses, the one with the fewest assumptions should be selected.

In simpler language, Occam’s razor states that, all else being equal, the simplest solution tends to be the best one. Another explanation of Occam’s razor comes from the paranormal writer William J. Hall: ‘Occam’s razor is summarized for our purposes in this way: extraordinary claims demand extraordinary proof.’

In other words, we should avoid looking for excessively complex solutions to a problem and focus on what works, given the circumstances. Occam’s razor is used in a wide range of situations, as a means of making rapid decisions and establishing truths without empirical evidence. It works best as a mental model for making initial conclusions before adequate information can be obtained.

A further literary summary comes from one of the best-loved fictional characters, Arthur Conan Doyle’s Sherlock Holmes. His classic aphorism is often read as an expression of Occam’s razor: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”

A number of mathematical and scientific arguments support its validity and lasting relevance. In particular, the principle of minimum energy echoes Occam’s razor. This facet of the second law of thermodynamics states that, wherever possible, a system settles into the state requiring the least energy. In this sense, physical systems tend towards simplicity, and physicists can rely on processes using the minimum energy necessary to function. A ball at the top of a hill will roll down to rest at the point of minimum potential energy. A similar principle appears in biology: if a person repeats the same action regularly in response to the same cue and reward, it becomes a habit as the corresponding neural pathway forms. From then on, their brain uses less energy to complete the same action.

The History of Occam’s Razor

The concept of Occam’s razor is credited to William of Ockham, a friar, philosopher, and theologian who lived in the late 13th and early 14th centuries. While he did not coin the term, his characteristic way of making deductions inspired other writers to develop the heuristic. Indeed, the underlying idea is an ancient one, stated by Aristotle, who wrote, “we may assume the superiority, other things being equal, of the demonstration which derives from fewer postulates or hypotheses.”

Robert Grosseteste expanded on Aristotle's writing in the 1200s, declaring that:

That is better and more valuable which requires fewer, other circumstances being equal… For if one thing were demonstrated from many and another thing from fewer equally known premises, clearly that is better which is from fewer because it makes us know quickly, just as a universal demonstration is better than particular because it produces knowledge from fewer premises. Similarly, in natural science, in moral science, and in metaphysics the best is that which needs no premises and the better that which needs the fewer, other circumstances being equal.

Early writings such as this are believed to have led to the eventual (and ironic) simplification of the concept. Nowadays, Occam’s razor is an established mental model which can form a useful part of a latticework of knowledge.

Examples of the Use of Occam’s Razor

Theology

In theology, Occam’s razor is used in arguments both for and against the existence of God. William of Ockham, a Christian friar, used parsimony in defence of religion. He regarded scripture as true in the literal sense and therefore saw it as simple proof: to him, the Bible was synonymous with reality, and to contradict it would conflict with established fact. Many religious people regard the existence of God as the simplest possible explanation for the creation of the universe.

By contrast, Thomas Aquinas anticipated the parsimony argument against God in his 13th-century work, the Summa Theologica. There he stated the objection in its strongest form, writing ‘it is superfluous to suppose that what can be accounted for by a few principles has been produced by many,’ before setting out to rebut it. Many modern atheists press the same point, considering the existence of God an unnecessarily complex hypothesis compared with scientific alternatives, in particular because of the lack of empirical evidence.

Taoist thinkers take Occam’s razor one step further, simplifying everything in existence to its most basic form. In Taoism, everything is an expression of a single ultimate reality (known as the Tao). This school of religious and philosophical thought holds that the most plausible explanation for the universe is the simplest: everything is both created and controlled by a single force. This can be seen as a profound example of the use of Occam’s razor within theology.

The Development of Scientific Theories

Occam’s razor is frequently used by scientists, in particular for theoretical matters. The simpler a hypothesis is, the more easily it can be proved or falsified. A complex explanation for a phenomenon involves many factors which can be difficult to test, or which cause problems with the repeatability of an experiment. As a consequence, the simplest solution consistent with the existing data is preferred. However, it is common for new data to allow hypotheses to become more complex over time. Scientists opt for the simplest solution the current data permit, while remaining open to future research allowing for greater complexity.

Failing to observe Occam’s razor is usually a sign of bad science, or an attempt to cover up a poor explanation. The version used by scientists can best be summarized as: ‘when you have two competing theories that make exactly the same predictions, the simpler one is the better.’

Obtaining funding for simpler hypotheses tends to be easier, as they are often cheaper to test. As a consequence, the use of Occam’s razor in science is also a matter of practicality.

Albert Einstein referred to Occam’s razor when developing his theory of special relativity. He formulated his own version: ‘it can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.’ Or, more pithily: “everything should be made as simple as possible, but not simpler.” This preference for simplicity can be seen in one of the most famous equations ever devised: E = mc². Rather than a lengthy equation requiring pages of working, Einstein reduced the necessary factors to the bare minimum. The result is usable and perfectly parsimonious.

We could still imagine that there is a set of laws that determines events completely for some supernatural being, who could observe the present state of the universe without disturbing it. However, such models of the universe are not of much interest to us mortals. It seems better to employ the principle known as Occam's razor and cut out all the features of the theory that cannot be observed.
— Stephen Hawking, A Brief History of Time

Isaac Newton used Occam’s razor too when developing his theories. Newton stated: “we are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” As a result, he sought to make his theories (including the three laws of motion) as simple as possible, with the fewest underlying assumptions necessary.

Medicine

Modern doctors use a version of Occam’s razor, looking for the fewest and most likely causes that explain a patient's multiple symptoms. A doctor I know often repeats, “common things are common.” Interns are instructed, “when you hear hoofbeats, think horses, not zebras.” For example, a person displaying influenza-like symptoms during an epidemic is more likely to be suffering from influenza than from a rarer disease. Making minimal diagnoses reduces the risk of overtreating a patient, or of causing dangerous interactions between different treatments. This is of particular importance within the current medical model, where patients are likely to see numerous health specialists and communication between them can be poor.

Prison Abolition and Fair Punishment

Occam’s razor has long played a role in attitudes towards the punishment of crimes. In this context, it refers to the idea that people should be given the least punishment necessary for their crimes.

This is to avoid the excessive penal practices which were popular in the past (for example, a Victorian could receive five years of hard labour for stealing a piece of food). The concept of penal parsimony was pioneered by Jeremy Bentham, the founder of utilitarianism, who stated that punishments should not cause more pain than they prevent. Life imprisonment for murder could be seen as justified in that it may prevent a great deal of potential pain, should the perpetrator offend again. On the other hand, long-term imprisonment of an impoverished person for stealing food causes substantial suffering without preventing any.

Bentham’s writings on the application of Occam’s razor to punishment led to the prison abolition movement and our modern ideas of rehabilitation.

Crime solving and forensic work

When it comes to solving a crime, Occam’s razor is used in conjunction with experience and statistical knowledge. A woman is statistically more likely to be killed by a male partner than by any other person. Should a woman be found murdered in her locked home, the first people police interview are any male partners. The possibility of a stranger entering can be considered, but the simplest explanation, with the fewest assumptions, is that the crime was perpetrated by her partner.

By using Occam’s razor, police officers can solve crimes faster and at less expense.

Exceptions and Issues

It is important to note that, like any mental model, Occam’s razor is not failsafe and should be used with care, lest you cut yourself. This is especially crucial when it comes to important or risky decisions. There are exceptions to any rule, and we should never blindly follow a mental model which logic, experience, or empirical evidence contradicts. The smartest people know the rules, but also know when to ignore them. When you hear hoofbeats behind you, in most cases you should think horses, not zebras (unless you are out on the African savannah).

Simplicity is also subjective: in the example of the NASA moon landing conspiracy theory, some people consider it simpler for the landings to have been faked, others for them to have been real. When using Occam’s razor to make deductions, we must avoid falling prey to confirmation bias, merely using the razor to back up preexisting notions. The same goes for the theology example mentioned previously: some people consider the existence of God to be the simplest option; others consider the inverse to be true. Nor should semantic simplicity be given undue importance when selecting the solution Occam’s razor points to: a hypothesis can sound simple yet involve more assumptions than a verbose alternative.

My second concern about Occam’s Razor is just a matter of fact. The world is more complicated than any of us would have been likely to conceive. Some particles and properties don’t seem necessary to any physical processes that matter—at least according to what we’ve deduced so far. Yet they exist. Sometimes the simplest model just isn’t the correct one.

Harlan Coben has disputed many criticisms of Occam’s razor by stating that people fail to understand its exact purpose:

Most people oversimplify Occam’s razor to mean the simplest answer is usually correct. But the real meaning, what the Franciscan friar William of Ockham really wanted to emphasize, is that you shouldn’t complicate, that you shouldn’t “stack” a theory if a simpler explanation was at the ready. Pare it down. Prune the excess.

I once again leave you with Einstein: “Everything should be made as simple as possible, but not simpler.”

We’ve written quite a bit about the marvelous British naturalist Charles Darwin, who with his Origin of Species created perhaps the most intense intellectual debate in human history, one which continues up to this day.

Darwin’s Origin was a courageous and detailed thought piece on the nature and development of biological species. It's the starting point for nearly all of modern biology.

Charlie Munger thinks Darwin would have placed somewhere in the middle of a good private high school class. He was also in notoriously bad health for most of his adult life and, by his son’s estimation, a terrible sleeper. He really only worked a few hours a day in the many years leading up to the Origin of Species.

Yet his “thinking work” outclassed almost everyone. An incredible story.

In his autobiography, Darwin reflected on this peculiar state of affairs. What was he good at that led to the result? What was he so weak at? Why did he achieve better thinking outcomes? As he put it, his goal was to:

“Try to analyse the mental qualities and the conditions on which my success has depended; though I am aware that no man can do this correctly.”

In studying Darwin ourselves, we hope to better appreciate our own strengths and weaknesses and, not incidentally, to understand the working methods of a “mental overachiever.”

Let's explore what Darwin saw in himself.

***

1. He did not have a quick intellect or an ability to follow long, complex, or mathematical reasoning. He may have been a bit hard on himself, but Darwin realized that he wasn't a “5 second insight” type of guy (and let's face it, most of us aren't). His life also proves how little that trait matters if you're aware of it and counter-weight it with other methods.

I have no great quickness of apprehension or wit which is so remarkable in some clever men, for instance, Huxley. I am therefore a poor critic: a paper or book, when first read, generally excites my admiration, and it is only after considerable reflection that I perceive the weak points. My power to follow a long and purely abstract train of thought is very limited; and therefore I could never have succeeded with metaphysics or mathematics. My memory is extensive, yet hazy: it suffices to make me cautious by vaguely telling me that I have observed or read something opposed to the conclusion which I am drawing, or on the other hand in favour of it; and after a time I can generally recollect where to search for my authority. So poor in one sense is my memory, that I have never been able to remember for more than a few days a single date or a line of poetry.

2. He did not feel easily able to write clearly and concisely. He compensated by getting things down quickly and then coming back to them later, thinking them through again and again. Slow, methodical… and ridiculously effective: for those who haven't read it, the Origin of Species is extremely readable and clear, even now, 150 years later.

I have as much difficulty as ever in expressing myself clearly and concisely; and this difficulty has caused me a very great loss of time; but it has had the compensating advantage of forcing me to think long and intently about every sentence, and thus I have been led to see errors in reasoning and in my own observations or those of others.

There seems to be a sort of fatality in my mind leading me to put at first my statement or proposition in a wrong or awkward form. Formerly I used to think about my sentences before writing them down; but for several years I have found that it saves time to scribble in a vile hand whole pages as quickly as I possibly can, contracting half the words; and then correct deliberately. Sentences thus scribbled down are often better ones than I could have written deliberately.

3. He forced himself to be an incredibly effective and organized collector of information. Darwin's system of reading and indexing facts in large portfolios is worth emulating, as is the habit of taking down conflicting ideas immediately.

As in several of my books facts observed by others have been very extensively used, and as I have always had several quite distinct subjects in hand at the same time, I may mention that I keep from thirty to forty large portfolios, in cabinets with labelled shelves, into which I can at once put a detached reference or memorandum. I have bought many books, and at their ends I make an index of all the facts that concern my work; or, if the book is not my own, write out a separate abstract, and of such abstracts I have a large drawer full. Before beginning on any subject I look to all the short indexes and make a general and classified index, and by taking the one or more proper portfolios I have all the information collected during my life ready for use.

4. He had possibly the most valuable trait in any sort of thinker: A passionate interest in understanding reality and putting it in useful order in his head. This “Reality Orientation” is hard to measure and certainly does not show up on IQ tests, but probably determines, to some extent, success in life.

On the favourable side of the balance, I think that I am superior to the common run of men in noticing things which easily escape attention, and in observing them carefully. My industry has been nearly as great as it could have been in the observation and collection of facts. What is far more important, my love of natural science has been steady and ardent.

This pure love has, however, been much aided by the ambition to be esteemed by my fellow naturalists. From my early youth I have had the strongest desire to understand or explain whatever I observed,–that is, to group all facts under some general laws. These causes combined have given me the patience to reflect or ponder for any number of years over any unexplained problem. As far as I can judge, I am not apt to follow blindly the lead of other men. I have steadily endeavoured to keep my mind free so as to give up any hypothesis, however much beloved (and I cannot resist forming one on every subject), as soon as facts are shown to be opposed to it.

Indeed, I have had no choice but to act in this manner, for with the exception of the Coral Reefs, I cannot remember a single first-formed hypothesis which had not after a time to be given up or greatly modified. This has naturally led me to distrust greatly deductive reasoning in the mixed sciences. On the other hand, I am not very sceptical—a frame of mind which I believe to be injurious to the progress of science. A good deal of scepticism in a scientific man is advisable to avoid much loss of time, but I have met with not a few men, who, I feel sure, have often thus been deterred from experiment or observations, which would have proved directly or indirectly serviceable.

[…]

Therefore my success as a man of science, whatever this may have amounted to, has been determined, as far as I can judge, by complex and diversified mental qualities and conditions. Of these, the most important have been—the love of science—unbounded patience in long reflecting over any subject—industry in observing and collecting facts—and a fair share of invention as well as of common sense.

5. Most inspirational to us of average intellect, he outperformed his own mental aptitude with these good habits, surprising even himself with the results.

With such moderate abilities as I possess, it is truly surprising that I should have influenced to a considerable extent the belief of scientific men on some important points.

“As the Island of Knowledge grows, so do the shores of our ignorance—the boundary between the known and unknown. Learning more about the world doesn't lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.”

***

Common across human history is our longing to better understand the world we live in, and how it works. But how much can we actually know about the world?

What we know of the world is limited by what we can see and what we can describe, but our tools have evolved over the years to reveal ever more pleats in the fabric of our knowledge. Gleiser celebrates this persistent struggle to understand our place in the world and travels our history from ancient knowledge to our current understanding.

While science is not the only way to see and describe the world we live in, it is a response to the questions of who we are, where we are, and how we got here. “Science speaks directly to our humanity, to our quest for light, ever more light.”

To move forward, science needs to fail, which runs counter to our human desire for certainty. “We are surrounded by horizons, by incompleteness.” Rather than give up, we struggle along a scale of progress. What makes us human is this journey to understand more about the mysteries of the world and explain them with reason. This is the core of our nature.

While the pursuit is never ending, the curious journey offers insight not just into the natural world, but insight into ourselves.

“What I see in Nature is a magnificent structure that we can comprehend only very imperfectly, and that must fill a thinking person with a feeling of humility.”
— Albert Einstein

We tend to think that what we see is all there is — that there is nothing we cannot see. We know it isn't true when we stop and think, yet we still get lulled into a trap of omniscience.

Science is thus limited, offering only part of the story — the part we can see and measure. The other part remains beyond our immediate reach.

“What we see of the world,” Gleiser begins, “is only a sliver of what's out there.”

There is much that is invisible to the eye, even when we augment our sensorial perception with telescopes, microscopes, and other tools of exploration. Like our senses, every instrument has a range. Because much of Nature remains hidden from us, our view of the world is based only on the fraction of reality that we can measure and analyze. Science, as our narrative describing what we see and what we conjecture exists in the natural world, is thus necessarily limited, telling only part of the story. … We strive toward knowledge, always more knowledge, but must understand that we are, and will remain, surrounded by mystery. This view is neither antiscientific nor defeatist. … Quite the contrary, it is the flirting with this mystery, the urge to go beyond the boundaries of the known, that feeds our creative impulse, that makes us want to know more.

While we may broadly understand the map of what we call reality, we fail to understand its terrain. Reality, Gleiser argues, “is an ever-shifting mosaic of ideas.”

However…

The incompleteness of knowledge and the limits of our scientific worldview only add to the richness of our search for meaning, as they align science with our human fallibility and aspirations.

What we call reality is a (necessarily) limited synthesis. It is certainly our reality, as it must be, but it is not the entire reality itself:

My perception of the world around me, as cognitive neuroscience teaches us, is synthesized within different regions of my brain. What I call reality results from the integrated sum of countless stimuli collected through my five senses, brought from the outside into my head via my nervous system. Cognition, the awareness of being here now, is a fabrication of a vast set of chemicals flowing through myriad synaptic connections between my neurons. … We have little understanding as to how exactly this neuronal choreography engenders us with a sense of being. We go on with our everyday activities convinced that we can separate ourselves from our surroundings and construct an objective view of reality.

The brain is a great filtering tool, deaf and blind to vast amounts of information around us that offer no evolutionary advantage. Part of it we can see and simply ignore. Other parts, like dust particles and bacteria, go unseen because of limitations of our sensory tools.

As the Fox said to the Little Prince in Antoine de Saint-Exupéry's fable, “What is essential is invisible to the eye.” There is no better example than oxygen.

Science has expanded our view. Our measurement tools and instruments can detect bacteria and radiation, subatomic particles and more. However precise these tools have become, their view is still limited.

There is no such thing as an exact measurement. Every measurement must be stated within its precision and quoted together with “error bars” estimating the magnitude of errors. High-precision measurements are simply measurements with small error bars or high confidence levels; there are no perfect, zero-error measurements.

[…]

Technology limits how deeply experiments can probe into physical reality. That is to say, machines determine what we can measure and thus what scientists can learn about the Universe and ourselves. Being human inventions, machines depend on our creativity and available resources. When successful, they measure with ever-higher accuracy and on occasion may also reveal the unexpected.

“All models are wrong, some are useful.”
— George Box

What we know about the world is only what we can detect and measure — even if we improve our “detecting and measuring” as time goes along. And thus we base our conclusions about reality on what we can currently “see.”

We see much more than Galileo, but we can't see it all. And this restriction is not limited to measurements: speculative theories and models that extrapolate into unknown realms of physical reality must also rely on current knowledge. When there is no data to guide intuition, scientists impose a “compatibility” criterion: any new theory attempting to extrapolate beyond tested ground should, in the proper limit, reproduce current knowledge.

[…]

If large portions of the world remain unseen or inaccessible to us, we must consider the meaning of the word “reality” with great care. We must consider whether there is such a thing as an “ultimate reality” out there — the final substrate of all there is — and, if so, whether we can ever hope to grasp it in its totality.

[…]

We thus must ask whether grasping reality's most fundamental nature is just a matter of pushing the limits of science or whether we are being quite naive about what science can and can't do.

Here is another way of thinking about this: if someone perceives the world through her senses only (as most people do), and another amplifies her perception through the use of instrumentation, who can legitimately claim to have a truer sense of reality? One “sees” microscopic bacteria, faraway galaxies, and subatomic particles, while the other is completely blind to such entities. Clearly they “see” different things and—if they take what they see literally—will conclude that the world, or at least the nature of physical reality, is very different.

Asking who is right misses the point, although surely the person using tools can see further into the nature of things. Indeed, to see more clearly what makes up the world and, in the process, to make more sense of it and ourselves is the main motivation to push the boundaries of knowledge. … What we call “real” is contingent on how deeply we are able to probe reality. Even if there is such a thing as the true or ultimate nature of reality, all we have is what we can know of it.

[…]

Our perception of what is real evolves with the instruments we use to probe Nature. Gradually, some of what was unknown becomes known. For this reason, what we call “reality” is always changing. … The version of reality we might call “true” at one time will not remain true at another. … Given that our instruments will always evolve, tomorrow's reality will necessarily include entities not known to exist today. … More to the point, as long as technology advances—and there is no reason to suppose that it will ever stop advancing for as long as we are around—we cannot foresee an end to this quest. The ultimate truth is elusive, a phantom.

Gleiser makes his point with a beautiful metaphor: the Island of Knowledge.

Consider, then, the sum total of our accumulated knowledge as constituting an island, which I call the “Island of Knowledge.” … A vast ocean surrounds the Island of Knowledge, the unexplored ocean of the unknown, hiding countless tantalizing mysteries.

The Island of Knowledge grows as we learn more about the world and ourselves. And as the island grows, so too “do the shores of our ignorance—the boundary between the known and unknown.”

Learning more about the world doesn't lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.

As we move forward we must remember that despite our quest, the shores of our ignorance grow as the Island of Knowledge grows. And while we will struggle with the fact that not all questions will have answers, we will continue to progress. “It is also good to remember,” Gleiser writes, “that science only covers part of the Island.”

Richard Feynman pointed out that science can only answer the subset of questions that go, roughly, “If I do this, what will happen?” Answers to questions like Why do the rules operate that way? and Should I do it? are not really scientific questions; they are moral, human questions, if they are answerable at all.

There are many ways of understanding and knowing that should, ideally, feed each other. “We are,” Gleiser concludes, “multidimensional creatures and search for answers in many, complementary ways. Each serves a purpose and we need them all.”

“The quest must go on. The quest is what makes us matter: to search for more answers, knowing that the significant ones will often generate surprising new questions.”

The Island of Knowledge is a wide-ranging tour through scientific history from planetary motions to modern scientific theories and how they affect our ideas on what is knowable.