Category: Learning

Many people, database experts among them, dismiss Big Data as a fad that has already come and gone, arguing that it was a meaningless term and that relational databases can do everything NoSQL databases can. That's not the point. The point of Big Data, as George Dyson observed, is that computing undergoes a fundamental phase shift when it crosses the Big Data threshold: when it becomes cheaper to store data than to decide what to do with it. The point of Big Data technologies is not to perversely use less powerful database paradigms, but to defer decisions about how to model, structure, process, and analyze data until when (and if) you need to, using the simplest storage technology that will do the job. An organization that chooses to store all its raw data, developing an eidetic corporate historical memory so to speak, creates informational potential and invests in its own future wisdom.

Next, there is machine learning. Here the connection is obvious: the more stored data you have access to, the more you can apply deep learning techniques (which really only work at sufficiently massive data scales) to extract more of the value the information represents. I'm not quite sure what a literal Maxwell's Historian might do with its history of stored molecule velocities, but I can think of plenty of ways to use more practical historical data.

And finally, there are blockchains. Again, database curmudgeons (what is it about these guys??) complain that distributed databases can do everything blockchains can, more cheaply, and that blockchains are just really awful, low-capacity, expensive distributed databases (pro tip: anytime a curmudgeon makes an "X is just Y" statement, you should assume by default that the (X − Y) differences they are ignoring are the whole point of X). As with Big Data, they are missing the point. The essential feature of blockchains is not that they can poorly and expensively mimic the capabilities of distributed databases, but that they do so in a near-trustless, decentralized way, with strong irreversibility and immutability properties.

We always focus on the downsides of superintelligent AI. There are, however, upsides. Superintelligence could help solve some of the biggest problems of our time: safety, medical issues, justice, etc.

Containment is both a technical and a moral issue, and much more difficult than it is currently given credit for. Given the ways we are likely to construct it, we probably can't just "unplug" it.

Tegmark defines these three stages of life:

Life 1.0: Both hardware and software determined by evolution. (Flagella)

Life 2.0: Hardware determined by evolution, software can be learned (Humans)

Life 3.0: Both hardware and software can be changed at will. (AI machines)

Wide vs. narrow intelligence: Humans have wide intelligence. We are generally good at a lot of different tasks and can learn a lot implicitly. Computers (so far) have narrow intelligence. They can calculate and do programmed tasks much better than us, but will completely fail at accounting for unwritten constraints when someone says, "take me to the airport as fast as possible."

The moment the top narrow intelligences are knit together and cross the threshold of general intelligence, the result will likely surpass human intelligence.

What makes us intelligent is the pattern in which the hardware is arranged. Not the building blocks themselves.

The software isn’t aware of the hardware. Our bodies are completely different from when we were young, but we feel like the same person.

The question of consciousness is key. A subjective experience depends on it.

We probably already have the hardware to reach human-level general intelligence. What we are missing is the software. It is unlikely to use the same architecture as the human brain, though it may be loosely similar. (Planes are much simpler than birds.)

AI Safety research needs to go hand-in-hand with AI research. How do we make computers unhackable? How do we contain it in development? How do we ensure system stability?

One further issue to overcome: getting computers to explain how a decision was made in an understandable way, instead of just dumping a stack trace.

Tegmark counsels his own kids to go into fields that computers are bad at: fields where people pay a premium for work done by humans.

“It’ll-get-worse-before-it-gets-better” fallacy: A variant of confirmation bias. If the problem gets worse, the prediction is confirmed. If the situation improves unexpectedly, the customer is happy and the expert attributes it to his prowess. Look for verifiable cause-and-effect evidence instead.

Story bias: We tend to interpret things with meaning, especially things that seem connected. Stories are more interesting than details. Our lives are mostly series of unconnected, unplanned events and experiences. Looking at these ex post facto and making up an overarching narrative is disingenuous. The problem with stories is that they give us a false sense of understanding, which leads us to take bigger risks and urges us to take a stroll on thin ice. Whenever you hear a story, ask: Who is the sender, what are his intentions, and what does this story leave out or gloss over?

Hindsight bias: Possibly a variant on story bias. In retrospect, everything seems clear and inevitable. It makes us think we are better predictors than we actually are, causing us to be arrogant about our knowledge and take too much risk. To combat this, read diaries, listen to oral histories, and read news stories from the time you are looking at. Check out predictions from the time. And keep your own journal with your own predictions about your life, career, and current events. Compare them later to what happened to see how poor of a predictor we all are.

Overconfidence effect: We systematically overestimate our knowledge and our ability to predict, on a massive scale. The difference between what we know and what we think we know is huge. Be aware that you tend to overestimate your knowledge. Be skeptical of predictions, especially from so-called experts. With all plans, favor the pessimistic scenario.

Chauffeur Knowledge: There are two types of knowledge: Real knowledge (deep, nuanced understanding) and Chauffeur knowledge (enough knowledge to put on a show, but not enough understanding to answer deeper questions or make connections). Distinguishing between the two is difficult if you don't understand the topics yourself. One method is the circle of competence. True experts understand the limits of their competence: the perimeter of what they do and do not know. They are more likely to say "I don't know." The chauffeurs are unlikely to do this.

Illusion of Control: Similar to the placebo effect. The tendency to believe that we can influence something over which we have absolutely no sway. Sports, gambling, etc. Also: elevator buttons, crosswalk buttons, fake temperature dials. This illusion led prisoners (like Frankl, Solzhenitsyn, etc.) to not give up hope in concentration camps. The Federal Reserve's federal funds rate is probably a fake dial, too. The world is mostly an uncontrollable system at the level we currently understand it. The things we can influence are very few.

Incentive Super-Response Tendency: People respond to incentives by doing whatever is in their best interest. Extreme examples: Hanoi rats being bred, Dead Sea scrolls being torn apart. Good incentive systems take into account both intent and reward. Poor incentive systems often overlook and even corrupt the underlying aim. “Never ask a barber if you need a haircut.” Try to ascertain what actions are incentivized in any situation.

Regression to Mean: A cousin of the “It’ll-get-worse-before-it-gets-better” and the Illusion of Control fallacies. Extreme performances are often interspersed with less extreme ones. There are natural variations in performance. Students are rarely always high or low performers. They cluster around the mean. Thinking we can influence these high and low performers is an illusion of control.
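The effect is easy to simulate once you treat a score as stable skill plus one-off luck. Here's a sketch in plain JavaScript; the seeded generator and all the numbers are illustrative assumptions, not data from the book:

```javascript
// Each test score = stable ability + fresh luck. Select the top scorers on
// one test, and their average drops back toward the mean on a retest,
// because the luck component does not repeat.

// Simple seeded linear congruential generator so the run is reproducible.
let seed = 42;
function rand() {
  seed = (seed * 48271) % 2147483647;
  return seed / 2147483647;
}

const abilities = Array.from({ length: 1000 }, () => rand() * 50);
const takeTest = (ability) => ability + rand() * 50; // half skill, half luck

const first = abilities.map(takeTest);
const second = abilities.map(takeTest);

const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Indices of the top 10% on the first test.
const top = first
  .map((score, i) => [score, i])
  .sort((a, b) => b[0] - a[0])
  .slice(0, 100)
  .map(([, i]) => i);

const topOnFirst = mean(top.map((i) => first[i]));
const topOnSecond = mean(top.map((i) => second[i]));
// topOnSecond lands between topOnFirst and the overall mean: still above
// average (real ability), but less extreme (the luck was one-off).
```

Nothing about the "students" changes between tests, yet the top group's retest average reliably falls back toward the mean, which is the whole effect.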

Outcome Bias: We tend to evaluate decisions based on the result rather than the decision process. This is a variant on the Hindsight Bias. Only in retrospect do signals seem clear. When samples are too small, the results are meaningless. A bad result does not necessarily indicate a bad decision, and vice versa. Focus on the reasons behind actions: Were they rational and understandable?

Paradox of Choice: A large selection leads to inner paralysis and also poorer decisions. Think about what you want before inspecting existing offers. Write down the criteria and stick to them rigidly. There are never perfect decisions. Learn to love a good choice.

Liking Bias: The more we like someone, the more we are inclined to buy from or help that person. We see people as pleasant if (a) they are outwardly attractive, (b) they are similar to us, or (c) they like us. This is why the salesperson copies body language and why multi-level marketing schemes work. Advertising employs likable figures in ads. If you are a salesperson, make people like you. If you are a consumer, judge the product independent of the seller and pretend you don't like the seller.

Endowment effect: We consider things to be more valuable the moment we own them. If we are selling something, we charge more than we ourselves would spend on it. We are better at holding on to things than getting rid of them. This effect works on auction participants, too, which drives up bidding. It also explains why late-stage interview rejections hurt so much. Don't cling to things; rather, view them as the universe temporarily bestowing them on you.

I'm working my way through Rolf Dobelli's The Art of Thinking Clearly by reading a few sections each morning. Here are my notes on the first 11 sections (Confirmation Bias spans two sections, which I've combined into one note below):

Survivorship bias: You overestimate your probability of success because you only see success stories. You find common threads in success stories and think they are the answer. Both ignore the failures, because those stories aren't told. When you are a survivor you think, "I did it! Everyone else can, too!" Look for counterexamples and failures to overcome it.

Swimmer's body illusion: Swimmers usually choose swimming because they have good physiques. Swimming doesn't necessarily cause good physiques. Harvard has a rigorous vetting process, and skilled, driven people tend to get in. They'd likely be successful without Harvard. This may actually be a subset of the survivorship bias. (You don't see ugly models selling makeup or fat swimmers because they don't tend to last long in the business. Dumb people don't make it through Harvard's screening, so they won't bring down the salary numbers after 4 years.)

Clustering illusion: Our brains are pattern- and meaning-recognizing machines. When you spot a pattern, first regard it as pure chance. If it seems to be more, test it statistically.

Social Proof: We are hardwired to copy the reactions of others. In the past it was beneficial for survival. Remember to look for links. Popular does not equal best on objective measures. “If 50M people say something foolish, it is still foolish.”

Sunk Cost Fallacy: Investments of time or money to date don’t matter. Only future benefits or costs count.

Reciprocity: The allure of both positive and negative reciprocity is so strong that it is best to avoid saying yes in the first place if it is something you don’t want.

Confirmation bias: The tendency to interpret new information so it becomes compatible with your existing beliefs. We filter out disconfirming evidence. Look for disconfirming evidence and give it serious consideration. “Murder your darlings.”

Authority bias: When making decisions, think about which authority figures are influencing your reasoning. Challenge them.

Contrast Effect: Things seem cheaper, prettier, healthier, better, etc. in contrast to something else. This is how magicians and con men remove your watch: press hard in one area so you don't feel the lighter touch elsewhere. This is also why it is easy to ignore inflation. Compare things in individual cost/benefit calculations, not in contrast to an "original price" or whatever they are framed against.

Availability bias: We create a picture of the world using the examples that most easily come to mind. This creates an incorrect risk map in our heads. We attach too much likelihood to flashy outcomes. We think dramatically, not quantitatively. We tend to focus on what is in front of us, whether or not it is the most important question. We can overcome it by getting others’ input with different experiences and expertise.

If you write React in plain JavaScript, everything runs as-is. If you write your React code in JSX, Babel first finds the JSX, parses it, and generates the corresponding JavaScript code, which is then evaluated. The big picture of React is that it is kind of like the view layer in MVC, with a few more bells and whistles added. Everything renders to a virtual DOM first, which is significantly faster to work with than the real DOM. Changes are then diffed against the previous virtual DOM, and only the differences are applied to the real DOM.
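The JSX compilation step and the virtual-DOM diffing described above can be sketched in plain JavaScript. This is a toy illustration, not React's actual API or reconciliation algorithm; `createElement` and `diff` here are hypothetical stand-ins:

```javascript
// Babel compiles JSX like <h1 className="title">Hello</h1> into a plain
// function call that returns a lightweight object (a "virtual DOM" node).
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

const before = createElement("h1", { className: "title" }, "Hello");
const after = createElement("h1", { className: "title" }, "Hello, world");

// A naive diff: compare the new virtual tree against the previous one and
// collect only the changes that need to be flushed to the real DOM.
// (Only child text is compared here; a real diff also handles props,
// nested elements, and keys.)
function diff(prev, next) {
  if (prev.type !== next.type) {
    return [{ op: "replace", node: next }];
  }
  const patches = [];
  next.children.forEach((child, i) => {
    if (child !== prev.children[i]) {
      patches.push({ op: "updateChild", index: i, value: child });
    }
  });
  return patches;
}

console.log(diff(before, after)); // only the changed text child is patched
```

Comparing plain objects like this is much cheaper than touching the real DOM, and batching only the resulting patches is where the virtual DOM's speed advantage comes from.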

It looks like you can write Angular code in JavaScript or TypeScript (which compiles to JavaScript). Here is a great high-level architecture overview of Angular that explains how it works: https://angular.io/guide/architecture

Tristan is a former Design Ethicist at Google and studied at Stanford’s Persuasive Technology Lab. His work highlights the design patterns in technology that grab our attention, pull us back in, and addict us. These designs are not only manipulating us, but they are making us unhappy.

I’m getting increasingly interested in this topic. Taking a long break from social media, turning off almost all phone notifications, and deleting all addictive apps from my phone has had a positive impact on my reading & thinking time. Breaking the typical pattern of waking up and surfing social media before getting moving for the day has made my mornings better, too.

Two related topics I’m interested in pursuing:

Decreasing my cognitive load. Getting things off my mind so I can focus on what matters.

Making myself less susceptible to advertising.

If you have any books, articles, or podcasts I should check out on these topics, let me know!

Methods of thinking are more important than raw intelligence. The people who were burning witches probably didn’t have a lower IQ than the people who went to the moon. They thought about the world in a different way.

Bullshit and lying aren’t the same. Bullshit may contain lies, but the purpose is different. Lies intentionally deceive, but bullshit’s goal is impressing the listener.

Falling for bullshit isn't a good indicator of someone's intelligence.

People may fall for bullshit for non-obvious reasons, such as reading too far into it or projecting their own beliefs onto it.

Today’s drawing is still in progress. I had a busy day today and spent the entire evening down in the city, so I only got about 30 minutes to start a drawing of a leaf on the cover of this book I’m reading. I’m going to work on filling in the details tomorrow.

Today I decided to take a break from the specific Drawing on the Right Side of the Brain exercises and try out drawing on my new 10.5″ iPad Pro with the Apple Pencil. I used the Linea app and did another pass at my Day 8 hand drawing.

I don’t yet have fine control over the Apple Pencil. I’m still getting used to it. I love using my finger as the eraser and doing each part of the drawing as separate layers (outlines, details, and shading). I found shading much easier to control on the iPad than with a real pencil. I’m still going to do exercises in my real drawing pad, but I’ll probably shift a decent number to my iPad. One of my goals for learning to draw is being able to draw illustrations for my blog, which will all be done digitally.

Burning the midnight oil. Today I read about expanding the sighting and spacing techniques I've been working on for the last few days to faces. Then I spent about an hour applying what I learned to a line drawing of a portrait by Sargent.

Here is the comparison:

Tomorrow I draw a profile portrait of a real person. It will probably be Amanda.

Today I did exercises to learn how to draw perspectives. The first was about finding scales and angles, then the second was a drawing of a complex scene to put those to use. I chose our entryway, complete with a crooked doormat and a pile of our shoes.

I think the left side came out much better than the right; I spent more time on it. I rushed the right side because I had spent more time than I wanted on the left and started to get impatient.

It turned out better than I could have done a week ago, but it took much more time, energy, and focus than I expected.

Today I had to draw a chair, but not in the usual way. Instead of drawing the lines and shapes that make up the chair, I had to draw the negative space instead. I didn’t take a photo or use the plastic pane very much, but drew from looking at the chair and occasionally using the frame to check proportions. This exercise is supposed to help with noticing negative space, framing, picking a guide for scaling, and comparing angles. After I was finished, I erased out the tone from the area between the shapes I drew. In this case, that ended up being the chair.

I don't think I nailed the proportions. The top is rough. The only area that I think is strong is the triangle between the right leg and the seat.