It's that time of year when lots of people are giving to various charities, and I've been thinking about, if I decided to give, how I would decide. My mental economic model for businesses is that businesses engage in trade. Trade is the exchange of one thing for a more valuable thing. On both sides. From that I've tried to build a model for charities and philanthropy.

Economics of trade

Suppose you (running a business) have a sandwich that you're selling for $2. It's very likely that the sandwich is worth less than $2 to you. If it was worth more than $2, you'd rather keep the sandwich, so you wouldn't be selling it. So let's say it's worth $1 to you. Now suppose I have $2 and decide to buy a sandwich from you. It's very likely that the sandwich is worth more than $2 to me. If it was worth less than $2, I'd rather keep $2 than buy the sandwich. So let's say it's worth $3 to me. Both of us have chosen to trade something less valuable for something more valuable. Before the trade, the sum total of what we had was $1 (your sandwich) + $2 (my money) = $3. After the trade, we have $2 (your money) + $3 (my sandwich) = $5. Trades generate value out of thin air. The world is now $2 richer because of this trade. Free markets make for wealthy countries because people aren't prevented from trading.

Model for Gifts

If you think about trades, they work because both sides have the right incentives. When they both choose to trade, the world is better off. When either chooses not to trade, there's no harm done. Yes, there are cases where the world would be better off if a trade happened but the two sides don't both choose to make it; in those situations, though, a change in price will enable the trade, and both will end up benefiting.

With giving gifts, not only charities but also Christmas gifts, birthday gifts, etc., the person paying and the person receiving the sandwich are not the same. It's much harder to ensure that the trade is actually increasing wealth in the world. Suppose I buy you a sandwich, and I paid $2 for it. I don't know how much it's worth to you. You could say that it's worth $5 to you, but since it's not your money on the line, there's no incentive for you to say the true value. Suppose it's worth nothing to you. So before the trade, the seller has $1 (sandwich), I have $2 (money), and you have $0, a total of $3. After the trade, the seller has $2 (money), I have $0, and you have $0 (sandwich), a total of $2.

Gift giving opens up the possibility that a trade makes the world worse off economically. There are other reasons to give gifts of course, but just keep in mind that economically they're not so good.

Christmas and Birthday gifts are rare; the most common source of economic loss from giving is kids (and to a lesser extent, spouses). Kids might say they really want the $150 pair of shoes, but since they're not the ones working 20 extra hours to buy those shoes, they have no incentive not to make the trade. They make that sad face, or whine, or tell you that all their friends have those shoes, or nag you a great deal. They know how to manipulate parents. Lots and lots of inefficient trades are made by or for kids, causing parents to struggle to make ends meet.

These inefficient trades also can occur with charities. If you're giving to a charity and they “do good” with that money, you don't know whether what they did was worth more than what you gave. You end up judging based on how good it makes you feel, which means charities have an incentive to make you feel good, through special events (seeing lots of people involved in a “special event” makes you much more likely to give money), pictures of needy people (seeing a few people makes that emotional connection that you don't get if you read about helping millions), and other tactics. My view of charities, until recently, has been that there's a high potential for inefficiency and emotional manipulation, just as with kids. I've been trying to form a model that would help me make decisions about whether to give, and to whom, that isn't just to make me feel good, but something that will actually do good.

Externalities

There's something I left out of the model of trades. There's more than just the two parties involved. Maybe the business polluted to make that sandwich. Maybe I littered instead of throwing the sandwich wrapper away. The trade didn't take into account these “externalities” — effects on the rest of the world outside the seller and the buyer.

The standard solution to this is to estimate those effects and charge people for them. Let's take an extreme example, with $5 of pollution for a sandwich. (I think this is extreme for sandwiches but there are probably other industries for which it's reasonable to say the damage to the environment is higher than the value of the product.) Before the trade, there was $1 (your sandwich) + $2 (my money) + $5 (rest of world) = $8, and after the trade there was $2 (your money) + $3 (my sandwich) + $0 (rest of world) = $5. Although the seller and buyer are better off, the trade made the world worse. If we could charge the seller $5 for polluting, the price of the sandwich would be $7 instead of $2. And at $7, I wouldn't buy the sandwich, since it's only worth $3 to me. The trade would be stopped, which is just what we wanted here. To make money, the business needs to figure out how to reduce pollution.
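The arithmetic in these examples is easy to capture in code. Here's a minimal sketch (the `welfare` function is my own framing, just to reproduce the numbers above, not any standard economic model):

```python
# Toy welfare calculation for the sandwich trade, with and without
# the $5 pollution externality. Values are the ones from the post.

def welfare(seller_value, buyer_value, price, externality=0):
    """Total value (seller + buyer + rest of world, relative to a
    baseline) before and after the trade happens."""
    before = seller_value + price               # seller keeps sandwich, buyer keeps cash
    after = price + buyer_value - externality   # goods change hands, world absorbs damage
    return before, after

# Plain trade: sandwich worth $1 to seller, $3 to buyer, price $2.
print(welfare(1, 3, 2))     # (3, 5): the world is $2 richer
# Same trade with $5 of pollution.
print(welfare(1, 3, 2, 5))  # (3, 0): the world is $3 poorer
```

The second case matches the text: the trade still benefits both parties, but once the externality is counted, total wealth goes down.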

I like the charging-for-pollution solution better than laws against pollution, or cap-and-trade systems (I'll have to make another post about that). With a law, the business's incentive is to fight the law, doing as little as possible, because they still make more money if they pollute. Whereas with charging, the business makes more money by not polluting. Since profit motivates business, I want the system to give more profit when the business pollutes less. However, it's often impractical to measure the impact of pollution, and that's why we have simplistic black-and-white laws in place.

Model for Philanthropy

We have this artificial line between business and charity. We even call the charities “non-profits”, to distinguish them from businesses (even though some businesses don't make a profit). And there's the sense that charities do good for the world and businesses are bad. I think the world is more complicated than that. There's a whole set of potential projects that have both business and philanthropic aspects. For example, microfinance can generate profits and help lots of people at the same time. Tesla Motors might not ever make money, but it could jump-start the market for cleaner cars. Neither counts as a charity; you're not just handing out money. But there's the potential for it to do more good than a charity. The tax laws in the U.S. encourage giving to charity over investing in a “socially responsible” business that does good things for the world.

If I'm going to do something with money that I don't need, I want to put it into a place that gives the highest return on investment. But in the case of philanthropy, in that return I need to consider not only what I get back, but also what good it does for the world. Externalities are a way of looking at this. Normally an externality is something negative. But there are also positive externalities. If I invested in a business that makes less profit than other businesses, but generates a lot of good in the world, that's a positive externality. The return on investment for me might be low, but the return to the world could be high. Whether it's classified as an investment or a charity isn't relevant (except there are tax implications that affect the return on investment); I want to find things that have a high rate of return. There's also regular shopping. Just as I might avoid “sweatshop” products, I might favor products that have positive externalities.

Giving

Unfortunately, I haven't found much that can guide me to high-return philanthropy. Businesses report cost and revenue but not externalities. Charities report costs and donations but not the effects they have. I started following GiveWell a while ago because they seemed to be the only ones that came even close to what I was looking for. I was pleased to see they got mainstream press recently. They tend to pick things that are more tangible, and they seem to consider “saving lives” separately from economic benefit, but in general I like what they're saying and doing. I like that they focus on benefits more than cost efficiency, and they're trying to compare different approaches to see what's most effective. I'm also following Acumen Fund, which tries to help people through investments rather than charity.

Businesses looking for customers use emotions, marketing, sales tactics, and they come to you. When they get investors they use numbers, and you tend to go to them. I want to see the numbers side of philanthropic organizations. I don't want them contacting me to tell me how they're “doing good”. I assume almost all of them are doing good. What I'm really trying to decide, by building an economic model, is where to invest. It may be giving to a charity; it may be investing in a business; it may be buying products. It'll probably be all three. The main problem I see is that there isn't enough information, for businesses or charities, and that there's this artificial line drawn between the two. There should be a unified way of thinking about business and charity that finds the best of both, and allows for new types of organizations that don't fit into the current system. Good information about externalities would change the world.

The Verizon announcement, and Kindle, are making me think we might actually see a revolution in wireless communication technology.

Back in the early days of electric companies, they were light companies. They sent electricity to your house just for lights, and later offered other services at different prices. They gave away light bulbs and made it back in monthly charges. But eventually they made electric outlets that would accept lots of things. Once lots of electric products were developed, people used electricity for a lot more things. And they stopped subsidizing light bulbs. The electric companies made more money not by charging more for the existing light bulbs but because people did more with electricity.

Cell phone companies have built a huge infrastructure that's only for cell phones. They sell a very small number of services (voice, SMS, web, GPS), all at different prices. They practically give away cell phones and then charge you monthly to make up for it. Just as with electric companies, there's no way the phone company's going to come up with all the possible things you'll want to do with wireless communications. If they open it up I think they'll make a lot more money, because there will be lots more products that work on their networks. I want my thermometer to send me data wirelessly. I want my microwave to read the current time wirelessly instead of me having to set it. I want my car to send a message to my air conditioning system to turn itself on when I'm getting close to home. I want the rain sensor in my yard to send a message to my car windows to close themselves. My cell phone company will never produce every product, but if they sell access to the network, someone will develop some cool products that use the network. Kindle is an example of such a device.

I've seen some complaints on the blogs that this is a big scam for Verizon to make more money by charging per byte. I think they should be charging by the byte. When they control the service, they are able to control the byte-to-service ratio. They average the cost of the bytes across customers and give you a single price for “unlimited” voice. But then they and the ISPs hate you if you use more than average. But I think consumers are better off in the long run paying by the byte. And I think consumers are better off if the phone companies stop giving away the cell phones, and instead lower the monthly charges. I think voice-only folk will end up paying less, because the money will come from the higher bandwidth products that people come up with. Just as with electricity or shipping/mail, it makes little sense to offer unlimited service for a fixed fee. That will lead to overconsumption and hard limits (like Comcast shutting off consumers who use a lot of bandwidth). I'd rather have people pay for bandwidth, so that we choose what to use and not worry about being shut off. Does FedEx stop shipping your packages if you ship lots of them? No! They treat you even nicer! In a world where people pay per byte, the ISPs will want customers who use a lot. This is the “fat head” of data transmission. Other devices can use very little, without making it too expensive. I'm not going to buy a cell phone plan at $20/month for my thermometer or microwave. If I only paid by the byte, it'd be incredibly cheap to transmit the time and temperature once in a while, and those devices will become feasible. This is the “long tail” of data transmission.

I also wanted to mention that when people talk about “3G” they are thinking about the higher bitrate. But it also provides always-on service that can be used simultaneously with voice. I suspect (just as with broadband) that always-on is what will change society more than higher bandwidth. Phone companies thought videophones (high bandwidth, not always on) would be the big thing, but it turned out SMS text messaging (low bandwidth, always on) was what really took off.
Twitter and IM are going to be used more than Second Life or World of Warcraft. Flash games in your web browser are being played more than Playstation 3 games.

I think it's reasonable for electric companies to initially offer only a small set of products, like lighting. It's the low hanging fruit. And it lets them build out their system and make sure everything works. Once the infrastructure is there, it makes sense to open it up. I think the same is true for the cell networks. It makes sense to start with something more limited, so you can work out all the details and build the infrastructure. Once it's established, it's time to open it up to even more products, so that you can make even more money. I'm happy to see Verizon's announcement (I haven't seen details yet). I hope they've thought about the history of electric companies, and are thinking about a world in which every device uses the network, and cell phones are just one of many.

It seems that many of the main characters in Heroes, Season 2, are becoming either evil or dumb, and they're coming in pairs. Last season Jessica was evil and Niki was dumb. Let's go through the list for season 2:

Kensei is becoming evil, and Hiro is becoming dumb.

Nathan might be evil, and Matt is dumb.

West seems to be somewhat evil (and somewhat dumb), and Claire is really dumb.

Noah has returned to evil, and The Haitian I'm not sure about.

Elle is evil, but has no dumb counterpart (yet); maybe she'll pair up with Peter, who's not really dumb, but has lost his memory.

Maya is turning to evil, and Alejandro I'm not sure about. Or maybe Sylar is the evil one and Maya is dumb.

Niki/Jessica has turned into evil Jessica, and she's working with Mohinder, who's dumb.

What if Niki/Jessica was the first to experience some sort of evil/dumb pair virus that will spread throughout the hero community? I know, it's a ridiculous theory. I can't even fit Angela, Molly, Micah, Monica, or the mysterious Adam into this list. But every time I see one of the above characters it does seem to me that they're getting more evil or more dumb.

I'm also starting to think Bob may be the only good hero. But then, I'm probably the only one who thought Linderman was good. He was the one who wanted to save the world (from losing 93% of the population) by setting off a small bomb (losing 0.07% of the population). Evil means but a good end. Except he's dead now, the bomb didn't go off, and now the world is in trouble.

There are lots of types of information I want to search and
browse, but it's typically information that I want to see right
now. I also want to be notified about new information and changes
to information. Emails, text messages, and pager alerts are ways
to send this information to me. Feeds are a little different in
that they're controlled by the receiver pulling information
rather than the sender pushing it. All of these systems allow
senders to transmit messages to receivers. However all of them
treat each message separately.

What I really want is information, not a set of messages. The
messages should be grouped and summarized. Gmail for example
groups messages into conversations. Facebook groups and
summarizes friend updates. For example instead of separately
telling me, “A is a friend of D”, “B is a friend of D”, and “C is
a friend of D”, Facebook will tell me “A, B, and C are friends of
D”. They can do this because they know the structure, not only
text, of messages, and also because they know when I last read
messages.

There are lots more things I'd like to see along these lines. For
example if I receive 100 messages telling me my site is down, and
I'm away from my computer, I'd like them to be combined
together. Or if the site is back up maybe those messages should
go away, replaced by a note saying the site went down and back
up. If I get a traffic alert it should expire when the traffic
clears up again. If I read news every hour I want to see what's
new in the past hour but if I read news once a week I want to see
the week's biggest stories, not the 168 hours of updates. When I
come back from vacation I shouldn't have lots of low-importance
and redundant messages. Group and summarize. Show me what's
important. Don't overwhelm me with every individual message.
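As a rough illustration, here's what the grouping step might look like for structured messages like the Facebook example; the tuple format and function name are invented for illustration:

```python
from collections import defaultdict

def group_messages(messages):
    """Group structured messages by the fact they state.
    messages: list of (subject, fact) tuples,
    e.g. ("A", "is a friend of D")."""
    groups = defaultdict(list)
    for subject, fact in messages:
        groups[fact].append(subject)
    return dict(groups)

msgs = [("A", "is a friend of D"),
        ("B", "is a friend of D"),
        ("C", "is a friend of D")]
print(group_messages(msgs))
# {'is a friend of D': ['A', 'B', 'C']}
```

A rendering step on top of this could then emit the single summary line “A, B, and C are friends of D”. The grouping is only possible because the messages are structured data rather than free text, which is the point made above.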

I love maps. I especially love online maps. They let me zoom, pan, and change the features being displayed. Google Maps was truly wonderful after using MapQuest, etc., for so many years. Microsoft and Yahoo also have draggable maps now.

I was planning a trip to Mount Saint Helens recently and tried out some online maps:

Google:

Yahoo:

Microsoft (Live):

There are three things I really want to know when visiting Mount Saint Helens:

Roads. I want to know how to get there. Only Google shows the roads clearly. Yahoo shows dark gray on medium green, which is barely visible. Microsoft shows dark gray on medium gray, which is even worse, and nearly impossible to find unless you already know where to look. Just try to find the western end of NF-99 on Microsoft's maps. None of the services offered as much information as the park maps.

Terrain. I want to know where the volcano is and where the crater is. Only Microsoft shows this at all, and it shows it beautifully. I can see the ravines and mountains and river valleys. It's great! Yahoo and Google show nothing about terrain in their maps. Instead, you have to switch to Satellite view, which works for this volcano but is no match for the clarity of Microsoft's maps. (I'm sure terrain slows down Microsoft map loading quite a bit though; maybe it should be an optional layer.)

Stops. I want to know where I can stop, take pictures, go on a hike, etc. None of the three mapping sites I tried have this information; I instead got some of it from the park map. Another way to get this data would be from geotagged images; if I could see the most popular spots for photos, I'd know where I should go.

The other thing you might notice from these three maps is that Microsoft's typography is excellent. Google has a nice font but surrounds the text with bright yellow or white to increase contrast, which is quite distracting. Yahoo's text is decent and has less distracting contrast. But Microsoft's text is really nice. Instead of a white border they use a subtle white shading to increase contrast. It's very clean. They also use different fonts for different types of features — compare “WASHINGTON”, “Mount St Helens National Volcanic Monument”, “Spirit Lake”, and “Mount St Helens 8365 ft”. It looks more like a “traditional” map than Google or Yahoo.

In the end I used the park map for roads and features, Google maps for driving times and directions, Microsoft maps for getting a general sense of where the scenic areas are, and my Rand McNally U.S. Atlas (paper, not online) to find out which routes were marked “scenic”.

In urban areas Microsoft continues to have good typography with labels that are easier to read than Google's or Yahoo's. They also seem to be placing the labels more intelligently to avoid drawing over important areas. Microsoft also has labels for more items on the map. The terrain doesn't play a role here, although the east side of Mercer island has terrain that explains why the roads curve so much. Google's roads are easiest to see. Yahoo shows each city in its own color, which can be useful at times, and it also makes highway entrances easy to see. Microsoft shows more roads (faintly), which gives me a better sense of which areas are sparse/industrial and which areas are dense/residential.

Which site do I use most often?

In general I get more information out of Microsoft's maps than Google's or Yahoo's, but Microsoft has a few annoying implementation details that keep me from using their maps more: (a) their awful browser detection script rejects me unless I lie about my user agent, and (b) I get a stupid dialog box asking me to install a Windows 3d plugin … even though I don't want 3d, and I'm not even using Windows. There are things about each of the three that I like, but in the end I'm still using Google's maps most often. It's fast and the roads are easy to see, especially in cities.

Update: [2007-11-30] Google now has a “terrain” map mode that gives me what I liked from Microsoft Maps:

Prices are typically driven by supply and demand. I was curious
about the price of gasoline. When I buy a gallon of gasoline,
I pay for it, but others pay for it
too. My purchase increases the aggregate demand. Higher demand
means higher prices. Higher prices means other people pay more
for gas.

How much more do others have to pay?

I can't calculate exactly but with some simplifying assumptions
I can make an estimate:

Limit this calculation to the United States. There are
complex issues that influence gas prices around the world, some
economic and some political, and it's much simpler to make this
estimate with just one country. This would ordinarily not work,
except the next assumption makes it possible:

Gasoline supply in the U.S. is limited by refining capacity in
the U.S. With “maintenance”, “shutdowns”, “inspections”,
fires, and other capacity issues, I believe it's reasonable to
say the supply — at least in the short term — is fixed, and
that everything that is produced is consumed.

In other words, if I buy one extra gallon of gas, other
Americans need to buy one less gallon. What would it take to
make them buy less? Raise the price. By how much?

Let's call the total quantity consumed by everyone else Q. Let's
call the current price P. We want to know how much the price has
to go up to make Q go down by 1. If everyone else (collectively)
buys 1 gallon less, then I can buy that gallon. The key to the
relationship between P and Q is the price
elasticity of demand:

e = %ΔQ / %ΔP = (ΔQ/Q)/(ΔP/P)

What I really want to know is when I spend $D on gas, how much
more do other Americans have to spend? Spending $D means
ΔQ = D/P, which can also be written as D =
ΔQ×P. What everyone else has to spend is
Q×ΔP. So let's compute Q×ΔP. First,
let's rearrange e:

e = (ΔQ/Q)/(ΔP/P) = (ΔQ×P) / (Q×ΔP)

So Q×ΔP = (ΔQ×P)/e = D/e.

When I spend an extra $D on gas, others have to spend an extra
$D/e on gas. That's the answer I was looking for.

Except … what's the value of e?

There are various estimates: 0.2, 0.01, 0.1,
0.034
to 0.077 in 2001-2006, and 0.26.
When I spend an extra $40 on gas, other Americans have to spend
between $153 and $4000. I'm not sure which to believe, but I'm
going to guess it's around 0.1, which means others have to spend
an extra $400 on gas. Where does that money go? To the oil companies.
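The D/e result is easy to evaluate for the elasticity estimates above. A quick sketch:

```python
# Evaluate D/e: when I spend an extra $D on gas, everyone else
# collectively spends an extra $D/e, where e is the price
# elasticity of demand.

def extra_spending_by_others(D, e):
    return D / e

for e in (0.26, 0.2, 0.1, 0.034, 0.01):
    extra = extra_spending_by_others(40, e)
    print(f"e = {e}: others spend an extra ${extra:,.0f}")
```

With e = 0.26 the extra spending is about $154, and with e = 0.01 it's $4,000, which brackets the range given above.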

Let's look at it in reverse: if you found a way to spend $40
less on gas (maybe carpooling, planning errands better, or
driving less aggressively), not only would you save that $40,
the oil companies would miss out on $400 (maybe as much as
$4000), because you'd be helping other Americans spend less on
gas.

I'm not even going to try estimating how much more everyone pays
when someone drives a big SUV instead of a fuel efficient car…

Suppose you want to divide a numeric range (such as 0–1 or 0–23 or 1–365) into even segments. If you know how many segments you have, it's easy; you divide by N. But if you don't know how many segments you will have, and you can't go back once you've divided something, it gets trickier. If you divide into 3 equal segments and need 3, you're at the optimal point. But if you instead need 4 and have already divided into 3 segments, you end up subdividing one segment of length 1/3 into 2, leaving you with 4 segments of length 1/6, 1/6, 1/3, and 1/3.

It's so simple. Why does this work? I don't know. But it's pretty neat.

I first ran across this when I was looking for a way to pick sample points in 1 year of data. I wanted a set that would be roughly evenly spaced, because I wanted to draw a timeseries chart with the results, but I didn't know how much time it would take to analyze the points. So I analyzed one at a time, using the golden ratio to guide me.
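The post doesn't spell out the trick itself, but a standard way to use the golden ratio for incremental, roughly-even sampling is to place the n-th point at the fractional part of n·φ. Here's a sketch under that assumption (the function name is mine):

```python
import math

# Pick sample points one at a time so that any prefix of the
# sequence is roughly evenly spaced: put the n-th point at
# frac(n * phi), scaled into [lo, hi). This is a well-known
# low-discrepancy sequence; whether it's exactly the scheme
# the post used is my assumption.

PHI = (1 + math.sqrt(5)) / 2

def golden_points(n, lo=0.0, hi=1.0):
    """First n sample points in [lo, hi), generated incrementally."""
    return [lo + (hi - lo) * ((i * PHI) % 1.0) for i in range(1, n + 1)]

pts = golden_points(5)
print(sorted(round(p, 3) for p in pts))
# [0.09, 0.236, 0.472, 0.618, 0.854]
```

Each new point lands inside one of the largest remaining gaps, so you can stop after any number of samples and still have reasonably even coverage, which is exactly what's needed when you don't know N in advance.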

I've been playing with Amazon's S3 and EC2, and they look potentially useful. S3 is the storage system. You pay for storage and transfers. EC2 is the computation system. You pay for virtual computers. Their Getting Started Guide for EC2 is pretty good. It describes step by step how to set up your development environment, then gives you a starter virtual machine to play with. I followed the instructions and got Apache and SSH.

The big advantage of EC2 over running your own servers is that you can get more capacity quickly. In fact it's called Elastic Compute Cloud for that reason. If you're running a web service on a conventional hosting system and then are mentioned on Digg, you're either going to run out of capacity, or you're paying for extra capacity that you're not using most of the time. With EC2, you can monitor for the Digg Effect and add more virtual machines to handle the extra traffic, then release them when the Diggers move on. You only pay when you use the machines.

For my own projects though, I'm never going to get hit by Digg. I was hoping to use EC2 as a cheap low-capacity server. I misunderstood the pricing though. I thought it was $0.10/CPU-hour, but it's actually $0.10/clock-hour. When my server is sitting idle, I'm still getting charged. At $0.10/hour, that's over $70/month, and that's a bit too much to pay for an idle server. I'll instead use my Mac Mini at home.

I might still use S3 for off-site backup. I have regular backups at home, but all the backups are … at home. If anything happens to my home, I lose all copies of all my data. S3 charges for storage, uploads, and downloads, and I estimate that after I upload all my photos, I'd pay $4.50/month. That's pretty reasonable for off-site backup. I haven't investigated whether there are off-the-shelf backup solutions for S3. I want something portable (Linux, Windows, Mac) and command line (so I can automate it). I might end up writing my own quick&dirty scripts for this.

If you're starting a web service, you should definitely take a look at S3 and EC2. They're fairly cheap, and the reliability and flexibility may be worth a lot to your company.

Yes, after many many years, and several failed predictions, Emacs 22 is finally released! Find Emacs 22.1 on GNU's site and see the list of changes in the NEWS file. The main features that I can think of: Mac OS X support, Unicode support, Cygwin support, the use of ~/.emacs.d/init.elc instead of ~/.emacs (for faster startup times), separate colors for active/inactive modelines, colors in terminal mode, highlighting of active minibuffers, grep highlighting, drag and drop, mouse support in xterms, tramp for remote file access, an included python mode, IRC, org-mode for keeping notes and appointments, a URL library, the super powerful calc mode, an RSS reader, better keyboard macros, better search and replace, word wrapping mode (only works with fixed width fonts), spreadsheet package, as-you-type compile checking (flymake), and Subversion support. I especially like the ido package, which enhances buffer switching and file opening.

With my home phone, I sometimes get “wrong number” calls. Someone dialed the wrong number, or someone gave out the wrong number, or someone forgot to dial the area code. I also get these types of emails. Other people make a typo when giving out their email address, or they use the wrong domain, or they just don't know their own email address. Here are some examples of emails that should've gone to someone else:

Citibank India emailing me financial information about my account (this has happened for three different people, all of whom accidentally used my email address)

Someone emailing me the design for the anchor bolts he wants me to manufacture

A hotel chain emailing me about work I'm doing for their new hotel

A mattress company emailing me about mattresses I've ordered for that hotel

Someone asking me for a job at my company

Someone who wanted a refund for the flight I booked for him

A pizza restaurant emailing me NDAs I need to sign before I can meet with them

A company that was pleased with my interviews and offered me a job paying $55,000/year, plus a Blackberry and a TiVo (!)

A London jobs site sending me updates on jobs I've applied to

Monster.com India sending me updates on legal jobs

A pregnancy site sending me information about my 27th week of pregnancy

A photo site sending me my password

A dating site sending me potential matches

An email asking about various legal cases I'm involved with

Yahoo sending me accounts for which my email is listed as the secondary email address

Someone sending me pictures from a hike I went on

Invites from several social networking sites from people I don't know

Someone asking me to send my software to an address in … Nigeria

Citibank India especially worries me. They didn't try to verify my email address. What you're supposed to do if you run a web service with accounts:

Let me sign up online.

Mark the account “unverified”.

Send me an email with a verification code or link.

Only once I verify the email address should you mark the account “verified”.

Only send sensitive information to verified accounts.

Citibank India and Monster India do not do this. You should also never send passwords to people over email; send them a link that lets them reset their password.
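The steps above can be sketched as a toy account store; the class and method names here are invented, not any real framework:

```python
import secrets

class Accounts:
    """Minimal sketch of the verify-before-sending flow."""

    def __init__(self):
        self.users = {}  # email -> {"verified": bool, "token": str}

    def sign_up(self, email):
        # Mark the account unverified and generate a verification code.
        token = secrets.token_urlsafe(16)
        self.users[email] = {"verified": False, "token": token}
        return token  # in real life: email a link containing this token

    def verify(self, email, token):
        # Only mark "verified" once the emailed code comes back.
        user = self.users.get(email)
        if user and secrets.compare_digest(user["token"], token):
            user["verified"] = True
        return user is not None and user["verified"]

    def can_send_sensitive(self, email):
        # Only send sensitive information to verified accounts.
        user = self.users.get(email)
        return bool(user and user["verified"])

acc = Accounts()
token = acc.sign_up("amit@example.com")
print(acc.can_send_sensitive("amit@example.com"))  # False: not verified yet
acc.verify("amit@example.com", token)
print(acc.can_send_sensitive("amit@example.com"))  # True
```

Note that the password-reset advice follows the same pattern: rather than emailing a password, you email a one-time token that proves control of the address.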

I'm not sure why I get so many “wrong number” emails, but my guess is that it's because there are lots and lots of people named Amit Patel, and if you search for that name, you end up with me, so people assume I'm the Amit Patel they're looking for. If I'm getting repeated emails or if it seems like an important email, I'll send them an explanation of how they have the wrong email address; other times I just delete the email.

Kaito Nakamura (Hiro's father) likely has some power. I think his power is immortality. Here's why:

In Landslide, he mentions to Hiro that he's been waiting for generations for some spirit/power to manifest, and didn't think it was in Hiro, but now he sees that it is. Has his family been waiting this long, or has Kaito himself been waiting?

In the preview for Season 2, Hiro sees a samurai warrior on a horse. It looks like George Takei's eyes in there. Is it just family resemblance, or is it Kaito himself in there?

Immortality is also time-related, so maybe there's a time theme to the powers in that family.

Another power I'm waiting for is the ability to see alternative futures. Linderman somehow knew that he needed to get Niki and DL together so that he could use Micah's power to help Nathan win the election. If Niki and DL hadn't gotten together, even Isaac's power wouldn't have predicted that Micah would be born or that he would have the power he does. I think we need a different sort of foretelling power than what Isaac has; it's possible Kaito has this. My top choice for Kaito's power though is still immortality.

Update: [2007-09-30] Given the events of Four Months Later, both in revealing Kensei, and in the events on top of the Duveaux Building, it seems unlikely that Kaito is immortal.

Update: [2007-11-09] My original theory was that Kaito was Kensei, and immortal, but although Kaito is not Kensei, it looks like Kensei is indeed immortal. And he's probably trying to destroy the world in an attempt to put his brokenhearted self out of his misery.

Emacs has several packages for dealing with parentheses. Emacs comes with ways to highlight the matching parenthesis when you're on one; try show-paren-mode. One of the newer add-on packages is Nikolaj Schumacher's highlight-parentheses mode, which shows the parentheses that enclose the current cursor position. I tried modifying it to highlight the containing expressions instead of only their parentheses:

Unfortunately, as you can see, it's a mess. I tried better colors (white, gray, etc.) but I just couldn't make it usable. So I gave up on highlighting the regions and went back to highlighting just the parentheses. It's a bit better:

It bolds the parentheses and also the first s-expression inside the opening parenthesis. It doesn't understand when the parentheses begin a form (instead of all the other uses of parentheses), so it sometimes highlights the first s-expression even when it's not special in any way. Despite this wart, I like this form of highlighting so far.

Update: [2007-05-29] However, there is one more thing I wanted. I de-emphasize parentheses by using a lighter color for them; I want the enclosing parentheses to be bold and black. However, I want the enclosing first s-expressions to be bold, but not necessarily black. Note in the above example that the keywords are normally blue, but when enclosing the current point they are black. I fixed this by adding separate highlighting for the enclosing parentheses and the first s-expression:
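My modified code isn't reproduced here, but tuning the stock package looks roughly like this. This is a sketch, not my exact configuration; hl-paren-colors is the package's list of colors for successively outer parentheses:

```elisp
;; Sketch: dark colors for the parentheses enclosing point, using
;; Nikolaj Schumacher's highlight-parentheses package.
(require 'highlight-parentheses)
(setq hl-paren-colors '("black" "gray30" "gray50" "gray70"))
(add-hook 'emacs-lisp-mode-hook 'highlight-parentheses-mode)
```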

For my car, after experimenting with different driving styles to see what works best, I'm now getting 26–27 mpg city, 31–32 mpg highway at 75 mph, and 32–33 mpg at 65 mph. The EPA rating for my car is 25 city / 31 highway. However, the EPA is now lowering all of its estimates to match the average driver, and with the new estimates, my car is listed as 22 city, 29 highway*. This might make people feel better about their own driving habits instead of making them think about improving them.

High gas prices lead many people to look at gas mileage, but gas prices are not that important. Everyone looks at gas prices because they're printed in big bold numbers at every gas station. Fewer people look at their gas mileage. Learn techniques for using less gas while driving: when the light turns red, take your foot off the gas; watch for light timing (many lights are timed so that if you drive at the speed limit, you'll get more greens); for city driving, don't accelerate quickly, and don't drive so fast. Your driving habits make more of a difference than which gas station you go to. And even fewer people look at how much they drive. Plan ahead. Reduce the number of trips you take, and combine multiple errands into one trip. Move closer to your workplace (this is one reason renters are better off than home buyers—a topic for another blog post). If you're looking to save money on gas, how much you drive is probably the place you should look first. Keep track of miles (or gallons) per week.
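To see why how much you drive matters more than where you buy gas, here's a rough comparison with made-up numbers:

```python
def annual_fuel_cost(miles_per_year, mpg, price_per_gallon):
    """Fuel cost is driven by gallons burned: miles / mpg."""
    return miles_per_year / mpg * price_per_gallon

# Baseline: 12,000 miles/year at 25 mpg, $3.00/gallon.
base = annual_fuel_cost(12000, 25, 3.00)

# Option 1: find a station that's 10 cents/gallon cheaper.
cheap_gas = annual_fuel_cost(12000, 25, 2.90)

# Option 2: drive 20% fewer miles at the original price.
fewer_miles = annual_fuel_cost(9600, 25, 3.00)

print(base - cheap_gas)    # modest savings from the cheaper station
print(base - fewer_miles)  # much larger savings from driving less
```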

It's nice to see the EPA adjusting the ratings, but the lower estimates don't match what I've measured with my own driving. The old estimates match closer.

*This is just an estimate that doesn't take into account wind resistance. My car's coefficient of drag is 0.31, and as a result my real-world highway gas mileage is higher than the estimates. It might be better to take into account drag area as well.

If you've seen 0.07% and Five Years Gone, you'll know that everything has gone crazy. People with powers are being hunted in the future.

But we already expected that.

Linderman, in 0.07%, says that he wants to help the world unite. When I saw that episode, I wondered, “unite against what?” Either Linderman is stupid, and thinks the world will be great, or he's smart, and thinks the world will unite against people with powers. In Five Years Gone we see that indeed, the world united against people with powers. I suspect Linderman knew this would happen.

We also hear from Linderman that he used to know other people with powers, but they turned to the dark side. He may not think highly of other people with powers.

We also learn that Linderman is behind Primatech Paper, which is trying to track down and sometimes kill people with powers, even though Linderman has powers.

In Five Years Gone we hear that the Linderman Act is what led to persecution of people with powers. If it was named after Linderman or funded by him, that would be consistent with Linderman not liking other people having powers.

So I think some experience in Linderman's life led him to think other people with powers are dangerous, and that he needs to save the world by eliminating everyone else with powers.

Sylar too is quite interested in eliminating everyone else with powers. From the very beginning he's been killing them. With his first victim, he talked about “fixing” Davis by killing him. When Sylar became President, he talked about making tough decisions to save the world. He also talked about eliminating all competition.

So it seems that both Linderman and Sylar have the same objective: remove everyone else with powers. They both talk about “helping”. Sylar's job was fixing things; Linderman's job was healing people. Sylar and Linderman seem similar in many respects.

I'm also quite impressed by Heroes in that I can't predict what's going to happen, so I'm likely wrong about Linderman too.

Monday, April 30, 2007

Update: [2018-10-29] As this blog post is over 10 years old, I have posted my configuration as of 2018

Tabs to show overlapping windows are becoming more common these days, especially in terminals, browsers, and chat programs. The idea is that a single window can contain several … buffers. Emacs already has this, and has had this for a long time. It's just that by default Emacs doesn't have visible tabs to show the buffers. XEmacs and SXEmacs can show tabs with “buffer tabs”; for GNU Emacs 21 you need to install TabBar mode (thanks to Jemima for finding this), which gives you tabs like this:

Well, it doesn't look like that by default. The standard settings give each tab a 3d button appearance. I wanted something simpler, so I changed the settings:

This makes the currently selected tab match my default background (#f2f2f6), removes the 3d borders, and adds a bit of space between the tabs. I also define Alt+J and Alt+K to switch tabs; I use the same keys in other tabbed apps, because they're easier to type than moving my hands to the arrow keys.
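The key bindings are the easy part; they look something like this (a sketch, not my exact configuration; the face tweaks for the flat look aren't shown):

```elisp
;; Sketch: cycle TabBar tabs with Alt+J / Alt+K, as in other tabbed apps.
(require 'tabbar)
(tabbar-mode 1)
(global-set-key (kbd "M-j") 'tabbar-backward)
(global-set-key (kbd "M-k") 'tabbar-forward)
```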

TabBar-mode looks neat, but I'm not sure how useful it will be. In Emacs I have lots of buffers—more than will fit as tabs. The main thing I like so far is the keys for cycling between related buffers, but as the number of buffers grows it becomes faster to switch directly to the buffer I want.

Edit: [2010-11-20] I like tabbar-mode but I also find myself using other buffer switching quite a bit. I'm using tabbar within a project, and ido-switch-buffer for moving between projects. I've changed the tabbar groups to show only buffers in the same directory:

I've met lots of people who complain about Lisp and lots of people (especially Lisp folks) who complain about Python. Lisp is very elegant. There's something nice about its syntax (don't laugh!). The uniformity lets you do all sorts of neat things once you have macros. The basic syntactic construct in Lisp is the list, (a b c …), and it can mean lots of things:

Sometimes (f x) is a function call, and f is the name of the function, and x is evaluated as an argument.

Sometimes (f x) is a macro invocation, and f is the name of the macro, and x may be treated specially (it's up to the macro to decide).

Sometimes (f x) is a binding. For example, (let ((f x)) …) binds a new variable f to the value x.

Sometimes (f x) is a list of names. For example, (lambda (f x) …) creates a function that has parameters named f and x.

Sometimes (f x) is a literal list. For example, (quote (f x)).

Sometimes (f x) is interpreted in some other way because it's enclosed inside a macro. How it's interpreted depends on the macro definition.
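A short Emacs Lisp snippet can show several of these roles for the same shape side by side:

```elisp
;; The same (f x) shape, four different meanings:
(defun f (x) (* 2 x))    ; defines a function named f

(f 3)                    ; (f 3) is a function call => 6
(let ((f 3))             ; here (f 3) is a binding: f is a variable
  f)                     ; => 3
(lambda (f x) (+ f x))   ; here (f x) is a parameter list
'(f x)                   ; here (f x) is a literal list of two symbols
```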

The ability to use the same syntactic form for so many different things gives you great power. You can define all sorts of cool things this way. I'm writing a pattern matcher that uses list expressions to define patterns and macros to interpret those list expressions. Macros are great for writing elegant, concise code.

The trouble is that you can't easily tell just by looking at (f x) how to interpret it. It could do anything. You'd think maybe a text editor like Emacs (which uses Lisp as its native language) would be able to help you in some way. But no. Emacs can't tell either. So how can you, the person reading the code, figure it out? Well, you can, but it takes a lot of effort. You can't determine the syntactic meaning of code (e.g., whether it's a definition or an expression) by looking at the code locally; you have to know a lot more of the program to figure it out. Lisp's syntactic strength is at the same time a weakness.

Python on the other hand has no macros and doesn't give you much to write concise, abstract, elegant code. There's a lot of repetition and many times it's downright verbose. But where Lisp is nice to write and hard to read, Python makes the opposite tradeoff. It's easy to read. You can determine how to interpret something—a string, a list, a function call, a definition—just by looking at the code locally. You never have to worry that somewhere in some other module someone defined a macro that changes the meaning of everything you're reading. By restricting what people can write, the job of the reader becomes easier.

Lisp seems to be optimized for writing code; Python seems to be optimized for reading it. Which you prefer may depend on how often you write new code vs. read unfamiliar code; I'm not entirely sure. What bothers me the most though is not that these two languages do different things, but that the people who argue about it seem to think that there is one “best” answer, and don't see that this is a tradeoff. When I'm writing code I prefer Lisp; when I'm reading code I prefer Python. I think this is an inherent tradeoff—any added flexibility for the writer means an added burden for the reader, and there is no answer that will be right for everyone.

–Amit

P.S. When I read debates online, I have a bias towards the people who view these things as tradeoffs and a bias against the people who say there's only one right answer and everyone else is stupid or clueless. This has sadly pushed me away from Lisp, the Mac, and other systems that I think are really good but have fanatical communities. When you're in a debate, consider that the other person might not be stupid, and there might be good reasons for his or her choices. You'll not only learn something about their position, but you'll be more likely to get people to listen to you and adopt your point of view.

In shell buffers inside Emacs (M-x shell), many programs want to use color in useful ways. For example, grep can highlight the portion of the line that matches the search pattern. Here's what I use to make Emacs and XEmacs show colors in shell windows:
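For GNU Emacs, the essential piece is the ansi-color package; a minimal sketch (not necessarily the complete setup, and the XEmacs side may differ):

```elisp
;; Interpret ANSI color escape codes in M-x shell buffers.
(require 'ansi-color)
(add-hook 'shell-mode-hook 'ansi-color-for-comint-mode-on)
```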

Shell mode is handy but I find that I often just switch to a terminal window, mainly because I can pipe commands through less. If the output is very short, either Emacs or a terminal is fine. If it's of medium length, Emacs is usually nicer, since it lets me search and cut and paste easily. If the command has very long output, the terminal is nicer, because less lets me see just parts of the output. I haven't found a way in Emacs to deal with processes that output lots of lines.

Emacs already has decent completion capabilities. Any time there's a list of possible answers, you can press Tab for completion. When there's more than one possible completion, it brings up a list for you.

I like to see the list of possible completions without pressing Tab or ?. In XEmacs, I use two packages, iswitchb and icomplete, to get a list of completions as I type, at least for switching buffers and for minibuffer inputs:
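In GNU Emacs, the minimal version is just turning the two modes on (a sketch; the XEmacs incantation may differ slightly):

```elisp
;; Show completions as you type: iswitchb for buffer switching,
;; icomplete for minibuffer input.
(iswitchb-mode 1)
(icomplete-mode 1)
```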

If you're really into the power of completion, be sure to check out the icicles package by Drew Adams. It has a lot more features, and it has some things that look incredibly useful. It works for buffers, files, and the minibuffer, and it allows you to chain together multiple commands in powerful ways.

If icicles looks so good, why am I using ido and icomplete? It's because they come with Emacs22. The bar is higher for third party packages because it's an added dependency. I can't just tell a friend to put something into their .emacs; I have to tell them to download it and add to their load-path and so on. I wish there was a standard Emacs package system. I do plan to try out icicles and other packages once I've finished exploring the standard set of packages that comes with Emacs22.

Update: [2015] I now use helm, even though it's a third party dependency. Since I wrote this post in 2007, Emacs has added a package system, so it's much easier to try out third party packages.

Depending on the season, some counties at some times have observed Daylight Saving Time.

(Thanks to Google Current for bringing this to my attention. Just watch the beginning.) What time it is in Indiana depends on Federal law, State law, and the choice of County. But the rules are so confusing that sometimes people just do their own thing.

Even worse, Indiana just can't win:

They want the state to be on the same time zone.

They want the northwest part of the state to match Chicago, which is Central Time.

They want the southeast part of the state to match Cincinnati, which is Eastern Time.

The only solution is for Chicago and Cincinnati to be on the same time zone.

Long ago every town had its own time. Time zones were introduced as rail travel became more common, and people interacted with others outside of their own town more often. As more of the country becomes connected through trade, transportation, the media, and the Internet, the burden of different people being on different times increases. Just as we switched from every town having its own time to every zone having its own time, I think we need to switch to the entire country having its own time (just like China and India and most of Western Europe). Eventually, as air and high-speed rail travel becomes commonplace and we begin to live in space colonies, we will have to abolish time zones altogether and use UTC.

Update: [2014-09-26] Watch this video if you want to get a sense of just how bad it is.

Disclaimer: I am not a biologist. I'm interested in this topic but I haven't studied it that much. These are my random thoughts on how a species can form:

It seems to me that we only “need” new species when things are going badly. In these stressful situations, the population of an existing species will decline. A small population is more likely to lead to inbreeding/incest. What happens with inbreeding? We get increased mutations. The history of royalty in Europe has some examples (hemophilia, six-fingered folk, etc.). It's exactly when things are going badly with a species that a new species has a chance, and I think it's no coincidence that mutation rates are higher then.

More specifically, I think susceptibility to mutation is an evolved characteristic. Species that “fixed” the problem with genetic errors would not evolve, and would eventually be wiped out. Only the species that had mutations would survive. Secondly, I think the mutation rate varies, and it responds to stress and inbreeding, not by accident, but because a variable mutation rate evolved as well. When a population no longer fits well into the environment, it needs to increase the mutation rate so that it can turn itself into a new species.

A consequence of this line of thinking is that when populations are large, we should rarely see new species form. We shouldn't see many new species forming until the environment changes drastically.

I also think in extreme cases, a very small population might lead to asexual reproduction with a high mutation rate. Species that allowed for asexual reproduction in rare cases are more likely to survive.
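One way to see the effect of population size is a toy genetic-drift simulation. This isn't the stress-driven mutation I described above, and it's a sketch rather than real biology, but it shows how a new variant spreads or dies out much faster in a small population:

```python
import random

def generations_to_fixation(pop_size, seed):
    """Neutral Wright-Fisher drift: a variant starts in one individual;
    each generation resamples the whole population. Returns how many
    generations until the variant is either lost or takes over."""
    rng = random.Random(seed)
    count = 1
    generations = 0
    while 0 < count < pop_size:
        freq = count / pop_size
        count = sum(rng.random() < freq for _ in range(pop_size))
        generations += 1
    return generations

small = [generations_to_fixation(10, s) for s in range(200)]
large = [generations_to_fixation(1000, s) for s in range(200)]
print(sum(small) / len(small))  # small populations settle quickly
print(sum(large) / len(large))  # large ones take far longer on average
```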

If small populations lead to new species, what would we observe?

If there are several populations of a species, and one of them mutates into a new species, we will see a new species and call it a “branch” on the tree of life.

If on the other hand the small population is the only surviving instance of a species, then as it turns into a new species, the old species will be wiped out. There may be no record of it. In the tree of life, we would only see two distinct species if the creatures are physically different and if both populations left fossils.

I think most new species are of the latter form, and never show up in the fossil record. How could I call this a new species then? If we had a time machine and brought a creature of the old species forward in time, and it tried to breed with a creature of the new species, we'd be able to decide whether the old and new creatures are genetically compatible. If they are, I'd say they are the same species. But in a lot of cases, they won't be able to interbreed, and we have a new species. We can't really test this without a time machine.

To summarize: I think that variable-rate mutation is an evolved behavior that shows itself when populations are small and stressed, and that there have been a lot more species than the ones we see in the fossil record.

One thing that really bugs me about Emacs is the way it clutters up my directories with backup files (filenames ending in ~) and autosave files (filenames starting with #). Fortunately there's an easy way to move them elsewhere. Unfortunately the technique isn't consistent across Emacs versions. In GNU Emacs 21, you can set backup-directory-alist and auto-save-file-name-transforms. In XEmacs 21, you can set bkup-backup-directory-info and auto-save-directory. Here's what I do in GNU Emacs:
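For GNU Emacs, the idea looks roughly like this (a sketch; the directory names are my own choices, not a standard):

```elisp
;; Keep backup (~) and autosave (#) files out of working directories.
(setq backup-directory-alist
      '(("." . "~/.emacs.d/backups/")))
(setq auto-save-file-name-transforms
      '((".*" "~/.emacs.d/autosaves/" t)))
```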

Long ago, I added a key to Emacs to quickly close a buffer. I used to use Alt+F3, because that's what I used with Turbo Pascal. These days I use Cmd+W to match the Macintosh key for closing windows. The trouble is that when it's easy to close buffers, I do it often, and I occasionally close a buffer I shouldn't have closed. The solution is to make Emacs ask me for confirmation if the buffer hasn't been saved.

In both GNU Emacs and XEmacs, the answer is in a variable named kill-buffer-query-functions:

kill-buffer-query-functions is a variable defined
in `C source code'.
Documentation:
List of functions called with no args to query before
killing a buffer. The buffer being killed will be current
while the functions are running. If any of them returns nil,
the buffer is not killed.

Emacs will call each function listed in kill-buffer-query-functions before killing a buffer. If any of these functions returns nil, Emacs will not kill the buffer. So I defined a function that would ask for confirmation if the buffer hadn't been saved:
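The function itself can be as simple as this sketch (the function name is my own):

```elisp
;; Ask before killing a file-visiting buffer with unsaved changes.
(defun my/ask-before-killing-modified-buffer ()
  (if (and (buffer-modified-p) (buffer-file-name))
      (y-or-n-p (format "Buffer %s is modified; kill anyway? "
                        (buffer-name)))
    t))
(add-hook 'kill-buffer-query-functions
          'my/ask-before-killing-modified-buffer)
```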

I've been reading The Paradox of Choice, a book about more choices not always being better. There's also a one hour talk by the author. It made me think about how we might model choices, either for understanding our own behavior or for writing simulation games. The author of the book argues that at an abstract level people understand the benefit of additional choices but ignore their costs, whereas in practice people are affected by those costs, albeit not always in a rational way.

To model the benefit of choice, I'm going to say that if there are N-1 choices, and you are presented with 1 additional choice, your benefit has increased. By how much? It's only a benefit if the new choice was better. Since there are now N choices, let's say the probability the new one is better is 1/N. If it is better, by how much is it better? I think you can build an expectation function based on the distribution (for example, a Gaussian distribution), but I'm going to be simplistic and say the benefit is constant. The new item is always 1 unit of value better than the old one. In practice I think the benefit decreases as the number of choices goes up, so I'm being generous here. So the added benefit of the Nth choice is the probability it's better multiplied by the amount it's better: 1/N * 1. To determine the total benefit, we have to sum from choice 1 to choice N, and we end up with something approximately equal to the logarithm: ln(N).

To model the cost of choice, I'm going to say that you have to make the comparison between the new item and the old items, even if the new item isn't better. You might compare to each of the old items, giving a cost of N, or maybe you only compare to the best of the previous items, giving a cost of 1. I'm going to be generous here and say the added cost is just 1. The total cost then is 1 for each new item, or a total of N.
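This benefit-minus-cost model is easy to compute. Here's a sketch in Python; I've assumed a cost of 0.15 units per comparison instead of 1 (since the scale is arbitrary anyway, and a smaller cost makes the rise-then-fall shape easier to see):

```python
import math

def benefit(n):
    """Total benefit of n choices: sum of 1/k for k = 1..n, roughly ln(n)."""
    return sum(1.0 / k for k in range(1, n + 1))

def net_value(n, cost_per_choice=0.15):
    """Benefit minus a constant evaluation cost per choice."""
    return benefit(n) - cost_per_choice * n

# Net value rises while the marginal benefit 1/n exceeds the cost,
# then falls once each extra choice costs more than it's worth.
values = [net_value(n) for n in range(1, 51)]
best_n = values.index(max(values)) + 1
print(best_n)
```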

So now we have a model in which both the benefits and costs go up as the number of choices increases. Each additional choice brings smaller and smaller benefits but larger and larger costs. Here's a plot of what this might look like:

Initially having choices greatly adds to your well-being. However, the rising costs eventually overtake the diminishing benefits, and the total value of having choices goes down. This seems to be the main message of the book: that additional choices do not always make us better off.

When I defined the model, I decided to be generous. The incremental benefit is 1 in my model, but it's probably decreasing as the number of choices goes up. This means the total benefit is lower than in my model. The incremental cost is 1 in my model, but it's probably increasing as the number of choices goes up, because people at some level will compare to all the alternatives, not just one. This means the total cost is higher than in my model. So the graph above is optimistic; in reality it probably drops even faster.

Note that the graph has no scale. That's because I think the costs and benefits will depend a great deal on the situation. When buying toothpaste, the benefit of more choices is pretty limited. But when choosing a job or spouse, it makes a much larger impact on your life. The main point is that additional choices will eventually not be worth the cost of evaluating them, so at some point you should just make your decision and not worry about it anymore.