The first national legislation of its kind in the world calls for a more humane death for lobsters: “rendering them unconscious” before plunging them into scalding water. Two methods are recommended: electrocution, or sedating the lobster by dipping it into saltwater and then thrusting a knife into its brain.

The same law also gives domestic pets further protections; for example, dogs can no longer be punished for barking.

The measure is part of the broad principle of “animal dignity” enshrined in Switzerland’s Constitution; Switzerland is the only country with such a provision. The Constitution already specifies how various species must be treated and that animals need socialization.

That means cats must have daily visual contact with other felines, and hamsters or guinea pigs must be kept in pairs. And anyone who flushes a pet goldfish down the toilet is breaking the law.

But really, this is just an excuse to revisit a sublime piece of journalism that David Foster Wallace wrote in 2004 for Gourmet magazine called “Consider the Lobster” (later collected in a book of the same name). In it, Wallace travels to the Maine Lobster Festival and comes away asking questions similar to those the Swiss grappled with in formulating their law.

So then here is a question that’s all but unavoidable at the World’s Largest Lobster Cooker, and may arise in kitchens across the U.S.: Is it all right to boil a sentient creature alive just for our gustatory pleasure? A related set of concerns: Is the previous question irksomely PC or sentimental? What does “all right” even mean in this context? Is it all just a matter of individual choice?

Wallace being Wallace, he then dives deep into these questions at a length of several thousand words, a bunch of which are:

Since, however, the assigned subject of this article is what it was like to attend the 2003 MLF, and thus to spend several days in the midst of a great mass of Americans all eating lobster, and thus to be more or less impelled to think hard about lobster and the experience of buying and eating lobster, it turns out that there is no honest way to avoid certain moral questions.

There are several reasons for this. For one thing, it’s not just that lobsters get boiled alive, it’s that you do it yourself — or at least it’s done specifically for you, on-site. As mentioned, the World’s Largest Lobster Cooker, which is highlighted as an attraction in the Festival’s program, is right out there on the MLF’s north grounds for everyone to see. Try to imagine a Nebraska Beef Festival at which part of the festivities is watching trucks pull up and the live cattle get driven down the ramp and slaughtered right there on the World’s Largest Killing Floor or something — there’s no way.

The intimacy of the whole thing is maximized at home, which of course is where most lobster gets prepared and eaten (although note already the semiconscious euphemism “prepared,” which in the case of lobsters really means killing them right there in our kitchens). The basic scenario is that we come in from the store and make our little preparations like getting the kettle filled and boiling, and then we lift the lobsters out of the bag or whatever retail container they came home in …whereupon some uncomfortable things start to happen. However stuporous the lobster is from the trip home, for instance, it tends to come alarmingly to life when placed in boiling water. If you’re tilting it from a container into the steaming kettle, the lobster will sometimes try to cling to the container’s sides or even to hook its claws over the kettle’s rim like a person trying to keep from going over the edge of a roof. And worse is when the lobster’s fully immersed. Even if you cover the kettle and turn away, you can usually hear the cover rattling and clanking as the lobster tries to push it off. Or the creature’s claws scraping the sides of the kettle as it thrashes around. The lobster, in other words, behaves very much as you or I would behave if we were plunged into boiling water (with the obvious exception of screaming). A blunter way to say this is that the lobster acts as if it’s in terrible pain, causing some cooks to leave the kitchen altogether and to take one of those little lightweight plastic oven timers with them into another room and wait until the whole process is over.

The analysis, which was not limited to studies conducted in the U.S. and Canada, showed that GMO corn varieties have increased crop yields worldwide by 5.6 to 24.5 percent compared to non-GMO varieties. The researchers also found that GM corn crops had significantly lower levels of mycotoxins (up to 36.5 percent lower, depending on the species), toxic chemical byproducts of fungal colonization of crops.

Some have argued that GMOs in the U.S. and Canada haven’t increased crop yields and could threaten human health; this sweeping analysis found just the opposite.


For this study, published in the journal Scientific Reports, a group of Italian researchers took over 6,000 peer-reviewed studies from the past 21 years and performed what is known as a “meta-analysis,” a cumulative analysis that draws from hundreds or thousands of credible studies. This type of study allows researchers to draw conclusions that are more expansive and more robust than what could be taken from a single study.

Repairing Misinformation

There has been, for a variety of largely unscientific reasons, serious concern surrounding the effects of GMOs on human health. This analysis confirms not only that GMOs pose no risk to human health, but that they could actually have a substantive positive impact on it.

Mycotoxins, chemicals produced by fungi, are both toxic and carcinogenic to humans and animals. A significant percentage of non-GM and organic corn contains small amounts of mycotoxins. These chemicals are often removed by cleaning in developing countries, but the risk still exists.

GM corn has substantially fewer mycotoxins because the plants are modified to suffer less damage from insects. Insect damage weakens a plant’s immune system and makes it more susceptible to colonization by the fungi that produce mycotoxins.

In their analysis, the researchers stated that this study allows us “to draw unequivocal conclusions, helping to increase public confidence in food produced with genetically modified plants.”

While questions will likely still be raised as GMOs are incorporated into agriculture, this analysis puts some serious concerns to rest. Additionally, this information might convince farmers and companies to consider the potential health and financial benefits of using genetically modified corn. Some are already calling this meta-analysis the “final chapter” in the GMO debate.

Epistemic status: Reasonably confident, but I should probably try to back this up with numbers about how often elementary results actually do get missed.

Attention conservation notice: More than a little rambling.

Fairly regularly you see news articles about how some long-standing problem that has stumped experts for years has been solved, usually with some nice simple solution.

This might be a proof of some mathematical result, a translation of the Voynich manuscript, a theory of everything. Those are the main ones I see, but I’m sure there are many others that I don’t see.

These are almost always wrong, and I don’t even bother reading them any more.

The reason is this: If something is both novel and interesting, it requires an explanation: Why has nobody thought of this before?

Typically, these crackpot solutions (where they’re not entirely nonsensical) are so elementary that someone would surely have discovered them before now.

Even for non-crackpot ideas, I think this question is worth asking when you discover something new. As well as being a useful validity check for finding errors and problems, if there is a good answer then it can often be enlightening about the problem space.

Potentially, it could also be used as a heuristic in the other direction: If you want to discover something new, look in places where you would have a good answer to this question.

There are a couple of ways this can play out, but most of them boil down to numbers: If a lot of people have been working on a problem for a long time during which they could have discovered your solution, they probably would have. As nice as it would be to believe that we were uniquely clever compared to everyone else, that is rarely the case.

So an explanation basically needs to show some combination of:

Why not many people were working on the problem

Why the time period during which they could have discovered your technique is small

The first is often a bad sign! If not many people work on the problem, it might not be very interesting.

This could also be a case of bad incentives. For example, I’ve discovered a bunch of new things about test case reduction, and I’m pretty sure most of that is because not many people work on test case reduction: It’s a useful tool (and I think the problem is interesting!), but it’s a very niche problem at a weird intersection of practical needs and academic research where neither side has much of a good incentive to work on it.

As a result, I wouldn’t be surprised if an appreciable percentage of the person-hours ever spent on test-case reduction were mine! Probably not 10%, but maybe somewhere in the region of 1-5%. This makes it not very surprising for me to have discovered new things about it even though the end result is useful.

More often I find that I’m just interested in weird things that nobody else cares about, which can be quite frustrating and can make it difficult to get other people excited about your novel thing. If that’s the case, you’re probably going to have a harder time marketing your novel idea than discovering it.

The more interesting category of problem is the second: Why have the people who are already working on this area not previously thought of this?

The easiest way out of this is simply incremental progress: If you’re building on some recent discovery then there just hasn’t been that much time for them to discover it, so you’ve got a reasonable chance of being the first to discover it!

Another way is by using knowledge that they were unlikely to have – for example, by applying techniques from another discipline with little overlap in practice with the one the problem is from. Academia is often surprisingly siloed (but if the problem is big enough and the cross-disciplinary material is elementary enough, this probably isn’t sufficient. It’s not that siloed).

An example of this seems to be Thomas Royen’s recentish proof of the Gaussian Correlation Inequality (disclaimer: I don’t actually understand this work). He applied some fairly hairy technical results that few people working on the problem were likely to be familiar with, and as a result was able to solve something people had been working on for more than 50 years.

A third category of solution is to argue that everyone else had a good chance of giving up before finding your solution: e.g. If the solution is very complicated or involved, it has a much higher chance of being novel (and also a much higher chance of being wrong of course)! Another way this can happen is if the approach looks discouraging in some way.

Sometimes all of these combine. For example, I think the core design of Hypothesis is a very simple, elegant idea that just doesn’t seem to have been implemented before (I’ve had a few people dismissively tell me they’ve encountered the concept before, but they never could point me to a working implementation).

I think there are a couple of reasons for this:

Property-based testing just doesn’t have that many people working on it. The number might top 100, but I’d be surprised if it topped 200 (Other random testing approaches could benefit from this approach, but not nearly as much. Property-based testing involves lots of tiny generators and thus feels many of the problems more acutely).

Depending on how you count, there’s maybe been 20 years during which this design could have been invented.

Simple attempts at this approach work very badly indeed (In a forthcoming paper I have a hilarious experiment in which I show that something only slightly simpler than what we do completely and totally fails to work on the simplest possible benchmark).

So there aren’t that many people working on this, they haven’t had that much time to work on it, and if they’d tried it it probably would have looked extremely discouraging.

In contrast, I have spent a surprising amount of time on it (largely because I wanted to and didn’t care about money or academic publishing incentives), and I came at it the long way around, starting from a system I knew worked, so it’s not that surprising that I was able to find it when nobody else had (and no “I’m so clever” explanations are required).

In general there is of course no reason that there has to be a good explanation of why something hasn’t been discovered before. There’s no hard cut-off where something goes from “logically must have been discovered” to “it’s completely plausible that you’re the first” (discontinuous functions don’t exist!); it’s just a matter of probabilities. Maybe it’s very likely that somebody would have discovered it before, but maybe you just got lucky. There are enough novel things out there that somebody is going to get lucky on a fairly regular basis; it’s probably just best not to count on it being you.

PS. I think it very unlikely this point is novel, and I probably even explicitly got it from somewhere else and forgot where. Not everything has to be novel to be worthwhile.