lcamtuf's blog

May 04, 2017

"It's tough to make predictions, especially about the future." - variously attributed to Yogi Berra and Niels Bohr

Right. So let's say you are visited by transdimensional space aliens. There's some old-fashioned probing, but eventually, they get to the point. They outline a series of apocalyptic prophecies, beginning with the surprise 2032 election of Dwayne Elizondo Mountain Dew Herbert Camacho as the President of the United States, followed by a limited-scale nuclear exchange with the Grand Duchy of Ruritania in 2036, and culminating with the extinction of all life due to a series of cascading Y2K38 failures that start at an Ohio pretzel reprocessing plant. Long story short, if you want to save mankind, you have to warn others of what's to come.

But there's a snag: when you wake up in a roadside ditch in Alabama, you realize that nobody is going to believe your story! If you come forward, your professional and social reputation will be instantly destroyed. If you're lucky, the vindication of your claims will come fifteen years later; if not, it might turn out that you were pranked by some space alien frat boys who just wanted to have some cheap space laughs. The bottom line is, you need to be certain before you make your move. You figure this means staying mum until Election Day 2032.

But wait, this plan is also not very good! After all, how could your future self convince others that you knew about President Camacho all along? Well... if you work in information security, you are probably familiar with a neat solution: write down your account of events in a text file, calculate a cryptographic hash of this file, and publish the resulting value somewhere permanent. Fifteen years later, reveal the contents of your file and point people to your old announcement. Explain that you must have been in possession of this very file back in 2017; otherwise, you would not have known its hash. Voila - a commitment scheme!
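A minimal sketch of this commitment scheme in Python - the choice of SHA-256 and the file handling here are illustrative assumptions, not part of the original account:

```python
import hashlib

def commit(path: str) -> str:
    """Hash the prediction file; publish only the hex digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def reveal(path: str, published_digest: str) -> bool:
    """Years later: anyone can re-hash the revealed file and compare
    it against the digest published back in 2017."""
    return commit(path) == published_digest
```

The binding property is what matters here: without a way to find a second file with the same digest, the published hash pins you to one specific document.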

Although elegant, this approach can be risky: historically, the usable life of cryptographic hash functions seemed to hover somewhere around 15 years - so even if you pick a very modern algorithm, there is a real risk that future advances in cryptanalysis could severely undermine the strength of your proof. No biggie, though! For extra safety, you could combine several independent hash functions, or increase the computational complexity of the hash by running it in a loop. There are also some lesser-known, hash-based designs, such as SPHINCS, that are built with different trade-offs in mind and may offer longer-term security guarantees.
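For illustration, a belt-and-suspenders digest along those lines might look like this - the pairing of SHA-256 with SHA-3 and the iteration count are arbitrary choices of mine, not a vetted construction:

```python
import hashlib

def hardened_digest(data: bytes, rounds: int = 100_000) -> str:
    # Concatenate digests from two unrelated hash families, so that
    # a future break of either one alone does not doom the commitment.
    d = hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()
    # Iterating the hash multiplies the cost of any brute-force
    # shortcut by the number of rounds.
    for _ in range(rounds):
        d = hashlib.sha512(d).digest()
    return d.hex()
```

Verification fifteen years later is the same as before: re-run the function over the revealed file and compare the output to the published value.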

Of course, the computation of the hash is not enough; it needs to become an immutable part of the public record and remain easy to look up for years to come. There is no guarantee that any particular online publishing outlet is going to stay afloat that long and continue to operate in its current form. The survivability of more specialized and experimental platforms, such as blockchain-based notaries, seems even less clear. Thankfully, you can resort to another kludge: if you publish the hash through a large number of independent online venues, there is a good chance that at least one of them will be around in 2032.

(Offline notarization - whether of the pen-and-paper or the PKI-based variety - offers an interesting alternative. That said, in the absence of an immutable, public ledger, accusations of forgery or collusion would be very easy to make - especially if the fate of the entire planet is at stake.)

Even with this out of the way, there is yet another profound problem with the plan: a current-day scam artist could conceivably generate hundreds or thousands of political predictions, publish the hashes, and then simply discard or delete the ones that do not come true by 2032 - thus creating an illusion of prescience. To convince skeptics that you are not doing just that, you could incorporate a cryptographic proof of work into your approach, attaching a particular CPU time "price tag" to every hash. The future you could then claim that it would have been prohibitively expensive for the former you to attempt the "prediction spam" attack. But this argument seems iffy: a $1,000 proof may already be too costly for a lower middle class abductee, while a determined tech billionaire could easily spend $100,000 to pull off an elaborate prank on the entire world. Not to mention, massive CPU resources can be commandeered with little or no effort by the operators of large botnets and many other actors of this sort.
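A hashcash-style proof of work is one way to attach such a price tag. The sketch below is a toy version; the difficulty parameter would need real-world tuning to approximate any particular dollar cost:

```python
import hashlib
from itertools import count

def prove(message: bytes, difficulty_bits: int = 20) -> int:
    """Find a nonce such that SHA-256(message || nonce) starts with
    `difficulty_bits` zero bits; expected work is ~2**difficulty_bits
    hash evaluations."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        h = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce

def verify(message: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
    """Checking the proof costs a single hash, regardless of how
    expensive it was to produce."""
    h = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry is the point: each published commitment carries a visible CPU-time price, but anyone can verify it in microseconds.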

In the end, my best idea is to rely on an inherently low-bandwidth publication medium, rather than a high-cost one. For example, although a determined hoaxer could place thousands of hash-bearing classifieds in some of the largest-circulation newspapers, such sleight of hand would be trivial for future sleuths to spot (at least compared to combing through the entire Internet for an abandoned hash). Or, as per an anonymous suggestion relayed by Thomas Ptacek: just tattoo the signature on your body, then post some pics; there are only so many places for a tattoo to go.

Still, what was supposed to be a nice, scientific proof devolved into a bunch of hand-wavy arguments and poorly-quantified probabilities. For the sake of future abductees: is there a better way?

It is also fun to challenge yourself to employ fuzzers in non-conventional ways. Two canonical examples are having your fuzzing target call abort() whenever two libraries that are supposed to implement the same algorithm produce different outputs for identical input data, or whenever a single library produces different outputs when asked to encode or decode the same data several times in a row.
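The round-trip flavor of this trick can be sketched in a few lines - here with Python's zlib standing in for the target (a real AFL harness would be C code calling abort(), but the assertion plays the same role):

```python
import os
import zlib

def check_stability(data: bytes, rounds: int = 3) -> None:
    # Output-stability check: compressing the same input several times
    # in a row must be deterministic...
    outputs = {zlib.compress(data) for _ in range(rounds)}
    assert len(outputs) == 1, "non-deterministic compressor output"
    # ...and every round trip through the codec must be lossless.
    for blob in outputs:
        assert zlib.decompress(blob) == data, "lossy round trip"

# A fuzzer would feed coverage-guided mutations here; random bytes
# merely stand in for them in this sketch.
for _ in range(100):
    check_stability(os.urandom(256))
```

Under a fuzzer, any input that trips either assertion is saved as a crash, giving you a reproducible witness of the inconsistency.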

Such tricks may sound fanciful, but they actually find interesting bugs. In one case, AFL-based equivalence fuzzing revealed a bunch of fairly rudimentary flaws in common bignum libraries, with some theoretical implications for crypto apps. Another time, output stability checks revealed long-lived issues in IJG jpeg and other widely-used image processing libraries, leaking data across web origins.

In one of my recent experiments, I decided to fuzz brotli, an innovative compression library used in Chrome. But since it's already been fuzzed for many CPU-years, I wanted to do it with a twist: stress-test the compression routines, rather than the usually targeted decompression side. The latter is a far more fruitful target for security research, because decompression normally involves parsing complex, potentially malformed inputs, whereas compression code is merely meant to accept arbitrary data and not think about it too hard. That said, the low likelihood of flaws also means that the compression bits are a relatively unexplored surface that may be worth poking with a stick every now and then.

In this case, the library held up admirably - save for a handful of computationally intensive plaintext inputs (that are now easy to spot thanks to recent improvements in AFL). But the output corpus synthesized by AFL, seeded with just a single file containing "0", featured quite a few peculiar finds:

Nonsensical but undeniably English sentences:
them with them m with them with themselves,
in the fix the in the pin th in the tin,
amassize the the in the in the inhe@massive in,
he the themes where there the where there,
size at size at the tie.

The results are quite unexpected, given that they are just a product of randomly mutating a single-byte input file and observing the code coverage in a simple compression tool. The explanation is that brotli, in addition to more familiar binary coding methods, uses a static dictionary constructed by analyzing common types of web content. Somehow, by observing the behavior of the program, AFL was able to incrementally reconstruct quite a few of these hardcoded keywords - and then put them together in various semi-interesting ways. Not bad.

April 15, 2017

For the past three years and change, the security industry has been mesmerized by a steady trickle of leaks that expose some of the offensive tooling belonging to the Western world's foremost intelligence agencies. To some folks, the leaks are a devastating blow to national security; to others, they are a chilling peek at the inner workings of an intrusive security apparatus that could be used to attack political enemies within.

I find it difficult to get outraged at revelations such as the compromise of some of the banking exchanges in the Middle East, presumably to track the sources of funding for some of our sworn enemies; at the same time, I'm none too pleased about the reports of the agencies tapping overseas fiber cables of US companies, or indiscriminately hacking university e-mail servers in Europe to provide cover for subsequent C&C ops. Still, many words have been written on the topic, so it is not a debate I am hoping to settle here; my only thought is that if we see espionage as a legitimate task for a nation state, then the revelations seem like a natural extension of what we know about this trade from pre-Internet days. Conversely, if we think that spying is evil, we probably ought to rethink geopolitics in a more fundamental way; until then, there's no use complaining that the NSA is keeping a bunch of 0-days at hand.

But in a more pragmatic sense, there is one consequence of the leaks that I worry about: the inevitable shifts in IT policies and the next crop of commercial tools and services meant to counter this supposedly new threat. I fear this outcome because I think that the core exploitation capabilities of the agencies - at least to the extent exposed by the leaks - are not vastly different from those of a talented teenager: somewhat disappointingly, the intelligence community accomplishes their goals chiefly by relying on public data sources, the attacks on unpatched or poorly configured systems, and the fallibility of human beings. In fact, some of the exploits exposed in the leaks were probably not developed in-house, but purchased through intermediaries from talented hobbyists - a black market that has been thriving over the past decade or so.

Of course, the NSA is a unique "adversary" in many other ways, but there is no alien technology to reckon with; and by constantly re-framing the conversation around IT security as a response to some new enemy, we tend to forget that the underlying problems that enable such hacking have been with us since the 1990s, that they are not unique to this actor, and that they have not been truly solved by any of the previous tooling and IT spending shifts.

I think that it is useful to compare computer spies to another, far better understood actor: the law enforcement community. In particular:

Both the intelligence agencies and law enforcement are very patient and systematic in their pursuits. If they want to get to you but can't do so directly, they can always convince, coerce, or compromise your friends, your sysadmins - or heck, just tamper with your supply chain.

Both kinds of actors operate under the protection of the law - which means that they are taking relatively few risks in going after you, can refine their approaches over the years, and can be quite brazen in their plans. They prefer to hack you remotely, of course - but if they can't, they might just as well break into your home or office, or plant a mole within your org.

Both have nearly unlimited resources. You probably can't outspend them and they can always source a wide range of tools to further their goals, operating more like a well-oiled machine than a merry band of hobbyists. But it is also easy to understand their goals, and for most people, the best survival strategy is not to invite their undivided attention in the first place.

Once you make yourself interesting enough to be in the crosshairs, the game changes in a pretty spectacular way, and the steps to take might have to come from the playbooks of rebels holed up in the mountains of Pakistan more than from a glossy folder of Cyberintellics Inc. There are no simple, low-cost solutions: you will find no click-and-play security product to help you, and there is no "one weird trick" to keep you safe; taping over your camera or putting your phone in the microwave won't save the day.

And ultimately, let's face it: if you're scrambling to lock down your Internet-exposed SMB servers in response to the most recent revelations from Shadow Brokers, you are probably in deep trouble - and it's not because of the NSA.

February 03, 2017

Several days ago, The New Yorker ran a lengthy story titled "Doomsday prep for the super-rich". The article revealed that some of Silicon Valley's most successful entrepreneurs - including execs from Yahoo, Facebook, and Reddit - are planning ahead for extreme emergencies, "Doomsday Preppers" style.

The article made quite a few waves in the tech world - and invited near-universal ridicule and scorn. People sneered at the comical excess of blast-proof bunkers and bug-out helicopters, as if a cataclysm that kills off most of us would somehow spare the nouveaux riches. Hardcore survivalists gleefully proclaimed that the highly-paid armed guards would turn on their employers to save their own families. Conservatives rolled their eyes at the story of a VC who is preparing for the collapse of the civilized world but finds guns a bit too icky for his taste. Progressives were repulsed by the immorality of the world's wealthiest people wanting to hide when the angry underclass finally takes to the streets. In short, no matter where you stood, the story had something for you to hate.

My first instinct was to join the fray; if nothing else, it's cathartic to have fun at the expense of people who are far wealthier and far more powerful than we could ever be. Sure, I have written about the merits of common-sense emergency preparedness, but to me, it meant having a rainy day fund and a fire extinguisher, not holing up in a decommissioned ICBM silo with 10,000 rounds of ammo and a pallet of canned cheese. Now hold my beer and let me throw the first stone!

But then, I realized that the article in The New Yorker is a human interest piece; it is meant to entertain us and has no other reason to exist. The author is trying to show us a dazzling world that is out of the ordinary and out of reach. It may be that the profiled execs spend most of their time planning ahead for far more pedestrian risks, but no sane newspaper would publish a multi-page expose about the brand of fire extinguishers or tarp favored by the ultra-rich. The readers want to read about helicopters and ICBM silos instead - and so the author obliges.

It is also a fallacy to look at the cost of purchases outlined in the article in absolute terms. For us, spending $5M on a luxury compound and a helicopter may seem insane - but for a person with hundreds of millions in the bank, such an investment would be just 1% of their wealth - a reasonable price to pay for insurance against unlikely but somewhat plausible risks. In terms of relative financial impact, it is no different than a person with $10k in the bank spending $100 on a fire extinguisher and some energy bars - hardly a controversial thing to do.

What's more, although we're living in a period of unprecedented prosperity and calm, there's no denying that in the history of mankind, revolutions happen with near-clockwork regularity. We had quite a few in the past one hundred years alone - and when the time comes, it's usually the heads of the variously-defined aristocracy that roll. Angry mobs are unlikely to torch Joe Prepper's cookie-cutter suburban neighborhood, but being near the top of the social ladder carries some distinct risk. We can have a debate about the complicity of the elites, or the comparative significance of this risk versus the plight of other social classes - but either way, the paranoia of the rich may be more grounded in reality than it seems.

Of course, an argument can be made that preparing for the collapse of society is immoral when the money could be better spent on trying to bridge the income gap or otherwise make the world a more harmonious place. It is an interesting claim, but it rings a bit hollow to me. We would not deny the rich the right to buy a fire extinguisher or a bug-out bicycle; our outrage is rather conveniently reserved for the purchases we can't afford. But more importantly, prepping and philanthropy are not mutually exclusive; in fact, I suspect that some of the folks mentioned in the article spend far more on trying to help the less fortunate than they are spending on canned cheese. Whether this can make a difference, and whether they should be doing more, is a different story.

February 01, 2017

People who are accomplished in one field of expertise tend to believe that they can bring unique insights to just about any other debate.
I am as guilty as anyone: at one time or another, I aired my thoughts on anything from
CNC manufacturing, to
electronics, to
emergency preparedness, to
politics.
Today, I'm about to commit the same sin - but instead of pretending to speak from a position of authority, I wanted to share a more personal tale.

The author, circa 1995. The era of hand-crank computers and punch cards.

Back in my school days, I was that one really tall and skinny kid in the class. I wasn't trying to stay this way; I preferred computer games to sports, and my grandma's Polish cooking was heavy on potatoes, butter, chicken, dumplings, cream, and cheese. But that did not matter: I could eat what I wanted, as often as I wanted, and I still stayed in shape. This made me look down on chubby kids; if my reckless ways had little or no effect on my body, it followed that they had to be exceptionally lazy and must have lacked even the most basic form of self-control.

As I entered adulthood, my habits remained the same. I felt healthy and stayed reasonably active, walking to and from work every other day and hiking with friends whenever I could. But my looks started to change:

The author at a really exciting BlackHat party in 2002.

I figured it was just a part of growing up. But somewhere around my twentieth birthday, I stepped on a bathroom scale and typed the result into an online calculator. I was surprised to find out that my BMI was about 24 - pretty darn close to overweight.

"Pssh, you know how inaccurate these things are!", I exclaimed while searching online to debunk that whole BMI thing. I mean, sure, I had some belly fat - maybe a pizza or two too many - but nothing that wouldn't go away in time. Besides, I was doing fine, so what would be the point of submitting to the society's idea of the "right" weight?

It certainly helped that I was having a blast at work. I made a name for myself in the industry, published a fair amount of cool research, authored a book, settled down, bought a house, had a kid. It wasn't until the age of 26 that I strayed into a doctor's office for a routine checkup. When the nurse asked me about my weight, I blurted out "oh, 175 pounds, give or take". She gave me a funny look and asked me to step on the scale.

Turns out it was quite a bit more than 175 pounds. With a BMI of 27.1, I was now firmly in "overweight" territory. Yeah yeah, the BMI metric was a complete hoax - but why did my passport photos look less flattering than before?

A random mugshot from 2007. Some people are just born big-boned, I think.

Well, damn. I knew what had to happen: from now on, I was going to start eating healthier foods. I traded Cheetos for nuts, KFC for sushi rolls, greasy burgers for tortilla wraps, milk smoothies for Jamba Juice, fries for bruschettas, regular sodas for diet. I'd even throw in a side of lettuce every now and then. It was bound to make a difference. I just wasn't gonna be one of the losers who check their weight every day and agonize over every calorie on their plate. (Weren't calories a scam, anyway? I think I read that on that cool BMI conspiracy site.)

By the time I turned 32, my body mass index hit 29. At that point, it wasn't just a matter of looking chubby. I could do the math: at that rate, I'd be in a real pickle in a decade or two - complete with a ~50% chance of developing diabetes or cardiovascular disease. This wouldn't just make me miserable, but also mess up the lives of my spouse and kids.

Presenting at Google TGIF in 2013. It must've been the unflattering light.

I wanted to get this over with right away, so I decided to push myself hard. I started biking to work, quite a strenuous ride. It felt good, but did not help: I would simply eat more to compensate and ended up gaining a few extra pounds. I tried starving myself. That worked, sure - only to be followed by an even faster rebound. Ultimately, I had to face the reality: I had a problem and I needed a long-term solution. There was no one weird trick to outsmart the calorie-counting crowd, no overnight cure.

I started looking for real answers. My world came crumbling down; I realized that a "healthy" burrito from Chipotle packed four times as many calories as a greasy burger from McDonald's. That a loaded fruit smoothie from Jamba Juice was roughly equal to two hot dogs with a side of mashed potatoes to boot. That a glass of apple juice fared worse than a can of Sprite, and that bruschetta wasn't far from deep-fried butter on a stick. It didn't matter if it was sugar or fat, bacon or kale. Familiar favorites were not better or worse than the rest. Losing weight boiled down to portion control - and sticking to it for the rest of my life.

It was a slow and humbling journey that spanned almost a year. I ended up losing around 70 lbs along the way. What shocked me is that it wasn't a painful experience; what held me back for years was just my own smugness, plus the folksy wisdom gleaned from the covers of glossy magazines.

A really hip bathroom selfie, December 2016.

I'm not sure there is a moral to this story. I guess one lesson is: don't be a judgmental jerk. Sometimes, the simple things - the ones you think you have all figured out - prove to be a lot more complicated than they seem.

November 11, 2016

I dislike commenting on politics. I think it's difficult to contribute any novel thought - and in today's hyper-polarized world, stating an unpopular or half-baked opinion is a recipe for losing friends or worse. Still, with many of my colleagues expressing horror and disbelief over what happened on Tuesday night, I reluctantly decided to jot down my thoughts.

I think that in trying to explain away the meteoric rise of Mr. Trump, many of the mainstream commentators have focused on two phenomena. Firstly, they singled out the emergence of "filter bubbles" - a mechanism that allows people to reinforce their own biases and shields them from opposing views. Secondly, they implicated the dark undercurrents of racism, misogyny, or xenophobia that still permeate some corners of our society. From that ugly place, the connection to Mr. Trump's foul-mouthed populism was not hard to make; his despicable bragging about women aside, to his foes, even an accidental hand gesture or an inane 4chan frog meme was proof enough. Once we crossed this line, the election was no longer about economic policy, the environment, or the like; it was an existential battle for equality and inclusiveness against the forces of evil that lurk in our midst. Not a day went by without a comparison between Mr. Trump and Adolf Hitler in the press. As for the moderate voters, the pundits had an explanation, too: the right-wing filter bubble must have clouded their judgment and created a false sense of equivalency between a horrid, conspiracy-peddling madman and our cozy, liberal status quo.

Now, before I offer my take, let me be clear that I do not wish to dismiss the legitimate concerns about the overtones of Mr. Trump's campaign. Nor do I desire to downplay the scale of discrimination and hatred that societies around the world are still grappling with, or the potential that the new administration could make it worse. But I found the aforementioned explanation of Mr. Trump's unexpected victory to be unsatisfying in many ways. Ultimately, we all live in bubbles and we all have biases; in that regard, not much sets CNN apart from Fox News, Vox from National Review, or The Huffington Post from Breitbart. The reason why most of us would trust one and despise the other is that we instinctively recognize our own biases as more benign. After all, in the progressive world, we are fighting for an inclusive society that gives all people a fair chance to succeed. As for the other side? They seem like a bizarre, cartoonishly evil coalition of dimwits, racists, homophobes, and the ultra-rich. We even have serious scientific studies to back that up; their authors breathlessly proclaim that the conservative brain is inferior to the progressive brain. Unlike the conservatives, we believe in science, so we hit the "like" button and retweet the news.

But here's the thing: I know quite a few conservatives, many of whom have probably voted for Mr. Trump - and they are about as smart, as informed, and as compassionate as my progressive friends. I think that the disconnect between the worldviews stems from something else: if you are a well-off person in a coastal city, you know people who are immigrants or who belong to other minorities, making you acutely attuned to their plight; but you may lack the same, deeply personal connection to - say - the situation of the lower middle class in the Midwest. You might have seen surprising charts or read a touching story in Mother Jones a few years back, but it's hard to think of them as individuals; they are more of a socioeconomic obstacle, a problem to be solved. The same goes for our understanding of immigration or globalization: these phenomena make our high-tech hubs more prosperous and more open; the externalities of our policies, if any, are just an abstract price that somebody else ought to bear for doing what's morally right. And so, when Mr. Trump promises to temporarily ban travel from Muslim countries linked to terrorism or anti-American sentiments, we (rightly) gasp in disbelief; but when Mr. Obama paints an insulting caricature of rural voters as simpletons who "cling to guns or religion or antipathy to people who aren't like them", we smile and praise him for his wit, not understanding how the other side could be so offended by the truth. Similarly, when Mrs. Clinton chuckles while saying "we are going to put a lot of coal miners out of business" to a cheering crowd, the scene does not strike us as thoughtless, offensive, or in poor taste. Maybe we will read a story about the miners in Mother Jones some day?

Of course, liberals take pride in caring for the common folk, but I suspect that their leaders' attempts to reach out to the underprivileged workers in the "flyover states" often come across as ham-fisted and insincere. The establishment schools the voters about the inevitability of globalization, as if it were some cosmic imperative; they are told that to reject the premise would not just be wrong - but that it'd be a product of a diseased, nativist mind. They hear that the factories simply had to go to China or Mexico, and the goods just have to come back duty-free - all so that our complex, interconnected world can be a happier place. The workers are promised entitlements, but it stands to reason that they want dignity and hope for their children, not a lifetime on food stamps. The idle, academic debates about automation, post-scarcity societies, and Universal Basic Income probably come across as far-fetched and self-congratulatory, too.

The discourse is poisoned by cognitive biases in many other ways. The liberal media keeps writing about the unaccountable right-wing oligarchs who bankroll the conservative movement and supposedly poison people's minds - but they offer nothing but praise when progressive causes are being bankrolled by Mr. Soros or Mr. Bloomberg. They claim that the conservatives represent "post-truth" politics - but their fact-checkers shoot down conservative claims over fairly inconsequential mistakes, while giving their favored politicians a pass on half-true platitudes about immigration, gun control, crime, or the sources of inequality. Mr. Obama sneers at the conservative bias of Fox News, but has no concern with the striking tilt to the left in the academia or in the mainstream press. The Economist finds it appropriate to refer to Trump supporters as "trumpkins" in print - but it would be unthinkable for them to refer to the fans of Mrs. Clinton using any sort of a mocking term. The pundits ponder the bold artistic statement made by the nude statues of the Republican nominee - but they would be disgusted if a conservative sculptor portrayed the Democratic counterpart in a similarly unflattering light. The commentators on MSNBC read into every violent incident at Trump rallies - but when a random group of BLM protesters starts chanting about killing police officers, we all agree it would not be fair to cast the entire movement in a negative light.

Most progressives are either oblivious to these biases, or dismiss them as a harmless casualty of fighting the good fight. Perhaps so - and it is not my intent to imply equivalency between the causes of the left and of the right. But in the end, I suspect that the liberal echo chamber contributed to the election of Mr. Trump far more than anything that ever transpired on the right. It marginalized and excluded legitimate but alien socioeconomic concerns from the mainstream political discourse, binning them with truly bigoted and unintelligent speech - and leaving the "flyover underclass" no option other than to revolt. And it wasn't just a revolt of the awful fringes. On the right, we had Mr. Trump - a clumsy outsider who eschews many of the core tenets of the conservative platform, and who convincingly represents neither the neoconservative establishment of the Bush era nor the Bible-thumping religious right of the Tea Party. On the left, we had Mr. Sanders - an unaccomplished Senator who offered simplistic but moving slogans, who painted the accumulation of wealth as the source of our ills, and who promised to mold the United States into an idyllic version of the social democracies of Europe - supposedly governed by the workers, and not by the exploitative elites.

I think that people rallied behind Mr. Sanders and Mr. Trump not because they particularly loved the candidates or took all their promises seriously - but because they had no other credible herald for their cause. When the mainstream media derided their rebellion and the left simply laughed it off, it only served as a battle cry. When tens of millions of Trump supporters were labeled as xenophobic and sexist deplorables who deserved no place in politics, it only pushed more moderates toward the fringe. Suddenly, rational people could see themselves voting for a politically inexperienced and brash billionaire - a guy who talks about cutting taxes for the rich, who wants to cozy up to Russia, and whose VP pick previously wasn't so sure about LGBT rights. I think it all happened not because of Mr. Trump's character traits or thoughtful political positions, and not because half of the country hates women and minorities. He won because he was the only one to promise to "drain the swamp" - and to promise hope, not handouts, to the lower middle class.

There is a risk that this election will prove to be a step back for civil rights, or that Mr. Trump's bold but completely untested economic policies will leave the world worse off; while not certain, it pains me to even contemplate this possibility. When we see injustice, we should fight tooth and nail. But for now, I am not swayed by the preemptively apocalyptic narrative on the left. Perhaps naively, I have faith in the benevolence of our compatriots and the strength of the institutions of - as cheesy as it sounds - one of the great nations of the world.

August 26, 2016

If you have not seen it yet, Parisa Tabriz penned a lengthy and insightful post, drawing on her own experiences, about what it takes to succeed in the field of information security.

My own experiences align pretty closely with Parisa's take, so if you are making your first steps down this path, I strongly urge you to give her post a good read. But if I had to sum up my lessons from close to two decades in the industry, I would probably boil them down to four simple rules:

Infosec is all about the mismatch between our intuition and the actual behavior of the systems we build. That makes it harmful to study the field as an abstract, isolated domain. To truly master it, dive into how computers work, then make a habit of asking yourself "okay, but what if assumption X does not hold true?" every step along the way.

Security is a protoscience. Think of chemistry in the early 19th century: a glorious and messy thing, chock-full of colorful personalities, unsolved mysteries, and snake oil salesmen. You need passion and humility to survive. Those who think they have all the answers are a danger to themselves and to people who put their faith in them.

People will trust you with their livelihoods, but will have no way to truly measure the quality of your work. Don't let them down: be painfully honest with yourself and work every single day to address your weaknesses. If you are not embarrassed by the views you held two years ago, you are getting complacent - and complacency kills.

It will feel that way, but you are not smarter than software engineers. Walk in their shoes for a while: write your own code, show it to the world, and be humiliated by all the horrible mistakes you will inevitably make. It will make you better at your job - and will turn you into a better person, too.

August 04, 2016

Up until mid-2010, any rogue website could get a good sense of your browsing habits by specifying a distinctive :visited CSS pseudo-class for any links on the page, rendering thousands of interesting URLs off-screen, and then calling the getComputedStyle API to figure out which pages appear in your browser's history.

After some deliberation, browser vendors closed this loophole by disallowing almost all attributes in :visited selectors, save for the fairly indispensable ability to alter foreground and background colors for such links. The APIs have also been redesigned to prevent the disclosure of this color information via getComputedStyle.

This workaround did not fully eliminate the ability to probe your browsing history, but limited it to scenarios where the user can be tricked into unwittingly feeding the style information back to the website one URL at a time. Several fairly convincing attacks have been demonstrated against patched browsers - my own 2013 entry can be found here - but they generally depended on the ability to solicit one click or one keypress for every URL tested. In other words, the whole thing did not scale particularly well.

Or at least, it wasn't supposed to. In 2014, I described a neat trick that exploited normally imperceptible color quantization errors within the browser, amplified by stacking elements hundreds of times, to implement an n-to-2ⁿ decoder circuit using just the background-color and opacity properties on overlaid <a href=...> elements to easily probe the browsing history of multiple URLs with a single click. To explain the basic principle, imagine wanting to test two links, and dividing the screen into four regions, like so:

Region #1 is lit only when both links are not visited (¬ link_a ∧ ¬ link_b),

Region #2 is lit only when link A is not visited but link B is visited (¬ link_a ∧ link_b),

Region #3 is lit only when link A is visited but link B is not (link_a ∧ ¬ link_b),

Region #4 is lit only when both links are visited (link_a ∧ link_b).

While the page couldn't directly query the visibility of the segments, we just had to convince the user to click the visible segment once to get the browsing history for both links, for example under the guise of dismissing a pop-up ad. (Of course, the attack could be scaled to far more than just 2 URLs.)
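In circuit terms, the four regions implement a 2-to-4 decoder: exactly one region is visible for each combination of the two visited bits. A toy Python model of the mapping - lit_region is my own illustrative name, not anything from the actual exploit - looks like this:

```python
# Toy model of the screen-region decoder: for two probed links, exactly
# one of the four regions is lit, so a single click on the lit region
# leaks both "visited" bits at once.
def lit_region(link_a_visited: bool, link_b_visited: bool) -> int:
    # Regions numbered 1..4, matching the list above:
    # (0,0) -> 1, (0,1) -> 2, (1,0) -> 3, (1,1) -> 4.
    return 1 + ((link_a_visited << 1) | link_b_visited)
```

For n probed links, the same scheme yields 2ⁿ regions, which is why a single interaction can leak several bits of history at once.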

Browser vendors eventually addressed this problem by simply improving the accuracy of color quantization when overlaying HTML elements; while this did not eliminate the risk, it made the attack far more computationally intensive, requiring the evil page to stack millions of elements to get practical results. Game over? Well, not entirely. In the footnote of my 2014 article, I mentioned this:

"There is an upcoming CSS feature called mix-blend-mode, which permits non-linear mixing with operators such as multiply, lighten, darken, and a couple more. These operators make Boolean algebra much simpler and if they ship in their current shape, they will remove the need for all the fun with quantization errors, successive overlays, and such. That said, mix-blend-mode is not available in any browser today."

As you might have guessed, patience is a virtue! As of mid-2016, mix-blend-mode - a feature to allow advanced compositing of bitmaps, very similar to the layer blending modes available in photo-editing tools such as Photoshop and GIMP - is shipping in Chrome and Firefox. And as it happens, in addition to their intended purpose, these non-linear blending operators permit us to implement arbitrary Boolean algebra. For example, to implement AND, all we need to do is use multiply:

black (0) x black (0) = black (0)

black (0) x white (1) = black (0)

white (1) x black (0) = black (0)

white (1) x white (1) = white (1)
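Treating pure black as 0 and pure white as 1, the blending operators reduce to familiar Boolean gates. A quick Python model - the function names simply mirror the CSS operator names; this is a sketch of the underlying arithmetic, not of the actual CSS machinery:

```python
# CSS blending operators modeled on normalized [0, 1] color channels.
def multiply(a, b):  # mix-blend-mode: multiply
    return a * b

def lighten(a, b):   # mix-blend-mode: lighten -> per-channel maximum
    return max(a, b)

def darken(a, b):    # mix-blend-mode: darken -> per-channel minimum
    return min(a, b)

# On black (0) and white (1) inputs, multiply and darken behave like
# Boolean AND, and lighten behaves like Boolean OR:
for a in (0, 1):
    for b in (0, 1):
        assert multiply(a, b) == (a & b)
        assert darken(a, b) == (a & b)
        assert lighten(a, b) == (a | b)
```

With AND and OR in hand (and the existing color tricks standing in for NOT), a page can evaluate arbitrary Boolean formulas over :visited bits.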

For a practical demo, click here. A single click in that whack-a-mole game will reveal the state of 9 visited links to the JavaScript executing on the page. If this were an actual game and it continued for a bit longer, probing the state of hundreds or thousands of URLs would not be particularly hard to pull off.

May 11, 2016

The recent, highly publicized "ImageTragick" vulnerability had countless web developers scrambling to fix a remote code execution vector in ImageMagick - a popular bitmap manipulation tool commonly used to resize, transcode, or annotate user-supplied images on the Web. Whatever your take on "branded" vulnerabilities may be, the flaw certainly is notable for its ease of exploitation: it is an embarrassingly simple shell command injection bug reminiscent of the security weaknesses prevalent in the 1990s, and nearly extinct in core tools today. The issue also bears some parallels to the more far-reaching but equally striking Shellshock bug.

That said, I believe that the publicity that surrounded the flaw was squandered by failing to make one very important point: even with this particular RCE vector fixed, anyone using ImageMagick to process attacker-controlled images is likely putting themselves at a serious risk.

The problem is fairly simple: for all its virtues, ImageMagick does not appear to be designed with malicious inputs in mind - and has a long and colorful history of lesser-known but equally serious security flaws. For a single data point, look no further than the work done several months ago by Jodie Cunningham. Jodie fuzzed IM with a vanilla setup of afl-fuzz - and quickly identified about two dozen possibly exploitable security holes, along with countless denial of service flaws. A small sample of Jodie's findings can be found here.

Jodie's efforts probably just scratched the surface; after "ImageTragick", a more recent effort by Hanno Boeck uncovered even more bugs; from what I understand, Hanno's work also went only as far as using off-the-shelf fuzzing tools. You can bet that, short of a major push to redesign the entire IM codebase, the trickle won't stop any time soon.

And so, the advice sorely missing from the "ImageTragick" webpage is this:

If all you need to do is simple transcoding or thumbnailing of potentially untrusted images, don't use ImageMagick. Make direct use of libpng, libjpeg-turbo, and giflib; for a robust way to use these libraries, have a look at the source code of Chromium or Firefox. The resulting implementation will be considerably faster, too.

If you have to use ImageMagick on untrusted inputs, consider sandboxing the code with seccomp-bpf or an equivalent mechanism that robustly restricts access to all user space artifacts and to the kernel attack surface. Rudimentary sandboxing technologies, such as chroot() or UID separation, are probably not enough.

If all other options fail, be zealous about limiting the set of image formats you actually pass down to IM. The bare minimum is to thoroughly examine the headers of the received files. It is also helpful to explicitly specify the input format when calling the utility, so as to preempt auto-detection code. For command-line invocations, this can be done like so:
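The gist of both precautions can be sketched as follows - checking magic bytes before invoking IM, and pinning the decoder by prefixing the input filename with the expected format (ImageMagick's fmt:filename convention). The function names and the short signature table below are mine, for illustration, and are nowhere near a complete whitelist:

```python
# Minimal sketch: sniff the magic bytes of an untrusted image, then build
# an ImageMagick command line that pins the decoder to that format.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def sniff_format(header: bytes):
    """Return a short format name, or None if the header is unfamiliar."""
    for magic, fmt in SIGNATURES.items():
        if header.startswith(magic):
            return fmt
    return None

def build_convert_argv(path, fmt, out_path):
    # The "fmt:" prefix tells convert which decoder to use, preempting
    # its format auto-detection logic.
    return ["convert", f"{fmt}:{path}", "-resize", "256x256", out_path]
```

A caller would reject any file whose sniffed format comes back as None - or disagrees with the declared content type - before ever spawning convert.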

February 09, 2016

The nice thing about the control flow instrumentation used by American Fuzzy Lop is that it allows you to do much more than just, well, fuzzing stuff. For example, the suite has long shipped with a standalone tool called afl-tmin, capable of automatically shrinking test cases while still making sure that they exercise the same functionality in the targeted binary (or that they trigger the same crash). A companion tool, afl-cmin, uses the same trick to eliminate redundant files in large testing corpora.

The latest release of AFL features another nifty new addition along these lines: afl-analyze. The tool takes an input file, sequentially flips bytes in this data stream, and then observes the behavior of the targeted binary after every flip. From this information, it can infer several things:

No-op blocks that do not elicit any changes to control flow (say, comments, pixel data, etc.),

Checksums, magic values, and other short, atomically compared tokens, where any bit flip causes the same change to program execution.
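Mechanically, the classification loop is easy to mimic. The toy sketch below is my own illustration, not afl-analyze's actual code - the real tool observes control-flow (edge coverage) changes rather than raw program output - but it captures the flip-and-compare idea:

```python
# Toy byte-flipping classifier in the spirit of afl-analyze: run a target
# once on the pristine input, then once per flipped byte, and label each
# offset by whether the observed behavior differs from the baseline.
def toy_target(data: bytes) -> str:
    # A stand-in parser: a four-byte magic value followed by a payload.
    if not data.startswith(b"MAGI"):
        return "reject"
    return "len=" + str(len(data) - 4)

def classify_bytes(data: bytes):
    baseline = toy_target(data)
    labels = []
    for i in range(len(data)):
        mutated = bytearray(data)
        mutated[i] ^= 0xFF  # flip all bits of one byte
        labels.append("no-op" if toy_target(bytes(mutated)) == baseline
                      else "critical")
    return labels
```

Offsets where every flip collapses to the same behavioral change - here, the four magic bytes all yielding "reject" - are exactly the short, atomically compared tokens described above.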

This gives us some remarkable and quick insights into the syntax of the file and the behavior of the underlying parser. It may sound too good to be true, but actually seems to work in practice. For a quick demo, let's see what afl-analyze has to say about running cut -d ' ' -f1 on a text file:

We see that cut really only cares about spaces and newlines. Interestingly, it also appears that the tool always tokenizes the entire line, even if it's just asked to return the first token. Neat, right?

Of course, the value of afl-analyze is greater for incomprehensible binary formats than for simple text utilities; perhaps even more so when dealing with black-box parsers (which can be analyzed thanks to the runtime QEMU instrumentation supported in AFL). To try out the tool's ability to deal with binaries, let's check out libpng:

This looks pretty damn good: we have two four-byte signatures, followed by chunk length, four-byte chunk name, chunk length, some image metadata, and then a comment section. Neat, right? All in a matter of seconds: no configuration needed and no knobs to turn.

Of course, the tool shipped just moments ago and is still very much experimental; expect some kinks. Field testing and feedback welcome!

January 14, 2016

It's a fairly systematic and level-headed approach to threat modeling and risk management, except not for computer systems - and instead, for real life. There's not much I can add on top of what's already said on the linked page; have a look, you will probably find it to be an interesting read.

October 02, 2015

In the wake of the tragic events in Roseburg, I decided to briefly return to the topic of looking at the US culture from the perspective of a person born in Europe. In particular, I wanted to circle back to the topic of firearms.

Contrary to popular belief, the United States has witnessed a dramatic decline in violence over the past 20 years. In fact, when it comes to most types of violent crime - say, robbery, assault, or rape - the country now compares favorably to the UK and many other OECD nations. But as I explored in my earlier posts, one particular statistic - homicide - is still registering about three times as high as in many other places within the EU.

The homicide epidemic in the United States has a complex nature and overwhelmingly affects ethnic minorities and other disadvantaged social groups; perhaps because of this, the phenomenon sees very little honest, public scrutiny. It is propelled into the limelight only in the wake of spree shootings and other sickening, seemingly random acts of terror; such incidents, although statistically insignificant, take a profound mental toll on the American society. At the same time, the effects of high-profile violence seem strangely short-lived: they trigger a series of impassioned political speeches, invariably focusing on the connection between violence and guns - but the nation soon goes back to business as usual, knowing full well that another massacre will happen soon, perhaps the very same year.

On the face of it, this pattern defies all reason - angering my friends in Europe and upsetting many brilliant and well-educated progressives in the US. They utter frustrated remarks about the all-powerful gun lobby and the spineless politicians, blaming the partisan gridlock for the failure to pass even the most reasonable and toothless gun control laws. I used to be in the same camp; today, I think the reality is more complex than that.

To get to the bottom of this mystery, it helps to look at the spirit of radical individualism and classical liberalism that remains the national ethos of the United States - and in fact, is enjoying a degree of resurgence unseen for many decades prior. In Europe, it has long been settled that many individual liberties - be it the freedom of speech or the natural right to self-defense - can be constrained to advance even some fairly far-fetched communal goals. On the old continent, such sacrifices sometimes paid off, and sometimes led to atrocities; but the basic premise of European collectivism is not up for serious debate. In America, the same notion certainly cannot be taken for granted today.

When it comes to firearm ownership in particular, the country is facing a fundamental choice between two possible realities:

A largely disarmed society that depends on the state to protect it from almost all harm, and where citizens are generally not permitted to own guns without presenting a compelling cause. In this model, adopted by many European countries, firearms tend to be less available to common criminals - simply by the virtue of limited supply and comparatively high prices in black market trade. At the same time, it can be argued that any nation subscribing to this doctrine becomes more vulnerable to foreign invasion or domestic terror, should its government ever fail to provide adequate protection to all citizens. Disarmament can also limit civilian recourse against illegitimate, totalitarian governments - a seemingly outlandish concern, but also a very fresh memory for many European countries subjugated not long ago under the auspices of the Soviet Bloc.

A well-armed society where firearms are available to almost all competent adults, and where the natural right to self-defense is subject to few constraints. This is the model currently employed in the United States, where it arises from the straightforward, originalist interpretation of the Second Amendment - as recognized by roughly 75% of all Americans and affirmed by the Supreme Court. When following such a doctrine, a country will likely witness greater resiliency in the face of calamities or totalitarian regimes. At the same time, its citizens might have to accept some inherent, non-trivial increase in violent crime due to the prospect of firearms more easily falling into the wrong hands.

It seems doubtful that a viable middle-ground approach can exist in the United States. With more than 300 million civilian firearms in circulation, most of them in unknown hands, the premise of reducing crime through gun control would inevitably and critically depend on some form of confiscation; without such drastic steps, the supply of firearms to the criminal underground or to unfit individuals would not be disrupted in any meaningful way. Because of this, intellectual integrity requires us to look at many of the legislative proposals not only through the prism of their immediate utility, but also to give consideration to the societal model they are likely to advance.

And herein lies the problem: many of the current "common-sense" gun control proposals have very little merit when considered in isolation. There is scant evidence that reinstating the ban on military-looking semi-automatic rifles ("assault weapons"), or rolling out the prohibition on private sales at gun shows, would deliver measurable results. There is also no compelling reason to believe that ammo taxes, firearm owner liability insurance, mandatory gun store cameras, firearm-free school zones, bans on open carry, or federal gun registration can have any impact on violent crime. And so, the debate often plays out like this:

At the same time, by virtue of making weapons more difficult, expensive, and burdensome to own, many of the legislative proposals floated by progressives would probably gradually erode the US gun culture; intentionally or not, their long-term outcome would be a society less passionate about firearms and more willing to follow in the footsteps of Australia or the UK. Only once we crossed that line and confiscated hundreds of millions of guns would it be fathomable - yet still far from certain - that we would see a sharp drop in homicides.

This method of inquiry helps explain the visceral response from gun rights advocates: given the legislation's dubious benefits and its predicted long-term consequences, many pro-gun folks are genuinely worried that making concessions would eventually mean giving up one of their cherished civil liberties - and on some level, they are right.

Some feel that this argument is a fallacy, a tall tale invented by a sinister corporate "gun lobby" to derail the political debate for personal gain. But the evidence of such a conspiracy is hard to find; in fact, it seems that the progressives themselves often fan the flames. In the wake of Roseburg, both Barack Obama and Hillary Clinton came out praising the confiscation-based gun control regimes employed in Australia and the UK - and said that they would like the US to follow suit. Depending on where you stand on the issue, it was either an accidental display of political naivete, or the final reveal of their sinister plan. For the latter camp, the ultimate proof of a progressive agenda came a bit later: in response to the terrorist attack in San Bernardino, several eminent Democratic-leaning newspapers published scathing editorials demanding civilian disarmament while downplaying the attackers' connection to Islamic State.

Another factor that poisons the debate is that despite being highly educated and eloquent, the progressive proponents of gun control measures are often hopelessly unfamiliar with the very devices they are trying to outlaw:

I'm reminded of the widespread contempt faced by Senator Ted Stevens following his attempt to compare the Internet to a "series of tubes" as he was arguing against net neutrality. His analogy wasn't very wrong - it just struck a nerve as simplistic and out-of-date. My progressive friends did not react the same way when Representative Carolyn McCarthy - one of the key proponents of the ban on assault weapons - showed no understanding of the supposedly lethal firearm features she was trying to eradicate. Such bloopers are not rare, either; not long ago, Mr. Bloomberg, one of the leading progressive voices on gun control in America, argued against semi-automatic rifles without understanding how they differ from the already-illegal machine guns:

Yet another example comes from Representative Diana DeGette, the lead sponsor of a "common-sense" bill that sought to prohibit the manufacture of magazines with capacity over 15 rounds. She defended the merits of her legislation while clearly not understanding how a magazine differs from ammunition - or that the former can be reused:

"I will tell you these are ammunition, they’re bullets, so the people who have those know they’re going to shoot them, so if you ban them in the future, the number of these high capacity magazines is going to decrease dramatically over time because the bullets will have been shot and there won’t be any more available."

Treating gun ownership with almost comical condescension has become fashionable among a good number of progressive liberals. On a campaign stop in San Francisco, Mr. Obama sketched a caricature of bitter, rural voters who "cling to guns or religion or antipathy to people who aren't like them". Not much later, one Pulitzer Prize-winning columnist for The Washington Post spoke of the Second Amendment as "the refuge of bumpkins and yeehaws who like to think they are protecting their homes against imagined swarthy marauders desperate to steal their flea-bitten sofas from their rotting front porches". Many of the newspaper's readers probably had a good laugh - and then wondered why it has gotten so difficult to seek sensible compromise.

There are countless dubious and polarizing claims made by the supporters of gun rights, too; examples include a recent NRA-backed tirade by Dana Loesch denouncing the "godless left", or the constant onslaught of conspiracy theories spewed by Alex Jones and Glenn Beck. But when introducing new legislation, the burden of making educated and thoughtful arguments should rest on its proponents, not other citizens. When folks such as Bloomberg prescribe sweeping changes to the American society while demonstrating striking ignorance about the topics they want to regulate, they come across as elitist and flippant - and deservedly so.

Given how controversial the topic is, I think it's wise to start an open, national conversation about the European model of gun control and the risks and benefits of living in an unarmed society. But it's also likely that such a debate wouldn't last very long. Progressive politicians like to say that the dialogue is impossible because of the undue influence of the National Rifle Association - but as I discussed in my earlier blog posts, the organization's financial resources and power are often overstated: it does not even make it onto the list of top 100 lobbyists in Washington, and its support comes mostly from member dues, not from shadowy business interests or wealthy oligarchs. In reality, disarmament just happens to be a very unpopular policy in America today: the support for gun ownership is very strong and has been growing over the past 20 years - even though hunting is on the decline.

Perhaps it would serve the progressive movement better to embrace the gun culture - and then think of ways to curb its unwanted costs. Addressing inner-city violence, especially among the disadvantaged youth, would quickly bring the US homicide rate much closer to the rest of the highly developed world. But admitting the staggering scale of this social problem can be an uncomfortable and politically charged position to hold. For Democrats, it would be tantamount to singling out minorities. For Republicans, it would be just another expansion of the nanny state.

PS. If you are interested in a more systematic evaluation of the scale, the impact, and the politics of gun ownership in the United States, you may enjoy an earlier entry on this blog. Or, if you prefer to read my entire series comparing the life in Europe and in the US, try this link.

August 31, 2015

Our industry tends to glamorize vulnerability research, with a growing number of bug reports accompanied by flashy conference presentations, media kits, and exclusive interviews. But for all that grandeur, the public understands relatively little about the effort that goes into identifying and troubleshooting the hundreds of serious vulnerabilities that crop up every year in the software we all depend on. It certainly does not help that many of the commercial security testing products are promoted with truly bombastic claims - and that some of the most vocal security researchers enjoy the image of savant hackers, seldom talking about the processes and toolkits they depend on to get stuff done.

I figured it may make sense to change this. Several weeks ago, I started trawling through the list of public CVE assignments, and then manually compiling a list of genuine, high-impact flaws in commonly used software. I tried to follow a few basic principles:

For pragmatic reasons, I focused on problems where the nature of the vulnerability and the identity of the researcher are easy to ascertain. Consequently, I ended up rejecting entries such as CVE-2015-2132 or CVE-2015-3799.

I focused on widespread software - e.g., browsers, operating systems, network services - skipping many categories of niche enterprise products, Wordpress add-ons, and so on. Good examples of rejected entries in this category include CVE-2015-5406 and CVE-2015-5681.

I skipped issues that appeared to be low impact, or where the credibility of the report seemed unclear. One example of a rejected submission is CVE-2015-4173.

To ensure that the data isn't skewed toward more vulnerable software, I tried to focus on research efforts, rather than on individual bugs; where a single reporter was credited for multiple closely related vulnerabilities in the same product within a narrow timeframe, I would use only one sample from the entire series of bugs.

For the qualifying CVE entries, I started sending out anonymous surveys to the researchers who reported the underlying issues. The surveys open with a discussion of the basic method employed to find the bug:

If "manual bug hunting" is selected, several additional options appear:

( ) I was reviewing the source code to check for flaws.
( ) I studied the binary using a disassembler, decompiler, or a tracing tool.
( ) I was doing black-box experimentation to see how the program behaves.
( ) I simply noticed that this bug is being exploited in the wild.
( ) I did something else: ____________________

Selecting "automated discovery" results in a different set of choices:

( ) I used a fuzzer.
( ) I ran a simple vulnerability scanner (e.g., Nessus).
( ) I used a source code analyzer (static analysis).
( ) I relied on symbolic or concolic execution.
( ) I did something else: ____________________

Researchers who relied on automated tools are also asked about the origins of the tool and the computing resources used:

Name of tool used (optional): ____________________
Where does this tool come from?
( ) I created it just for this project.
( ) It's an existing but non-public utility.
( ) It's a publicly available framework.
At what scale did you perform the experiments?
( ) I used 16 CPU cores or less.
( ) I employed more than 16 cores.

Regardless of the underlying method, the survey also asks every participant about the use of memory diagnostic tools:

Did you use any additional, automatic error-catching tools - like ASAN
or Valgrind - to investigate this issue?
( ) Yes. ( ) Nope!

...and about the lengths to which the reporter went to demonstrate the bug:

How far did you go to demonstrate the impact of the issue?
( ) I just pointed out the problematic code or functionality.
( ) I submitted a basic proof-of-concept (say, a crashing test case).
( ) I created a fully-fledged, working exploit.

It also touches on the communications with the vendor:

Did you coordinate the disclosure with the vendor of the affected
software?
( ) Yes. ( ) No.
How long have you waited before having the issue disclosed to the
public?
( ) I disclosed right away. ( ) Less than a week. ( ) 1-4 weeks.
( ) 1-3 months. ( ) 4-6 months. ( ) More than 6 months.
In the end, did the vendor address the issue as quickly as you would
have hoped?
( ) Yes. ( ) Nope.

...and the channel used to disclose the bug - an area where we have seen some stark changes over the past five years:

How did you disclose it? Select all options that apply:
[ ] I made a blog post about the bug.
[ ] I posted to a security mailing list (e.g., BUGTRAQ).
[ ] I shared the finding on a web-based discussion forum.
[ ] I announced it at a security conference.
[ ] I shared it on Twitter or other social media.
[ ] We made a press kit or reached out to a journalist.
[ ] Vendor released an advisory.

The survey ends with a question about the motivation and the overall amount of effort that went into this work:

What motivated you to look for this bug?
( ) It's just a hobby project.
( ) I received a scientific grant.
( ) I wanted to participate in a bounty program.
( ) I was doing contract work.
( ) It's a part of my full-time job.
How much effort did you end up putting into this project?
( ) Just a couple of hours.
( ) Several days.
( ) Several weeks or more.

So far, the response rate for the survey is approximately 80%; because I only started in August, I currently don't have enough answers to draw particularly detailed conclusions from the data set - this should change over the next couple of months. Still, I'm already seeing several well-defined if preliminary trends:

The use of fuzzers is ubiquitous (incidentally, of named projects, afl-fuzz leads the pack so far); the use of other automated tools, such as static analysis frameworks or concolic execution, appears to be unheard of - despite the undivided attention that such methods receive in academic settings.

Memory diagnostic tools, such as ASAN and Valgrind, are extremely popular - and are an untold success story of vulnerability research.

Most public vulnerability research appears to be done by people who work on it full-time, employed by vendors; hobby work and bug bounties follow closely.

Only a small minority of serious vulnerabilities appear to be disclosed anywhere outside a vendor advisory, making it extremely dangerous to rely on press coverage (or any other casual source) for evaluating personal risk.

Of course, some security work happens out of public view; for example, some enterprises have well-established and meaningful security assurance programs that likely prevent hundreds of security bugs from ever shipping in the reviewed code. Since it is difficult to collect comprehensive and unbiased data about such programs, there is always some speculation involved when discussing the similarities and differences between this work and public security research.

Well, that's it! Watch this space for updates - and let me know if there's anything you'd change or add to the questionnaire.

July 15, 2015

With my previous entry, I wrapped up an impromptu series of articles that chronicled my childhood experiences in Poland and compared the culture I grew up with to the American society that I'm living in today. For the readers who want to be able to navigate the series without scrolling endlessly, I wanted to put together a quick table of contents. Here it goes.

The entry that started it all:

"On journeys" - a personal story recounting my travels from Poland to the US.

Oh, the places you won't go:

The politics of Poland - a retrospective look at the politics of a state emerging from under communist rule,

This is the fourteenth article talking about Poland, Europe, and the United States. To explore the entire collection, start here.

This is destined to be the final entry in the series that opened with a chronicle of my journey from Poland to the United States, only to veer into some of the most interesting social differences between America and the old continent. There are many other topics I could still write about - anything from the school system, to religion, to the driving culture - but with my parental leave coming to an end, I decided to draw a line. I'm sure that this decision will come as a relief for those who read the blog for technical insights, rather than political commentary :-)

The final topic I wanted to talk about is something that truly irks some of my European friends: the belief, held deeply by many Americans, that their country is the proverbial "city upon a hill" - a shining beacon of liberty and righteousness, blessed by the maker with the moral right to shape the world - be it by flexing its economic and diplomatic muscles, or with its sheer military might.

It is an interesting phenomenon, and one that certainly isn't exclusive to the United States. In fact, expansive exceptionalism used to be a very strong theme in the European doctrine long before it emerged in other parts of the Western world. For one, it underpinned many of the British, French, Spanish, and Dutch colonial conquests over the past 500 years. The romanticized notion of Sonderweg played a menacing role in German political discourse, too - eventually culminating in the rise of the Nazi ideology and the onset of World War II. It wasn't until the defeat of the Third Reich when Europe, faced with unspeakable destruction and unprecedented loss of life, made a concerted effort to root out many of its nationalist sentiments and embrace a more harmonious, collective path as a single European community.

America, in a way, experienced the opposite: although it has always celebrated its own rejection of feudalism and monarchism - and in that sense, it had a robust claim to being a pretty unique corner of the world - the country largely shied away from global politics, participating only very reluctantly in World War I, then hoping to wait out World War II until it was attacked by Japan. Its conviction about its special role on the world stage solidified only after it paid a tremendous price to help defeat the Germans, to stop the march of the Red Army through the continent, and to build a prosperous and peaceful Europe; given the remarkable significance of this feat, the post-war sentiments in America may not be hard to understand. In that way, the roots of American exceptionalism differed from its European predecessors, being fueled by a fairly pure sense of righteousness - and not by anger, by a sense of injury, or by territorial demands.

Of course, the new superpower has also learned that its military might has its limits, facing humiliating defeats in some of the proxy wars with the Soviets and seeing an endless spiral of violence in the Middle East. The voices predicting its imminent demise, invariably present from the earliest days of the republic, have grown stronger and more confident over the past 50 years. But the country remains a military and economic powerhouse; and in some ways, its trigger-happy politicians provide a counterbalance to the other superpowers' greater propensity to turn a blind eye to humanitarian crises and to genocide. It's quite possible that without the United States arming its allies and tempering the appetites of Russia, North Korea, or China, the world would have been a less happy place. It's just as likely that the Middle East would have been a happier one.

Some Europeans show indignation that Americans, with their seemingly know-it-all attitudes toward the rest of the world, still struggle to pinpoint Austria or Belgium on the map. It is certainly true that the media in the US pays little attention to the old continent. But deep down inside, European outlets don't necessarily fare a lot better, often focusing their international coverage on the silly and the formulaic: when in Europe, you are far more likely to hear about a daring rescue of a cat stuck on a tree in Wyoming, or about the Creation Museum in Kentucky, than you are to learn anything substantive about Obamacare. (And speaking of Wyoming and Kentucky, pinpointing these places on the map probably wouldn't be the European viewer's strongest suit.) In the end, Europeans who think they understand the intricacies of US politics are probably about as wrong as the average American making sweeping generalizations about Europe.

And on that intentionally self-deprecating note, it's time to wrap the series up.

This is the thirteenth article in a short series about Poland, Europe, and the United States. To explore the entire series, start here.

In one of my earlier posts, I alluded to the pervasive faith in the American Dream: the national ethos of opportunity, self-sufficiency, and free enterprise that influences the political discourse in the United States. The egalitarian promise of the American Dream is simple: no matter who you are, hard work and ingenuity will surely allow you to achieve your dreams. From that, it follows that on your journey, you are not entitled to much; the government will be there to protect your freedom, but it will not give you a head start.

Unlike many of my peers, I suspect that there is truth to the cliche; the United States is a remarkably industrious nation and the home to many of the world's most innovative and fastest-growing businesses. It certainly stays ahead of European economies, still dominated by pre-war industrial conglomerates and former state monopolists, and weighed down by aging populations, highly regulated markets, and inflexible, out-of-control costs. America's mostly-self-made magnates, the likes of Elon Musk, Bill Gates, and Warren Buffett, are also far more likable and seemingly more human than Europe's stereotypical caste of aristocratic families and shadowy oligarchs.

On the flip side, the striking upward mobility of rags-to-riches icons such as Steve Jobs or Oprah Winfrey tends to be the exception, not the rule. Many scholars point out that parents' incomes are highly predictive of the incomes of their children - and that in the US, this effect is more pronounced than in some of the European states. Such studies can be misleading, because in less unequal EU societies, moving to a higher income quintile may confer no substantial change in the quality of life - but ultimately, there is no denying that people who are born into poor families will usually remain poor for the rest of their lives. And with the contemporary trends in outsourcing and industrial automation, the opportunities for unskilled blue collar labor - once a key stepping stone in the story of the American Dream - are shrinking fast.

In contrast with the United States, many in Europe reject Milton Friedman's views on consensual capitalism and hold that it is a basic human right to be able to live a good life or to have an honest and respectable job. This starts with the labor law: in much of the United States, firing an employee can happen in the blink of an eye, for almost any reason - or without giving a reason at all. In Europe, the employer will need a just cause and will go through a lengthy severance period; depending on the circumstances, the company may also be barred from hiring another person to do the same job. Employment benefits follow the same pattern; in the US, paid leave is largely up to employers to decide, with skilled workers being lured with packages that would make Europeans jealous, while many unskilled laborers, especially in the retail and restaurant business, get the short end of that stick.

In Europe, enabling the disadvantaged to contribute to the society and to live fulfilling lives is also a matter of government policy, often implemented through sweeping wealth redistribution - or through public-sector employment orchestrated at a scale that rivals that of quasi-communist China and other authoritarian countries (for example, in France and Greece, about one in three jobs is in the public sector). Such efforts tend to be more successful in small and wealthy Scandinavian countries, where the society can be engineered with more finesse. In many other parts of the continent, systemic, long-term poverty is still rampant, with the government able to do little more than provide people with a lifetime of subsidized basic sustenance and squalid living conditions. Ultimately, when it comes to combating multi-generational poverty, financial aid administered by sprawling national bureaucracies is not always a cure-all.

Perhaps interestingly, the benefits that are most frequently described as inadequate in the US are not as strikingly different from what one would be entitled to in the EU. For example, the minimum wage is quite comparable; it is around $2.60 per hour in Poland, about $3.70 in Greece, some $9.30 in Germany, and in the ballpark of $10.00 in the UK. In the US, the national average hovers somewhere around $8.00, with some of the states with higher costs of living on track to raise it to $10.00 within a year or two; in fact, some progressive municipalities are aiming for $15.

Unemployment and retirement benefits, although certainly not lavish, also follow the same pattern. When it comes to unemployment in particular, in the States, workers are entitled to about half of their previous salary for up to six months - although that period has been routinely extended in times of economic calamity. In Europe, the figures are roughly comparable, with payments in the ballpark of 50-70% of your previous salary, typically extending for somewhere between 6 and 12 months. The main difference is that the upper limit for monthly benefits tends to be significantly lower in the US than in Europe, often putting far greater strain on single-income families in places with high cost of living. In France, the ceiling seems to be around $8,000 a month; in the US, you will probably see no more than $2,000.

Another overlooked dimension of this debate is the unique tradition of charitable giving in the United States - a phenomenon that allows private charities to provide extensive assistance to people in need. Such giving happens on a staggering scale, with citizens donating more than $350 billion a year - more than twenty times the amount donated in the UK. The bulk of that money goes to organizations that provide food, shelter, and counseling to the poor. It is an interesting model, with its own share of benefits and trade-offs: private charities operate on a more local scale and have a far stronger incentive to spend money wisely and provide meaningful aid. On the flip side, their reach is not as universal - and the benefits are not guaranteed.

Many of the conservatives who preach the virtues of the American Dream vastly underestimate the pervasive and lasting consequences of being born into poverty or falling on hard times; they also underestimate the role that unearned privilege and luck played in their own lives. The progressives often do no better, seeing European social democracies as a flawless role model, even in the midst of the enduring sovereign debt crisis in the eurozone; breathlessly reciting knock-off Marxist slogans; and portraying the rich as Mr. Burns-esque villains of unfathomable wealth, motivated by just two goals: to exploit the working class and to avoid paying taxes at any cost. In the end, helping the disadvantaged is a moral imperative - but many ideas sound better on a banner than when implemented as a government policy.

July 14, 2015

This is the twelfth article in a short series about Poland, Europe, and the United States. To explore the entire series, start here.

The American model of government is a complex beast. To a visitor from continental Europe, accustomed to the Napoleonic traditions of civil law and to the political realities of unitary states, the sight can also be a bit perplexing: after all, how does a country of this size prosper with a bitterly partisan, gridlocked Congress that repeatedly fails to even pass the budget on time? And how is it possible that, with an approval rating of 15%, the elected officials are not facing a wave of widespread social unrest?

I suspect that the key to solving this riddle lies in the fact that the United States is still very much a federation of self-governing states - and that most of the decisions that affect the lives of ordinary citizens are not made in Washington. Each and every state establishes its own criminal and civil law, levies its own taxes, runs its own welfare systems, and appoints its own judges - sometimes by popular vote. In fact, the states routinely confer far-reaching powers onto individual municipalities: for example, most towns and counties operate their own, completely autonomous police departments that respond to local officials, not to a career politician on the East Coast.

All this makes the government feel quite different from what you are likely to experience in Europe. Let's stick to law enforcement: in Poland and in some other European states, where the police are a part of a sprawling national bureaucracy, the citizens may have very few options for addressing concerns that do not rise to the level of national debate. In the US, dismantling the entire police force may seem trivial in comparison: the concerned citizens need only get a local newspaper interested in their cause, then band together to recall the local official who is ultimately on the hook. Of course, the independence comes at a price: small, self-funded police departments can be quicker to adopt questionable practices that would not stand up to broader scrutiny, such as racial profiling or the rash application of civil forfeiture.

When it comes to the role of the federal government, the picture is complicated. In principle, the constitution gives it only a couple of duties; for example, the feds control various aspects of interstate commerce, print money, maintain armed forces, and handle foreign affairs. Of course, over the years, their responsibilities have expanded considerably, with the legislators exploiting the vagueness of the concept of "interstate commerce" in all sorts of creative ways. Today, the ongoing debate about the appropriate boundaries of this practice fuels the partisan gridlock in Washington. Modern-day Republicans, swayed by the conservative Tea Party movement, argue that the feds should honor the vision of the Founding Fathers and not meddle in the affairs of the states. The Democratic party, taking notes from the vaguely leftist Occupy campaign, increasingly sees the federal government as a flexible tool for establishing country-wide standards of environmental protection, labor rights, welfare, gun control, education, and other progressive causes historically associated with European social democrats.

On that matter, the voters themselves seem to be split. In polls, a robust majority of Americans declare that their government regulates too many aspects of their lives, tries to solve too many problems, wields too much control, and is inherently less efficient and less fair than private enterprises; about two-thirds of respondents see the feds as more of a problem than a solution, and a shocking 50% believe that the apparatus poses an immediate and serious threat to civil liberties. Yet, despite holding views that would make Milton Friedman proud, when asked about specific programs and entitlements - be it defense spending or Medicare - most voters oppose budget cuts. Ultimately, the equally powerful distrust of big corporations, coupled with the allure of European-style welfare systems, often sends the public into the embrace of big-government progressives who promise to solve a growing range of societal ills using federal-level income redistribution and overarching legislative frameworks.

Either way, owing to the parties' newfound tendency to pander to populist fringes and their inability to compromise, the dysfunctional Congress gets very little love from the average voter; but somewhat paradoxically, the representatives from each and every district are usually well-liked by their own constituents and get reelected with ease. Some blame gerrymandering, but a simpler explanation exists: most of the candidates have strong ties to the districts they represent, many of them having a track record as local politicians or successful businessmen. As a result, they understand what matters to their constituents and often meaningfully work to advance that agenda. They also live and die at the mercy of local newspapers, sometimes lending a hand to the voters who write or call them to resolve bureaucratic hurdles and address other everyday grievances. The practice of getting your representatives involved in such matters is almost unthinkable in Poland, where the slots on local ballots are traded by party officials - and are routinely handed out to people with little or no connection to the region they are supposed to represent.

With American political campaigns financed from private funds, it is often argued that the representatives in Congress are disproportionately influenced by the wealthy few and by a variety of organized lobby groups. This is likely true, although the disparity is at least partly offset by the public's fascination with human interest stories and the tendency to root for the common folk. Ultimately, even the most cynical congresspeople can afford to be persuaded by money only when it comes to the topics that their constituents are fairly indifferent to.

Beyond the legislative and executive branches of the government, some distinct undertones of self-governance are present in the US judicial system, too. The country borrows from the traditions of British common law, rather than the civil law system utilized in much of continental Europe. It embraces the significance of legal precedent and emphasizes humanist values over the strict application of legal codes, with remarkably broad powers vested in the judges and in the juries of peers - up to the notion of jury nullification. Ultimately, the system seeks to limit the consequences of the fallibility of legislators, who often struggle to properly consider all the implications of the laws they pass - trading that risk for the increased risk of fallible courts, who bring their own subconscious biases into the mix.

July 06, 2015

This is the eleventh article in a short series about Poland, Europe, and the United States. To explore the entire series, start here.

There are quite a few corners of the world where the ratio of immigrants to native-born citizens is remarkably high. Many of these places are small or rapidly growing countries - say, Monaco or Qatar. Some others, including several European states, just happen to be on the receiving end of transient, regional demographic shifts; for example, in the past decade, over 500,000 people moved from Poland to the UK. But on the list of foreigner-friendly destinations, the US deserves a special spot: it is an enduring home to by far the largest, most diverse, and quite possibly best-assimilated migrant population in the world.

The inner workings of the American immigration system are a fascinating mess - a tangle of complex regulation, of multiple overlapping bureaucracies, and of quite a few unique social norms. The bureaucratic machine itself is ruthlessly efficient, issuing several million non-tourist visas and processing over 700,000 naturalization applications every year. But the system is also marred by puzzling dysfunction: for example, it allows highly skilled foreign students to attend US universities, sometimes granting them scholarships - only to show many of them the door the day they graduate. It runs a restrictive H-1B visa program that ties foreign workers to their petitioning employers, preventing them from seeking better wages - thus artificially depressing the salaries of some citizen and permanent resident employees who now have to compete with H-1B captives. It also neglects the countless illegal immigrants who, with the tacit approval of legislators and business owners, prop up many facets of the economy - but are denied the ability to join the society even after decades of staying out of trouble and doing honest work.

Despite being fairly picky about the people it admits into the country, in many ways, the United States is still an exceptionally welcoming place: very few other developed nations unconditionally bestow citizenship onto all children born on their soil, run immigration lotteries, or allow newly-naturalized citizens to invite their parents, siblings, and adult children over, no questions asked. At the same time, the US immigration system has a shameful history of giving credence to populist fears about alien cultures - and of implementing exclusionary policies that, at one time or another, targeted anyone from the Irish, to Poles, to Arabs, to people from many parts of Asia or Africa. Some pundits still find this sort of scaremongering fashionable, now seeing Mexico as the new threat to the national identity and to the American way of life. The claim made very little sense 15 years ago - and makes even less sense today, as the migration from the region has dropped precipitously and has been eclipsed by the inflow from other parts of the world.

The contradictions, the dysfunction, and the occasional prejudice aside, what always struck me about the United States is that immigration is simply a part of the nation's identity; the principle of welcoming people from all over the world and giving them a fair chance is an axiom that is seldom questioned in any serious way. When surveyed, around 80% of Americans can identify their own foreign ancestry - and they often do this with enthusiasm and pride. Europe is very different, with national identity being a more binary affair; I always felt that over there, accepting foreigners is seen as a humanitarian duty, not an act of nation-building - and that this attitude makes it harder for the newcomers to truly integrate into the society.

In the US, as a consequence of treating contemporary immigrants as equals, many newcomers face a strong social pressure to make it on their own, to accept American values, and to adopt the American way of life; it is a powerful, implicit social contract that very few dare to willingly renege on. In contrast to this, post-war Europe approaches the matter differently, seeing greater moral value in letting the immigrants preserve their cultural identity and customs, with the state stepping in to help them jumpstart their new lives through a variety of education programs and financial benefits. It is a noble concept, although I'm not sure if the compassionate European approach always worked better than the more ruthless and pragmatic American method: in France and in the United Kingdom, massive migrant populations have been condemned to a life of exclusion and hopelessness, giving rise to social unrest and - in response - to powerful anti-immigrant sentiments and policies. I think this hasn't happened to nearly the same extent in the US, perhaps simply because the social contract is structured in a different way - but then, I know eminently reasonable folks who would disagree.

As for my own country of origin, it occupies an interesting spot. Historically a cosmopolitan nation, Poland has lost much of its foreign population and ethnic minorities to the horrors of World War II and to the policies implemented within the Soviet Bloc - eventually becoming one of the most culturally and ethnically homogeneous nations on the continent. Today, migrants comprise less than 1% of its populace, and most of them come from the neighboring, culturally similar Slavic states. Various flavors of xenophobia run deep in the society, playing right into the recent pan-European anti-immigration sentiments. As I'm writing this, Poland is fighting the European Commission tooth and nail not to take three thousand asylum seekers from Syria; many politicians and pundits want to first make sure that all the refugees are of Christian faith. For many Poles, reasonable concerns over non-assimilation and extremism blend with a wholesale distrust of foreign cultures.