lcamtuf's blog

March 03, 2018

Bug bounties end up in the news with some regularity, usually for the wrong reasons. I've been itching to write
about that for a while - but instead of dwelling on the mistakes of the bygone days, I figured it may be better to
talk about some of the ways to get vulnerability rewards right.

What do you get out of bug bounties?

There are plenty of differing views, but I like to think of such programs
simply as a bid on researchers' time. In the most basic sense, you get three benefits:

Improved ability to detect bugs in production before they become major incidents.

A comparatively unbiased feedback loop to help you prioritize and measure other security work.

A robust talent pipeline for when you need to hire.

What don't bug bounties offer?

You don't get anything resembling a comprehensive security program or a systematic assessment of your platforms.
Researchers end up looking for bugs that offer favorable effort-to-payoff ratios given their skills and the
very imperfect information they have about your enterprise. In other words, you may end up with a hundred
people looking for XSS and just one person looking for RCE.

Your reward structure can steer them toward the targets and bugs you care about, but it's difficult to fully
eliminate this inherent skew. There's only so far you can jack up your top-tier rewards, and only so far you can
go lowering the bottom-tier ones.

Don't you have to outcompete the black market to get all the "good" bugs?

There is a free market price discovery component to it all: if you're not getting the engagement you
were hoping for, you should probably consider paying more.

That said, there are going to be researchers who'd rather hurt you than work for you, no matter how much you pay;
you don't have to win them over, and you don't have to outspend every authoritarian government or
every crime syndicate. A bug bounty is effective simply if it attracts enough eyeballs to make bugs statistically
harder to find, and reduces the useful lifespan of any zero-days in black market trade. Plus, most
researchers don't want their work to be used to crack down on dissidents in Egypt or Vietnam.

Another factor is that you're paying for different things: a black market buyer probably wants a reliable exploit
capable of delivering payloads, and then demands silence for months or years to come; a vendor-run
bug bounty program is usually perfectly happy with a reproducible crash and doesn't mind a researcher blogging
about their work.

In fact, while money is important, you will probably find out that it's not enough to retain your top talent;
many folks want bug bounties to be more than a business transaction, and find a lot of value in having a close
relationship with your security team, comparing notes, and growing together. Fostering that partnership can
be more important than adding another $10,000 to your top reward.

How do I prevent it all from going horribly wrong?

Bug bounties are an unfamiliar beast to most lawyers and PR folks, so it's natural to be wary and to try to plan
for every eventuality with pages and pages of impenetrable rules and fine-print legalese.

This is generally unnecessary: there is a strong self-selection bias, and almost every participant in a
vulnerability reward program will be coming to you in good faith. The more friendly, forthcoming, and
approachable you seem, and the more you treat them like peers, the more likely it is for your relationship to stay
positive. On the flip side, there is no faster way to make enemies than to make a security researcher feel that they
are now talking to a lawyer or to the PR dept.

Most people have strong opinions on disclosure policies; instead of imposing your own views, strive to patch reported bugs
reasonably quickly, and almost every reporter will play along. Demand that researchers cancel conference appearances,
take down blog posts, or sign NDAs, and you will sooner or later end up in the news.

But what if that's not enough?

As with any business endeavor, mistakes will happen; total risk avoidance is seldom the answer. Learn to sincerely
apologize for mishaps; it's not a sign of weakness to say "sorry, we messed up". And you will almost certainly not end
up in the courtroom for doing so.

It's good to foster a healthy and productive relationship with the community, so that they come to your defense when
something goes wrong. Encouraging people to disclose bugs and talk about their experiences is one way of accomplishing that.

What about extortion?

You should structure your program to naturally discourage bad behavior and make it stand out like a sore thumb.
Require bona fide reports with complete technical details before any reward decision is made by a panel of named peers;
and make it clear that you never demand non-disclosure as a condition of getting a reward.

To avoid researchers accidentally putting themselves in awkward situations, have clear rules around data exfiltration
and lateral movement: assure them that you will always pay based on the worst-case impact of their findings; in exchange,
ask them to stop as soon as they get a shell and never access any data that isn't their own.

So... are there any downsides?

Yep. Other than souring your relationship with the community if you implement your program wrong, the other consideration
is that bug bounties tend to generate a lot of noise from well-meaning but less-skilled researchers.

When this happens, do not get frustrated and do not penalize such participants; instead, help them grow. Consider
publishing educational articles, giving advice on how to investigate and structure reports, or
offering free workshops every now and then.

The other downside is cost; although bug bounties tend to offer far more bang for your buck than your average penetration
test, they are more random. The annual expenses tend to be fairly predictable, but there is always
some possibility of having to pay multiple top-tier rewards in rapid succession. This is the kind of uncertainty that
many mid-level budget planners react badly to.

Finally, you need to be able to fix the bugs you receive. It would be nuts to prefer not knowing about the
vulnerabilities in the first place - but once you invite the research, the clock starts ticking and you need to
ship fixes reasonably fast.

So... should I try it?

There are folks who enthusiastically advocate for bug bounties in every conceivable situation, and people who dislike them
with fierce passion; both sentiments are usually strongly correlated with the line of business they are in.

In reality, bug bounties are not a cure-all, and there are some ways to make them ineffectual or even dangerous.
But they are not as risky or expensive as most people suspect, and when done right, they can actually be fun for your
team, too. You won't know for sure until you try.

February 24, 2018

Product security is an interesting animal: it is a uniquely cross-disciplinary endeavor that spans policy, consulting,
process automation, in-depth software engineering, and cutting-edge vulnerability research. And in contrast to many
other specializations in our field of expertise - say, incident response or network security - we have virtually no
time-tested and coherent frameworks for setting it up within a company of any size.

In my previous post, I shared some thoughts
on nurturing technical organizations and cultivating the right kind of leadership within. Today, I figured it would
be fitting to follow up with several notes on what I learned about structuring product security work - and about actually
making the effort count.

The "comfort zone" trap

For security engineers, knowing your limits is a sought-after quality: there is nothing more dangerous than a security
expert who goes off script and starts dispensing authoritative-sounding but bogus advice on a topic they know very
little about. But that same quality can be destructive when it prevents us from growing beyond our most familiar role: that of
a critic who pokes holes in other people's designs.

The role of a resident security critic lends itself all too easily to a sense of supremacy: the mistaken
belief that our cognitive skills exceed the capabilities of the engineers and product managers who come to us for help
- and that the cool bugs we file are the ultimate proof of our special gift. We start taking pride in the mere act
of breaking somebody else's software - and then write scathing but ineffectual critiques addressed to executives,
demanding that they either put a stop to a project or sign off on a risk. And hey, in the latter case, they better
brace for our triumphant "I told you so" at some later date.

Of course, escalations of this type have their place, but they need to be a very rare sight; when practiced routinely, they are a telltale
sign of a dysfunctional team. We might be failing to think up viable alternatives that are in tune with business or engineering needs; we might
be very unpersuasive, failing to communicate with other rational people in a language they understand; or it might be that our tolerance for risk
is badly out of whack with the rest of the company. Whatever the cause, I've seen high-level escalations where the security team
spoke of valiant efforts to resist inexplicably awful design decisions or data sharing setups; and where product leads in turn talked about
pressing business needs randomly blocked by obstinate security folks. Sometimes, simply having them compare their notes would be enough to arrive
at a technical solution - such as sharing a less sensitive subset of the data at hand.

To be effective, any product security program must be rooted in a partnership with the rest of the company, focused on helping them get stuff done
while eliminating or reducing security risks. To combat the toxic us-versus-them mentality, I found it helpful to have some team members with
software engineering backgrounds, even if it's the ownership of a small open-source project or so. This can broaden our horizons, helping us see
that we all make the same mistakes - and that not every solution that sounds good on paper is usable once we code it up.

Getting off the treadmill

All security programs involve a good chunk of operational work. For product security, this can be a combination of product launch reviews, design consulting requests, incoming bug reports, or compliance-driven assessments of some sort. And curiously, such reactive work also has the property of gradually expanding to consume all the available resources on a team: next year is bound to bring even more review requests, even more regulatory hurdles, and even more incoming bugs to triage and fix.

Being more tractable, such routine tasks are also more readily enshrined in SDLs, SLAs, and all kinds of other official documents that are often mistaken for a mission statement that justifies the existence of our teams. Soon, instead of explaining to a developer why they should fix a particular problem right away, we end up pointing them to page 17 in our severity classification guideline, which defines that "severity 2" vulnerabilities need to be resolved within a month. Meanwhile, another policy may be telling them that they need to run a fuzzer or a web application scanner for a particular number of CPU-hours - no matter whether it makes sense or whether the job is set up right.

To run a product security program that scales sublinearly, stays abreast of future threats, and doesn't erect bureaucratic speed bumps just for the sake of it, we need to recognize this inherent tendency for operational work to take over - and we need to rein it in. No matter what the last year's policy says, we usually don't need to be doing security reviews with a particular cadence or to a particular depth; if we need to scale them back 10% to staff a two-quarter project that fixes an important API and squashes an entire class of bugs, it's a short-term risk we should feel empowered to take.

As noted in my earlier post, I find contingency planning to be a valuable tool in this regard: why not ask ourselves how the team would cope if the workload went up another 30%, but bad financial results precluded any team growth? It's actually fun to think about such hypotheticals ahead of time - and hey, if the ideas sound good, why not try them out today?

Living for a cause

It can be difficult to understand if our security efforts are structured and prioritized right; when faced with such uncertainty, it is natural to stick to the safe fundamentals - investing most of our resources into the very same things that everybody else in our industry appears to be focusing on today.

I think it's important to combat this mindset - and if so, we might as well tackle it head on. Rather than focusing on tactical objectives and policy documents, try to write down a concise mission statement explaining why you are a team in the first place, what specific business outcomes you are aiming for, how you prioritize them, and how you want it all to change in a year or two. It should be a fluid narrative that reads right and that everybody on your team can take pride in; my favorite way of starting the conversation is telling folks that we could always have a new VP tomorrow - and that the VP's first order of business could be asking, "why do you have so many people here and how do I know they are doing the right thing?". It's a playful but realistic framing device that motivates people to get it done.

In general, a comprehensive product security program should probably start with the assumption that no matter how many resources we have at our disposal, we will never be able to stay in the loop on everything that's happening across the company - and even if we did, we're not going to be able to catch every single bug. It follows that one of our top priorities for the team should be making sure that bugs don't happen very often; a scalable way of getting there is equipping engineers with intuitive and usable tools that make it easy to perform common tasks without having to worry about security at all. Examples include standardized, managed containers for production jobs; safe-by-default APIs, such as strict contextual autoescaping for XSS or type safety for SQL; security-conscious style guidelines; or plug-and-play libraries that take care of common crypto or ACL enforcement tasks.
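To make the safe-by-default idea concrete, here's a minimal sketch using only Python's standard library (sqlite3 and html; no specific framework from the text is implied): when the convenient API is also the secure one, engineers get parameterization and escaping without having to think about security at all.

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Parameter binding: the driver keeps data out of the query structure,
# so hostile input can never rewrite the SQL.
hostile = "Robert'); DROP TABLE users;--"
conn.execute("INSERT INTO users (name) VALUES (?)", (hostile,))
assert conn.execute("SELECT name FROM users").fetchone()[0] == hostile

# Contextual escaping for HTML output: the same principle, applied to XSS.
payload = "<script>alert(1)</script>"
rendered = "<p>Hello, {}</p>".format(html.escape(payload))
assert "<script>" not in rendered
```

The point is not the specific calls, but the shape of the API: the easiest way to get the job done is also the one that can't introduce the bug.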

Of course, not all problems can be addressed on framework level, and not every engineer will always reach for the right tools. Because of this, the next principle that I found to be worth focusing on is containment and mitigation: making sure that bugs are difficult to exploit when they happen, or that the damage is kept in check. The solutions in this space can range from low-level enhancements (say, hardened allocators or seccomp-bpf sandboxes) to client-facing features such as browser origin isolation or Content Security Policy.
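As a toy illustration of the containment idea, here's what a restrictive Content Security Policy might look like; the directive values are illustrative, not a recommendation for any particular product.

```python
# Build a restrictive Content-Security-Policy value: even if a markup
# injection slips through, inline and third-party scripts won't execute.
csp = "; ".join([
    "default-src 'none'",   # deny everything not explicitly allowed below
    "script-src 'self'",    # scripts only from our own origin
    "style-src 'self'",
    "img-src 'self'",
])

# In a real web application, this string would be attached to every HTTP
# response, e.g.: response.headers["Content-Security-Policy"] = csp
```

The design choice mirrors the paragraph above: the mitigation doesn't prevent the bug, it just keeps the blast radius small when the bug inevitably happens.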

The usual consulting, review, and outreach tasks are an important facet of a product security program, but probably shouldn't be the sole focus of your team. It's also best to avoid undue emphasis on vulnerability showmanship: while valuable in some contexts, it creates a hypercompetitive environment that may be hostile to less experienced team members - not to mention, squashing individual bugs offers very limited value if the same issue is likely to be reintroduced into the codebase the next day. I like to think of security reviews as a teaching opportunity instead: it's a way to raise awareness, form partnerships with engineers, and help them develop lasting habits that reduce the incidence of bugs. Metrics to understand the impact of your work are important, too; if your engagements are seen mostly as yet another layer of red tape, product teams will stop reaching out to you for advice.

The other tenet of a healthy product security effort requires us to recognize that, at scale and given enough time, every defense mechanism is bound to fail - and so, we need ways to prevent bugs from turning into incidents. The efforts in this space may range from developing product-specific signals for the incident response and monitoring teams; to offering meaningful vulnerability reward programs and nourishing a healthy and respectful relationship with the research community; to organizing regular offensive exercises in hopes of spotting bugs before anybody else does.

Oh, one final note: an important feature of a healthy security program is the existence of multiple feedback loops that help you spot problems without the need to micromanage the organization and without being deathly afraid of taking chances. For example, the data coming from bug bounty programs, if analyzed correctly, offers a wonderful way to alert you to systemic problems in your codebase - and later on, to measure the impact of any remediation and hardening work.
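As a sketch of that feedback loop (the report data below is entirely made up), simply tallying triaged bounty reports by bug class can surface the systemic weaknesses that deserve a framework-level fix rather than one-off patches:

```python
from collections import Counter

# Hypothetical triaged bounty reports: (component, bug class).
reports = [
    ("webapp", "xss"), ("webapp", "xss"), ("api", "ssrf"),
    ("webapp", "xss"), ("api", "idor"), ("billing", "sqli"),
]

# A cluster within one bug class hints at a missing systemic defense --
# e.g. repeated XSS suggests adopting strict contextual autoescaping.
by_class = Counter(cls for _, cls in reports)
assert by_class.most_common(1)[0] == ("xss", 3)
```

Run the same tally after a hardening project ships, and the delta becomes a rough measure of whether the remediation actually worked.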

My career is a different story. Over the past two decades and change, I went from writing CGI scripts and setting up WAN routers for a chain of shopping malls, to doing pentests for institutional customers, to designing a series of network monitoring platforms and handling incident response for a big telco, to building and running the product security org for one of the largest companies in the world. It's been an interesting ride - and now that I'm on the hook for the well-being of about 100 folks across more than a dozen subteams around the world, I've been thinking a bit about the lessons learned along the way.

Of course, I'm a bit hesitant to write such a post: sometimes, your efforts pan out not because of your approach, but despite it - and it's possible to draw precisely the wrong conclusions from such anecdotes. Still, I'm very proud of the culture we've created and the caliber of folks working on our team. It happened through the work of quite a few talented tech leads and managers even before my time, but it did not happen by accident - so I figured that my observations may be useful for some, as long as they are taken with a grain of salt.

But first, let me start on a somewhat somber note: what nobody tells you is that one's level on the leadership ladder tends to be inversely correlated with several measures of happiness. The reason is fairly simple: as you get more senior, a growing number of people will come to you expecting you to solve increasingly fuzzy and challenging problems - and you will no longer be patted on the back for doing so. This should not scare you away from such opportunities, but it definitely calls for a particular mindset: your motivation must come from within. Look beyond the fight-of-the-day; find satisfaction in seeing how far your teams have come over the years.

With that out of the way, here's a collection of notes, loosely organized into three major themes.

The curse of a techie leader

Perhaps the most interesting observation I have is that for a person coming from a technical background, building a healthy team is first and foremost about the subtle art of letting go.

There is a natural urge to stay involved in any project you've started or helped improve; after all, it's your baby: you're familiar with all the nuts and bolts, and nobody else can do this job as well as you. But as your sphere of influence grows, this becomes a choke point: there are only so many things you could be doing at once. Just as importantly, the project-hoarding behavior robs more junior folks of the ability to take on new responsibilities and bring their own ideas to life. In other words, when done properly, delegation is not just about freeing up your plate; it's also about empowerment and about signalling trust.

Of course, when you hand your project over to somebody else, the new owner will initially be slower and more clumsy than you; but if you pick the new leads wisely, give them the right tools and the right incentives, and don't make them deathly afraid of messing up, they will soon excel at their new jobs - and be grateful for the opportunity.

A related affliction of many accomplished techies is the conviction that they know the answers to every question even tangentially related to their domain of expertise; that belief is coupled with a burning desire to have the last word in every debate. When practiced in moderation, this behavior is fine among peers - but for a leader, one of the most important skills to learn is knowing when to keep your mouth shut: people learn a lot better by experimenting and making small mistakes than by being schooled by their boss, and they often try to read into your passing remarks. Don't run an authoritarian camp focused on total risk aversion or perfectly efficient resource management; just set reasonable boundaries and exit conditions for experiments so that they don't spiral out of control - and be amazed by the results every now and then.

Death by planning

When nothing is on fire, it's easy to get preoccupied with maintaining the status quo. If your current headcount or budget request lists all the same projects as last year's, or if you ever find yourself ending an argument by deferring to a policy or a process document, it's probably a sign that you're getting complacent. In security, complacency usually ends in tears - and when it doesn't, it leads to burnout or boredom.

In my experience, your goal should be to develop a cadre of managers or tech leads capable of coming up with clever ideas, prioritizing them among themselves, and seeing them to completion without your day-to-day involvement. In your spare time, make it your mission to challenge them to stay ahead of the curve. Ask your vendor security lead how they'd streamline their work if they had a 40% jump in the number of vendors but no extra headcount; ask your product security folks what's the second line of defense or containment should your primary defenses fail. Help them get good ideas off the ground; set some mental success and failure criteria to be able to cut your losses if something does not pan out.

Of course, malfunctions happen even in the best-run teams; to spot trouble early on, instead of overzealous project tracking, I found it useful to encourage folks to run a data-driven org. I'd usually ask them to imagine that a brand new VP shows up in our office and, as his first order of business, asks "why do you have so many people here and how do I know they are doing the right things?". Not everything in security can be quantified, but hard data can validate many of your assumptions - and will alert you to unseen issues early on.

When focusing on data, it's important not to treat pie charts and spreadsheets as an end in themselves; if you run a security review process for your company, your CSAT scores are going to reach 100% if you just rubberstamp every launch request within ten minutes of receiving it. Make sure you're asking the right questions; instead of "how satisfied are you with our process", try "is your product better as a consequence of talking to us?"

Whenever things are not progressing as expected, it is a natural instinct to fall back to micromanagement, but it seldom truly cures the ill. It's probable that your team disagrees with your vision or its feasibility - and that you're either not listening to their feedback, or they don't think you'd care. It's good to assume that most of your employees are as smart or smarter than you; barking your orders at them more loudly or more frequently does not lead anyplace good. It's good to listen to them and either present new facts or work with them on a plan you can all get behind.

In some circumstances, all that's needed is honesty about the business trade-offs, so that your team feels like your "partner in crime", not a victim of circumstance. For example, we'd tell our folks that by not falling behind on basic, unglamorous work, we earn the trust of our VPs and SVPs - and that this translates into the independence and the resources we need to pursue more ambitious ideas without being told what to do; it's how we game the system, so to speak. Oh: leading by example is a pretty powerful tool at your disposal, too.

The human factor

I've come to appreciate that hiring decent folks who can get along with others is far more important than trying to recruit conference-circuit superstars. In fact, hiring superstars is a decidedly hit-and-miss affair: while certainly not a rule, there is a proportion of folks who put the maintenance of their celebrity status ahead of job responsibilities or the well-being of their peers.

For teams, one of the most powerful demotivators is a sense of unfairness and disempowerment. This is where tech-originating leaders can shine, because their teams usually feel that their bosses understand and can evaluate the merits of the work. But it also means you need to be decisive and actually solve problems for them, rather than just letting them vent. You will need to make unpopular decisions every now and then; in such cases, I think it's important to move quickly, rather than prolonging the uncertainty - but it's also important to sincerely listen to concerns, explain your reasoning, and be frank about the risks and trade-offs.

Whenever you see a clash of personalities on your team, you probably need to respond swiftly and decisively; being right should not justify being a bully. If you don't react to repeated scuffles, your best people will probably start looking for other opportunities: it's draining to put up with constant pie fights, no matter if the pies are thrown straight at you or if you just need to duck one every now and then.

More broadly, personality differences seem to be a much better predictor of conflict than any technical aspects underpinning a debate. As a boss, you need to identify such differences early on and come up with creative solutions. Sometimes, all you need is taking some badly-delivered but valid feedback and having a conversation with the other person, asking some questions that can help them reach the same conclusions without feeling that their worldview is under attack. Other times, the only path forward is making sure that some folks simply don't run into each other for a while.

Finally, dealing with low performers is a notoriously hard but important part of the game. Especially within large companies, there is always the temptation to just let it slide: sideline a struggling person and wait for them to either get over their issues or leave. But this sends an awful message to the rest of the team; for better or worse, fairness is important to most. Simply firing the low performers is seldom the best solution, though; successful recovery cases are what sets great managers apart from the average ones.

Oh, one more thought: people in leadership roles have their allegiance divided between the company and the people who depend on them. The obligation to the company is more formal, but the impact you have on your team is longer-lasting and more intimate. When the obligations to the employer and to your team collide in some way, make sure you can make the right call; it might be one of the most consequential decisions you'll ever make.

December 13, 2017

♪ Used to have a little now I have a lot
I'm still, I'm still Jenny from the block
chain ♪

For all that has been written about Bitcoin and its ilk, it is curious that the focus is almost solely on what the cryptocurrencies are supposed to be. Technologists wax lyrical about the potential for blockchains to change almost every aspect of our lives. Libertarians and paleoconservatives ache for the return to "sound money" that can't be conjured up at the whim of a bureaucrat. Mainstream economists wag their fingers, proclaiming that a proper currency can't be deflationary, that it must maintain a particular velocity, or that the government must be able to nip crises of confidence in the bud. And so on.

Much of this may be true, but the proponents of cryptocurrencies should recognize that an appeal to consequences is not a guarantee of good results. The critics, on the other hand, would be best served to remember that they are drawing far-reaching conclusions about the effects of modern monetary policies based on a very short and tumultuous period in history.

In this post, my goal is to ditch most of the dogma, talk a bit about the origins of money - and then see how "crypto" fits the bill.

1. The prehistory of currencies

The emergence of money is usually explained in a very straightforward way. You know the story: a farmer raised a pig, a cobbler made a shoe. The cobbler needed to feed his family while the farmer wanted to keep his feet warm - and so they met to exchange the goods on mutually beneficial terms. But as the tale goes, the barter system had a fatal flaw: sometimes, a farmer wanted a cooking pot, a potter wanted a knife, and a blacksmith wanted a pair of pants. To facilitate increasingly complex, multi-step exchanges without requiring dozens of people to meet face to face, we came up with an abstract way to represent value - a shiny coin guaranteed to be accepted by every tradesman.

It is a nice parable, but it probably isn't very true. It seems far more plausible that early societies relied on the concept of debt long before the advent of currencies: an informal tally or a formal ledger would be used to keep track of who owes what to whom. The concept of debt, closely associated with one's trustworthiness and standing in the community, would have enabled a wide range of economic activities: debts could be paid back over time, transferred, renegotiated, or forgotten - all without having to engage in spot barter or to mint a single coin. In fact, such non-monetary, trust-based, reciprocal economies are still common in closely-knit communities: among families, neighbors, coworkers, or friends.

In such a setting, primitive currencies probably emerged simply as a consequence of having a system of prices: a cow being worth a particular number of chickens, a chicken being worth a particular number of beaver pelts, and so forth. Formalizing such relationships by settling on a single, widely-known unit of account - say, one chicken - would make it more convenient to transfer, combine, or split debts; or to settle them in alternative goods.

Contrary to popular belief, for communal ledgers, the unit of account probably did not have to be particularly desirable, durable, or easy to carry; it was simply an accounting tool. And indeed, we sometimes run into fairly unusual units of account even in modern times: for example, cigarettes can be the basis of a bustling prison economy even when most inmates don't smoke and there are not that many packs to go around.

2. The age of commodity money

In the end, the development of coinage might have had relatively little to do with communal trade - and far more with the desire to exchange goods with strangers. When dealing with an unfamiliar or hostile tribe, the concept of a chicken-denominated ledger does not hold up: the other side might be disinclined to honor its obligations - and get away with it, too. To settle such problematic trades, we needed a "spot" medium of exchange that would be easy to carry and authenticate, had a well-defined value, and a near-universal appeal. Throughout much of the recorded history, precious metals - predominantly gold and silver - proved to fit the bill.

In the most basic sense, such commodities could be seen as a tool to reconcile debts across societal boundaries, without necessarily replacing any local units of account. An obligation, denominated in some local currency, would be created on the buyer's side in order to procure the metal for the trade. The proceeds of the completed transaction would in turn allow the seller to settle their own local obligations that arose from having to source the traded goods. In other words, our wondrous chicken-denominated ledgers could coexist peacefully with gold - and when commodity coinage finally took hold, it's likely that in everyday trade, precious metals served more as a useful abstraction than a precise store of value. A "silver chicken" of sorts.

Still, the emergence of commodity money had one interesting side effect: it decoupled the unit of debt - a "claim on the society", in a sense - from any moral judgment about its origin. A piece of silver would buy the same amount of food, whether earned through hard labor or won in a drunken bet. This disconnect remains a central theme in many of the debates about social justice and unfairly earned wealth.

3. The State enters the game

If there is one advantage of chicken ledgers over precious metals, it's that all chickens look and cluck roughly the same - something that can't be said of every nugget of silver or gold. To cope with this problem, we needed to shape raw commodities into pieces of a more predictable shape and weight; a trusted party could then stamp them with a mark to indicate the value and the quality of the coin.

At first, the task of standardizing coinage rested with private parties - but the responsibility was soon assumed by the State. The advantages of this transition seemed clear: a single, widely-accepted and easily-recognizable currency could be now used to settle virtually all private and official debts.

Alas, in what deserves the dubious distinction of being one of the earliest examples of monetary tomfoolery, some States succumbed to the temptation of fiddling with the coinage to accomplish anything from feeding the poor to waging wars. In particular, it would be common to stamp coins with the same face value but a progressively lower content of silver and gold. Perhaps surprisingly, the strategy worked remarkably well; at least in the times of peace, most people cared about the value stamped on the coin, not its precise composition or weight.

And so, over time, representative money was born: sooner or later, most States opted to mint coins from nearly-worthless metals, or print banknotes on paper and cloth. This radically new currency was accompanied by a simple pledge: the State offered to redeem it at any time for its nominal value in gold.

Of course, the promise was largely illusory: the State did not have enough gold to honor all the promises it had made. Still, as long as people had faith in their rulers and the redemption requests stayed low, the fundamental mechanics of this new representative currency remained roughly the same as before - and in some ways, were an improvement in that they lessened the insatiable demand for a rare commodity. Just as importantly, the new money still enabled international trade - using the underlying gold exchange rate as a reference point.

4. Fractional reserve banking and fiat money

For much of recorded history, banking was an exceptionally dull affair, not much different from running a communal chicken ledger of old. But then, something truly marvelous happened in the 17th century: around that time, many European countries witnessed the emergence of fractional-reserve banks.

These private ventures operated according to a simple scheme: they accepted people's coin
for safekeeping, promising to pay a premium on every deposit made. To meet these obligations and to make a profit, the banks then
used the pooled deposits to make high-interest loans to other folks. The financiers figured out that under normal circumstances
and when operating at a sufficient scale, they needed only a very modest reserve - well under 10% of all deposited money - to be
able to service the usual volume and size of withdrawals requested by their customers. The rest could be loaned out.

The very curious consequence of fractional-reserve banking was that it pulled new money out of thin air.
The funds were simultaneously accounted for in the statements shown to the depositor, evidently available for withdrawal or
transfer at any time; and given to third-party borrowers, who could spend them on just about anything. Heck, the borrowers could
deposit the proceeds in another bank, creating even more money along the way! Whatever they did, the sum of all funds in the monetary
system now appeared much higher than the value of all coins and banknotes issued by the government - let alone the amount of gold
sitting in any vault.
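The arithmetic behind this money-multiplying effect can be sketched in a few lines. The toy model below assumes a fixed 10% reserve ratio and that every loan is promptly re-deposited at another bank - real banking only loosely approximates this, but the limiting behavior is the same:

```python
# Toy model of fractional-reserve money creation. Assumes a fixed reserve
# ratio and that every loan is re-deposited in full; real banks are messier.

def broad_money(initial_deposit, reserve_ratio, rounds=100):
    """Each round, the bank keeps the required reserve and lends out the
    rest; the loan is re-deposited elsewhere, and the cycle repeats."""
    total_deposits = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total_deposits += deposit
        deposit *= (1 - reserve_ratio)  # portion loaned out, then re-deposited
    return total_deposits

# 1,000 coins of "base" money with 10% reserves: the total of all bank
# balances converges toward 1000 / 0.10 = 10,000 - ten times the coinage.
print(round(broad_money(1000, 0.10)))
```

The series is geometric, so the apparent money supply converges to the initial deposit divided by the reserve ratio - which is why a sub-10% reserve requirement can make "broad" money dwarf the underlying coins and banknotes.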

Of course, no new money was being created in any physical sense: all the banks were doing was engaging in a bit of creative accounting - the sort that would probably land you in jail if you attempted it today in any other comparably vital field of enterprise. If too many depositors were to ask for their money back, or if too many loans were to go bad, the banking system would fold. Fortunes would evaporate in a puff of accounting smoke, and with the disappearance of vast quantities of quasi-fictitious ("broad") money, the wealth of the entire nation would shrink.

In the early 20th century, the world kept witnessing just that; a series of bank runs and economic contractions forced the governments around the globe to act. At that stage, outlawing fractional-reserve banking was no longer politically or economically tenable; a simpler alternative was to let go of gold and move to fiat money - a currency implemented as an abstract social construct, with no predefined connection to the physical realm. A new breed of economists saw the role of the government not in trying to peg the value of money to an inflexible commodity, but in manipulating its supply to smooth out economic hiccups or to stimulate growth.

(Contrary to popular belief, such manipulation is usually not done by printing new banknotes; more sophisticated methods, such as lowering reserve requirements for bank deposits or enticing banks to invest their deposits into government-issued securities, are the preferred route.)

The obvious peril of fiat money is that in the long haul, its value is determined strictly by people's willingness to accept a piece of paper in exchange for their trouble; that willingness, in turn, is conditioned solely on their belief that the same piece of paper would buy them something nice a week, a month, or a year from now. It follows that a simple crisis of confidence could make a currency nearly worthless overnight. A prolonged period of hyperinflation and subsequent austerity in Germany and Austria was one of the precipitating factors that led to World War II. In more recent times, dramatic episodes of hyperinflation plagued the fiat currencies of Israel (1984), Mexico (1988), Poland (1990), Yugoslavia (1994), Bulgaria (1996), Turkey (2002), Zimbabwe (2009), Venezuela (2016), and several other nations around the globe.

For the United States, the switch to fiat money came relatively late, in 1971. To stop the dollar from plunging like a rock, the Nixon administration employed a clever trick: they ordered the freeze of wages and prices for the 90 days that immediately followed the move. People went on about their lives and paid the usual for eggs or milk - and by the time the freeze ended, they were accustomed to the idea that the "new", free-floating dollar was worth about the same as the old, gold-backed one. A robust economy and favorable geopolitics did the rest, and so far, the American adventure with fiat currency has been rather uneventful - perhaps except for the fact that the price of gold itself skyrocketed from $35 per troy ounce in 1971 to $850 in 1980 (or, from $210 to $2,500 in today's dollars).

Well, one thing did change: now better positioned to freely tamper with the supply of money, the regulators, in accord with the bankers, adopted a policy of creating it at a rate that slightly outstripped the organic growth in economic activity. They did this to induce a small, steady degree of inflation, believing that it would discourage people from hoarding cash and force them to reinvest it for the betterment of society. Some critics like to point out that such a policy functions as a "backdoor" tax on savings, and that it happens to align with the regulators' less noble interests. Either way, in the US and most other developed nations, the purchasing power of any money kept under a mattress will drop at a rate of somewhere between 2 and 10% a year.
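A quick back-of-the-envelope calculation shows what those bounds mean for cash savings over a couple of decades; the 2% and 10% figures are the ones quoted above, and the rest is plain compounding:

```python
# Purchasing power of cash under steady inflation - simple compounding,
# using the 2% and 10% bounds mentioned in the text.

def purchasing_power(amount, annual_inflation, years):
    return amount / (1 + annual_inflation) ** years

for rate in (0.02, 0.10):
    left = purchasing_power(100.0, rate, 20)
    print(f"{rate:.0%} inflation: $100 under the mattress buys "
          f"${left:.2f} worth of goods in 20 years")
```

Even at the gentle end of the range, a generation is enough to erase a third of the mattress money's value; at the harsh end, most of it is gone.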

5. So what's up with Bitcoin?

Well... countless tomes have been written about the nature and the optimal characteristics of government-issued fiat currencies. Some heterodox economists, notably including Murray Rothbard, have also explored the topic of privately-issued, decentralized, commodity-backed currencies. But Bitcoin is a wholly different animal.

In essence, BTC is a global, decentralized fiat currency: it has no (recoverable) intrinsic value, no central authority to issue it or define its exchange rate, and it has no anchoring to any historical reference point - a combination that until recently seemed nonsensical and escaped any serious scrutiny. It does the unthinkable by employing three clever tricks:

It allows anyone to create new coins, but only by solving brute-force computational challenges that get more difficult as time goes by,

It prevents unauthorized transfer of coins by employing public key cryptography to sign off transactions, with only the authorized holder of a coin knowing the correct key,

It prevents double-spending by using a distributed public ledger ("blockchain"), recording the chain of custody for coins in a tamper-proof way.
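The chaining idea behind the ledger can be sketched in a few lines of Python. This toy model illustrates only the tamper-evidence property: each block commits to the hash of its predecessor, so altering any past entry invalidates every block after it. Actual Bitcoin blocks also carry proof-of-work, transaction signatures, and Merkle trees, none of which are modeled here:

```python
# Minimal sketch of a tamper-evident ledger: each block records the hash
# of the previous block, so rewriting history breaks the chain.
import hashlib
import json

def block_hash(block):
    # Canonical serialization so the digest is stable across runs.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "tx": transactions})

chain = []
append(chain, ["alice pays bob 1 coin"])
append(chain, ["bob pays carol 1 coin"])

# Tampering with an old block breaks the link recorded in its successor:
chain[0]["tx"] = ["alice pays mallory 1 coin"]
print(chain[1]["prev"] == block_hash(chain[0]))  # False - tampering detected
```

In the real system, the chain is replicated across thousands of nodes and extended only by miners who solve the proof-of-work puzzle, which is what makes the tamper-evidence actually enforceable.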

The blockchain is often described as the most important feature of Bitcoin, but in some ways, its importance is overstated. The idea of a currency that does not rely on a centralized transaction clearinghouse is what helped propel the platform into the limelight - mostly because of its novelty and the perception that it is less vulnerable to government meddling (although the government is still free to track down, tax, fine, or arrest any participants). On the flip side, the everyday mechanics of BTC would not be fundamentally different if all the transactions had to go through Bitcoin Bank, LLC.

A more striking feature of the new currency is the incentive structure surrounding the creation of new coins. The underlying design democratized the creation of new coins early on: all you had to do was leave your computer running for a while to acquire a number of tokens. The tokens had no practical value, but obtaining them involved no substantial expense or risk. Just as importantly, because the difficulty of the puzzles would only increase over time, the hope was that if Bitcoin caught on, latecomers would find it easier to purchase BTC on a secondary market than mine their own - paying with a more established currency at a mutually beneficial exchange rate.

The persistent publicity surrounding Bitcoin and other cryptocurrencies did the rest - and today, with the growing scarcity of coins and the rapidly increasing demand, the price of a single token hovers somewhere south of $15,000.

6. So... is it bad money?

Predicting is hard - especially the future. In some sense, a coin that represents a cryptographic proof of wasted CPU cycles is no better or worse than a currency that relies on cotton decorated with pictures of dead presidents. It is true that Bitcoin suffers from many implementation problems - long transaction processing times, high fees, frequent security breaches of major exchanges - but in principle, such problems can be overcome.

That said, currencies live and die by the lasting willingness of others to accept them in exchange for services or goods - and in that sense, the jury is still out. The use of Bitcoin to settle bona fide purchases is negligible, both in absolute terms and as a fraction of the overall volume of transactions. In fact, because of the technical challenges and limited practical utility, some companies that embraced the currency early on are now backing out.

When the value of an asset is derived almost entirely from its appeal as an ever-appreciating investment vehicle, the situation has all the telltale signs of a speculative bubble. But that does not prove that the asset is destined to collapse, or that a collapse would be its end. Still, the built-in deflationary mechanism of Bitcoin - the increasing difficulty of producing new coins - is probably both a blessing and a curse.

It's going to go one way or the other; and when it's all said and done, we're going to celebrate the people who made the right guess. Because the future is actually pretty darn easy to predict - in retrospect.

December 10, 2017

Continuing the tradition of the previous post, here's a perfectly good bench:

The legs are 8/4 hard maple, cut into 2.3" (6 cm) strips and then glued together. The top is 4/4 domestic walnut, with an additional strip glued to the bottom to make it look thicker (because gosh darn, walnut is expensive).

Cut on a bandsaw, joined together with a biscuit joiner + glue, then sanded, that's about it. Still applying finish (nitrocellulose lacquer from a rattle can), but this was the last moment when I could snap a photo (about to get dark) and it basically looks like the final product anyway. Pretty simple but turned out nice.

November 04, 2017

I've been a DIYer all my adult life. Some of my non-software projects still revolve around computers, especially when they deal with CNC machining or electronics. But I've also been dabbling in woodworking for quite a while. I have not put that much effort into documenting my projects (say, cutting boards) - but I figured it's time to change that. It may inspire some folks to give a new hobby a try - or help them overcome a problem or two.

So, without further ado, here's the build log for a dining table I put together over the past two weekends or so. I think it turned out pretty nice:

May 04, 2017

"It's tough to make predictions, especially about the future." - variously attributed to Yogi Berra and Niels Bohr

Right. So let's say you are visited by transdimensional space aliens from outer space. There's some old-fashioned probing, but eventually, they get to the point. They outline a series of apocalyptic prophecies, beginning with the surprise 2032 election of Dwayne Elizondo Mountain Dew Herbert Camacho as the President of the United States, followed by a limited-scale nuclear exchange with the Grand Duchy of Ruritania in 2036, and culminating with the extinction of all life due to a series of cascading Y2K38 failures that start at an Ohio pretzel reprocessing plant. Long story short, if you want to save mankind, you have to warn others of what's to come.

But there's a snag: when you wake up in a roadside ditch in Alabama, you realize that nobody is going to believe your story! If you come forward, your professional and social reputation will be instantly destroyed. If you're lucky, the vindication of your claims will come fifteen years later; if not, it might turn out that you were pranked by some space alien frat boys who just wanted to have some cheap space laughs. The bottom line is, you need to be certain before you make your move. You figure this means staying mum until the Election Day of 2032.

But wait, this plan is also not very good! After all, how could your future self convince others that you knew about President Camacho all along? Well... if you work in information security, you are probably familiar with a neat solution: write down your account of events in a text file, calculate a cryptographic hash of this file, and publish the resulting value somewhere permanent. Fifteen years later, reveal the contents of your file and point people to your old announcement. Explain that you must have been in the possession of this very file back in 2017; otherwise, you would not have known its hash. Voila - a commitment scheme!
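A minimal sketch of this commitment scheme, using SHA-256 (the prediction text is, of course, made up):

```python
# Hash commitment: publish only the digest now, reveal the file later.
# Anyone can recompute the digest from the revealed file to verify.
import hashlib

prediction = b"President Camacho wins the 2032 election.\n"

# 2017: publish only this hex string somewhere permanent.
commitment = hashlib.sha256(prediction).hexdigest()

# 2032: reveal the file; verifiers recompute the hash and compare.
def verify(revealed, published_hash):
    return hashlib.sha256(revealed).hexdigest() == published_hash

print(verify(prediction, commitment))                  # True
print(verify(b"something else entirely", commitment))  # False
```

One practical caveat: a short or guessable message can be brute-forced from the published hash alone, so in practice you'd want to append a large random nonce to the file before hashing and reveal the nonce along with the text.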

Although elegant, this approach can be risky: historically, the usable life of cryptographic hash functions seemed to hover somewhere around 15 years - so even if you pick a very modern algorithm, there is a real risk that future advances in cryptanalysis could severely undermine the strength of your proof. No biggie, though! For extra safety, you could combine several independent hashing functions, or increase the computational complexity of the hash by running it in a loop. There are also some less-known hash functions, such as SPHINCS, that are designed with different trade-offs in mind and may offer longer-term security guarantees.
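Both hardening tricks - combining independent algorithms and iterating the result - are easy to sketch. The function choices and iteration count below are arbitrary placeholders, not a recommendation:

```python
# Hardened commitment digest: concatenate two independent hash functions
# (breaking the proof requires breaking both), then stretch the result by
# iterating. Algorithm choices and the loop count here are illustrative.
import hashlib

def hardened_digest(data, iterations=100_000):
    # Two structurally different algorithms: SHA-2 and SHA-3 families.
    combined = hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()
    # Iterated hashing raises the cost of any future shortcut attack.
    for _ in range(iterations):
        combined = hashlib.sha512(combined).digest()
    return combined.hex()

print(hardened_digest(b"my prediction file contents")[:32], "...")
```

The iteration loop mainly buys a constant-factor safety margin; the real long-term insurance is the use of two unrelated hash constructions, since a cryptanalytic break rarely hits both at once.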

Of course, the computation of the hash is not enough; it needs to become an immutable part of the public record and remain easy to look up for years to come. There is no guarantee that any particular online publishing outlet is going to stay afloat that long and continue to operate in its current form. The survivability of more specialized and experimental platforms, such as blockchain-based notaries, seems even less clear. Thankfully, you can resort to another kludge: if you publish the hash through a large number of independent online venues, there is a good chance that at least one of them will be around in 2032.

(Offline notarization - whether of the pen-and-paper or the PKI-based variety - offers an interesting alternative. That said, in the absence of an immutable, public ledger, accusations of forgery or collusion would be very easy to make - especially if the fate of the entire planet is at stake.)

Even with this out of the way, there is yet another profound problem with the plan: a current-day scam artist could conceivably generate hundreds or thousands of political predictions, publish the hashes, and then simply discard or delete the ones that do not come true by 2032 - thus creating an illusion of prescience. To convince skeptics that you are not doing just that, you could incorporate a cryptographic proof of work into your approach, attaching a particular CPU time "price tag" to every hash. The future you could then claim that it would have been prohibitively expensive for the former you to attempt the "prediction spam" attack. But this argument seems iffy: a $1,000 proof may already be too costly for a lower middle class abductee, while a determined tech billionaire could easily spend $100,000 to pull off an elaborate prank on the entire world. Not to mention, massive CPU resources can be commandeered with little or no effort by the operators of large botnets and many other actors of this sort.
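A hash-based proof of work of this sort is simple to sketch: search for a nonce that pushes a digest below a target, with each extra bit of difficulty doubling the expected effort. The code below is a generic hashcash-style construction for illustration, not any specific deployed protocol:

```python
# Hashcash-style proof of work attached to a commitment: find a nonce so
# that SHA-256(commitment || nonce) falls below a difficulty target.
# Each additional difficulty bit doubles the expected search time.
import hashlib

def solve_pow(commitment: bytes, difficulty_bits: int) -> int:
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(commitment + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check_pow(commitment: bytes, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(commitment + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# 16 bits of difficulty: ~65,000 attempts on average, verified instantly.
nonce = solve_pow(b"my-prediction-hash", 16)
print(check_pow(b"my-prediction-hash", nonce, 16))  # True
```

The asymmetry is the point: verification is one hash, while forging many stamped predictions costs the attacker the full search each time - though, as the text notes, that cost may still be trivial for a well-funded prankster.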

In the end, my best idea is to rely on an inherently low-bandwidth publication medium, rather than a high-cost one. For example, although a determined hoaxer could place thousands of hash-bearing classifieds in some of the largest-circulation newspapers, such sleight-of-hand would be trivial for future sleuths to spot (at least compared to combing through the entire Internet for an abandoned hash). Or, as per an anonymous suggestion relayed by Thomas Ptacek: just tattoo the signature on your body, then post some pics; there are only so many places for a tattoo to go.

Still, what was supposed to be a nice, scientific proof devolved into a bunch of hand-wavy arguments and poorly-quantified probabilities. For the sake of future abductees: is there a better way?

It is also fun to challenge yourself to employ fuzzers in non-conventional ways. Two canonical examples are having your fuzzing target call abort() whenever two libraries that are supposed to implement the same algorithm produce different outputs when given identical input data; or when a library produces different outputs when asked to encode or decode the same data several times in a row.
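Both oracles can be sketched in a few lines. The harness below uses Python's zlib as a stand-in target; a real AFL harness would be a small C program calling abort() on mismatch, but the logic is the same:

```python
# Sketch of the two fuzzing oracles described above, with zlib standing in
# for the target library. A real harness would be a C program that calls
# abort() on mismatch so the fuzzer registers a crash.
import os
import zlib

def check_input(data: bytes) -> None:
    # Oracle 1 - round-trip stability: decompressing the compressed data
    # must reproduce the original input exactly.
    assert zlib.decompress(zlib.compress(data)) == data

    # Oracle 2 - equivalence: different code paths (here, all compression
    # levels) must agree after decompression, even though the compressed
    # bytes themselves may differ.
    for level in range(1, 10):
        assert zlib.decompress(zlib.compress(data, level)) == data

# Stand-in for fuzzer-generated inputs; AFL would evolve these by coverage.
for _ in range(100):
    check_input(os.urandom(256))
```

The equivalence variant becomes far more interesting when the two code paths are genuinely independent implementations - say, two bignum libraries computing the same modular exponentiation - because any disagreement is, by construction, a bug in at least one of them.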

Such tricks may sound fanciful, but they actually find interesting bugs. In one case, AFL-based equivalence fuzzing revealed a
bunch of fairly rudimentary flaws in common bignum libraries,
with some theoretical implications for crypto apps. Another time, output stability checks revealed long-lived issues in
IJG jpeg and other widely-used image processing libraries, leaking
data across web origins.

In one of my recent experiments, I decided to fuzz
brotli, an innovative compression library used in Chrome. But since it's been
already fuzzed for many CPU-years, I wanted to do it with a twist:
stress-test the compression routines, rather than the usually targeted decompression side. Decompression is a far more fruitful target for security research, because it involves parsing complex, potentially malformed inputs supplied by an attacker; compression code, in contrast, is meant to accept arbitrary data and not think about it too hard. That said, the low likelihood of flaws also means that the compression bits are a relatively unexplored surface that may be worth poking with a stick every now and then.

In this case, the library held up admirably - save for a handful of computationally intensive plaintext inputs
(that are now easy to spot due to the recent improvements to AFL).
But the output corpus synthesized by AFL, after being seeded with a single file containing just "0", featured quite a few peculiar finds:

Nonsensical but undeniably English sentences:
them with them m with them with themselves,
in the fix the in the pin th in the tin,
amassize the the in the in the inhe@massive in,
he the themes where there the where there,
size at size at the tie.

The results are quite unexpected, given that they are just a product of randomly mutating a single-byte input file and observing the code coverage in a simple compression tool. The explanation is that brotli, in addition to more familiar binary coding methods, uses a static dictionary constructed by analyzing common types of web content. Somehow, by observing the behavior of the program, AFL was able to incrementally reconstruct quite a few of these hardcoded keywords - and then put them together in various semi-interesting ways. Not bad.

February 01, 2017

People who are accomplished in one field of expertise tend to believe that they can bring unique insights to just about any other debate.
I am as guilty as anyone: at one time or another, I aired my thoughts on anything from
CNC manufacturing, to
electronics, to
emergency preparedness, to
politics.
Today, I'm about to commit the same sin - but instead of pretending to speak from a position of authority, I wanted to share a more personal tale.

The author, circa 1995. The era of hand-crank computers and punch cards.

Back in my school days, I was that one really tall and skinny kid in the class. I wasn't trying to stay this way; I preferred computer games to sports, and my grandma's Polish cooking was heavy on potatoes, butter, chicken, dumplings, cream, and cheese. But that did not matter: I could eat what I wanted, as often as I wanted, and I still stayed in shape. This made me look down on chubby kids; if my reckless ways had little or no effect on my body, it followed that they had to be exceptionally lazy and must have lacked even the most basic form of self-control.

As I entered adulthood, my habits remained the same. I felt healthy and stayed reasonably active, walking to and from work every other day and hiking with friends whenever I could. But my looks started to change:

The author at a really exciting BlackHat party in 2002.

I figured it's just a part of growing up. But somewhere around my twentieth birthday, I stepped on a bathroom scale and typed the result into an online calculator. I was surprised to find out that my BMI was about 24 - pretty darn close to overweight.

"Pssh, you know how inaccurate these things are!", I exclaimed while searching online to debunk that whole BMI thing. I mean, sure, I had some belly fat - maybe a pizza or two too many - but nothing that wouldn't go away in time. Besides, I was doing fine, so what would be the point of submitting to the society's idea of the "right" weight?

It certainly helped that I was having a blast at work. I made a name for myself in the industry, published a fair amount of cool research, authored a book, settled down, bought a house, had a kid. It wasn't until the age of 26 that I strayed into a doctor's office for a routine checkup. When the nurse asked me about my weight, I blurted out "oh, 175 pounds, give or take". She gave me a funny look and asked me to step on the scale.

Turns out it was quite a bit more than 175 pounds. With a BMI of 27.1, I was now firmly into the "overweight" territory. Yeah yeah, the BMI metric was a complete hoax - but why did my passport photos look less flattering than before?

A random mugshot from 2007. Some people are just born big-boned, I think.

Well, damn. I knew what had to happen: from now on, I was going to start eating healthier foods. I traded Cheetos for nuts, KFC for sushi rolls, greasy burgers for tortilla wraps, milk smoothies for Jamba Juice, fries for bruschettas, regular sodas for diet. I'd even throw in a side of lettuce every now and then. It was bound to make a difference. I just wasn't gonna be one of the losers who check their weight every day and agonize over every calorie on their plate. (Weren't calories a scam, anyway? I think I read that on that cool BMI conspiracy site.)

By the time I turned 32, my body mass index hit 29. At that point, it wasn't just a matter of looking chubby. I could do the math: at that rate, I'd be in a real pickle in a decade or two - complete with a ~50% chance of developing diabetes or cardiovascular disease. This wouldn't just make me miserable, but also mess up the lives of my spouse and kids.

Presenting at Google TGIF in 2013. It must've been the unflattering light.

I wanted to get this over with right away, so I decided to push myself hard. I started biking to work, quite a strenuous ride. It felt good, but did not help: I would simply eat more to compensate and ended up gaining a few extra pounds. I tried starving myself. That worked, sure - only to be followed by an even faster rebound. Ultimately, I had to face the reality: I had a problem and I needed a long-term solution. There was no one weird trick to outsmart the calorie-counting crowd, no overnight cure.

I started looking for real answers. My world came crumbling down; I realized that a "healthy" burrito from Chipotle packed four times as many calories as a greasy burger from McDonald's. That a loaded fruit smoothie from Jamba Juice was roughly equal to two hot dogs with a side of mashed potatoes to boot. That a glass of apple juice fared worse than a can of Sprite, and that bruschetta wasn't far from deep-fried butter on a stick. It didn't matter if it was sugar or fat, bacon or kale. Familiar favorites were not better or worse than the rest. Losing weight boiled down to portion control - and sticking to it for the rest of my life.

It was a slow and humbling journey that spanned almost a year. I ended up losing around 70 lbs along the way. What shocked me is that it wasn't a painful experience; what held me back for years was just my own smugness, plus the folksy wisdom gleaned from the covers of glossy magazines.

Author with a tractor, 2017.

I'm not sure there is a moral to this story. I guess one lesson is: don't be a judgmental jerk. Sometimes, the simple things - the ones you think you have all figured out - prove to be a lot more complicated than they seem.

August 26, 2016

If you have not seen it yet, Parisa Tabriz penned a lengthy and insightful post about her experiences on what it takes to succeed in the field of information security.

My own experiences align pretty closely with Parisa's take, so if you are making your first steps down this path, I strongly urge you to give her post a good read. But if I had to sum up my lessons from close to two decades in the industry, I would probably boil them down to four simple rules:

Infosec is all about the mismatch between our intuition and the actual behavior of the systems we build. That makes it harmful to study the field as an abstract, isolated domain. To truly master it, dive into how computers work, then make a habit of asking yourself "okay, but what if assumption X does not hold true?" every step along the way.

Security is a protoscience. Think of chemistry in the early 19th century: a glorious and messy thing, chock-full of colorful personalities, unsolved mysteries, and snake oil salesmen. You need passion and humility to survive. Those who think they have all the answers are a danger to themselves and to people who put their faith in them.

People will trust you with their livelihoods, but will have no way to truly measure the quality of your work. Don't let them down: be painfully honest with yourself and work every single day to address your weaknesses. If you are not embarrassed by the views you held two years ago, you are getting complacent - and complacency kills.

It will feel that way, but you are not smarter than software engineers. Walk in their shoes for a while: write your own code, show it to the world, and be humiliated by all the horrible mistakes you will inevitably make. It will make you better at your job - and will turn you into a better person, too.

August 04, 2016

Up until mid-2010, any rogue website could get a good sense of your browsing habits by specifying a distinctive :visited CSS pseudo-class for any links on the page, rendering thousands of interesting URLs off-screen, and then calling the getComputedStyle API to figure out which pages appear in your browser's history.

After some deliberation, browser vendors have closed this loophole by disallowing almost all attributes in :visited selectors, save for the fairly indispensable ability to alter foreground and background colors for such links. The APIs have been also redesigned to prevent the disclosure of this color information via getComputedStyle.

This workaround did not fully eliminate the ability to probe your browsing history, but limited it to scenarios where the user can be tricked into unwittingly feeding the style information back to the website one URL at a time. Several fairly convincing attacks have been demonstrated against patched browsers - my own 2013 entry can be found here - but they generally depended on the ability to solicit one click or one keypress per every URL tested. In other words, the whole thing did not scale particularly well.

Or at least, it wasn't supposed to. In 2014, I described a neat trick that exploited normally imperceptible color quantization errors within the browser, amplified by stacking elements hundreds of times, to implement an n-to-2^n decoder circuit using just the background-color and opacity properties on overlaid <a href=...> elements to easily probe the browsing history of multiple URLs with a single click. To explain the basic principle, imagine wanting to test two links, and dividing the screen into four regions, like so:

Region #1 is lit only when both links are not visited (¬ link_a ∧ ¬ link_b),

Region #2 is lit only when link A is not visited but link B is visited (¬ link_a ∧ link_b),

Region #3 is lit only when link A is visited but link B is not (link_a ∧ ¬ link_b),

Region #4 is lit only when both links are visited (link_a ∧ link_b).

While the page couldn't directly query the visibility of the segments, we just had to convince the user to click the visible segment once to get the browsing history for both links, for example under the guise of dismissing a pop-up ad. (Of course, the attack could be scaled to far more than just 2 URLs.)
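The decoding step is trivial once the click arrives; the sketch below simply enumerates the four-region mapping described above and inverts it:

```python
# The two-link decoder from the text: four screen regions, exactly one of
# which is lit for any combination of the two history bits. A single click
# on the lit region therefore reveals both bits at once.

def region_lit(link_a_visited: int, link_b_visited: int) -> int:
    """Map the two history bits (0 or 1) to the region (1-4) that lights up:
    region 1 = neither visited, 2 = only B, 3 = only A, 4 = both."""
    return 1 + (link_a_visited << 1) + link_b_visited

def infer_history(clicked_region: int):
    """Invert the mapping: the region the user clicked yields both bits."""
    idx = clicked_region - 1
    return bool(idx & 2), bool(idx & 1)

print(infer_history(3))  # (True, False): link A visited, link B not
```

With n links and 2^n regions, the same one click resolves n bits of history - which is what made this construction so much more scalable than the one-click-per-URL attacks that preceded it.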

Browser vendors eventually addressed this problem by simply improving the accuracy of color quantization when overlaying HTML elements; while this did not eliminate the risk, it made the attack far more computationally intensive, requiring the evil page to stack millions of elements to get practical results. Game over? Well, not entirely. In the footnote of my 2014 article, I mentioned this:

"There is an upcoming CSS feature called mix-blend-mode, which permits non-linear mixing with operators such as multiply, lighten, darken, and a couple more. These operators make Boolean algebra much simpler and if they ship in their current shape, they will remove the need for all the fun with quantization errors, successive overlays, and such. That said, mix-blend-mode is not available in any browser today."

As you might have guessed, patience is a virtue! As of mid-2016, mix-blend-mode - a feature to allow advanced compositing of bitmaps, very similar to the layer blending modes available in photo-editing tools such as Photoshop and GIMP - is shipping in Chrome and Firefox. And as it happens, in addition to their intended purpose, these non-linear blending operators permit us to implement arbitrary Boolean algebra. For example, to implement AND, all we need to do is use multiply:

black (0) x black (0) = black (0)

black (0) x white (1) = black (0)

white (1) x black (0) = black (0)

white (1) x white (1) = white (1)
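The truth table above can be sanity-checked numerically: with pixel values normalized so that black is 0 and white is 1, the multiply operator is just multiplication, and the result is white only when both inputs are white (a sketch, not actual compositing code):

```shell
# multiply blend on normalized pixel values:
# the output is white (1) only when both inputs are white - i.e., logical AND.
for a in 0 1; do
  for b in 0 1; do
    echo "$a x $b = $((a * b))"
  done
done
```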

For a practical demo, click here. A single click in that whack-a-mole game will reveal the state of 9 visited links to the JavaScript executing on the page. If this was an actual game and if it continued for a bit longer, probing the state of hundreds or thousands of URLs would not be particularly hard to pull off.

May 11, 2016

The recent, highly publicized "ImageTragick" vulnerability had countless web developers scrambling to fix a remote code execution vector in ImageMagick - a popular bitmap manipulation tool commonly used to resize, transcode, or annotate user-supplied images on the Web. Whatever your take on "branded" vulnerabilities may be, the flaw certainly is notable for its ease of exploitation: it is an embarrassingly simple shell command injection bug reminiscent of the security weaknesses prevalent in the 1990s, and nearly extinct in core tools today. The issue also bears some parallels to the more far-reaching but equally striking Shellshock bug.

That said, I believe that the publicity that surrounded the flaw was squandered by failing to make one very important point: even with this particular RCE vector fixed, anyone using ImageMagick to process attacker-controlled images is likely putting themselves at a serious risk.

The problem is fairly simple: for all its virtues, ImageMagick does not appear to be designed with malicious inputs in mind - and has a long and colorful history of lesser-known but equally serious security flaws. For a single data point, look no further than the work done several months ago by Jodie Cunningham. Jodie fuzzed IM with a vanilla setup of afl-fuzz - and quickly identified about two dozen possibly exploitable security holes, along with countless denial of service flaws. A small sample of Jodie's findings can be found here.

Jodie's efforts probably just scratched the surface; after "ImageTragick", a more recent effort by Hanno Boeck uncovered even more bugs; from what I understand, Hanno's work also went only as far as using off-the-shelf fuzzing tools. You can bet that, short of a major push to redesign the entire IM codebase, the trickle won't stop any time soon.

And so, the advice sorely missing from the "ImageTragick" webpage is this:

If all you need to do is simple transcoding or thumbnailing of potentially untrusted images, don't use ImageMagick. Make direct use of libpng, libjpeg-turbo, and giflib; for a robust way to use these libraries, have a look at the source code of Chromium or Firefox. The resulting implementation will be considerably faster, too.

If you have to use ImageMagick on untrusted inputs, consider sandboxing the code with seccomp-bpf or an equivalent mechanism that robustly restricts access to all user space artifacts and to the kernel attack surface. Rudimentary sandboxing technologies, such as chroot() or UID separation, are probably not enough.

If all other options fail, be zealous about limiting the set of image formats you actually pass down to IM. The bare minimum is to thoroughly examine the headers of the received files. It is also helpful to explicitly specify the input format when calling the utility, so as to preempt the auto-detection code. For command-line invocations, this can be done like so:

February 09, 2016

The nice thing about the control flow instrumentation used by American Fuzzy Lop is that it allows you to do much more than just, well, fuzzing stuff. For example, the suite has long shipped with a standalone tool called afl-tmin, capable of automatically shrinking test cases while still making sure that they exercise the same functionality in the targeted binary (or that they trigger the same crash). Another tool, afl-cmin, employed a similar trick to eliminate redundant files in any large testing corpus.

The latest release of AFL features another nifty new addition along these lines: afl-analyze. The tool takes an input file, sequentially flips bytes in this data stream, and then observes the behavior of the targeted binary after every flip. From this information, it can infer several things:

No-op blocks that do not elicit any changes to control flow (say, comments, pixel data, etc).

Checksums, magic values, and other short, atomically compared tokens where any bit flip causes the same change to program execution.

This gives us some remarkably quick insights into the syntax of the file and the behavior of the underlying parser. It may sound too good to be true, but actually seems to work in practice. For a quick demo, let's see what afl-analyze has to say about running cut -d ' ' -f1 on a text file:
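The invocation itself is simple - something along these lines, where the input file name is made up and the target binary is assumed to be compiled with afl-gcc (or traced with -Q in QEMU mode):

```shell
# Prepare a small input file, then let afl-analyze flip bytes in it while
# observing the instrumented target; the result is an annotated hex dump
# classifying each byte of the input.
echo 'hello world example' > testcase.txt
afl-analyze -i testcase.txt -- ./cut -d ' ' -f1
```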

We see that cut really only cares about spaces and newlines. Interestingly, it also appears that the tool always tokenizes the entire line, even if it's just asked to return the first token. Neat, right?

Of course, the value of afl-analyze is greater for incomprehensible binary formats than for simple text utilities; perhaps even more so when dealing with black-box parsers (which can be analyzed thanks to the runtime QEMU instrumentation supported in AFL). To try out the tool's ability to deal with binaries, let's check out libpng:

This looks pretty damn good: we have two four-byte signatures, followed by a chunk length, a four-byte chunk name, some image metadata, and then a comment section. Neat, right? All in a matter of seconds: no configuration needed and no knobs to turn.

Of course, the tool shipped just moments ago and is still very much experimental; expect some kinks. Field testing and feedback welcome!

January 14, 2016

It's a fairly systematic and level-headed approach to threat modeling and risk management, except not for computer systems - and instead, for real life. There's not much I can add on top of what's already said on the linked page; have a look, you will probably find it to be an interesting read.

October 02, 2015

In the wake of the tragic events in Roseburg, I decided to briefly return to the topic of looking at the US culture from the perspective of a person born in Europe. In particular, I wanted to circle back to the topic of firearms.

Contrary to popular beliefs, the United States has witnessed a dramatic decline in violence over the past 20 years. In fact, when it comes to most types of violent crime - say, robbery, assault, or rape - the country now compares favorably to the UK and many other OECD nations. But as I explored in my earlier posts, one particular statistic - homicide - still registers at about three times the rate seen in many other places within the EU.

The homicide epidemic in the United States has a complex nature and overwhelmingly affects ethnic minorities and other disadvantaged social groups; perhaps because of this, the phenomenon sees very little honest, public scrutiny. It is propelled into the limelight only in the wake of spree shootings and other sickening, seemingly random acts of terror; such incidents, although statistically insignificant, take a profound mental toll on the American society. At the same time, the effects of high-profile violence seem strangely short-lived: they trigger a series of impassioned political speeches, invariably focusing on the connection between violence and guns - but the nation soon goes back to business as usual, knowing full well that another massacre will happen soon, perhaps the very same year.

On the face of it, this pattern defies all reason - angering my friends in Europe and upsetting many brilliant and well-educated progressives in the US. They utter frustrated remarks about the all-powerful gun lobby and the spineless politicians, blaming the partisan gridlock for the failure to pass even the most reasonable and toothless gun control laws. I used to be in the same camp; today, I think the reality is more complex than that.

To get to the bottom of this mystery, it helps to look at the spirit of radical individualism and classical liberalism that remains the national ethos of the United States - and in fact, is enjoying a degree of resurgence unseen for many decades prior. In Europe, it has long been settled that many individual liberties - be it the freedom of speech or the natural right to self-defense - can be constrained to advance even some fairly far-fetched communal goals. On the old continent, such sacrifices sometimes paid off, and sometimes led to atrocities; but the basic premise of European collectivism is not up for serious debate. In America, the same notion certainly cannot be taken for granted today.

When it comes to firearm ownership in particular, the country is facing a fundamental choice between two possible realities:

A largely disarmed society that depends on the state to protect it from almost all harm, and where citizens are generally not permitted to own guns without presenting a compelling cause. In this model, adopted by many European countries, firearms tend to be less available to common criminals - simply by the virtue of limited supply and comparatively high prices in black market trade. At the same time, it can be argued that any nation subscribing to this doctrine becomes more vulnerable to foreign invasion or domestic terror, should its government ever fail to provide adequate protection to all citizens. Disarmament can also limit civilian recourse against illegitimate, totalitarian governments - a seemingly outlandish concern, but also a very fresh memory for many European countries subjugated not long ago under the auspices of the Soviet Bloc.

A well-armed society where firearms are available to almost all competent adults, and where the natural right to self-defense is subject to few constraints. This is the model currently employed in the United States, where it arises from the straightforward, originalist interpretation of the Second Amendment - as recognized by roughly 75% of all Americans and affirmed by the Supreme Court. When following such a doctrine, a country will likely witness greater resiliency in the face of calamities or totalitarian regimes. At the same time, its citizens might have to accept some inherent, non-trivial increase in violent crime due to the prospect of firearms more easily falling into the wrong hands.

It seems doubtful that a viable middle-ground approach can exist in the United States. With more than 300 million civilian firearms in circulation, most of them in unknown hands, the premise of reducing crime through gun control would inevitably and critically depend on some form of confiscation; without such drastic steps, the supply of firearms to the criminal underground or to unfit individuals would not be disrupted in any meaningful way. Because of this, intellectual integrity requires us to look at many of the legislative proposals not only through the prism of their immediate utility, but also to give consideration to the societal model they are likely to advance.

And herein lies the problem: many of the current "common-sense" gun control proposals have very little merit when considered in isolation. There is scant evidence that reinstating the ban on military-looking semi-automatic rifles ("assault weapons"), or rolling out the prohibition on private sales at gun shows, would deliver measurable results. There is also no compelling reason to believe that ammo taxes, firearm owner liability insurance, mandatory gun store cameras, firearm-free school zones, bans on open carry, or federal gun registration can have any impact on violent crime. And so, the debate often plays out like this:

At the same time, by the virtue of making weapons more difficult, expensive, and burdensome to own, many of the legislative proposals floated by progressives would probably gradually erode the US gun culture; intentionally or not, their long-term outcome would be a society less passionate about firearms and more willing to follow in the footsteps of Australia or the UK. Only once we cross that line and confiscate hundreds of millions of guns is it fathomable - yet still far from certain - that we would see a sharp drop in homicides.

This method of inquiry helps explain the visceral response from gun rights advocates: given the legislation's dubious benefits and its predicted long-term consequences, many pro-gun folks are genuinely worried that making concessions would eventually mean giving up one of their cherished civil liberties - and on some level, they are right.

Some feel that this argument is a fallacy, a tall tale invented by a sinister corporate "gun lobby" to derail the political debate for personal gain. But the evidence of such a conspiracy is hard to find; in fact, it seems that the progressives themselves often fan the flames. In the wake of Roseburg, both Barack Obama and Hillary Clinton came out praising the confiscation-based gun control regimes employed in Australia and the UK - and said that they would like the US to follow suit. Depending on where you stand on the issue, it was either an accidental display of political naivete, or the final reveal of their sinister plan. For the latter camp, the ultimate proof of a progressive agenda came a bit later: in response to the terrorist attack in San Bernardino, several eminent Democratic-leaning newspapers published scathing editorials demanding civilian disarmament while downplaying the attackers' connection to Islamic State.

Another factor that poisons the debate is that despite being highly educated and eloquent, the progressive proponents of gun control measures are often hopelessly unfamiliar with the very devices they are trying to outlaw:

I'm reminded of the widespread contempt faced by Senator Ted Stevens following his attempt to compare the Internet to a "series of tubes" as he was arguing against net neutrality. His analogy wasn't very wrong - it just struck a nerve as simplistic and out-of-date. My progressive friends did not react the same way when Representative Carolyn McCarthy - one of the key proponents of the ban on assault weapons - showed no understanding of the supposedly lethal firearm features she was trying to eradicate. Such bloopers are not rare, either; not long ago, Mr. Bloomberg, one of the leading progressive voices on gun control in America, argued against semi-automatic rifles without understanding how they differ from the already-illegal machine guns:

Yet another example comes from Representative Diana DeGette, the lead sponsor of a "common-sense" bill that sought to prohibit the manufacture of magazines with capacity over 15 rounds. She defended the merits of her legislation while clearly not understanding how a magazine differs from ammunition - or that the former can be reused:

"I will tell you these are ammunition, they’re bullets, so the people who have those know they’re going to shoot them, so if you ban them in the future, the number of these high capacity magazines is going to decrease dramatically over time because the bullets will have been shot and there won’t be any more available."

Treating gun ownership with almost comical condescension has become the vogue among a good number of progressive liberals. On a campaign stop in San Francisco, Mr. Obama sketched a caricature of bitter, rural voters who "cling to guns or religion or antipathy to people who aren't like them". Not much later, one Pulitzer Prize-winning columnist for The Washington Post spoke of the Second Amendment as "the refuge of bumpkins and yeehaws who like to think they are protecting their homes against imagined swarthy marauders desperate to steal their flea-bitten sofas from their rotting front porches". Many of the newspaper's readers probably had a good laugh - and then wondered why it has gotten so difficult to seek sensible compromise.

There are countless dubious and polarizing claims made by the supporters of gun rights, too; examples include a recent NRA-backed tirade by Dana Loesch denouncing the "godless left", or the constant onslaught of conspiracy theories spewed by Alex Jones and Glenn Beck. But when introducing new legislation, the burden of making educated and thoughtful arguments should rest on its proponents, not other citizens. When folks such as Bloomberg prescribe sweeping changes to the American society while demonstrating striking ignorance about the topics they want to regulate, they come across as elitist and flippant - and deservedly so.

Given how controversial the topic is, I think it's wise to start an open, national conversation about the European model of gun control and the risks and benefits of living in an unarmed society. But it's also likely that such a debate wouldn't last very long. Progressive politicians like to say that the dialogue is impossible because of the undue influence of the National Rifle Association - but as I discussed in my earlier blog posts, the organization's financial resources and power are often overstated: it does not even make it onto the list of top 100 lobbyists in Washington, and its support comes mostly from member dues, not from shadowy business interests or wealthy oligarchs. In reality, disarmament just happens to be a very unpopular policy in America today: the support for gun ownership is very strong and has been growing over the past 20 years - even though hunting is on the decline.

Perhaps it would serve the progressive movement better to embrace the gun culture - and then think of ways to curb its unwanted costs. Addressing inner-city violence, especially among the disadvantaged youth, would quickly bring the US homicide rate much closer to the rest of the highly developed world. But admitting the staggering scale of this social problem can be an uncomfortable and politically charged position to hold. For Democrats, it would be tantamount to singling out minorities. For Republicans, it would be just another expansion of the nanny state.

PS. If you are interested in a more systematic evaluation of the scale, the impact, and the politics of gun ownership in the United States, you may enjoy an earlier entry on this blog. Or, if you prefer to read my entire series comparing the life in Europe and in the US, try this link.

July 15, 2015

With my previous entry, I wrapped up an impromptu series of articles that chronicled my childhood experiences in Poland and compared the culture I grew up with to the American society that I'm living in today. For the readers who want to be able to navigate the series without scrolling endlessly, I wanted to put together a quick table of contents. Here it goes.

The entry that started it all:

"On journeys" - a personal story recounting my travels from Poland to the US.

Oh, the places you won't go:

The politics of Poland - a retrospective look at the politics of a state emerging from under a communist rule,

This is the fourteenth article talking about Poland, Europe, and the United States. To explore the entire collection, start here.

This is destined to be the final entry in the series that opened with a chronicle of my journey from Poland to the United States, only to veer into some of the most interesting social differences between America and the old continent. There are many other topics I could still write about - anything from the school system, to religion, to the driving culture - but with my parental leave coming to an end, I decided to draw a line. I'm sure that this decision will come as a relief for those who read the blog for technical insights, rather than political commentary :-)

The final topic I wanted to talk about is something that truly irks some of my European friends: the belief, held deeply by many Americans, that their country is the proverbial "city upon a hill" - a shining beacon of liberty and righteousness, blessed by the maker with the moral right to shape the world - be it by flexing its economic and diplomatic muscles, or with its sheer military might.

It is an interesting phenomenon, and one that certainly isn't exclusive to the United States. In fact, expansive exceptionalism used to be a very strong theme in the European doctrine long before it emerged in other parts of the Western world. For one, it underpinned many of the British, French, Spanish, and Dutch colonial conquests over the past 500 years. The romanticized notion of Sonderweg played a menacing role in German political discourse, too - eventually culminating in the rise of the Nazi ideology and the onset of World War II. It wasn't until the defeat of the Third Reich that Europe, faced with unspeakable destruction and unprecedented loss of life, made a concerted effort to root out many of its nationalist sentiments and embrace a more harmonious, collective path as a single European community.

America, in a way, experienced the opposite: although it has always celebrated its own rejection of feudalism and monarchism - and in that sense, it had a robust claim to being a pretty unique corner of the world - the country largely shied away from global politics, participating only very reluctantly in World War I, then hoping to wait out World War II up until being attacked by Japan. Its conviction about its special role on the world stage solidified only after it paid a tremendous price to help defeat the Germans, to stop the march of the Red Army through the continent, and to build a prosperous and peaceful Europe; given the remarkable significance of this feat, the post-war sentiments in America may not be hard to understand. In that way, the roots of American exceptionalism differed from its European predecessors, being fueled by a fairly pure sense of righteousness - and not by anger, by a sense of injury, or by territorial demands.

Of course, the new superpower has also learned that its military might has its limits, facing humiliating defeats in some of the proxy wars with the Soviets and seeing an endless spiral of violence in the Middle East. The voices predicting its imminent demise, invariably present from the earliest days of the republic, have grown stronger and more confident over the past 50 years. But the country remains a military and economic powerhouse; and in some ways, its trigger-happy politicians provide a counterbalance to the other superpowers' greater propensity to turn a blind eye to humanitarian crises and to genocide. It's quite possible that without the United States arming its allies and tempering the appetites of Russia, North Korea, or China, the world would have been a less happy place. It's just as likely that the Middle East would have been a happier one.

Some Europeans show indignation that Americans, with their seemingly know-it-all attitudes toward the rest of the world, still struggle to pinpoint Austria or Belgium on the map. It is certainly true that the media in the US pays little attention to the old continent. But deep down inside, European outlets don't necessarily fare a lot better, often focusing their international coverage on the silly and the formulaic: when in Europe, you are far more likely to hear about a daring rescue of a cat stuck on a tree in Wyoming, or about the Creation Museum in Kentucky, than you are to learn anything substantive about Obamacare. (And speaking of Wyoming and Kentucky, pinpointing these places on the map probably wouldn't be the European viewer's strongest suit.) In the end, Europeans who think they understand the intricacies of US politics are probably about as wrong as the average American making sweeping generalizations about Europe.

And on that intentionally self-deprecating note, it's time to wrap the series up.