In the years leading up to the Arab Spring, Islamist parties developed something of an obsession with the role of Western powers in supporting democracy in the Arab world—or, more likely, not supporting it. Islamists were fighting on two fronts: not just repressive regimes, but their international backers as well. The ghosts of Algeria lingered. In January 1992, Algeria’s largest Islamist party, the Islamic Salvation Front (FIS), found itself on the brink of a historic election victory—prompting fears that the military was preparing to move against the Islamists. In the tense days that followed, FIS leader Abdelkader Hachani addressed a crowd of supporters. “Victory is more dangerous than defeat,” he warned, urging them to exercise restraint to avoid giving the army a pretext for intervention. But it was too late. The staunchly secular military aborted the elections, launching a massive crackdown and plunging Algeria into a civil war that would claim more than 100,000 lives.

That authoritarian regimes and activist militaries could count on American and European acquiescence (or even support)—as they did in 1992—made Arab regimes seem more durable than they actually were, and the task of unseating them more daunting. During the first and forgotten Arab Spring of 2004-5, Algeria repeatedly came up in my interviews with Muslim Brotherhood leaders in Egypt and Jordan. Perhaps over-learning the lessons of the past, Islamist parties across the region, despite their growing popularity, were careful and cautious. They made a habit of losing elections. In fact, they lost them on purpose. This ambivalence and even aversion to power prevented Islamists from playing the role that opposition parties are generally expected to play. It was better to wait, and so they did.

It’s been almost five years since the start of the Arab Spring, but one conversation still stands out to me, despite (or perhaps because of) everything that’s happened since. Just two months before the uprisings began, Egypt was experiencing what, at the time, seemed like an especially hopeless period. I was in the country for November elections that proved to be the most fraudulent in Egyptian history. After winning an unprecedented 88 seats in parliament in 2005, the Muslim Brotherhood wasn’t permitted by Hosni Mubarak’s regime to claim even one seat. But this movement, the mother of all Islamist movements, accepted its fate in stride. “The regimes won’t let us take power,” Hamdi Hassan, the head of the Brotherhood’s parliamentary bloc, told me during that doomed election campaign. What was the solution, then? I asked him. “The solution is in the ‘Brotherhood approach.’ We focus on the individual, then the family, then society.”

“In the lifespan of mankind, 80 years isn’t long,” he reasoned, referring to the time that had passed since the Brotherhood’s founding. “It’s like eight seconds.”

* * *

Events in the Middle East, and the policy debates surrounding them, tend to proceed in endless, disorienting loops. The Syrian civil war has gotten almost unimaginably worse since early 2012 (from 7,000 dead to 250,000), but we’re debating much the same thing we were debating back then: to enact safe zones and no-fly zones in the country, or not to. Algerian Islamists were ascendant in 1991 and the military intervened to stop them; something eerily similar happened in 2013 after the Muslim Brotherhood came to power in Egypt through democratic elections. Where exactly is the line between inaction and complicity? The notion of neutrality, for a country as powerful as the United States, is illusory. Doing nothing or “doing no harm” means maintaining or reverting to the status quo, which in the Middle East is never neutral, due to America’s longstanding relationships with regional actors.

Before, during, and after the Arab Spring, one thing has remained constant in the Middle East: the outsized influence of outside powers. When the United States opts to remain disengaged—itself a conscious policy choice—others move to fill the void. The convenient fiction that foreign powers can do little to respond to the conflicts or “ancient hatreds” of the region belies nearly every major political development of the post-Arab Spring period. For a moment, though, it was nice to think that it wasn’t about the U.S., or at least that it didn’t have to be.

The first two uprisings in Tunisia and Egypt seemed to shatter the illusion that Arabs had to wait. Even if Western powers weren’t with them—and, at least at first, they weren’t—Arabs could bring about their own revolutions. In Tunisia, where little was at stake for the United States, senior officials were still saying that the U.S. was “not taking sides” as late as two days before Zine al-Abidine Ben Ali’s fall. After Ben Ali fled on January 14, 2011, taking refuge in Saudi Arabia, the Obama administration quickly adapted, expressing its support for the revolution. The U.S. could live without the Tunisian regime, but could it live without a staunch ally like Egypt’s Hosni Mubarak, a dogged opponent of Iran and a stalwart supporter of the Arab-Israeli peace process? Here too, Mubarak’s longtime ties with Western governments would prove insufficient to preserve his rule.

From the very start, there was a temptation to discount the importance of foreign powers in the Arab Spring. It became commonplace to hear some variation of the following: that the uprisings were a truly indigenous movement and that Arabs themselves did not want other countries to “interfere” in it—meddling that would, the thinking went, go against the very spirit of the revolutions. President Barack Obama and other U.S. officials repeatedly insisted that this was “not about America.” In reality, it was partly about America, not just because of the past U.S. role in backing Arab dictatorships, but because of the critical role it would continue to play in the region.

For better and worse, international actors influenced the first phase of the Arab Spring and, in several countries, defined it. In Libya, Yemen, and Syria, Western and regional powers in the Gulf played significant, even decisive roles. In the one stalled revolution—Bahrain’s—it was Saudi Arabia’s military intervention that quelled the uprising and kept the ruling family afloat. Even in Egypt, the 2011 uprising was effectively internationalized, with foreign media devoting countless hours to covering every turn and, in the process, putting the issue at the top of the Western policy agenda. The United States, making use of longstanding military-to-military ties, pressured the Egyptian army to refrain from using force against protesters.

Nor were Arab Spring protesters entirely inward-looking. While those who rose up were no doubt angry over the lack of “bread and freedom,” the third element—the demand for dignity—was more difficult to characterize. Here, Egypt’s pro-Western policies and perceived subservience to the United States figured prominently, including in the defining chant that echoed throughout Tahrir Square the night Mubarak fell: “You’re Egyptian—raise your head up high.” During my time in Tahrir, I heard numerous chants attacking Mubarak for being a lackey of the United States and Israel (one such chant claimed that the Egyptian president only understood one language: Hebrew).

In Egypt and Tunisia, what the United States did—and did not do—continued to matter well after the initial uprisings. Newly elected governments facing deteriorating economic conditions at home needed as much outside support as they could get, in the form of direct financial assistance, loans, trade, asset recovery, and private investment. Despite its struggling economy and budgetary constraints, the United States had an important role to play. It was a question of political will. In the first year and a half after the uprisings, the U.S. proposed or allocated only around $2.2 billion in new aid to Arab Spring-affected countries. (For the sake of comparison, the U.S. committed $128 billion in today’s dollars during the four years of the Marshall Plan in post-World War II Western Europe). But this wasn’t just, or even primarily, about money. More costly were the Obama administration’s decisions to disengage from Libya after a successful military intervention there, to do as little as possible in Syria, to indulge Nouri al-Maliki in Iraq as he cracked down on Sunni political forces, and to outsource policy on Yemen to Saudi Arabia.

In recent years, a growing academic literature has pointed to the role of international actors in bringing down autocrats, though the focus tends to be on non-Middle Eastern cases. In their 2010 book, the political scientists Steven Levitsky and Lucan Way provide extensive empirical support to what many have long argued. They write that “it was an externally driven shift in the cost of suppression, not changes in domestic conditions, that contributed most centrally to the demise of authoritarianism in the 1980s and 1990s.” Levitsky and Way find that “states’ vulnerability to Western democratizing pressure ... was often decisive.”

In the Middle East, the critical role of foreign powers was confirmed, once again, during Egypt’s July 2013 military coup and its tragic aftermath. In the two and a half years leading up to the removal of President Mohamed Morsi of the Muslim Brotherhood, the United States failed to put any significant pressure on the Supreme Council of the Armed Forces (SCAF), which dominated—and corrupted—Egypt’s transition in those early, critical days after the revolution. The United States wagered that a military-led transition would facilitate (and manage) the democratization process while safeguarding American interests. SCAF, though, grew increasingly autocratic, culminating in one very bad week in June 2012 when the military and its allies dissolved parliament, reinstated martial law, and decreed a constitutional addendum stripping the presidency of many of its powers.

The precedent had been set: even the most egregious violations of the democratic process would receive little more than the usual, bland expressions of concern and disapproval. The unwillingness to pressure SCAF would make it all the more difficult for the U.S. to hold future Islamist-led governments, such as Morsi’s, to democratic standards. SCAF wasn’t elected. How, then, could Washington justify withholding U.S. assistance to Morsi’s administration—the country’s first democratically elected government?

After the July 3 coup and subsequent crackdown against the Brotherhood and other Islamists, the U.S. response was muted. Despite a legal obligation to suspend aid in the event of a coup, the Obama administration, along with most of Congress, insisted on the importance of maintaining the flow of military aid to Egypt. A month after the military’s intervention—and in the lead-up to its massacre of Morsi supporters near the Rabaa al-Adawiya mosque—Secretary of State John Kerry even appeared to endorse the coup, saying that the army was “in effect … restoring democracy” and averting civil war. Egyptian military officials wagered, rightly, that they could get away with what became, according to Human Rights Watch, the worst mass killing in modern Egyptian history—as well as one of the worst single-day mass killings in recent decades anywhere in the world.

Blood stains the ground near a poster of Mohamed Morsi after violent clashes between the Egyptian military and Morsi supporters. (Amr Abdallah Dalsh / Reuters)

America’s relative silence was no accident. To offer a strong, coherent response to the killings would have required a strategy, which would have required more, not less, involvement. This, however, would have been at cross-purposes with the entire thrust of the administration’s policy. Obama was engaged in a concerted effort to reduce America’s footprint in the Middle East. The phrase “leading from behind” quickly became a pejorative for Obama’s foreign-policy doctrine, but it captured a very real shift in America’s posture. The foreign-policy analysts Nina Hachigian and David Shorr called it the “Responsibility Doctrine,” a strategy of “prodding other influential nations … to help shoulder the burdens of fostering a stable, peaceful world order.” In pursuing this strategy in the Middle East, the United States left a power vacuum—and a proxy struggle. During Morsi’s year-long tenure, Qatar became the single largest foreign donor to Egypt, at over $5 billion (with Turkey contributing another $2 billion). Just days after the military moved against Morsi, it was Saudi Arabia, the United Arab Emirates, and Kuwait that pledged a massive $12 billion to the new military-appointed government.

The United States, along with the conservative Gulf monarchies and many others, also viewed Islamist parties with considerable suspicion. The Muslim Brotherhood had a long history of vehemently anti-Western and anti-Israel positions, including refusing to accept the Jewish state’s right to exist. (A few months after Egypt’s 2011 uprising, Morsi, who was particularly outspoken among Brotherhood leaders on these matters, shared his views on the 9/11 attacks with me. “When you come and tell me that the plane hit the tower like a knife in butter,” he said, shifting to English, “then you are insulting us. How did the plane cut through the steel like this? Something must have happened from the inside. It’s impossible.” In 2010, before he had any inkling of becoming president, Morsi, echoing classical anti-Semitic tropes, called Zionists “descendants of apes and pigs.”)

But what Morsi apparently believed and what he actually did in power constituted alternate universes. In 2006, the Brotherhood’s general guide, Mahdi Akef, told me angrily that “of course” the Brotherhood would cancel Egypt’s peace treaty with Israel if it ever had the chance. More pragmatic Islamists adopted a different tone, usually one of resignation. As one senior Brotherhood figure in Jordan put it to me: “If we must, we will always, at the very least, believe and long for the liberation of Palestine in our own hearts.”

As I discuss in my book Temptations of Power, it is in the realm of foreign policy that the dissonance between ideology and practice is most striking but also the least surprising. Islamist parties in power simply cannot do the things they might like to do in an ideal world. The structure of the regional and international order won’t allow it. As long as Arab countries are dependent on Western powers for economic and political survival, there will be limits to how far elected governments, Islamist or otherwise, can go. (If that dependency were to weaken in the long run, Islamists would likely pursue a more ideological, assertive foreign policy. Ideology, to express itself, needs to be freed of its various constraints.)

Morsi, as president, was a product of this constrained context. While his foreign policy departed from Mubarak’s in significant ways, it was far from the wholesale shift that some of his supporters were hoping for. Morsi, for example, played an important role in brokering a resolution to the Gaza crisis of November 2012. He brought Egypt closer to Hamas, a U.S.-designated terrorist group, but he did so in a way that fell well short of fundamentally challenging the U.S.-led regional order. The model for Morsi was Turkey or Qatar—countries that were tied to the United States militarily and strategically but able and willing to establish themselves as independent, assertive regional powers, despite occasional (or increasingly frequent) American grumbling. America’s red lines were clear enough to Morsi, and they included respecting the Camp David peace treaty between Israel and Egypt and cooperating with Israel on security. Human rights and democracy were, as they had always been in Egypt, tertiary U.S. concerns.

* * *

The Arab Spring demonstrated the shortsightedness of the “stability paradigm”— the model of Arab governments doing the West’s bidding in return for the West overlooking the suppression of dissent—that had animated U.S. and European policy for a half-century. Regimes that once seemed resilient crumbled more quickly than anyone could have imagined. If there was a lesson to be learned, it was that human rights and democratic reform would need to be prioritized after the Obama administration had—hoping to distinguish itself from its predecessor—deemphasized their importance.

Almost five years later, however, it appears that Western governments have learned rather different lessons. The reorientation that many both in the region and within the foreign-policy community had hoped for did not come to pass. In most Arab countries, with the exception of Libya (and even then only briefly), the Obama administration was content to tinker around the margins of existing policies. This laissez-faire approach produced its own set of consequences.

The unwillingness or inability to use American leverage to pressure Arab governments, including those with Islamist leanings, came at a cost. The United States can provide a credible threat of sanction by suspending or canceling much-needed economic assistance. Such a punitive approach can backfire, of course, given the understandable sensitivities in the region about the interference of foreign powers. A better alternative is “positive conditionality”—providing economic and political incentives for governments to meet explicit, measurable benchmarks on democratic reform.

A model for what this might look like is (or was) Turkey. After coming to power in 2002, the Islamist-rooted Justice and Development Party (AKP) passed a series of consequential democratic reforms. The prospect of membership in the European Union helped incentivize the AKP to revise the penal code, ease restrictions on freedom of expression, rein in the power of the military, and expand rights for the country’s Kurdish minority. But when the threat of a military coup receded, and negotiations with the EU faltered, the AKP government seemed to lose interest in democratization, increasingly adopting illiberal and undemocratic practices.

The European Union has the ability to embed European countries within a thick regional order. No comparable mechanism exists in the Arab world. Yet the template is relevant for understanding how the United States might bind struggling democracies within a mutually beneficial regional order. In a sense, of course, it’s too late. America’s unwillingness to play such a role increased the likelihood that the Muslim Brotherhood, empowered by its conservative base and pressured by its Salafi competitors, would veer rightward and overreach, alienating old and new allies in the process. As demonstrated in Egypt, the governance failures of Islamist parties can have devastating effects on the course of a country’s democratic transition. That Islamists were, once again, ousted, repressed, and exiled from the democratic process brings us back to Algeria in 1992. The ghosts of Egypt—the Arab world’s most populous country and long a bellwether for the region—will linger, but this time for far longer and with greater consequences than those of Algeria. Abdelkader Hachani, it seems, was vindicated. Victory is more dangerous than defeat.
