“When hungry, eat your rice; when tired, close your eyes. Fools may laugh at me, but wise men will know what I mean.” — Lin-Chi
Adam Victor Brandizzi
Look, the headlines are pure clickbait, but the content is genuinely interesting. The Radiolab episode must be good.
* * *
This isn't another story about that dress, or at least, not really.
It's about the way that humans see the world, and how, until we have a way to describe something, even something as fundamental as a color, we may not even notice that it's there.
Until relatively recently in human history, "blue" didn't exist, not in the way we think of it.
As the delightful Radiolab episode "Colors" describes, ancient languages didn't have a word for blue — not Greek, not Chinese, not Japanese, not Hebrew. And without a word for the color, there is evidence that they may not have seen it at all.
In "The Odyssey," Homer famously describes the "wine-dark sea." But why "wine-dark" and not deep blue or green?
In 1858 a scholar named William Gladstone, who later became the prime minister of Great Britain, noticed that this wasn't the only strange color description. Though the poet spends page after page describing the intricate details of clothing, armor, weaponry, facial features, animals, and more, his references to color are strange. Iron and sheep are violet; honey is green.
So Gladstone decided to count the color references in the book. And while black is mentioned almost 200 times and white about 100, other colors are rare. Red is mentioned fewer than 15 times, and yellow and green fewer than 10. Gladstone started looking at other ancient Greek texts and noticed the same thing — there was never anything described as "blue." The word didn't even exist.
It seemed the Greeks lived in a murky and muddy world, devoid of color, mostly black and white and metallic, with occasional flashes of red or yellow.
Gladstone thought this was perhaps something unique to the Greeks, but a philologist named Lazarus Geiger followed up on his work and noticed this was true across cultures.
He studied Icelandic sagas, the Koran, ancient Chinese stories, and an ancient Hebrew version of the Bible. Of Hindu Vedic hymns, he wrote: "These hymns, of more than ten thousand lines, are brimming with descriptions of the heavens. Scarcely any subject is evoked more frequently. The sun and reddening dawn's play of color, day and night, cloud and lightning, the air and ether, all these are unfolded before us, again and again ... but there is one thing no one would ever learn from these ancient songs ... and that is that the sky is blue."
There was no blue, not in the way that we know the color — it wasn't distinguished from green or darker shades.
Geiger looked to see when "blue" started to appear in languages and found an odd pattern all over the world.
Every language first had a word for black and for white, or dark and light. The next word for a color to come into existence — in every language studied around the world — was red, the color of blood and wine.
After red, historically, yellow appears, and later, green (though in a couple of languages, yellow and green switch places). The last of these colors to appear in every language is blue.
The only ancient culture to develop a word for blue was the Egyptians — and as it happens, they were also the only culture that had a way to produce a blue dye.
If you think about it, blue doesn't appear much in nature — there are almost no blue animals, blue eyes are rare, and blue flowers are mostly human creations. There is, of course, the sky, but is that really blue? As we've seen from Geiger's work, even scriptures that contemplate the heavens continuously still do not necessarily see it as "blue."
Russell Mondy/Flickr
Is the sky really blue? What does that mean?
In fact, one researcher Radiolab spoke with, Guy Deutscher, author of "Through the Language Glass: Why the World Looks Different in Other Languages," tried a casual experiment on exactly that question. In theory, one of children's first questions is, "Why is the sky blue?" So he raised his daughter while being careful never to describe the color of the sky to her, and then one day asked her what color she saw when she looked up.
Alma, Deutscher's daughter, had no idea. The sky was colorless. Eventually she decided it was white, and later on, eventually blue. So blue was not the first thing she saw or gravitated toward, though it is where she settled in the end.
This part gets a little complicated, because we do not know exactly what was going through Homer's brain when he described the wine-dark sea and the violet sheep — but we do know that ancient Greeks and others in the ancient world had the same biology and therefore same capability to see color that we do.
But do you really see something if you don't have a word for it?
A researcher named Jules Davidoff traveled to Namibia to investigate this question. There he conducted an experiment with the Himba tribe, whose language has no word for blue and no distinction between blue and green.
Vidipedia/Himba color experiment
Namibian tribe member participating in a research project.
When shown a circle with 11 green squares and one blue, they could not pick out which square was different from the others; those who could see a difference took much longer and made more mistakes than seems possible to us, who can clearly spot the blue square.
But the Himba have more words for types of green than we do in English.
When looking at a circle of green squares with only one slightly different shade, they could immediately spot the different one. Can you?
Vidipedia/Himba Colour Experiment
Which square is the outlier? For most of us, that's harder.
This was the unique square:
Vidipedia/Himba Colour Experiment
Davidoff says that without a word for a color, without a way of identifying it as different, it is much harder for us to notice what is unique about it, even though our eyes are physically seeing the blocks in the same way.
So before blue became a common concept, maybe humans saw it. But it seems they did not know they were seeing it.
If you see something yet don't notice it, does it exist? Did colors come into existence over time? Not technically, but our ability to notice them may have ...
For more fascinating information about colors, including information on how some "super-seeing" women may see colors in the sky that most of us have never dreamed of, check out the full Radiolab episode.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
Explanation: It is not every day that such an interesting cloud photobombs your image. The original plan was to photograph a rare angular conjunction of Mars and Venus that occurred a week and a half ago, with the added bonus of a crescent Moon and the International Space Station (ISS) both passing nearby. Unfortunately, on Madeira Island, Portugal, this event was clouded out. During the next day, however, a spectacular lenticular cloud appeared before sunset, so the industrious astrophotographer quickly formulated a new plan. A close look at the resulting image reveals the Moon visible toward the left of the frame, while underneath, near the bottom, are the famous planets with Venus being the brighter. It was the unexpected lenticular cloud, though, perhaps looking like some sort of futuristic spaceship, that stole the show. The setting Sun illuminated the stationary cloud (and everything else) from the bottom, setting up an intricate pattern of shadows, layers, and brightly illuminated regions, all seen evolving in a corresponding video. Mars and Venus will next appear this close on the sky in late August, but whether any place on Earth will catch them behind such a photogenic cloud is unknown.
Adam Victor Brandizzi
A journalist decides to build his own email client. It turned out great (the story)
Last summer, I reached peak email despair.
I started using email when I was 6 years old to keep in touch with my grandmother, and I’ve used it ever since—to boss around group project members in high school, to stay abreast of campus events in college, and to send job applications after graduation. I would get more email than I sent by far, but I was always able to keep my inbox trim.
But when I joined Slate as the magazine’s interactives editor, the floodgates opened. Emails from my boss. From my boss’s boss. From public relations firms. Endless reply-all chains, useful information buried midthread. To deal with the hundreds of messages I received every day, I tried every email application I could find: Gmail, Microsoft Outlook, Apple Mail, Mozilla Thunderbird, and a variety of lesser-known apps. I set up vast systems of mailbox rules, invented elaborate organizational methods, and followed complex workflows. Still, whenever I tried to deal with email, my eyes would glaze over, and my brain would turn to sludge.
I was email-depressed. You may well be too. Lots of people are, but email is not on the laundry list of things that people routinely complain about, like weather or allergies or public transit. Nobody asks, “How was the email today?” And nobody replies, “Awful, just awful. New York just has the worst email.” But ask people directly, and you’ll see there’s an epidemic of email depression. Last year, Slate conducted an internal survey about email, asking editors and writers, “How do you feel about the amount of emails in your inbox?” The responses included: “Exhausted.” “Overwhelmed.” “Alarmed.” “Fucked.”
Someday, I’m sure, we’ll communicate through direct brain-to-brain interfaces; email will seem, to our great zombie hive mind, like cave drawings do to us now, and the primitiveness of the past will inspire only pity and amusement. But for now, email is as certain as death and taxes. Like death, it’s inevitable, despite the miracles of modern technology. Like taxes, it’s universally bemoaned yet unquestionably necessary. It’s so integral to our 21st-century lives that unplugging from it has become a test of endurance and will, like a marathon, or a quest for spiritual relief—a vow of digital silence.
Why does email have to be such a Sisyphean slog for the billions of people who use it? It was this question that swirled through my brain one late night as I gazed into the ever-growing pile of email in my inbox. An email from Slate’s editor in chief, asking me about the status of a project. A flight confirmation. An email from a job applicant. A press release from the company that constantly emails me about Olivia Munn’s red-carpet looks. My inbox seemed like a library havocked by a tornado, with torn pages scattered helter-skelter across the room. I spent as much time searching for messages in that mess as reading them. Organizing them seemed like a futile fight against entropy, and by the time I was done, I barely felt like responding to them—after all, responding would just yield more email. My email felt broken. If only there were a way to fix it.
Maybe I could, I realized. Could I use the same technologies that I employ every day to create maps and widgets for Slate to make my own email application? One that would make email easy to organize, intuitive to use, something other than awful? So I embarked on the biggest technical challenge I’ve ever undertaken: I set out to build the perfect email application for me. The task would take me months, plunge me into email’s Rube Goldberg infrastructure, and drive me slightly bananas. There’s a fundamental conflict at the heart of email, between what it was designed to do decades ago and how we use it today. That conflict is the reason pretty much every email program is fatally flawed. The only way to “fix” email might be to change how we approach it entirely.
My mission: Build an email client for my laptop that would make email as enjoyable as possible, mixing and matching my favorite features from a variety of other apps and adding a few of my own. This program would be the perfect client for me—not for every user, not for my co-workers, and not for strangers. I’d call it SlateMail.
But the Internet couldn’t build the whole app for me. I’d have to stitch these components together into a usable solution. So I decided to dive right into the code. Needless to say, I spent my first couple days moving in the wrong direction. That’s because I started out assuming that email, like most tech products, is a sleek creature, optimized by years of evolutionary development to swim speedily through the great sea that is the Internet.
I was wrong.
Email is a dinosaur.
It's hard to remember that, because email didn't become mainstream until the turn of the century. But in fact it’s old enough that, like agriculture or fire, its origin is not precisely identifiable. Who deserves credit for it—or blame—is a contentious subject rooted in a semantic squabble about what types of electronic communication constitute “email.” If email is simply communication between two computer users, then it would be more appropriate to say that email simply started happening, at least as far back as the early 1960s. Back then, users of a machine at MIT interacting via remote terminals would routinely leave files in shared directories with titles like “to Jack” or “to Steve.” In 1965, a user formalized this system with the “mail” command, which may be the first formal association of digital messaging with actual mail.
An academic journal about the history of computing tells the rest of the story: A group of computer scientists proposed the first standards in 1973, the year of Watergate and Roe v. Wade, making email's standards older than the average American. These first standards established little but the “from,” “subject,” and “date” headers that we’re all familiar with in email today. Computer scientists spent the rest of the decade arguing over minute aspects of email. Should email headers be machine-readable or human-readable? Should real names appear in the “to” field? Should the “date” field use a 12-hour or 24-hour standard? It was also in the 1970s that the “@” convention came into use and “reply” functions made it easier to respond to received emails. A computer scientist wrote the second set of standards in 1982, the same year another published the bedrock protocols of today’s worldwide Internet. Those standards defined email as it exists today and have been revised only slightly since.
These standards, and protocols whose names you know from the error messages you see when your email doesn’t work (POP? IMAP? SMTP?), are managed by an international group of technologists called the Internet Engineering Task Force. They ensure that email’s two subsystems—providers and clients—play well together. Email providers are services that manage the exchange of email, such as Google, Apple iCloud, and Exchange servers. Email clients are the applications with which people read and compose emails. Apple Mail, Mozilla Thunderbird, Microsoft Outlook, and any mail apps on your phone: all email clients. Email clients that are associated with an email provider often bear its name—“Gmail” could mean either Google’s email client or Google’s email service, or both—but they’re two separate things. You could, for example, use Gmail the provider without Gmail the client at all. (If you read your Gmail on your iPhone’s default mail program, you already do.)
When Twitter wanted to add images to tweets, it just waved a magic wand, and bam! Suddenly everyone could tweet images. That’s not how change happens with email. Nobody “owns” email. There is no one at the top of the command chain that can change it overnight; there is no command chain. That’s powerful: It means any device that can connect to the Internet can also send and receive email. You could wash your hands of Google and Apple and Microsoft and turn your own computer into a mail server. But this decentralization also means there are many cooks in the kitchen: the providers, the clients, and the standardizers. As a result, change arrives more slowly than we’ve come to expect from other technologies.
The problem with email that became painfully clear to me while developing SlateMail is that there’s an incongruity between what it was made for and how we use it today. The mailman wouldn’t dump hundreds of messages into your mailbox every day. You wouldn’t compose a letter to everyone in your office asking if anybody would like to get burritos for lunch. But nevertheless, in crucial ways email still resembles slow, formal, encumbering, physical mail—even though it has displaced many other forms of communication: the phone call, the meeting, the fax, the pager, the desk fly-by. Due to email’s decentralization, clients, providers, and standardizers can only graft new functionalities onto the existing model. Thus, though innovation in email is happening, it’s characterized by features balancing cunningly and sometimes haphazardly atop an antiquated system—features that attempt to either restore email to its original metaphor or evolve it into something else entirely.
Imagine two people playing chess over the phone. One of them makes her first move, saying, “Pawn to C4.”
The other, making his own move, takes a deep breath and says: “A1: rook. B1: knight. C1: bishop. D1: queen. E1: king. F1: bishop. G1: knight. H1: rook. A2: pawn. B2: pawn. C2: blank. D2: pawn. E2: pawn. F2: pawn. G2: pawn. H2: pawn. A3: blank. B3: blank. C3: blank. D3: blank. E3: blank. F3: blank. G3: blank. H3: blank. A4: blank. B4: blank. C4: pawn. D4: blank. E4: blank. F4: blank. G4: blank. H4: blank. A5: blank. B5: blank. C5: blank. D5: pawn. E5: blank. F5: blank. G5: blank. H5: blank. A6: blank. B6: blank. C6: blank. D6: blank. E6: blank. F6: blank. G6: blank. H6: blank. A7: pawn. B7: pawn. C7: pawn. D7: blank. E7: pawn. F7: pawn. G7: pawn. H7: pawn. A8: rook. B8: knight. C8: bishop. D8: queen. E8: king. F8: bishop. G8: knight. H8: rook.”
Player 2, to make his move, has identified the state of every square on the board. What's wrong with this picture? It involves a lot of unnecessary communication. The players only need to convey how they’re changing the board, not the state of every square. A single statement, “Pawn to D5,” would suffice. Otherwise, the game would drag on for days.
Email is like this verbose chess player.
The first problem I had to solve in building my email client was synchronizing the email on my computer with the email on the server. Virtually every desktop email client performs this sync every minute or so, so I expected there to be some common, efficient way of conducting this process. I expected my client could simply ask the server, “What’s changed since I last spoke with you?” The server would respond, “Well, I've got three new emails for you; here they are.” The client would tell the server, “These four emails here are now deleted, and these two emails here have now been read.” But in fact email doesn’t work this way.
Instead, the server acts like the verbose chess player. To properly sync via the standard protocol known as IMAP, the email client must query the server for a list of email IDs and their states (whether they’re read or unread), compare those with the IDs and states it already has, ask for the emails it's missing, delete whatever extra emails it's still holding on to, and ensure the states of the emails match. The amount of unnecessary data moving between the server and client as a result of these roundabout processes is hilarious. Your email client must download the statuses of potentially tens of thousands of email IDs, simply to reflect a single change. (Or no change!) It works, sure, but it also means that email clients move slower, demand more data, and siphon more bandwidth than they ought to.
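The verbose sync can be sketched in a few lines. This is a simplified model, not real IMAP code: assume the client has already downloaded the full list of message UIDs and their flags from the server (the expensive part), and must now diff that snapshot against its local cache just to discover what changed. The function name and data shapes below are my own invention for illustration.

```python
def diff_mailbox(server_state, local_state):
    """Diff a full snapshot of server state against the local cache.

    server_state / local_state: dict mapping message UID -> frozenset of
    flags, e.g. {101: frozenset({'\\Seen'})}. Illustrative structures,
    not a real IMAP library's types.
    """
    # New messages the client has never seen.
    to_fetch = [uid for uid in server_state if uid not in local_state]
    # Messages deleted on the server but still held locally.
    to_delete = [uid for uid in local_state if uid not in server_state]
    # Messages whose flags (read/unread, etc.) changed.
    to_update = [uid for uid, flags in server_state.items()
                 if uid in local_state and flags != local_state[uid]]
    return to_fetch, to_delete, to_update
```

The point is what happens before this function runs: the client cannot ask the server for the diff, so it must fetch the state of the whole board, chess-player style, and compute the diff itself.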
Now, most of the time, you the user may not notice how inefficiently your email client is operating. But the syncing problem illustrates why email as a technology is slow to evolve: The framework on which you can make a new email program is overbuilt and clumsy, but it doesn’t change until enough providers and clients agree to do something about it. For example, in 2008 a member of the IETF developed a protocol called QRESYNC, which would allow the client to simply ask the server what’s changed instead of downloading a bunch of superfluous data. But many major providers, including Gmail and Outlook, still don’t support QRESYNC (pronounced CUE-ree-sink); neither does my work email provider. Why should they? Few clients support it—and why should those clients support it if the providers don’t? Nobody moves unless everyone pedals together. So SlateMail would remain frustratingly inefficient.
When Gmail launched as a public beta in 2007, one of its more remarkable features was the “conversation.” This feature groups related messages together by default so that you, the user, see each email in the context of a larger thread. In Gmail and many clients that implement it today, conversation threading also means that messages in the same thread appear as one single, collapsed email in your inbox, so your inbox is not clogged with 20 individual emails about the same subject. It’s a key innovation in email, a feature that brings the medium closer to how we’ve begun to use it: as a platform for democratic discussions involving several participants, like a virtual, ongoing brainstorming meeting. I would never use SlateMail without it. So I went digging into email’s underlying machinery to find a way to fetch a thread.
As it turns out, an email does not contain any metadata that explicitly identify its thread. As far as email’s protocols are concerned, the universal ingredients of email include mailboxes, keywords, and flags, but no conversations. There was no way for my email app to ask my email provider for all the messages in a conversation. Even so, Gmail and now other clients were somehow making conversation threading work. How?
As it turns out, Gmail wasn’t the first client to group related emails together like this. The feature dates back at least to old clients from the ’90s, like Netscape Mail, Grendel, Evolution, and Balsa. Without any conversation information from the IMAP protocol, these clients instead inferred it from the email’s content or metadata. The thread might be revealed by something as simple as a shared subject line, or an email might contain metadata indicating the message ID of the email to which it’s responding. Given all the messages in an account, a client could compile threads by tracing replies back to their original messages. The process involves an elaborate algorithm to account for the variety of ways clients use these hidden headers. It wasn’t until 2008 that this algorithm found its way into IMAP as a proposed extension to the protocol. This extension, called THREAD, would make it the responsibility of the email server to thread emails. Unfortunately, as with QRESYNC, neither Gmail nor Outlook supports THREAD on their IMAP servers, so I had to implement the algorithm in my own client.
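A toy version of that reply-chasing idea can be written directly from the headers. This sketch assumes each message carries its own ID and, optionally, the ID of the message it replies to (the In-Reply-To header); the real algorithm those clients used is far more elaborate, handling missing parents, full References chains, and subject-line fallbacks.

```python
def thread_messages(messages):
    """Group messages into threads by chasing replies back to their roots.

    messages: list of dicts with 'id' and 'in_reply_to' keys (a toy
    stand-in for the Message-ID and In-Reply-To headers).
    Returns a dict mapping each thread's root id to its member ids.
    """
    by_id = {m['id']: m for m in messages}

    def root_of(m):
        hops = 0
        # Walk up the reply chain; the hop cap guards against cycles.
        while m.get('in_reply_to') in by_id and hops < len(by_id):
            m = by_id[m['in_reply_to']]
            hops += 1
        return m['id']

    threads = {}
    for m in messages:
        threads.setdefault(root_of(m), []).append(m['id'])
    return threads
```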
Then there’s the question of how those threaded messages are displayed. Here's a game for your next road trip: Carry on a conversation in which every participant must, after making her own original statement, repeat everything that the previous person said in its entirety. Continue as long as you can. Between two people, it might go something like this:
“What exit am I getting off on?”
“According to the map, Exit 45. You said, ‘What exit am I getting off on?’ ”
“I think it’s coming up. You said, ‘According to the map, Exit 45. You said, “What exit am I getting off on?” ’ ”
“You just missed Exit 45.”
Now imagine an alternate reality in which you had to talk like this, a hellish world in which every utterance must include a reiteration of all those preceding it. Welcome to email!
When you hit the reply button in your email client, it automatically quotes the messages to which you are responding. Your reply thus contains two messages: Your new, original message and the full text of the message to which you are responding. That is, unless you are writing a reply to a reply. In that case, your reply contains three messages. Individual messages grow progressively fatter as the thread expands.
Because all these superfluous quoted messages would pollute threaded views, modern apps have taken to collapsing quoted messages so it’s easier for the user to scan a thread. But this process often fails, causing a thread to explode in an unsightly mass of quoted messages. Useless but dangerous, the quoted message is the appendix of email, overlooked until it bursts. Why do some threads go mad?
Because emails don’t appear to follow any single convention on how to treat quoted messages. In emails sent from Gmail's client, for example, a quoted message starts with a line like this:
On Wed, Feb 18, 2015 at 12:41 PM, Chris Kirk <firstname.lastname@example.org> wrote:
In emails sent from Outlook’s Web client, a quoted email starts with a horizontal rule, followed by something like this:
From: Kirk, Chris
Sent: Monday, February 23, 2015 10:30 AM
To: Dan Kois
Subject: Re: Fresca
Other clients simply start quoted messages with this:
From: Chris Kirk
Sent: Monday, February 23, 2015 10:30 AM
To: Dan Kois
Subject: Re: Fresca
Such variability makes it nearly impossible to teach an email client to quickly recognize where the original message ends and the quoted message begins. Google even patented its method. It’s tricky business; if your method is too conservative, users will get annoyed by the mess of quoted messages below their emails; if it’s too liberal, it will eat precious lines of the new message.
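For illustration, here is roughly what the conservative end of that spectrum looks like: a couple of regular expressions that catch only the Gmail-style and header-block-style intros quoted above. The patterns and function are my own sketch, not any client's actual code; any intro format they don't know about sails right past them, which is exactly the fragility at issue.

```python
import re

# Two of the intro conventions shown above; real mail has many more.
GMAIL_INTRO = re.compile(r'^On .+ wrote:\s*$')   # "On Wed, Feb 18 ... wrote:"
HEADER_INTRO = re.compile(r'^From: .+$')         # "From: Chris Kirk"

def find_quote_start(body):
    """Index of the first line that looks like a quoted-message intro,
    or None if no known pattern matches."""
    for i, line in enumerate(body.splitlines()):
        if GMAIL_INTRO.match(line) or HEADER_INTRO.match(line):
            return i
    return None
```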
To build the process on SlateMail, I looked, as I often do, to Stack Overflow, a community for coders to help one another with development challenges. I found only ill omens. When one user asked the community how to properly collapse quotes, another warned, “You're in for a world of hurt.” A Facebook engineer explained on Quora that there is no clean solution. Bah, what would a real engineer with years of professional experience know anyway? I tried to make a clean solution. And I failed. I tweaked my solution. It still failed. Whatever method I tried, it would either gobble up chunks of new emails or be blind to huge sections of quote-gunk. I eventually gave up and resolved to have every message be automatically clipped to the first hundred or so characters except for the newest one. The user can just click a message to collapse or expand it. It's far from perfect—I hate clicking more than I need to—but it’ll get me by.
After three weeks of coding, I had a semifunctional email client. It could receive, display, and send email. Features I considered crucial, such as email threading and quoted message collapsing, worked somewhat reliably. There were other common features I wanted to add, features that most users would consider non-negotiable, such as search functionality and email address auto-completion. I knew such features could come with time, though. What I wanted to do is skip ahead and try to make SlateMail different. I wanted to make it an application that wouldn’t just receive, display, and send email, but help me deal with email.
When I think of managing email, I'm reminded of the famous scene from I Love Lucy, in which Lucy and Ethel are tasked with wrapping pieces of chocolate trundling along a conveyor belt. They begin confidently. “Well this is easy,” Lucy says. “Yeah, we can handle this OK,” Ethel replies. But the chocolates arrive faster and faster, and soon enough Lucy and Ethel are frantically shoving them in their mouths in a desperate, futile attempt to stay apace. That's how email felt to me, except you can't eat it when it starts to overwhelm you.
For example: I often find myself completely losing track of something that urgently requires my response, or struggling for several minutes to find a single email from a few weeks ago. That’s because email was not designed with the assumption that anyone would get hundreds of messages a day. To a computer, a direct email from your boss looks the same as your water bill. It's on you to put it where it needs to be. IMAP provides only two tools for this: mailboxes and flags. You're expected to sort your messages into mailboxes as if you're sorting paperwork into folders, and/or flag them with a limited set of colors. Do this long enough, however, and you'll soon end up like I did, rifling through an elaborate hierarchy to find the proper folder. The more complex your system of organization, the longer it takes you to organize or find each individual email, and the less useful the system becomes. Eventually, the time you spend maintaining the system outweighs the time it saves you.
What if it worked differently? What if we thought of a great workflow first and built an email application around it?
In 2011, Alex Obenauer and Josh Milas, two undergraduates at Virginia Tech, did just that. For a design class, Obenauer submitted a paper about an email client based on the idea of treating your inbox not like an inbox but like a to-do list. After all, every email in your inbox demands something from you: your brief attention, your reply, or some action like paying your water bill. After a successful Kickstarter campaign, they built Mail Pilot, an email client integrated tightly with this to-do list idea. In Mail Pilot, to mark an email as complete, you only need to hit the space bar, and the email will be literally crossed out on screen before disappearing into the “completed” folder. Most impressively, “deferring” an email allows you to select a date at which it will appear again at the top of your inbox—a feature that you can't replicate on any mainstream clients without third-party extensions (e.g. Boomerang for Gmail or MailTags for Apple Mail).
Mail Pilot achieves all this by using the limited architecture of email in creative ways. Marking an email “complete” merely moves it to a special mailbox Mail Pilot automatically creates for completed messages. When the user defers an email to a future date, Mail Pilot creates a folder for that date, and when that date rolls around, it automatically displays the messages in that folder alongside the messages in your inbox. I replicated this feature in SlateMail: Selecting a thread and hitting the D key (for “done”) completes it, and hitting the S key (for “schedule”) defers it to a specific date.
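The deferral trick is simple enough to sketch. Assume, as described above, that a deferred thread is just moved into a mailbox named after its date; the prefix and naming scheme below are my own guesses at how such a client might lay this out, not Mail Pilot's or SlateMail's actual folder names.

```python
import datetime

DEFER_PREFIX = 'Deferred/'  # hypothetical mailbox naming scheme

def defer_mailbox(date):
    """Name of the mailbox a thread is parked in until `date`."""
    return DEFER_PREFIX + date.isoformat()

def due_mailboxes(folders, today):
    """Deferred mailboxes whose date has arrived; the client surfaces
    their contents back alongside the inbox."""
    due = []
    for name in folders:
        if name.startswith(DEFER_PREFIX):
            when = datetime.date.fromisoformat(name[len(DEFER_PREFIX):])
            if when <= today:
                due.append(name)
    return due
```

On each refresh the client scans the folder list, finds the deferred mailboxes that have come due, and merges their threads into the inbox view.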
But, still, I wanted more.
In how they handle emails after reading and acting on them, I’ve found that people fall into two camps: searchers and sorters. A sorter carefully divides his mail into different folders so that he can quickly find messages later. A searcher, meanwhile, thinks of her email as an infinite repository, and she has faith she’ll remember enough about an email in the future to find it once again when it’s needed using her client’s search function. Sorting requires discipline, and searching requires a sharp memory. I have neither.
Deep inside, I want to be a sorter. That’s because I spend far too much time every day querying my inbox for old emails. Searching for a message is a tedious process of trial and error; sometimes, all I can remember is the sender. But sorting never lasts that long; folders soon become subfolders and subfolders become sub-subfolders. The more elaborate my sorting system, the longer I needed to sort new mail—and the more likely I’d eventually have to search for that message anyway when I couldn’t tease out which sub-sub-subfolder I placed it in. Thus, I’ve drifted between sorting and searching, never truly fitting into either.
Maybe SlateMail could help.
Almost all of my important work-related communication can be divided into discrete projects. This essay is a project, completely independent from each of the interactive widgets I make. But I often feel like all the various threads that belong to a single project are floating in space. After taking projects off the back burner, I’ve found myself lost, struggling to remember what the last decisions and communications related to the project were.
So I developed an organizational concept called, yes, the “project”: a word or phrase that you can assign to a thread to connect it with others, regardless of whether they are in the “done,” “open,” or “scheduled” mailboxes. Say my boss emails to ask when I’m finally going to send him my final draft of this story, and I want to associate that email with the other emails we’ve exchanged about this project. If I were relying on mailboxes, I would have to scan a list of boxes for my “SlateMail” mailbox. With SlateMail’s projects, I can just hit P, type “SlateMail,” and the email is instantly grouped with the other emails I’ve tagged with “SlateMail.” No need to scan an ever-growing hierarchy for the correct folder. No laborious dragging and dropping. Just P, type, enter. Later, typing a project name pulls up all the threads contained within it. Projects make sorting easier and searching faster.
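In-memory, a project map like the one described above could be as simple as this sketch; the class and method names are my invention, not SlateMail's actual code:

```python
from collections import defaultdict

class ProjectStore:
    """Hypothetical sketch of the thread-to-project grouping: each
    thread belongs to at most one project, looked up in either
    direction."""

    def __init__(self):
        self._by_project = defaultdict(set)  # project name -> thread ids
        self._by_thread = {}                 # thread id -> project name

    def assign(self, thread_id: str, project: str) -> None:
        """What happens after you hit P and type a project name:
        reassigning a thread removes it from its old project first."""
        old = self._by_thread.get(thread_id)
        if old is not None:
            self._by_project[old].discard(thread_id)
        self._by_thread[thread_id] = project
        self._by_project[project].add(thread_id)

    def threads(self, project: str) -> set[str]:
        """All threads grouped under a project, for the project pane."""
        return set(self._by_project[project])
```

Because the lookup is a plain dictionary rather than a folder hierarchy, grouping and retrieval are both constant-time, which is what makes the P-type-enter workflow feel instant.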
“Big deal!” you say. “You can do basically the same thing with Gmail’s labels.” Unlike labels, however, projects are woven tightly into my application’s design. When I click an email that I’ve previously organized into a project, for example, I’ll see not only the messages in its thread but also a project pane displaying that thread in the context of all the threads in the same project, showing which are open, scheduled, and done. What’s more, I’ll see all the project’s attachments for quick access.
So, how am I pulling this off within the limiting confines of IMAP? I’m not. The project data is stored only in SlateMail, not on the email server. For my needs now, that’s fine. Later, I can take inspiration from MailTags, an extension to Apple Mail that allows you to tag threads. MailTags saves tagging data by creating a new mailbox for each new tag and copying messages to it. This allows you to view all the messages belonging to a tag from a traditional mail client. It also means that if your computer blows up, all that tagging information is backed up on the IMAP server. Eventually, I can modify SlateMail to store project data in the same way.
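Translated into code, the MailTags trick might look like this sketch using Python's standard `imaplib` module. The `Tags/` naming scheme and the function names are my own guesses at the approach, not MailTags' documented behavior:

```python
import imaplib

def tag_mailbox_name(tag: str) -> str:
    """Server-side mailbox used to persist a tag; spaces become
    underscores so the name needs no IMAP quoting. (The naming
    scheme is hypothetical.)"""
    return "Tags/" + tag.replace(" ", "_")

def persist_tag(conn: imaplib.IMAP4, mailbox: str, uid: str, tag: str) -> None:
    """Mirror a local tag to the IMAP server by copying the message
    into a per-tag mailbox. `conn` must already be authenticated;
    `mailbox` is where the message with UID `uid` currently lives."""
    target = tag_mailbox_name(tag)
    conn.create(target)            # server answers NO if it exists; that's fine
    conn.select(mailbox, readonly=True)
    conn.uid("COPY", uid, target)  # the message now appears in both places
```

Because the tag data lives in ordinary mailboxes, any traditional client can browse it, and it survives a dead hard drive along with the rest of the account.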
My projects concept would help me wrap the chocolates more efficiently, but what if I could also slow down the conveyor belt? Often a developing reply-all storm has yielded all the useful information it’s ever going to; from here on out it’s just jokes and complications. So I borrowed Gmail’s “mute” function. Select a thread, hit M, and never see the thread pop up in your inbox again. Then, I gave myself a final treat: Select an email, hit B for “block,” and never hear from that sender again. Take that, Olivia Munn’s red-carpet looks!
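Both the mute and block rules reduce to a single filter applied at sync time. This is a minimal sketch with data structures of my own invention, not SlateMail's actual code:

```python
def should_hide(sender: str, thread_id: str,
                blocked: set[str], muted: set[str]) -> bool:
    """True if a newly synced message should never reach the inbox:
    either its sender was blocked (B) or its thread was muted (M).
    Sender comparison is case-insensitive, since email addresses are."""
    return sender.lower() in blocked or thread_id in muted
```

Every incoming message passes through this predicate before it is displayed; hiding rather than deleting means the messages still exist on the server if you ever relent.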
At long last, I had built an email client of my dreams.
It was awful.
Bugs are inevitable. The more you code, the more bugs you’ll have. So it was no surprise that when I finally started to use SlateMail as my email client, it did weird things. It would crash in the middle of syncing. After I sent an email, I couldn’t select any others. The IMAP connection would randomly hang up, as if the server was just throwing up its hands and saying, “I can’t work with this thing anymore.”
Getting all of SlateMail’s parts to play together became my singular obsession. For a week I did little but sleep, eat, and debug. Sometimes, I could fix a bug in moments. In other cases, SlateMail would crash without an error message, and I would have to go through my code line by line to isolate the problem, a trial-and-error process spanning days. Because I relied so heavily on technology other people had built, there was a lot going on in my own app that I didn’t truly understand. I tracked one bug to a single line deep in the code in a programming language I don’t even know. In a few cases, glitches seemed so cryptic to me that instead of fixing them I added routines so the program would keep behaving normally in spite of them.
Screenshot of SlateMail code
I found the process frustrating, humbling, and strangely fun. I imagined I was on CSI—but instead of a person, it was my app dead on the floor, and I had to find out who killed it and how.
By trying to use my own app, I also realized how much I had forgotten to put into it. There’s a whole host of functions and rules that we expect from even the simplest email programs. We create new mailboxes. We select multiple messages and delete or move them together. When we select an email, navigate to a different mailbox, then come back, we expect the same email to be selected. When we look at our tree of mailboxes, we expect to see “Inbox” at the top, even though it doesn’t come first alphabetically.
By that point SlateMail had taken over my life to a far greater extent than email ever had. Whenever I found time to think, I coded in my head. I improved the syncer on a walk to the grocery store. I built the mailbox tree standing on the subway. Miraculously, the pieces started to come together. After a few days, I could use SlateMail for an entire minute without something going horribly wrong. Then, two minutes, three minutes, four minutes!
Finally, enough parts worked that I could give SlateMail’s productivity features a true test run. Like Mail Pilot, Dropbox’s Mailbox, and Google’s new Inbox, SlateMail is designed around a workflow for new email. When an email arrives, you either take care of it and mark it as complete or defer it. The once-mythical Inbox Zero has become a daily experience for me using SlateMail. It’s a relief to know that the status of all my new emails is documented information, that each email is anchored with a completion status instead of floating in space. Theoretically, I’ll never forget to respond to an email again, and knowing that is tranquilizing. Every time I hit D and watch an email disappear from my inbox, I get a gratifying little endorphin rush. I did something today!
My projects feature has also been really useful. Instead of rifling through a monstrous folder hierarchy, trying to find that one folder where an email belonged, I tap P and type the project name. It’s the perfect sorting mechanism for a power user.
More importantly, SlateMail provides a broad view of a project that you don’t get in mainstream clients. With SlateMail’s project pane, I can see all the threads in a project, their completion states, and their attachments, all in one place. Let’s say I put this essay on the back burner and return to it in a couple of weeks. Where did I leave off? To whom do I still need to respond? Where is the most recent draft? Before SlateMail, I would rely on trial-and-error searching. Search emails from my boss. Search emails from my boss with attachments. Search emails from my boss directly to me with attachments older than two weeks. Too much work! Now, it’s as simple as clicking “projects” and typing “SlateMail.” My mail app finally reflects how I organize work mentally.
But my favorite feature by far is blocking senders. Sure, with most clients you can make mailbox rules or filters to weed people out, but SlateMail reduces that command to a single keystroke, and that makes me feel tyrannically powerful. When marketing scoundrels send me junk mail without the courtesy of an unsubscribe link, I simply press B to send them to the dungeons, never to be heard from again.
Nothing makes you appreciate the complexity of a technology, whether a toaster or an email application, like an attempt to build it yourself. In my quest to build a perfect email client for me, I saw how challenging it is to implement many of the features we now take for granted—and how much harder it is to invent new, better ones.
For the end user, email's old infrastructure means slower, heavier apps and unreliable processes. But the larger consequence is hidden: Email apps are so hard to build correctly that they're rarely built at all. It's been more than a month, but SlateMail is still laughably buggy, and it’s missing features most people consider crucial, like search. I’m not sure that the application even works with most major email providers. If it does, there are a hundred opportunities for under-the-hood optimizations. How long would it take me to build a minimum viable product for most users, something that could put up even token competition in the email space? Years. But I promised to release SlateMail ready or not, so I hereby open it in all of its monstrosity to the world! Brave beta testers can download it here, and fellow developers can view my abominable code here. I’m eager to hear what people think of the product in its nascent state.
Email isn't a dragon to be slain. It's an old beast of burden, and we've abused it by throwing the whole spectrum of human communication on its tired back.
The barrier to entry is high, and as email evolves, it will only get higher. Email clients must avoid what software developers call “breaking changes”—modifications that wouldn't be backward-compatible, that would, in the case of email, put some clients or providers out in the cold. Everything has to work with everything else, even the ancient clients and providers people still use from a decade ago. “It's the plumbing in Manhattan,” Dave Baggett, founder of the email app Inky, told me. “You can complain all you want, but it's not going to be ripped out. It just can't. There are too many dependencies on it.” My email client will treat my email despair, but I don’t believe it can be cured. Email will never be what we want it to be. Reading through these protocols, writing all this code, I've realized why.
The problem isn’t email’s laborious syncing processes, or the exploding threads, or the medium’s off-putting formality, or even the volume of email we get. It’s that we’re changing faster than email can. We now expect our communication to be frictionless—we expect that we need merely to twitch to be understood. We expect our everyday apps to improve not once each decade but rather every month. The problem isn’t email. The problem is us.
Email isn't a dragon to be slain. It's an old beast of burden, and we've abused it by throwing the whole spectrum of human communication on its tired back. What if saving this loyal creature doesn't mean radically transforming it but merely easing its load? Maybe the dream of an email-free future isn't dead; maybe it just means a future in which email is merely a sliver of our communication rather than the whole pie. Already, Facebook and Snapchat and texting have relieved email's burden in the social sphere. Now, relief may be coming to the business world, where email yet remains the go-to tool for workplace communication.
Last year, Slate started to use Slack, one of a spate of new team collaboration tools. Like similar apps, such as Hall, Twoodo, and Hipchat, Slack is an instant messaging tool designed for the workplace. It allows a company to set up persistent chat rooms for specific topics of communication, called channels; the whole enchilada is only visible to Slate staffers, not anyone from the outside world. Now, staffers don't send out emails to indicate their whereabouts each morning; they simply post a few words in the #whereabouts channel. If New York staffers want to go out to lunch with co-workers, they simply alert the #slateny channel. Staffers can also chat directly with each other and create private chat rooms.
Screenshot courtesy of Slack
Many at Slate were skeptical at first. Wouldn't this just fracture our communication, force us to check two windows instead of one? I don't know precisely how much Slate's email volume has dropped since we started using Slack, but the word plummeted comes to mind. Mailing lists where panda-related debates once raged are now largely silent. Pithy retorts and bite-sized asks (and raging panda debates!) now belong to the Slack realm. Email, meanwhile, looks like mail again: the channel of the lengthy, semiformal message. I still hash out my projects in so many email threads, but now, when I need, say, a yes or a no from my boss or a URL from a developer, I don't draft a letter replete with salutations and signatures. “Email doesn’t disappear,” James Sherrett, Slack’s director of accounts, told me. “It just becomes a component of your communication.”
Tools like Slack won’t supplant email or cure email depression. But they offer a treatment and inspire a suggestion: As the providers and clients and standardizers nudge email into the future, maybe we should do a better job of fitting the tool to the task. Sure, that means experimenting with email apps and workflows to find the perfect combination for you or even, if you’re crazy, developing your own email client. But far more important is to lean a little less on this 50-year-old technology when so many other options are available to us: social media, team collaboration tools, even auditory corporeal interfacing (i.e., talking). The next time you’re about to send a quick email, don’t. Instead of forcing your contact to sync, thread, and triage, save email for the messages that fit the medium. Maybe together we’ll all find email happiness. How lovely the email was today!
Photos by Juliana Jiménez Jaramillo. Illustrations by Alex Eben Meyer. Layout compiled by Holly Allen.
Riddle me this…
"I received a birthday card. Thanks, ‘grampa’." -papasmurf61
Adam Victor Brandizzi
The sky is the only good thing about Los Angeles.
Last night’s sunset
#marvista #losangeles #california #sunset #cloudporn #lastory #potd (at Mar Vista, Los Angeles)
People will stare. Make it worth their while → Zuhair Murad Haute Couture | S/S ‘14
Imagine that (for some reason involving cultural tradition, family pressure, or a shotgun) you suddenly have to get married. Fortunately, there are two candidates. One is charming and a lion in bed but an idiot about money. The other has a reliable income and fantastic financial sense but is, on the other fronts, kind of meh. Which would you choose?
Sound like six of one, half-dozen of the other? Many would say so. But that can change when a third person is added to the mix. Suppose candidate number three has a meager income and isn’t as financially astute as choice number two. For many people, what was once a hard choice becomes easy: They’ll pick the better moneybags, forgetting about the candidate with sex appeal. On the other hand, if the third wheel is a schlumpier version of attractive number one, then it’s the sexier choice that wins in a landslide. This is known as the “decoy effect”—whoever gets an inferior competitor becomes more highly valued.
The decoy effect is just one example of people being swayed by what mainstream economists have traditionally considered irrelevant noise. After all, their community has, for a century or so, taught that the value you place on a thing arises from its intrinsic properties combined with your needs and desires. It is only recently that economics has reconciled with human psychology. The result is the booming field of behavioral economics, pioneered by Daniel Kahneman, a psychologist at Princeton University, and his longtime research partner, the late Amos Tversky, who was at Stanford University.
Behavioral economics has created a large and growing list of ways that humans diverge from economic rationality. Researchers have found that all sorts of logically inconsequential circumstances—rain, sexual arousal (induced and assessed by experimenters with Saran-wrapped laptops), or just the number “67” popping up in conversation—can alter the value we assign to things. For example, with “priming effects,” irrelevant or unconsciously processed information prompts people to assign value by association (seeing classrooms and lockers makes people slightly more likely to support school funding). With “framing effects,” the way a choice is presented affects people’s evaluation: Kahneman and Tversky famously found that people prefer a disease-fighting policy that saves 400 out of 600 people to a policy that lets 200 people die, though logically the two are the same. While mainstream economists are still wrestling with these ideas, outside of academe there is little debate: The behaviorists have won.
Yet for all their revolutionary impact, even as the behaviorists have overturned the notion that our information processing is economically rational, they still suggest that it should be economically rational. When they describe human decision-making processes that don’t conform to economic theory, they speak of “mistakes”—what Kahneman often calls “systematic errors.” Only by accepting that economic models of rationality lead to “correct” decisions can you say that human thought processes lead to “wrong” ones.
But what if the economists—both old-school and behavioral—are wrong? What if our illogical and economically erroneous thinking processes often lead to the best possible outcome? Perhaps our departures from economic orthodoxy are a feature, not a bug. If so, we’d need to throw out the assumption that our thinking is riddled with mistakes. The practice of sly manipulation, based on the idea that the affected party doesn’t or can’t know what’s going on, would need to be replaced with a rather different, and better, goal: self-knowledge.
The behaviorist: Daniel Kahneman pioneered behavioral economics together with the late Amos Tversky. Photo by Sean Gallup/Getty Images for Burda Media
Nowadays, that fast-food company trying to get you to eat fries and the Health Department trying to get you not to both use techniques drawn from behavioral economics—framing and priming you to make certain choices. Welcome to the new era of “choice architecture,” “behavioral marketing,” and “nudge.”
It’s guiding a new approach to advertising and marketing based on a simplified version of behavioral economics—which holds that people are inconsistent, illogical, easily influenced, and seldom aware of why they choose what they choose. “It’s all about leveraging the unconscious factors that drive 95 percent of consumer decision-making, and the best way to do that is through behavioral economics,” wrote Michele Fabrizi, president of the advertising agency MARC USA, in a recent issue of Advertising Age. Behavioral approaches are also infusing management. According to John Balz, Chief Behavioral Officer of a software firm called Opower, as many as 20 percent of Fortune 500 companies now have someone responsible for bringing behavioral science perspectives to business decisions and operations.
Governments are also jumping on the behavioral science bandwagon. In fact, the massive interest in behavioral economics shown by private enterprise has become an argument in favor of the “choice architecture” now used by both governments and do-gooder organizations. With companies working so hard to get you to eat potato chips, drink beer, and spend all your money right now, goes the argument, it’s the government’s duty to use similar techniques to get you to make better choices. This approach, dubbed “Nudge” by economist Richard Thaler and law professor Cass Sunstein in their 2008 book of that title, marks an even bigger change for politics than for business. Citizens of democracies are used to government telling them what they should do, and offering carrots, or sticks, to get compliance. They aren’t so used to government trying to change people’s behavior without being noticed.
For nudge policies to work, experts have to identify an undesirable outcome, then figure out how to use our own decision-making “errors” to “nudge” us toward a better choice. For example, Sunstein writes, “Many people think about their future selves in the same way they think about strangers, but if you show people pictures of what they’ll look like in twenty years, they get a lot more interested in saving for retirement.” In other words, helping people who make mistakes requires experts who know “better.”
Today, nudge is truly a global phenomenon. According to Mark Whitehead, a geographer at Aberystwyth University in Wales, and his colleagues, nudge policies are in place in 136 of the world’s 196 nations. And in 51 of those nations, including China, the United States, Australia, and countries in West Africa, East Africa, and Western Europe, the approach is being directed by a nationwide, centralized authority. From 2009 to 2012, Sunstein himself was head of the White House Office of Information and Regulatory Affairs, which oversees all U.S. government regulations. The United Kingdom has a “nudge squad” (Thaler is a consultant), as does the U.S. government.
This is how a debate among psychologists and economists has come to have cultural significance. A government that assumes people make mistakes needs experts to describe the “correct” decisions, which doesn’t always happen. That’s an odd position for a democratic government to be in, argues Mark D. White, a philosopher at the College of Staten Island and a skeptic of nudge tactics. “Yes, business does it,” he says, “but here is the difference: Everybody knows business tries to manipulate you. We don’t expect the government to manipulate us. That’s not the role that most of us assign to our government.” People who lost their retirement savings in the financial crisis of 2008 might be forgiven for wishing they had not been nudged to invest so much.
What’s more, the premise of many nudge tactics—that people’s thinking processes are full of mistakes—may be wrong.
At the heart of the notion that human decision-making is error-prone is the conviction that aspects of choice should divide neatly into a relevant signal (what does it cost? how much does it matter to me?) and irrelevant noise (does the way it was presented sound hopeful or sad? Is it sunny outside?). But defining what information is relevant isn’t as obvious as it may sound.
What is “noise” in a one-off choice (pick A or B) can be relevant information when an organism is repeatedly chasing resources in changeable and uncertain conditions, notes Donald Hantula, a psychology professor who directs the Decision Making Laboratory at Temple University. If you read research that emphasizes “mistakes,” he says, “one of the conclusions you would come to is that human beings are just stupid. As a lifelong member of the species, I have a little bit of a problem with that.”
Susceptibility to the “decoy effect” is just one of a number of “irrational” decision strategies that have stood the test of evolutionary time, which suggests these strategies have advantages. In fact, studies with all sorts of animals have found they make the same economic “mistakes” as humans. Even the slime mold—a collective of unicellular organisms that join together to form a slug-like whole—is susceptible to the decoy effect. Tanya Latty and Madeleine Beekman, biologists at the University of Sydney, have shown that when faced with a choice between a rich oatmeal-and-agar mix under bright lights (which interferes with their cell biology) and a less nutritious mix in comfortable darkness, slime molds showed no strong preference. They act a bit like diners comparing a loud, unpleasant restaurant with great food to a nice, quiet place with a blah menu—could go either way. (In fact, slime molds being what they are, some headed in both directions at once.) But when the researchers added a worse dark option (the equivalent of a quiet joint with really bad food), the slime molds dropped the bright option and chose the better of the two dark ones. “When you start to see the same fundamental problems solved by all kind of species it begins to suggest there’s some sort of common mechanism,” says Hantula.
Laurie Santos, a professor of psychology and cognitive science at Yale University, has found some mechanisms that might be shared primarily by primates. Like the typical person—who prefers to start with a $10 bill and try to win a second, rather than starting with two bills and risk losing one—capuchins preferred to start with a grape and try for another, rather than risk losing two grapes they already held. They were, in behavioral jargon, “loss averse.” Santos suspects this trait might have evolved because monkeys, like most primates, need to keep track of social relationships.
Mutually Beneficial: Snow monkeys grooming each other in a natural hot spring near Yudanaka, Japan. Shutterstock / David Evison
Monkeys track social relationships by grooming one another. If you’re a monkey exchanging this favor with others, and another monkey does more for you than you did for him, it’s a small gain for you, and not a problem. “But if somebody failed to groom you as much as you did them yesterday,” Santos says, “that’s a big problem and you should have a mechanism to get upset about that.” In primates, loss aversion may be a helpful rule-of-thumb rooted in monitoring of social tit-for-tat. “It’s possible that what we’re seeing in the context of these classic heuristics are actually strategies built for something else,” says Santos. “That doesn’t mean they’re bad or errors.” As Gerd Gigerenzer, Director at the Max Planck Institute for Human Development, in Germany, puts it, a strategy may be “ecologically rational”—the most successful method for solving real problems overall—even if it violates assumptions about rational decision-making.
Thinking about humans as social animals casts a different light on some famous framing effects often cited as examples of human information-processing “error”: Why can’t we recognize that saving 400 people out of 600 is the same as letting 200 people die? One good reason might be that even though the cases are logically identical, they aren’t socially identical—one choice is framed in a way that emphasizes death, and the other, life. In many real-life situations, that difference is relevant information. Even as we share some supposedly error-prone methods of decision-making with very distant relatives like the slime mold, other “mistakes” may reflect the particular needs of primates. In fact, some of our mistakes may be peculiar to our species alone.
Santos, who studies primates to understand the evolution of the human mind, notes that monkeys—even our close cousins, chimpanzees—are not as attuned to social cues, the consequence being that they act more like rational agents. For example, in experiments with a simple puzzle—open this box, take out a prize—Victoria Horner and Andrew Whiten showed both human children and young chimpanzees how to solve the puzzle. In one set of trials, the box’s transparent plastic sides were covered by black tape. In the other, the tape was gone, making it obvious that several of the experimenters’ steps were unnecessary. The chimps reacted to this by skipping the useless steps. But 80 percent of the human children stuck with the more complicated procedure. They were “over-imitating”—sticking more closely to what a person showed them than necessary. Humans, especially children, often make the “mistake” of over-imitating when they learn new skills. While over-imitation is not always the most efficient or rational way to solve a problem, it may be a way to pass on crucial information—including custom, etiquette, tradition, and ritual—that we need to be human.
Suppose, though, there was a seat at the table for researchers who accept that people depart from economic rationality, but don’t call that departure a form of error. Would their model make a practical difference?
There are signs that it could. Here is why: Policymakers who assume people are constantly making mistakes must strive to correct “errors” by subtly rearranging choices, but policymakers who don’t assume people make mistakes can instead orient their efforts to bringing people’s attention to their own mental processes. Once aware of these mental processes, people can then do what they will. It’s a subtle but important difference. The first approach nudges people toward a pre-chosen goal; the second informs people about the workings of their own minds, so they can better achieve whatever goals they desire.
Consider this nudge-like strategy, designed by Hantula. About a year ago, he and some colleagues devised a procedural change in the cafeteria of a hospital. “If you ask people at 7:30 in the morning if they want a healthy lunch or something less healthy, they’ll say ‘I choose a healthy lunch’,” says Hantula. “The problem is, at about noon, when they walk up to the cafeteria and the guy at the counter is flipping burgers, they say, ‘I’ll take the burger!’ ” Hantula and his colleagues gave workers a new option. “We were able to get the software that ran their cafeteria and their billing system reprogrammed so that people could pre-order and pay for their lunch at 7:30 in the morning.” By moving their “choice point” from lunchtime to an hour when people preferred a healthier lunch, a significant percentage of workers ended up eating healthier food.
This might sound like a typical nudge. But it is subtly different from those that seek to get people to exercise, invest in retirement, or get medical check-ups without their noticing. In Hantula’s study, the intervention isn’t covert. It calls attention to the decision it’s attempting to influence. “If you approach it from the perspective of ‘oh, these people are stupid, so let’s arrange things so we use their stupidity against them,’ that’s one thing,” Hantula says. “This is different. It’s not sneaky at all.” If your goal is to eat healthier, here’s a way you can reach that goal.
To assume that people make mistakes all the time is to assume that they needn’t be aware that a government or business is trying to change their behavior. After all, if I am saving you from turning the wrong way when you are lost, what difference does it make if you don’t know I am steering you? This outlook has led to a widespread preference among nudgers for “moving in imperceptible steps that do not create backlash or resistance” as Andreas Souvaliotis, the executive chairman of Social Change Rewards, recently said in an interview in Forbes. Remove the assumption that people are inescapably mistaken, though, and the justification for this kind of sneakiness disappears. And that suggests a whole different direction for nudge. Call it Nudge 2.0: Corporate messages and government policies that increase awareness about how the mind works, rather than depending on people not to notice. As it is with the slime mold or the capuchin monkey, the decoy effect might be irresistible to us all. But at least we can understand what we’re doing when we succumb to it.
Adam Victor Brandizzi
This is by far the best article for laymen.
What is the Islamic State?
Where did it come from, and what are its intentions? The simplicity of these questions can be deceiving, and few Western leaders seem to know the answers. In December, The New York Times published confidential comments by Major General Michael K. Nagata, the Special Operations commander for the United States in the Middle East, admitting that he had hardly begun figuring out the Islamic State’s appeal. “We have not defeated the idea,” he said. “We do not even understand the idea.” In the past year, President Obama has referred to the Islamic State, variously, as “not Islamic” and as al-Qaeda’s “jayvee team,” statements that reflected confusion about the group, and may have contributed to significant strategic errors.
The group seized Mosul, Iraq, last June, and already rules an area larger than the United Kingdom. Abu Bakr al-Baghdadi has been its leader since May 2010, but until last summer, his most recent known appearance on film was a grainy mug shot from a stay in U.S. captivity at Camp Bucca during the occupation of Iraq. Then, on July 5 of last year, he stepped into the pulpit of the Great Mosque of al-Nuri in Mosul, to deliver a Ramadan sermon as the first caliph in generations—upgrading his resolution from grainy to high-definition, and his position from hunted guerrilla to commander of all Muslims. The inflow of jihadists that followed, from around the world, was unprecedented in its pace and volume, and is continuing.
Our ignorance of the Islamic State is in some ways understandable: It is a hermit kingdom; few have gone there and returned. Baghdadi has spoken on camera only once. But his address, and the Islamic State’s countless other propaganda videos and encyclicals, are online, and the caliphate’s supporters have toiled mightily to make their project knowable. We can gather that their state rejects peace as a matter of principle; that it hungers for genocide; that its religious views make it constitutionally incapable of certain types of change, even if that change might ensure its survival; and that it considers itself a harbinger of—and headline player in—the imminent end of the world.
The Islamic State, also known as the Islamic State of Iraq and al-Sham (ISIS), follows a distinctive variety of Islam whose beliefs about the path to the Day of Judgment matter to its strategy, and can help the West know its enemy and predict its behavior. Its rise to power is less like the triumph of the Muslim Brotherhood in Egypt (a group whose leaders the Islamic State considers apostates) than like the realization of a dystopian alternate reality in which David Koresh or Jim Jones survived to wield absolute power over not just a few hundred people, but some 8 million.
We have misunderstood the nature of the Islamic State in at least two ways. First, we tend to see jihadism as monolithic, and to apply the logic of al‑Qaeda to an organization that has decisively eclipsed it. The Islamic State supporters I spoke with still refer to Osama bin Laden as “Sheikh Osama,” a title of honor. But jihadism has evolved since al-Qaeda’s heyday, from about 1998 to 2003, and many jihadists disdain the group’s priorities and current leadership.
Bin Laden viewed his terrorism as a prologue to a caliphate he did not expect to see in his lifetime. His organization was flexible, operating as a geographically diffuse network of autonomous cells. The Islamic State, by contrast, requires territory to remain legitimate, and a top-down structure to rule it. (Its bureaucracy is divided into civil and military arms, and its territory into provinces.)
We are misled in a second way, by a well-intentioned but dishonest campaign to deny the Islamic State’s medieval religious nature. Peter Bergen, who produced the first interview with bin Laden in 1997, titled his first book Holy War, Inc. in part to acknowledge bin Laden as a creature of the modern secular world. Bin Laden corporatized terror and franchised it out. He requested specific political concessions, such as the withdrawal of U.S. forces from Saudi Arabia. His foot soldiers navigated the modern world confidently. On Mohammad Atta’s last full day of life, he shopped at Walmart and ate dinner at Pizza Hut.
There is a temptation to rehearse this observation—that jihadists are modern secular people, with modern political concerns, wearing medieval religious disguise—and make it fit the Islamic State. In fact, much of what the group does looks nonsensical except in light of a sincere, carefully considered commitment to returning civilization to a seventh-century legal environment, and ultimately to bringing about the apocalypse.
The most-articulate spokesmen for that position are the Islamic State’s officials and supporters themselves. They refer derisively to “moderns.” In conversation, they insist that they will not—cannot—waver from governing precepts that were embedded in Islam by the Prophet Muhammad and his earliest followers. They often speak in codes and allusions that sound odd or old-fashioned to non-Muslims, but refer to specific traditions and texts of early Islam.
To take one example: In September, Sheikh Abu Muhammad al-Adnani, the Islamic State’s chief spokesman, called on Muslims in Western countries such as France and Canada to find an infidel and “smash his head with a rock,” poison him, run him over with a car, or “destroy his crops.” To Western ears, the biblical-sounding punishments—the stoning and crop destruction—juxtaposed strangely with his more modern-sounding call to vehicular homicide. (As if to show that he could terrorize by imagery alone, Adnani also referred to Secretary of State John Kerry as an “uncircumcised geezer.”)
But Adnani was not merely talking trash. His speech was laced with theological and legal discussion, and his exhortation to attack crops directly echoed orders from Muhammad to leave well water and crops alone—unless the armies of Islam were in a defensive position, in which case Muslims in the lands of kuffar, or infidels, should be unmerciful, and poison away.
The reality is that the Islamic State is Islamic. Very Islamic. Yes, it has attracted psychopaths and adventure seekers, drawn largely from the disaffected populations of the Middle East and Europe. But the religion preached by its most ardent followers derives from coherent and even learned interpretations of Islam.
Virtually every major decision and law promulgated by the Islamic State adheres to what it calls, in its press and pronouncements, and on its billboards, license plates, stationery, and coins, "the Prophetic methodology," which means following the prophecy and example of Muhammad, in punctilious detail. Muslims can reject the Islamic State; nearly all do. But pretending that it isn't actually a religious, millenarian group, with theology that must be understood to be combatted, has already led the United States to underestimate it and back foolish schemes to counter it. We'll need to get acquainted with the Islamic State's intellectual genealogy if we are to react in a way that will not strengthen it, but instead help it self-immolate in its own excessive zeal.

Control of territory is an essential precondition for the Islamic State's authority in the eyes of its supporters. This map, adapted from the work of the Institute for the Study of War, shows the territory under the caliphate's control as of January 15, along with areas it has attacked. Where it holds power, the state collects taxes, regulates prices, operates courts, and administers services ranging from health care and education to telecommunications.
In November, the Islamic State released an infomercial-like video tracing its origins to bin Laden. It acknowledged Abu Musa’b al Zarqawi, the brutal head of al‑Qaeda in Iraq from roughly 2003 until his killing in 2006, as a more immediate progenitor, followed sequentially by two other guerrilla leaders before Baghdadi, the caliph. Notably unmentioned: bin Laden’s successor, Ayman al Zawahiri, the owlish Egyptian eye surgeon who currently heads al‑Qaeda. Zawahiri has not pledged allegiance to Baghdadi, and he is increasingly hated by his fellow jihadists. His isolation is not helped by his lack of charisma; in videos he comes across as squinty and annoyed. But the split between al-Qaeda and the Islamic State has been long in the making, and begins to explain, at least in part, the outsize bloodlust of the latter.
Zawahiri’s companion in isolation is a Jordanian cleric named Abu Muhammad al Maqdisi, 55, who has a fair claim to being al-Qaeda’s intellectual architect and the most important jihadist unknown to the average American newspaper reader. On most matters of doctrine, Maqdisi and the Islamic State agree. Both are closely identified with the jihadist wing of a branch of Sunnism called Salafism, after the Arabic al salaf al salih, the “pious forefathers.” These forefathers are the Prophet himself and his earliest adherents, whom Salafis honor and emulate as the models for all behavior, including warfare, couture, family life, even dentistry.
Maqdisi taught Zarqawi, who went to war in Iraq with the older man’s advice in mind. In time, though, Zarqawi surpassed his mentor in fanaticism, and eventually earned his rebuke. At issue was Zarqawi’s penchant for bloody spectacle—and, as a matter of doctrine, his hatred of other Muslims, to the point of excommunicating and killing them. In Islam, the practice of takfir, or excommunication, is theologically perilous. “If a man says to his brother, ‘You are an infidel,’ ” the Prophet said, “then one of them is right.” If the accuser is wrong, he himself has committed apostasy by making a false accusation. The punishment for apostasy is death. And yet Zarqawi heedlessly expanded the range of behavior that could make Muslims infidels.
Maqdisi wrote to his former pupil that he needed to exercise caution and “not issue sweeping proclamations of takfir” or “proclaim people to be apostates because of their sins.” The distinction between apostate and sinner may appear subtle, but it is a key point of contention between al-Qaeda and the Islamic State.
Denying the holiness of the Koran or the prophecies of Muhammad is straightforward apostasy. But Zarqawi and the state he spawned take the position that many other acts can remove a Muslim from Islam. These include, in certain cases, selling alcohol or drugs, wearing Western clothes or shaving one’s beard, voting in an election—even for a Muslim candidate—and being lax about calling other people apostates. Being a Shiite, as most Iraqi Arabs are, meets the standard as well, because the Islamic State regards Shiism as innovation, and to innovate on the Koran is to deny its initial perfection. (The Islamic State claims that common Shiite practices, such as worship at the graves of imams and public self-flagellation, have no basis in the Koran or in the example of the Prophet.) That means roughly 200 million Shia are marked for death. So too are the heads of state of every Muslim country, who have elevated man-made law above Sharia by running for office or enforcing laws not made by God.
Following takfiri doctrine, the Islamic State is committed to purifying the world by killing vast numbers of people. The lack of objective reporting from its territory makes the true extent of the slaughter unknowable, but social-media posts from the region suggest that individual executions happen more or less continually, and mass executions every few weeks. Muslim "apostates" are the most common victims. Exempted from automatic execution, it appears, are Christians who do not resist their new government. Baghdadi permits them to live, as long as they pay a special tax, known as the jizya, and acknowledge their subjugation. The Koranic authority for this practice is not in dispute.

Musa Cerantonio, an Australian preacher reported to be one of the Islamic State's most influential recruiters, believes it is foretold that the caliphate will sack Istanbul before it is beaten back by an army led by the anti-Messiah, whose eventual death, when just a few thousand jihadists remain, will usher in the apocalypse. (Paul Jeffers/Fairfax Media)
Centuries have passed since the wars of religion ceased in Europe, and since men stopped dying in large numbers because of arcane theological disputes. Hence, perhaps, the incredulity and denial with which Westerners have greeted news of the theology and practices of the Islamic State. Many refuse to believe that this group is as devout as it claims to be, or as backward-looking or apocalyptic as its actions and statements suggest.
Their skepticism is comprehensible. In the past, Westerners who accused Muslims of blindly following ancient scriptures came to deserved grief from academics—notably the late Edward Said—who pointed out that calling Muslims “ancient” was usually just another way to denigrate them. Look instead, these scholars urged, to the conditions in which these ideologies arose—the bad governance, the shifting social mores, the humiliation of living in lands valued only for their oil.
Without acknowledgment of these factors, no explanation of the rise of the Islamic State could be complete. But focusing on them to the exclusion of ideology reflects another kind of Western bias: that if religious ideology doesn’t matter much in Washington or Berlin, surely it must be equally irrelevant in Raqqa or Mosul. When a masked executioner says Allahu akbar while beheading an apostate, sometimes he’s doing so for religious reasons.
Many mainstream Muslim organizations have gone so far as to say the Islamic State is, in fact, un-Islamic. It is, of course, reassuring to know that the vast majority of Muslims have zero interest in replacing Hollywood movies with public executions as evening entertainment. But Muslims who call the Islamic State un-Islamic are typically, as the Princeton scholar Bernard Haykel, the leading expert on the group’s theology, told me, “embarrassed and politically correct, with a cotton-candy view of their own religion” that neglects “what their religion has historically and legally required.” Many denials of the Islamic State’s religious nature, he said, are rooted in an “interfaith-Christian-nonsense tradition.”
Every academic I asked about the Islamic State’s ideology sent me to Haykel. Of partial Lebanese descent, Haykel grew up in Lebanon and the United States, and when he talks through his Mephistophelian goatee, there is a hint of an unplaceable foreign accent.
According to Haykel, the ranks of the Islamic State are deeply infused with religious vigor. Koranic quotations are ubiquitous. “Even the foot soldiers spout this stuff constantly,” Haykel said. “They mug for their cameras and repeat their basic doctrines in formulaic fashion, and they do it all the time.” He regards the claim that the Islamic State has distorted the texts of Islam as preposterous, sustainable only through willful ignorance. “People want to absolve Islam,” he said. “It’s this ‘Islam is a religion of peace’ mantra. As if there is such a thing as ‘Islam’! It’s what Muslims do, and how they interpret their texts.” Those texts are shared by all Sunni Muslims, not just the Islamic State. “And these guys have just as much legitimacy as anyone else.”
All Muslims acknowledge that Muhammad's earliest conquests were not tidy affairs, and that the laws of war passed down in the Koran and in the narrations of the Prophet's rule were calibrated to fit a turbulent and violent time. In Haykel's estimation, the fighters of the Islamic State are authentic throwbacks to early Islam and are faithfully reproducing its norms of war. This behavior includes a number of practices that modern Muslims tend to prefer not to acknowledge as integral to their sacred texts. "Slavery, crucifixion, and beheadings are not something that freakish [jihadists] are cherry-picking from the medieval tradition," Haykel said. Islamic State fighters "are smack in the middle of the medieval tradition and are bringing it wholesale into the present day."
The Koran specifies crucifixion as one of the only punishments permitted for enemies of Islam. The tax on Christians finds clear endorsement in the Surah Al-Tawba, the Koran’s ninth chapter, which instructs Muslims to fight Christians and Jews “until they pay the jizya with willing submission, and feel themselves subdued.” The Prophet, whom all Muslims consider exemplary, imposed these rules and owned slaves.
Leaders of the Islamic State have taken emulation of Muhammad as strict duty, and have revived traditions that have been dormant for hundreds of years. “What’s striking about them is not just the literalism, but also the seriousness with which they read these texts,” Haykel said. “There is an assiduous, obsessive seriousness that Muslims don’t normally have.”
Before the rise of the Islamic State, no group in the past few centuries had attempted more-radical fidelity to the Prophetic model than the Wahhabis of 18th‑century Arabia. They conquered most of what is now Saudi Arabia, and their strict practices survive in a diluted version of Sharia there. Haykel sees an important distinction between the groups, though: “The Wahhabis were not wanton in their violence.” They were surrounded by Muslims, and they conquered lands that were already Islamic; this stayed their hand. “ISIS, by contrast, is really reliving the early period.” Early Muslims were surrounded by non-Muslims, and the Islamic State, because of its takfiri tendencies, considers itself to be in the same situation.
If al-Qaeda wanted to revive slavery, it never said so. And why would it? Silence on slavery probably reflected strategic thinking, with public sympathies in mind: when the Islamic State began enslaving people, even some of its supporters balked. Nonetheless, the caliphate has continued to embrace slavery and crucifixion without apology. “We will conquer your Rome, break your crosses, and enslave your women,” Adnani, the spokesman, promised in one of his periodic valentines to the West. “If we do not reach that time, then our children and grandchildren will reach it, and they will sell your sons as slaves at the slave market.”
In October, Dabiq, the magazine of the Islamic State, published “The Revival of Slavery Before the Hour,” an article that took up the question of whether Yazidis (the members of an ancient Kurdish sect that borrows elements of Islam, and had come under attack from Islamic State forces in northern Iraq) are lapsed Muslims, and therefore marked for death, or merely pagans and therefore fair game for enslavement. A study group of Islamic State scholars had convened, on government orders, to resolve this issue. If they are pagans, the article’s anonymous author wrote,
Yazidi women and children [are to be] divided according to the Shariah amongst the fighters of the Islamic State who participated in the Sinjar operations [in northern Iraq] … Enslaving the families of the kuffar [infidels] and taking their women as concubines is a firmly established aspect of the Shariah that if one were to deny or mock, he would be denying or mocking the verses of the Koran and the narrations of the Prophet … and thereby apostatizing from Islam.
Tens of thousands of foreign Muslims are thought to have immigrated to the Islamic State. Recruits hail from France, the United Kingdom, Belgium, Germany, Holland, Australia, Indonesia, the United States, and many other places. Many have come to fight, and many intend to die.
Peter R. Neumann, a professor at King’s College London, told me that online voices have been essential to spreading propaganda and ensuring that newcomers know what to believe. Online recruitment has also widened the demographics of the jihadist community, by allowing conservative Muslim women—physically isolated in their homes—to reach out to recruiters, radicalize, and arrange passage to Syria. Through its appeals to both genders, the Islamic State hopes to build a complete society.
In November, I traveled to Australia to meet Musa Cerantonio, a 30-year-old man whom Neumann and other researchers had identified as one of the two most important “new spiritual authorities” guiding foreigners to join the Islamic State. For three years he was a televangelist on Iqraa TV in Cairo, but he left after the station objected to his frequent calls to establish a caliphate. Now he preaches on Facebook and Twitter.
Cerantonio—a big, friendly man with a bookish demeanor—told me he blanches at beheading videos. He hates seeing the violence, even though supporters of the Islamic State are required to endorse it. (He speaks out, controversially among jihadists, against suicide bombing, on the grounds that God forbids suicide; he differs from the Islamic State on a few other points as well.) He has the kind of unkempt facial hair one sees on certain overgrown fans of The Lord of the Rings, and his obsession with Islamic apocalypticism felt familiar. He seemed to be living out a drama that looks, from an outsider’s perspective, like a medieval fantasy novel, only with real blood.
Last June, Cerantonio and his wife tried to emigrate—he wouldn’t say to where (“It’s illegal to go to Syria,” he said cagily)—but they were caught en route, in the Philippines, and he was deported back to Australia for overstaying his visa. Australia has criminalized attempts to join or travel to the Islamic State, and has confiscated Cerantonio’s passport. He is stuck in Melbourne, where he is well known to the local constabulary. If Cerantonio were caught facilitating the movement of individuals to the Islamic State, he would be imprisoned. So far, though, he is free—a technically unaffiliated ideologue who nonetheless speaks with what other jihadists have taken to be a reliable voice on matters of the Islamic State’s doctrine.
We met for lunch in Footscray, a dense, multicultural Melbourne suburb that’s home to Lonely Planet, the travel-guide publisher. Cerantonio grew up there in a half-Irish, half-Calabrian family. On a typical street one can find African restaurants, Vietnamese shops, and young Arabs walking around in the Salafi uniform of scraggly beard, long shirt, and trousers ending halfway down the calves.
Cerantonio explained the joy he felt when Baghdadi was declared the caliph on June 29—and the sudden, magnetic attraction that Mesopotamia began to exert on him and his friends. “I was in a hotel [in the Philippines], and I saw the declaration on television,” he told me. “And I was just amazed, and I’m like, Why am I stuck here in this bloody room?”
The last caliphate was the Ottoman empire, which reached its peak in the 16th century and then experienced a long decline, until the founder of the Republic of Turkey, Mustafa Kemal Atatürk, euthanized it in 1924. But Cerantonio, like many supporters of the Islamic State, doesn’t acknowledge that caliphate as legitimate, because it didn’t fully enforce Islamic law, which requires stonings and slavery and amputations, and because its caliphs were not descended from the tribe of the Prophet, the Quraysh.
Baghdadi spoke at length of the importance of the caliphate in his Mosul sermon. He said that to revive the institution of the caliphate—which had not functioned except in name for about 1,000 years—was a communal obligation. He and his loyalists had “hastened to declare the caliphate and place an imam” at its head, he said. “This is a duty upon the Muslims—a duty that has been lost for centuries … The Muslims sin by losing it, and they must always seek to establish it.” Like bin Laden before him, Baghdadi spoke floridly, with frequent scriptural allusion and command of classical rhetoric. Unlike bin Laden, and unlike those false caliphs of the Ottoman empire, he is Qurayshi.
The caliphate, Cerantonio told me, is not just a political entity but also a vehicle for salvation. Islamic State propaganda regularly reports the pledges of baya’a (allegiance) rolling in from jihadist groups across the Muslim world. Cerantonio quoted a Prophetic saying, that to die without pledging allegiance is to die jahil (ignorant) and therefore die a “death of disbelief.” Consider how Muslims (or, for that matter, Christians) imagine God deals with the souls of people who die without learning about the one true religion. They are neither obviously saved nor definitively condemned. Similarly, Cerantonio said, the Muslim who acknowledges one omnipotent god and prays, but who dies without pledging himself to a valid caliph and incurring the obligations of that oath, has failed to live a fully Islamic life. I pointed out that this means the vast majority of Muslims in history, and all who passed away between 1924 and 2014, died a death of disbelief. Cerantonio nodded gravely. “I would go so far as to say that Islam has been reestablished” by the caliphate.
I asked him about his own baya’a, and he quickly corrected me: “I didn’t say that I’d pledged allegiance.” Under Australian law, he reminded me, giving baya’a to the Islamic State was illegal. “But I agree that [Baghdadi] fulfills the requirements,” he continued. “I’m just going to wink at you, and you take that to mean whatever you want.”
To be the caliph, one must meet conditions outlined in Sunni law—being a Muslim adult man of Quraysh descent; exhibiting moral probity and physical and mental integrity; and having ’amr, or authority. This last criterion, Cerantonio said, is the hardest to fulfill, and requires that the caliph have territory in which he can enforce Islamic law. Baghdadi's Islamic State achieved that long before June 29, Cerantonio said, and as soon as it did, a Western convert within the group's ranks—Cerantonio described him as "something of a leader"—began murmuring about the religious obligation to declare a caliphate. He and others spoke quietly to those in power and told them that further delay would be sinful.
Cerantonio said a faction arose that was prepared to make war on Baghdadi’s group if it delayed any further. They prepared a letter to various powerful members of ISIS, airing their displeasure at the failure to appoint a caliph, but were pacified by Adnani, the spokesman, who let them in on a secret—that a caliphate had already been declared, long before the public announcement. They had their legitimate caliph, and at that point there was only one option. “If he’s legitimate,” Cerantonio said, “you must give him the baya’a.”
After Baghdadi's July sermon, a stream of jihadists began flowing daily into Syria with renewed motivation. Jürgen Todenhöfer, a German author and former politician who visited the Islamic State in December, reported the arrival of 100 fighters at one Turkish-border recruitment station in just two days. His report, among others, suggests a still-steady inflow of foreigners, ready to give up everything at home for a shot at paradise in the worst place on Earth.

Bernard Haykel, the foremost secular authority on the Islamic State's ideology, believes the group is trying to re-create the earliest days of Islam and is faithfully reproducing its norms of war. "There is an assiduous, obsessive seriousness" about the group's dedication to the text of the Koran, he says. (Peter Murphy)
In London, a week before my meal with Cerantonio, I met with three ex-members of a banned Islamist group called Al Muhajiroun (The Emigrants): Anjem Choudary, Abu Baraa, and Abdul Muhid. They all expressed a desire to emigrate to the Islamic State, as many of their colleagues already had, but the authorities had confiscated their passports. Like Cerantonio, they regarded the caliphate as the only righteous government on Earth, though none would confess having pledged allegiance. Their principal goal in meeting me was to explain what the Islamic State stands for, and how its policies reflect God's law.
Choudary, 48, is the group’s former leader. He frequently appears on cable news, as one of the few people producers can book who will defend the Islamic State vociferously, until his mike is cut. He has a reputation in the United Kingdom as a loathsome blowhard, but he and his disciples sincerely believe in the Islamic State and, on matters of doctrine, speak in its voice. Choudary and the others feature prominently in the Twitter feeds of Islamic State residents, and Abu Baraa maintains a YouTube channel to answer questions about Sharia.
Since September, authorities have been investigating the three men on suspicion of supporting terrorism. Because of this investigation, they had to meet me separately: communication among them would have violated the terms of their bail. But speaking with them felt like speaking with the same person wearing different masks. Choudary met me in a candy shop in the East London suburb of Ilford. He was dressed smartly, in a crisp blue tunic reaching nearly to his ankles, and sipped a Red Bull while we talked.
Before the caliphate, “maybe 85 percent of the Sharia was absent from our lives,” Choudary told me. “These laws are in abeyance until we have khilafa”—a caliphate—“and now we have one.” Without a caliphate, for example, individual vigilantes are not obliged to amputate the hands of thieves they catch in the act. But create a caliphate, and this law, along with a huge body of other jurisprudence, suddenly awakens. In theory, all Muslims are obliged to immigrate to the territory where the caliph is applying these laws. One of Choudary’s prize students, a convert from Hinduism named Abu Rumaysah, evaded police to bring his family of five from London to Syria in November. On the day I met Choudary, Abu Rumaysah tweeted out a picture of himself with a Kalashnikov in one arm and his newborn son in the other. Hashtag: #GenerationKhilafah.
The caliph is required to implement Sharia. Any deviation will compel those who have pledged allegiance to inform the caliph in private of his error and, in extreme cases, to excommunicate and replace him if he persists. (“I have been plagued with this great matter, plagued with this responsibility, and it is a heavy responsibility,” Baghdadi said in his sermon.) In return, the caliph commands obedience—and those who persist in supporting non-Muslim governments, after being duly warned and educated about their sin, are considered apostates.
Choudary said Sharia has been misunderstood because of its incomplete application by regimes such as Saudi Arabia, which does behead murderers and cut off thieves’ hands. “The problem,” he explained, “is that when places like Saudi Arabia just implement the penal code, and don’t provide the social and economic justice of the Sharia—the whole package—they simply engender hatred toward the Sharia.” That whole package, he said, would include free housing, food, and clothing for all, though of course anyone who wished to enrich himself with work could do so.
Abdul Muhid, 32, continued along these lines. He was dressed in mujahideen chic when I met him at a local restaurant: scruffy beard, Afghan cap, and a wallet outside of his clothes, attached with what looked like a shoulder holster. When we sat down, he was eager to discuss welfare. The Islamic State may have medieval-style punishments for moral crimes (lashes for boozing or fornication, stoning for adultery), but its social-welfare program is, at least in some aspects, progressive to a degree that would please an MSNBC pundit. Health care, he said, is free. ("Isn't it free in Britain, too?," I asked. "Not really," he said. "Some procedures aren't covered, such as vision.") This provision of social welfare was not, he said, a policy choice of the Islamic State, but a policy obligation inherent in God's law.

Anjem Choudary, London's most notorious defender of the Islamic State, says crucifixion and beheading are sacred requirements. (Tal Cohen/Reuters)
All Muslims acknowledge that God is the only one who knows the future. But they also agree that he has offered us a peek at it, in the Koran and in narrations of the Prophet. The Islamic State differs from nearly every other current jihadist movement in believing that it is written into God’s script as a central character. It is in this casting that the Islamic State is most boldly distinctive from its predecessors, and clearest in the religious nature of its mission.
In broad strokes, al-Qaeda acts like an underground political movement, with worldly goals in sight at all times—the expulsion of non-Muslims from the Arabian peninsula, the abolishment of the state of Israel, the end of support for dictatorships in Muslim lands. The Islamic State has its share of worldly concerns (including, in the places it controls, collecting garbage and keeping the water running), but the End of Days is a leitmotif of its propaganda. Bin Laden rarely mentioned the apocalypse, and when he did, he seemed to presume that he would be long dead when the glorious moment of divine comeuppance finally arrived. “Bin Laden and Zawahiri are from elite Sunni families who look down on this kind of speculation and think it’s something the masses engage in,” says Will McCants of the Brookings Institution, who is writing a book about the Islamic State’s apocalyptic thought.
During the last years of the U.S. occupation of Iraq, the Islamic State’s immediate founding fathers, by contrast, saw signs of the end times everywhere. They were anticipating, within a year, the arrival of the Mahdi—a messianic figure destined to lead the Muslims to victory before the end of the world. McCants says a prominent Islamist in Iraq approached bin Laden in 2008 to warn him that the group was being led by millenarians who were “talking all the time about the Mahdi and making strategic decisions” based on when they thought the Mahdi was going to arrive. “Al-Qaeda had to write to [these leaders] to say ‘Cut it out.’ ”
For certain true believers—the kind who long for epic good-versus-evil battles—visions of apocalyptic bloodbaths fulfill a deep psychological need. Of the Islamic State supporters I met, Musa Cerantonio, the Australian, expressed the deepest interest in the apocalypse and how the remaining days of the Islamic State—and the world—might look. Parts of that prediction are original to him, and do not yet have the status of doctrine. But other parts are based on mainstream Sunni sources and appear all over the Islamic State’s propaganda. These include the belief that there will be only 12 legitimate caliphs, and Baghdadi is the eighth; that the armies of Rome will mass to meet the armies of Islam in northern Syria; and that Islam’s final showdown with an anti-Messiah will occur in Jerusalem after a period of renewed Islamic conquest.
The Islamic State has attached great importance to the Syrian city of Dabiq, near Aleppo. It named its propaganda magazine after the town, and celebrated madly when (at great cost) it conquered Dabiq’s strategically unimportant plains. It is here, the Prophet reportedly said, that the armies of Rome will set up their camp. The armies of Islam will meet them, and Dabiq will be Rome’s Waterloo or its Antietam.
“Dabiq is basically all farmland,” one Islamic State supporter recently tweeted. “You could imagine large battles taking place there.” The Islamic State’s propagandists drool with anticipation of this event, and constantly imply that it will come soon. The state’s magazine quotes Zarqawi as saying, “The spark has been lit here in Iraq, and its heat will continue to intensify … until it burns the crusader armies in Dabiq.” A recent propaganda video shows clips from Hollywood war movies set in medieval times—perhaps because many of the prophecies specify that the armies will be on horseback or carrying ancient weapons.
Now that it has taken Dabiq, the Islamic State awaits the arrival of an enemy army there, whose defeat will initiate the countdown to the apocalypse. Western media frequently miss references to Dabiq in the Islamic State’s videos, and focus instead on lurid scenes of beheading. “Here we are, burying the first American crusader in Dabiq, eagerly waiting for the remainder of your armies to arrive,” said a masked executioner in a November video, showing the severed head of Peter (Abdul Rahman) Kassig, the aid worker who’d been held captive for more than a year. During fighting in Iraq in December, after mujahideen (perhaps inaccurately) reported having seen American soldiers in battle, Islamic State Twitter accounts erupted in spasms of pleasure, like overenthusiastic hosts or hostesses upon the arrival of the first guests at a party.
The Prophetic narration that foretells the Dabiq battle refers to the enemy as Rome. Who “Rome” is, now that the pope has no army, remains a matter of debate. But Cerantonio makes a case that Rome meant the Eastern Roman empire, which had its capital in what is now Istanbul. We should think of Rome as the Republic of Turkey—the same republic that ended the last self-identified caliphate, 90 years ago. Other Islamic State sources suggest that Rome might mean any infidel army, and the Americans will do nicely.
After its battle in Dabiq, Cerantonio said, the caliphate will expand and sack Istanbul. Some believe it will then cover the entire Earth, but Cerantonio suggested its tide may never reach beyond the Bosporus. An anti-Messiah, known in Muslim apocalyptic literature as Dajjal, will come from the Khorasan region of eastern Iran and kill a vast number of the caliphate’s fighters, until just 5,000 remain, cornered in Jerusalem. Just as Dajjal prepares to finish them off, Jesus—the second-most-revered prophet in Islam—will return to Earth, spear Dajjal, and lead the Muslims to victory.
“Only God knows” whether the Islamic State’s armies are the ones foretold, Cerantonio said. But he is hopeful. “The Prophet said that one sign of the imminent arrival of the End of Days is that people will for a long while stop talking about the End of Days,” he said. “If you go to the mosques now, you’ll find the preachers are silent about this subject.” On this theory, even setbacks dealt to the Islamic State mean nothing, since God has preordained the near-destruction of his people anyway. The Islamic State has its best and worst days ahead of it.
Abu Bakr al-Baghdadi was declared caliph by his followers last summer. The establishment of a caliphate awakened large sections of Koranic law that had lain dormant, and required those Muslims who recognized the caliphate to immigrate. (Associated Press)
The ideological purity of the Islamic State has one compensating virtue: it allows us to predict some of the group’s actions. Osama bin Laden was seldom predictable. He ended his first television interview cryptically. CNN’s Peter Arnett asked him, “What are your future plans?” Bin Laden replied, “You’ll see them and hear about them in the media, God willing.” By contrast, the Islamic State boasts openly about its plans—not all of them, but enough so that by listening carefully, we can deduce how it intends to govern and expand.
In London, Choudary and his students provided detailed descriptions of how the Islamic State must conduct its foreign policy, now that it is a caliphate. It has already taken up what Islamic law refers to as “offensive jihad,” the forcible expansion into countries that are ruled by non-Muslims. “Hitherto, we were just defending ourselves,” Choudary said; without a caliphate, offensive jihad is an inapplicable concept. But the waging of war to expand the caliphate is an essential duty of the caliph.
Choudary took pains to present the laws of war under which the Islamic State operates as policies of mercy rather than of brutality. He told me the state has an obligation to terrorize its enemies—a holy order to scare the shit out of them with beheadings and crucifixions and enslavement of women and children, because doing so hastens victory and avoids prolonged conflict.
Choudary’s colleague Abu Baraa explained that Islamic law permits only temporary peace treaties, lasting no longer than a decade. Similarly, accepting any border is anathema, as stated by the Prophet and echoed in the Islamic State’s propaganda videos. If the caliph consents to a longer-term peace or permanent border, he will be in error. Temporary peace treaties are renewable, but may not be applied to all enemies at once: the caliph must wage jihad at least once a year. He may not rest, or he will fall into a state of sin.
One comparison to the Islamic State is the Khmer Rouge, which killed about a third of the population of Cambodia. But the Khmer Rouge occupied Cambodia’s seat at the United Nations. “This is not permitted,” Abu Baraa said. “To send an ambassador to the UN is to recognize an authority other than God’s.” This form of diplomacy is shirk, or polytheism, he argued, and would be immediate cause to hereticize and replace Baghdadi. Even to hasten the arrival of a caliphate by democratic means—for example by voting for political candidates who favor a caliphate—is shirk.
It’s hard to overstate how hamstrung the Islamic State will be by its radicalism. The modern international system, born of the 1648 Peace of Westphalia, relies on each state’s willingness to recognize borders, however grudgingly. For the Islamic State, that recognition is ideological suicide. Other Islamist groups, such as the Muslim Brotherhood and Hamas, have succumbed to the blandishments of democracy and the potential for an invitation to the community of nations, complete with a UN seat. Negotiation and accommodation have worked, at times, for the Taliban as well. (Under Taliban rule, Afghanistan exchanged ambassadors with Saudi Arabia, Pakistan, and the United Arab Emirates, an act that invalidated the Taliban’s authority in the Islamic State’s eyes.) To the Islamic State these are not options, but acts of apostasy.
The United States and its allies have reacted to the Islamic State belatedly and in an apparent daze. The group’s ambitions and rough strategic blueprints were evident in its pronouncements and in social-media chatter as far back as 2011, when it was just one of many terrorist groups in Syria and Iraq and hadn’t yet committed mass atrocities. Adnani, the spokesman, told followers then that the group’s ambition was to “restore the Islamic caliphate,” and he evoked the apocalypse, saying, “There are but a few days left.” Baghdadi had already styled himself “commander of the faithful,” a title ordinarily reserved for caliphs, in 2011. In April 2013, Adnani declared the movement “ready to redraw the world upon the Prophetic methodology of the caliphate.” In August 2013, he said, “Our goal is to establish an Islamic state that doesn’t recognize borders, on the Prophetic methodology.” By then, the group had taken Raqqa, a Syrian provincial capital of perhaps 500,000 people, and was drawing in substantial numbers of foreign fighters who’d heard its message.
If we had identified the Islamic State’s intentions early, and realized that the vacuum in Syria and Iraq would give it ample space to carry them out, we might, at a minimum, have pushed Iraq to harden its border with Syria and preemptively make deals with its Sunnis. That would at least have avoided the electrifying propaganda effect created by the declaration of a caliphate just after the conquest of Iraq’s third-largest city. Yet, just over a year ago, Obama told The New Yorker that he considered ISIS to be al-Qaeda’s weaker partner. “If a jayvee team puts on Lakers uniforms, that doesn’t make them Kobe Bryant,” the president said.
Our failure to appreciate the split between the Islamic State and al-Qaeda, and the essential differences between the two, has led to dangerous decisions. Last fall, to take one example, the U.S. government consented to a desperate plan to save Peter Kassig’s life. The plan facilitated—indeed, required—the interaction of some of the founding figures of the Islamic State and al-Qaeda, and could hardly have looked more hastily improvised.
It entailed the enlistment of Abu Muhammad al Maqdisi, the Zarqawi mentor and al-Qaeda grandee, to approach Turki al-Binali, the Islamic State’s chief ideologue and a former student of Maqdisi’s, even though the two men had fallen out due to Maqdisi’s criticism of the Islamic State. Maqdisi had already called for the state to extend mercy to Alan Henning, the British cabbie who had entered Syria to deliver aid to children. In December, The Guardian reported that the U.S. government, through an intermediary, had asked Maqdisi to intercede with the Islamic State on Kassig’s behalf.
Maqdisi was living freely in Jordan, but had been banned from communicating with terrorists abroad, and was being monitored closely. After Jordan granted the United States permission to reintroduce Maqdisi to Binali, Maqdisi bought a phone with American money and was allowed to correspond merrily with his former student for a few days, before the Jordanian government stopped the chats and used them as a pretext to jail Maqdisi. Kassig’s severed head appeared in the Dabiq video a few days later.
Maqdisi gets mocked roundly on Twitter by the Islamic State’s fans, and al‑Qaeda is held in great contempt for refusing to acknowledge the caliphate. Cole Bunzel, a scholar who studies Islamic State ideology, read Maqdisi’s opinion on Henning’s status and thought it would hasten his and other captives’ death. “If I were held captive by the Islamic State and Maqdisi said I shouldn’t be killed,” he told me, “I’d kiss my ass goodbye.”
Kassig’s death was a tragedy, but the plan’s success would have been a bigger one. A reconciliation between Maqdisi and Binali would have begun to heal the main rift between the world’s two largest jihadist organizations. It’s possible that the government wanted only to draw out Binali for intelligence purposes or assassination. (Multiple attempts to elicit comment from the FBI were unsuccessful.) Regardless, the decision to play matchmaker for America’s two main terrorist antagonists reveals astonishingly poor judgment.
Chastened by our earlier indifference, we are now meeting the Islamic State via Kurdish and Iraqi proxy on the battlefield, and with regular air assaults. Those strategies haven’t dislodged the Islamic State from any of its major territorial possessions, although they’ve kept it from directly assaulting Baghdad and Erbil and slaughtering Shia and Kurds there.
Some observers have called for escalation, including several predictable voices from the interventionist right (Max Boot, Frederick Kagan), who have urged the deployment of tens of thousands of American soldiers. These calls should not be dismissed too quickly: an avowedly genocidal organization is on its potential victims’ front lawn, and it is committing daily atrocities in the territory it already controls.
One way to un-cast the Islamic State’s spell over its adherents would be to overpower it militarily and occupy the parts of Syria and Iraq now under caliphate rule. Al‑Qaeda is ineradicable because it can survive, cockroach-like, by going underground. The Islamic State cannot. If it loses its grip on its territory in Syria and Iraq, it will cease to be a caliphate. Caliphates cannot exist as underground movements, because territorial authority is a requirement: take away its command of territory, and all those oaths of allegiance are no longer binding. Former pledges could of course continue to attack the West and behead their enemies, as freelancers. But the propaganda value of the caliphate would disappear, and with it the supposed religious duty to immigrate and serve it. If the United States were to invade, the Islamic State’s obsession with battle at Dabiq suggests that it might send vast resources there, as if in a conventional battle. If the state musters at Dabiq in full force, only to be routed, it might never recover.
Abu Baraa, who maintains a YouTube channel about Islamic law, says the caliph, Baghdadi, cannot negotiate or recognize borders, and must continually make war, or he will remove himself from Islam.
And yet the risks of escalation are enormous. The biggest proponent of an American invasion is the Islamic State itself. The provocative videos, in which a black-hooded executioner addresses President Obama by name, are clearly made to draw America into the fight. An invasion would be a huge propaganda victory for jihadists worldwide: irrespective of whether they have given baya’a to the caliph, they all believe that the United States wants to embark on a modern-day Crusade and kill Muslims. Yet another invasion and occupation would confirm that suspicion, and bolster recruitment. Add the incompetence of our previous efforts as occupiers, and we have reason for reluctance. The rise of ISIS, after all, happened only because our previous occupation created space for Zarqawi and his followers. Who knows the consequences of another botched job?
Given everything we know about the Islamic State, continuing to slowly bleed it, through air strikes and proxy warfare, appears the best of bad military options. Neither the Kurds nor the Shia will ever subdue and control the whole Sunni heartland of Syria and Iraq—they are hated there, and have no appetite for such an adventure anyway. But they can keep the Islamic State from fulfilling its duty to expand. And with every month that it fails to expand, it resembles less the conquering state of the Prophet Muhammad than yet another Middle Eastern government failing to bring prosperity to its people.
The humanitarian cost of the Islamic State’s existence is high. But its threat to the United States is smaller than its all too frequent conflation with al-Qaeda would suggest. Al-Qaeda’s core is rare among jihadist groups for its focus on the “far enemy” (the West); most jihadist groups’ main concerns lie closer to home. That’s especially true of the Islamic State, precisely because of its ideology. It sees enemies everywhere around it, and while its leadership wishes ill on the United States, the application of Sharia in the caliphate and the expansion to contiguous lands are paramount. Baghdadi has said as much directly: in November he told his Saudi agents to “deal with the rafida [Shia] first … then al-Sulul [Sunni supporters of the Saudi monarchy] … before the crusaders and their bases.”
The foreign fighters (and their wives and children) have been traveling to the caliphate on one-way tickets: they want to live under true Sharia, and many want martyrdom. Doctrine, recall, requires believers to reside in the caliphate if it is at all possible for them to do so. One of the Islamic State’s less bloody videos shows a group of jihadists burning their French, British, and Australian passports. This would be an eccentric act for someone intending to return to blow himself up in line at the Louvre or to hold another chocolate shop hostage in Sydney.
A few “lone wolf” supporters of the Islamic State have attacked Western targets, and more attacks will come. But most of the attackers have been frustrated amateurs, unable to immigrate to the caliphate because of confiscated passports or other problems. Even if the Islamic State cheers these attacks—and it does in its propaganda—it hasn’t yet planned and financed one. (The Charlie Hebdo attack in Paris in January was principally an al‑Qaeda operation.) During his visit to Mosul in December, Jürgen Todenhöfer interviewed a portly German jihadist and asked whether any of his comrades had returned to Europe to carry out attacks. The jihadist seemed to regard returnees not as soldiers but as dropouts. “The fact is that the returnees from the Islamic State should repent from their return,” he said. “I hope they review their religion.”
Properly contained, the Islamic State is likely to be its own undoing. No country is its ally, and its ideology ensures that this will remain the case. The land it controls, while expansive, is mostly uninhabited and poor. As it stagnates or slowly shrinks, its claim that it is the engine of God’s will and the agent of apocalypse will weaken, and fewer believers will arrive. And as more reports of misery within it leak out, radical Islamist movements elsewhere will be discredited: No one has tried harder to implement strict Sharia by violence. This is what it looks like.
Even so, the death of the Islamic State is unlikely to be quick, and things could still go badly wrong: if the Islamic State obtained the allegiance of al‑Qaeda—increasing, in one swoop, the unity of its base—it could wax into a worse foe than we’ve yet seen. The rift between the Islamic State and al-Qaeda has, if anything, grown in the past few months; the December issue of Dabiq featured a long account of an al‑Qaeda defector who described his old group as corrupt and ineffectual, and Zawahiri as a distant and unfit leader. But we should watch carefully for a rapprochement.
Without a catastrophe such as this, however, or perhaps the threat of the Islamic State’s storming Erbil, a vast ground invasion would certainly make the situation worse.
It would be facile, even exculpatory, to call the problem of the Islamic State “a problem with Islam.” The religion allows many interpretations, and Islamic State supporters are morally on the hook for the one they choose. And yet simply denouncing the Islamic State as un-Islamic can be counterproductive, especially if those who hear the message have read the holy texts and seen the endorsement of many of the caliphate’s practices written plainly within them.
Muslims can say that slavery is not legitimate now, and that crucifixion is wrong at this historical juncture. Many say precisely this. But they cannot condemn slavery or crucifixion outright without contradicting the Koran and the example of the Prophet. “The only principled ground that the Islamic State’s opponents could take is to say that certain core texts and traditional teachings of Islam are no longer valid,” Bernard Haykel says. That really would be an act of apostasy.
The Islamic State’s ideology exerts powerful sway over a certain subset of the population. Life’s hypocrisies and inconsistencies vanish in its face. Musa Cerantonio and the Salafis I met in London are unstumpable: no question I posed left them stuttering. They lectured me garrulously and, if one accepts their premises, convincingly. To call them un-Islamic appears, to me, to invite them into an argument that they would win. If they had been froth-spewing maniacs, I might be able to predict that their movement would burn out as the psychopaths detonated themselves or became drone-splats, one by one. But these men spoke with an academic precision that put me in mind of a good graduate seminar. I even enjoyed their company, and that frightened me as much as anything else.
Non-Muslims cannot tell Muslims how to practice their religion properly. But Muslims have long since begun this debate within their own ranks. “You have to have standards,” Anjem Choudary told me. “Somebody could claim to be a Muslim, but if he believes in homosexuality or drinking alcohol, then he is not a Muslim. There is no such thing as a nonpracticing vegetarian.”
There is, however, another strand of Islam that offers a hard-line alternative to the Islamic State—just as uncompromising, but with opposite conclusions. This strand has proved appealing to many Muslims cursed or blessed with a psychological longing to see every jot and tittle of the holy texts implemented as they were in the earliest days of Islam. Islamic State supporters know how to react to Muslims who ignore parts of the Koran: with takfir and ridicule. But they also know that some other Muslims read the Koran as assiduously as they do, and pose a real ideological threat.
Baghdadi is Salafi. The term Salafi has been villainized, in part because authentic villains have ridden into battle waving the Salafi banner. But most Salafis are not jihadists, and most adhere to sects that reject the Islamic State. They are, as Haykel notes, committed to expanding Dar al-Islam, the land of Islam, even, perhaps, with the implementation of monstrous practices such as slavery and amputation—but at some future point. Their first priority is personal purification and religious observance, and they believe anything that thwarts those goals—such as causing war or unrest that would disrupt lives and prayer and scholarship—is forbidden.
They live among us. Last fall, I visited the Philadelphia mosque of Breton Pocius, 28, a Salafi imam who goes by the name Abdullah. His mosque is on the border between the crime-ridden Northern Liberties neighborhood and a gentrifying area that one might call Dar al-Hipster; his beard allows him to pass in the latter zone almost unnoticed.
Pocius converted 15 years ago after a Polish Catholic upbringing in Chicago. Like Cerantonio, he talks like an old soul, exhibiting deep familiarity with ancient texts, and a commitment to them motivated by curiosity and scholarship, and by a conviction that they are the only way to escape hellfire. When I met him at a local coffee shop, he carried a work of Koranic scholarship in Arabic and a book for teaching himself Japanese. He was preparing a sermon on the obligations of fatherhood for the 150 or so worshipers in his Friday congregation.
Pocius said his main goal is to encourage a halal life for worshipers in his mosque. But the rise of the Islamic State has forced him to consider political questions that are usually very far from the minds of Salafis. “Most of what they’ll say about how to pray and how to dress is exactly what I’ll say in my masjid [mosque]. But when they get to questions about social upheaval, they sound like Che Guevara.”
When Baghdadi showed up, Pocius adopted the slogan “Not my khalifa.” “The times of the Prophet were a time of great bloodshed,” he told me, “and he knew that the worst possible condition for all people was chaos, especially within the umma [Muslim community].” Accordingly, Pocius said, the correct attitude for Salafis is not to sow discord by factionalizing and declaring fellow Muslims apostates.
Instead, Pocius—like a majority of Salafis—believes that Muslims should remove themselves from politics. These quietist Salafis, as they are known, agree with the Islamic State that God’s law is the only law, and they eschew practices like voting and the creation of political parties. But they interpret the Koran’s hatred of discord and chaos as requiring them to fall into line with just about any leader, including some manifestly sinful ones. “The Prophet said: as long as the ruler does not enter into clear kufr [disbelief], give him general obedience,” Pocius told me, and the classic “books of creed” all warn against causing social upheaval. Quietist Salafis are strictly forbidden from dividing Muslims from one another—for example, by mass excommunication. Living without baya’a, Pocius said, does indeed make one ignorant, or benighted. But baya’a need not mean direct allegiance to a caliph, and certainly not to Abu Bakr al‑Baghdadi. It can mean, more broadly, allegiance to a religious social contract and commitment to a society of Muslims, whether ruled by a caliph or not.
Quietist Salafis believe that Muslims should direct their energies toward perfecting their personal life, including prayer, ritual, and hygiene. Much in the same way ultra-Orthodox Jews debate whether it’s kosher to tear off squares of toilet paper on the Sabbath (does that count as “rending cloth”?), they spend an inordinate amount of time ensuring that their trousers are not too long, that their beards are trimmed in some areas and shaggy in others. Through this fastidious observance, they believe, God will favor them with strength and numbers, and perhaps a caliphate will arise. At that moment, Muslims will take vengeance and, yes, achieve glorious victory at Dabiq. But Pocius cites a slew of modern Salafi theologians who argue that a caliphate cannot come into being in a righteous way except through the unmistakable will of God.
The Islamic State, of course, would agree, and say that God has anointed Baghdadi. Pocius’s retort amounts to a call to humility. He cites Abdullah Ibn Abbas, one of the Prophet’s companions, who sat down with dissenters and asked them how they had the gall, as a minority, to tell the majority that it was wrong. Dissent itself, to the point of bloodshed or splitting the umma, was forbidden. Even the manner of the establishment of Baghdadi’s caliphate runs contrary to expectation, he said. “The khilafa is something that Allah is going to establish,” he told me, “and it will involve a consensus of scholars from Mecca and Medina. That is not what happened. ISIS came out of nowhere.”
The Islamic State loathes this talk, and its fanboys tweet derisively about quietist Salafis. They mock them as “Salafis of menstruation,” for their obscure judgments about when women are and aren’t clean, and other low-priority aspects of life. “What we need now is fatwa about how it’s haram [forbidden] to ride a bike on Jupiter,” one tweeted drily. “That’s what scholars should focus on. More pressing than state of Ummah.” Anjem Choudary, for his part, says that no sin merits more vigorous opposition than the usurpation of God’s law, and that extremism in defense of monotheism is no vice.
Pocius doesn’t court any kind of official support from the United States, as a counterweight to jihadism. Indeed, official support would tend to discredit him, and in any case he is bitter toward America for treating him, in his words, as “less than a citizen.” (He alleges that the government paid spies to infiltrate his mosque and harassed his mother at work with questions about his being a potential terrorist.)
Still, his quietist Salafism offers an Islamic antidote to Baghdadi-style jihadism. The people who arrive at the faith spoiling for a fight cannot all be stopped from jihadism, but those whose main motivation is to find an ultraconservative, uncompromising version of Islam have an alternative here. It is not moderate Islam; most Muslims would consider it extreme. It is, however, a form of Islam that the literal-minded would not instantly find hypocritical, or blasphemously purged of its inconveniences. Hypocrisy is not a sin that ideologically minded young men tolerate well.
Western officials would probably do best to refrain from weighing in on matters of Islamic theological debate altogether. Barack Obama himself drifted into takfiri waters when he claimed that the Islamic State was “not Islamic”—the irony being that he, as the non-Muslim son of a Muslim, may himself be classified as an apostate, and yet is now practicing takfir against Muslims. Non-Muslims’ practicing takfir elicits chuckles from jihadists (“Like a pig covered in feces giving hygiene advice to others,” one tweeted).
I suspect that most Muslims appreciated Obama’s sentiment: the president was standing with them against both Baghdadi and non-Muslim chauvinists trying to implicate them in crimes. But most Muslims aren’t susceptible to joining jihad. The ones who are susceptible will only have had their suspicions confirmed: the United States lies about religion to serve its purposes.
Within the narrow bounds of its theology, the Islamic State hums with energy, even creativity. Outside those bounds, it could hardly be more arid and silent: a vision of life as obedience, order, and destiny. Musa Cerantonio and Anjem Choudary could mentally shift from contemplating mass death and eternal torture to discussing the virtues of Vietnamese coffee or treacly pastry, with apparent delight in each, yet to me it seemed that to embrace their views would be to see all the flavors of this world grow insipid compared with the vivid grotesqueries of the hereafter.
I could enjoy their company, as a guilty intellectual exercise, up to a point. In reviewing Mein Kampf in March 1940, George Orwell confessed that he had “never been able to dislike Hitler”; something about the man projected an underdog quality, even when his goals were cowardly or loathsome. “If he were killing a mouse he would know how to make it seem like a dragon.” The Islamic State’s partisans have much the same allure. They believe that they are personally involved in struggles beyond their own lives, and that merely to be swept up in the drama, on the side of righteousness, is a privilege and a pleasure—especially when it is also a burden.
Fascism, Orwell continued, is
psychologically far sounder than any hedonistic conception of life … Whereas Socialism, and even capitalism in a more grudging way, have said to people “I offer you a good time,” Hitler has said to them, “I offer you struggle, danger, and death,” and as a result a whole nation flings itself at his feet … We ought not to underrate its emotional appeal.
Nor, in the case of the Islamic State, its religious or intellectual appeal. That the Islamic State holds the imminent fulfillment of prophecy as a matter of dogma at least tells us the mettle of our opponent. It is ready to cheer its own near-obliteration, and to remain confident, even when surrounded, that it will receive divine succor if it stays true to the Prophetic model. Ideological tools may convince some potential converts that the group’s message is false, and military tools can limit its horrors. But for an organization as impervious to persuasion as the Islamic State, few measures short of these will matter, and the war may be a long one, even if it doesn’t last until the end of time.
Photo © Jonathan Nimerfroh
Earlier this week photographer Jonathan Nimerfroh was walking along the coast of Nantucket when he noticed something odd about the waves crashing on shore. The high temperature was 19°F (-7.2°C), and while the waves weren’t completely frozen, they were thick with pieces of ice, much like the consistency of a Slurpee, or a slushy, or an ICEE, or whatever. It’s amazing to see how the ice changes the form and color of the waves, making them seem almost solid. You can see a few more shots over on Stay Wild Magazine, and follow more of Nimerfroh’s photography on Instagram. (thnx, Amber!)