United Airlines has rolled out a series of updates to its Web site that the company claims will help beef up the security of customer accounts. But at first glance, the core changes — moving from a 4-digit PIN to a password, and requiring customers to pick five different security questions and answers — may seem like a security playbook copied from Yahoo.com, circa 2009. Here’s a closer look at what’s changed in how United authenticates customers, and hopefully a bit of insight into what the nation’s fourth-largest airline is trying to accomplish with its new system.
United, like many other carriers, has long relied on a frequent flyer account number and a 4-digit personal identification number (PIN) for authenticating customers at its Web site. This has left customer accounts ripe for takeover by crooks who specialize in hacking and draining loyalty accounts for cash.
Earlier this year, however, United began debuting new authentication systems wherein customers are asked to pick a strong password and to choose from five sets of security questions and pre-selected answers. Customers may be asked to provide the answers to two of these questions if they are logging in from a device United has never seen associated with that account, trying to reset a password, or interacting with United via phone.
Some of the questions and answers United came up with.
Yes, you read that right: The answers are pre-selected as well as the questions. For example, to the question “During what month did you first meet your spouse or significant other,” users may select only from one of…you guessed it — 12 answers (January through December).
The list of answers to another security question, “What’s your favorite pizza topping,” had me momentarily thinking I was using a pull-down menu at Dominos.com — waffling between “pepperoni” and “mashed potato.” (Fun fact: If you were previously unaware that mashed potatoes qualify as an actual pizza topping, United has you covered with an answer to this bit of trivia in its Frequently Asked Questions page on the security changes.)
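To put numbers on it, the search space for pre-selected answers is tiny. A quick sketch in Python (the 20-topping pool size is my assumption for illustration; United doesn’t publish exact counts):

```python
# Rough arithmetic on pre-selected security answers: an attacker who can
# enumerate the answer pools only has to cover the product of their sizes.
import math

def total_combinations(answer_pool_sizes):
    """Worst-case number of guesses to try every combination of answers."""
    return math.prod(answer_pool_sizes)

# Two questions: "month you met your spouse" (12 answers) and a
# hypothetical 20-answer pizza-topping list.
combos = total_combinations([12, 20])
print(combos)          # 240 -- trivially brute-forceable without lockouts
print(1 / combos)      # chance a single blind guess gets both right
```

Which is why the lockout and randomization measures United describes below are doing most of the work here, not the questions themselves.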
I recorded a short video of some of these rather unique questions and answers.
United said it opted for pre-defined questions and answers because the company has found “the majority of security issues our customers face can be traced to computer viruses that record typing, and using predefined answers protects against this type of intrusion.”
This struck me as a dramatic oversimplification of the threat. I asked United why they stated this, given that any halfway decent piece of malware that is capable of keylogging is likely also doing what’s known as “form grabbing” — essentially snatching data submitted in forms — regardless of whether the victim types in this information or selects it from a pull-down menu.
Benjamin Vaughn, director of IT security intelligence at United, said the company was randomizing the questions to confound bot programs that seek to automate the submission of answers, and that security questions answered wrongly would be “locked” and not asked again. He added that multiple unsuccessful attempts at answering these questions could result in an account being locked, necessitating a call to customer service.
United said it plans to use these same questions and answers — no longer passwords or PINs — to authenticate those who call in to the company’s customer service hotline. When I went to step through United’s new security system, I discovered my account was locked for some reason. A call to United customer service unlocked it in less than two minutes. All the agent asked me for was my frequent flyer number and my name.
(Incidentally, United still somewhat relies on “security through obscurity” to protect the secrecy of customer usernames by very seldom communicating the full frequent flyer number in written and digital communications with customers. I first pointed this out in my story about the data that can be gleaned from a United boarding pass barcode, because while the full frequent flyer number is masked with “x’s” on the boarding pass, the full number is stored on the pass’s barcode).
Conventional wisdom dictates that what little additional value security questions add to the equation is nullified when the user is required to choose from a set of pre-selected answers. After all, the only sane and secure way to use secret questions, if one must, is to pick answers that are not only incorrect and/or irrelevant to the question, but that also can’t be guessed or gleaned by collecting facts about you from background-checking sites or from your various social media presences online.
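In practice, that advice usually means treating each secret answer as just another random password stored in a password manager. A minimal sketch in Python (the question in the comment is only an example):

```python
# Generate an unguessable "answer" for a secret question -- one that has
# nothing to do with any fact about you -- and keep it in a password manager.
import secrets
import string

def random_answer(length: int = 16) -> str:
    """Return a cryptographically random alphanumeric string."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# e.g. the stored answer to "What's your mother's maiden name?" becomes
# something like 'q3ZxT9fKmW2pLc8V', unrelated to any discoverable fact.
print(random_answer())
```

Pre-selected answer lists like United’s rule this strategy out entirely, which is the crux of the objection above.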
Google published some fascinating research last year that spoke to the efficacy and challenges of secret questions and answers, concluding that they are “neither secure nor reliable enough to be used as a standalone account recovery mechanism.”
Overall, the Google research team found the security answers are either somewhat secure or easy to remember—but rarely both. Put another way, easy answers aren’t secure, and hard answers aren’t as useable.
But wait, you say: United asks you to answer up to five security questions. So more security questions equals more layers for the bad guys to hack through, which equals more security, right? Well, not so fast, the Google security folks found.
“When users had to answer both together, the spread between the security and usability of secret questions becomes increasingly stark,” the researchers wrote. “The probability that an attacker could get both answers in ten guesses is 1%, but users will recall both answers only 59% of the time. Piling on more secret questions makes it more difficult for users to recover their accounts and is not a good solution, as a result.”
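The compounding works both ways. Assuming, purely as an illustration (the paper’s per-question figures vary by question), that each question is guessable within ten tries 10% of the time and correctly recalled 77% of the time, requiring both answers reproduces roughly the figures Google reports:

```python
# Illustrative per-question rates (assumed, not taken from the paper):
attacker_single = 0.10   # chance one answer falls in ten guesses
recall_single = 0.77     # chance the user remembers one answer

# Requiring both answers multiplies each rate (assuming independence):
attacker_both = attacker_single ** 2   # 0.01  -> the ~1% Google cites
recall_both = recall_single ** 2       # ~0.59 -> the ~59% Google cites

print(f"{attacker_both:.0%} guessable, {recall_both:.0%} recallable")
```

Security improves quadratically, but so does the lockout rate for legitimate users, which is exactly the trade-off the researchers flag.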
Vaughn said the beauty of United’s approach is that it uniquely addresses the problem identified by Google researchers — that so many people in the study had so much trouble remembering the answers — by providing users with a set of pre-selected answers from which to choose.
An infographic from Google’s research study on secret questions. Source: Google.
The security team at United reached out a few weeks back to highlight the new security changes, and in a conversation this week they asked what I thought about their plan. I replied that if United is getting pushback from security experts and tech publications about its approach, that’s probably because security people are techies/nerds at heart, and techies/nerds want options and stuff. Or at least the ability to add, enable or disable certain security features.
But the reality today is that almost any security system designed for use by tens of millions of people who aren’t techies is always going to cater to the least sophisticated user on the planet — and that’s about where the level of security for that system is bound to stay for a while.
So I told the United people that I was somewhat despondent about this reality, mainly because I end up having little other choice but to fly United quite often.
“At the scale that United faces, we felt this approach was really optimal to fix this problem for our customers,” Vaughn said. “We have to start with something that is universally available to our customers. We can’t send a text message to you when you’re on an airplane or out of the country, we can’t rely on all of our customers to have a smart phone, and we didn’t feel it would be a great use of our customers’ time to send them in the mail 93 million secure ID tokens. We felt a powerful onus to do something, and the something we implemented we feel improves security greatly, especially for non-technically savvy customers.”
Arlan McMillan, United’s chief information security officer, said the basic system that the company has just rolled out is built to accommodate additional security features going forward. McMillan said United has discussed rolling out some type of app-based, time-based one-time password (TOTP) system (Google Authenticator is one popular TOTP example).
“It is our intent to provide additional capabilities to our customers, and to even bring in additional security controls if [customers] choose to,” McMillan said. “We set the minimum bar here, and we think that’s a higher bar than you’re going to find at most of our competitors. And we’re going to do more, but we had to get this far first.”
Lest anyone accuse me of claiming that the thrust of this story is somehow newsy, allow me to recommend some related, earlier stories worth reading about United’s security changes:
Identity thieves have perfected a scam in which they impersonate existing customers at retail mobile phone stores, pay a small cash deposit on pricey new phones, and then charge the rest to the victim’s account. In most cases, switching on the new phones causes the victim account owner’s phone(s) to go dead. This is the story of a Pennsylvania man who allegedly died of a heart attack because his wife’s phone was switched off by ID thieves and she was temporarily unable to call for help.
On Feb. 20, 2016, James William Schwartz, 84, was going about his daily routine, which mainly consisted of caring for his wife, MaryLou. Mrs. Schwartz was suffering from the end stages of endometrial cancer and wasn’t physically mobile without assistance. When Mr. Schwartz began having a heart attack that day, MaryLou went to use her phone to call for help and discovered it was completely shut off.
Little did MaryLou know, but identity thieves had the day before entered a “premium authorized Verizon dealer” store in Florida and impersonated the Schwartzes. The thieves paid a $150 cash deposit to “upgrade” the elderly couple’s simple mobiles to new iPhone 6s devices, with the balance to be placed on the Schwartzes’ account.
“Despite her severely disabled and elderly condition, MaryLou Schwartz was finally able to retrieve her husband’s cellular telephone using a mechanical arm,” reads a lawsuit (PDF) filed in Beaver County, Penn. on behalf of the Schwartzes’ two daughters, alleging negligence by the Florida mobile phone store. “This monumental, determined and desperate endeavor to reach her husband’s working telephone took Mrs. Schwartz approximately forty minutes to achieve due to her condition. This vital delay in reaching emergency help proved to be fatal.”
By the time paramedics arrived, Mr. Schwartz was pronounced dead. MaryLou Schwartz died seventeen days later, on March 8, 2016. Incredibly, identity thieves would continue robbing the Schwartzes even after they were both deceased: According to the lawsuit, on April 14, 2016 the account of MaryLou Schwartz was again compromised and a tablet device was also fraudulently acquired in MaryLou’s name.
The Schwartzes’ daughters say they didn’t learn about the fraud until after both parents had passed away. According to them, they heard about it from an employee at a local Verizon reseller who noticed that his longtime customers’ phones had been deactivated. That’s when they discovered that while their mother’s phone was inactive at the time of their father’s death, their father’s mobile had inexplicably been able to make but not receive phone calls.
Exactly what sort of identification was demanded of the thieves who impersonated the Schwartzes is in dispute at the moment. But it seems clear that this is a fairly successful and common scheme for thieves to steal (and, in all likelihood, resell) high-end phones.
Lorrie Cranor, chief technologist for the U.S. Federal Trade Commission, was similarly victimized this summer when someone walked into a mobile phone store, claimed to be her, asked to upgrade her phones and walked out with two brand new iPhones assigned to her telephone numbers.
“My phones immediately stopped receiving calls, and I was left with a large bill and the anxiety and fear of financial injury that spring from identity theft,” Cranor wrote in a blog on the FTC’s site. Cranor’s post is worth a read, as she uses the opportunity to explain how she recovered from the identity theft episode.
She also used her rights under the Fair Credit Reporting Act, which requires that companies provide business records related to identity theft to victims within 30 days of receiving a written request. Cranor said the mobile store took about twice that time to reply, but ultimately explained that the thief had used a fake ID with Cranor’s name but the impostor’s photo.
“She had acquired the iPhones at a retail store in Ohio, hundreds of miles from where I live, and charged them to my account on an installment plan,” Cranor wrote. “It appears she did not actually make use of either phone, suggesting her intention was to sell them for a quick profit. As far as I’m aware the thief has not been caught and could be targeting others with this crime.”
Cranor notes that records of identity thefts reported to the FTC provide some insight into how often thieves hijack a mobile phone account or open a new mobile phone account in a victim’s name.
“In January 2013, there were 1,038 incidents of these types of identity theft reported, representing 3.2% of all identity theft incidents reported to the FTC that month,” she explained. “By January 2016, that number had increased to 2,658 such incidents, representing 6.3% of all identity thefts reported to the FTC that month. Such thefts involved all four of the major mobile carriers.”
The reality, Cranor said, is that identity theft reports to the FTC likely represent only the tip of a much larger iceberg. According to data from the Identity Theft Supplement to the 2014 National Crime Victimization Survey conducted by the U.S. Department of Justice, less than 1% of identity theft victims reported the theft to the FTC.
While dealing with diverted calls can be a hassle, having your phone calls and incoming text messages siphoned to another phone also can present new security problems, thanks to the growing use of text messages in authentication schemes for financial services and other accounts.
Perhaps the most helpful part of Cranor’s post is a section on the security options offered by the four major mobile providers in the U.S. For example, AT&T offers an “extra security” feature that requires customers to present a custom passcode when dealing with the wireless provider via phone or online.
“All of the carriers have slightly different procedures but seem to suffer from the same problem, which is that they’re relying on retail store employees to look at the driver’s license,” Cranor told KrebsOnSecurity. “They don’t use services that will check the information on the driver’s license, and so that [falls to] the store employee, who has no training in spotting fake IDs.”
Some of the security options offered by the four major providers. Source: FTC.
It’s important to note that secret passcodes often can be bypassed by determined attackers or identity thieves who are adept at social engineering — that is, tricking people into helping them commit fraud.
I’ve used a six-digit passcode for more than two years on my account with AT&T, and last summer noticed that I’d stopped receiving voicemails. A call to AT&T’s customer service revealed that all voicemails were being forwarded to a number in Seattle that I did not recognize or authorize.
Since it’s unlikely that the attackers in this case guessed my six-digit PIN, they likely tricked a customer service representative at AT&T into “authenticating” me via other methods — probably by offering static data points about me such as my Social Security number, date of birth, and other information that is widely available for sale in the cybercrime underground on virtually all Americans over the age of 35. In any case, Cranor’s post has inspired me to exercise my rights under the FCRA and find out for certain.
Vineetha Paruchuri, a master’s student in computer science at Dartmouth College, recently gave a talk at the BSides security conference in Las Vegas on her research into security at the major U.S. mobile phone providers. Paruchuri said all of the major mobile providers suffer from a lack of strict protocols for authenticating customers, leaving customer service personnel exposed to social engineering.
“As a computer science student, my contention was that if we take away the control from the humans, we can actually make this process more secure,” Paruchuri said.
Paruchuri said perhaps the most dangerous threat is the smooth-talking social engineer who spends time collecting information about the verbal shorthand or mobile industry patois used by employees at these companies. The thief then simply phones up customer support and poses as a mobile store technician or employee trying to assist a customer. This was the exact approach used in 2014, when young hooligans tricked my then-ISP Cox Communications into resetting the password for my Cox email account.
I suppose one aspect of this problem that makes the lack of strong customer authentication measures by the mobile industry so frustrating is that it’s hard to imagine a device which holds more personal and intimate details about you than your wireless phone. After all, your phone likely knows where you were last night, when you last traveled, the phone number you last called and numbers you most frequently text.
And yet, the best the mobile providers and their fleet of reseller stores can do to tell you apart from an ID thief is to store a PIN that could be bypassed by clever social engineers (who may or may not be shaving yet).
By the way, readers with AT&T phones may have received a notice this week that AT&T is making some changes to “authorized users” allowed on accounts. The notice advised that starting Sept. 1, 2016, customers can designate up to 10 authorized users per account.
“If your Authorized User does not know your account passcode or extra security passcode, your Authorized User may still access your account in a retail store using a Forgotten Passcode process. Effective Nov. 5, 2016, Authorized Users and those persons who call into Customer Service and provide sufficient account information (“Authenticated Callers”) will have the ability to add a new line of service to your account. Such requests, whether made by you, an Authorized User, an Authenticated Caller or someone with online access to your account, will trigger a credit check on you.”
AT&T’s message this week about upcoming account changes.
I asked AT&T about what need this new policy was designed to address, and the company responded that AT&T has made no changes to how an authorized user can be added to an account. AT&T spokesman Jim Greer sent me the following:
“With this notice, we are simply increasing the number of authorized users you may add to your account and giving them the ability to add a line in stores or over the phone. We made this change since more customers have multiple lines for multiple people. Authorized users still cannot access the account holder’s sensitive personal information.”
“Over the past several years, the authentication process has been strengthened. In stores, we’re safeguarding customers through driver’s license or other government issued ID authentication. We use a two-factor authentication when you contact us online or by phone that requires a one-time PIN. We’re continuing our efforts to better protect customers, with additional improvements on the horizon.”
“You don’t have to designate anyone to become an authorized user on your account. You will be notified if any significant changes are made to your account by an authorized user, and you can remove any person as an authorized user at any time.”
The rub is what AT&T does — or more specifically, what the AT&T customer representative does — to verify your identity when the caller says he doesn’t remember his PIN or passcode. If they allow PIN-less authentication by asking for your Social Security number, date of birth and other static information about you, ID thieves can defeat that easily.
Has someone fraudulently ordered phone service or phones in your name? Sound off in the comments below.
If you’re wondering what you can do to shield yourself and your family against identity theft, check out these primers:
How I Learned to Stop Worrying and Embrace the Security Freeze (this primer goes well beyond security freezes and includes a detailed Q&A as well as other tips to help prevent and recover from ID theft).
I heard stories this week about dung beetles and cuttlefish. Both made me think about the typical stories we hear in the media about evolved human mating strategies. First, the stories:
Story #1 :The Dung Beetle
A story on Quirks and Quarks discussed the mating strategies of the dung beetle. The picture above is of a male beetle; only the males have those giant horns. He uses it to defend the entrance to a tiny burrow in which he keeps a female. He’ll violently fight off other dung beetles who try to get access to the burrow.
So far this sounds like the typical story of competitive mating that we hear all the time about all kinds of animals, right?
There’s a twist: while only male dung beetles have horns, not all males have horns. Some are completely hornless. But if horns help you win the fight, how is hornlessness being passed down genetically?
Well, it turns out that when a big ol’ horned male is fighting with some other big ol’ horned male, little hornless males sneak into burrows and mate with the females. They get discovered and booted out, of course, and the horned male will re-mate with the female with the hopes of displacing his sperm.
Those little hornless males have giant testicles, way gianter than the horned males. While the horned males are putting all of their energy into growing horns, the hornless males are making sperm. So, even though they have limited access to females, they get as much mileage out of their access as they can.
The result: two distinct types of male dung beetles with two distinct mating strategies.
Story #2: The Giant Australian Cuttlefish
The Naked Scientists podcast featured a story about Giant Australian Cuttlefish. During mating season the male cuttlefish, much larger than the females, collect “harems” and spend their time mating and defending access. Other males try to “muscle in,” but the bigger cuttlefish “throws his weight around” to scare him off. The biggest cuttlefish wins.
So far this sounds like the typical story of competitive mating that we hear all the time about all kinds of animals, right?
Well, according to The Naked Scientists story, researchers have discovered an alternative mating strategy. Small males, who are far too small to compete with large males, will pretend to be female, sneak into the defended territory, mate, and leave.
How do they do this? They change their color pattern and rearrange their tentacles in a more typical female arrangement (they didn’t specify what this was) and, well, pass. The large male thinks he’s another female. In the video below, the cuttlefish uses his ability to change the pattern on his body. He simultaneously displays a male pattern to the female and a female pattern to the large male on the other side.
So, can the crossdressing cuttlefish and dodge-y dung beetle tell us anything about evolved human mating strategies?
Probably not directly. But I do think they tell us something about how we should think about evolution and the reproduction of genes. If you listen to the media cover evolutionary psychological explanations of human mating, you only hear one story about the strategies that males use to try to get sex. That story sounds a lot like the one told about the horned beetle and the large male cuttlefish.
But these species have demonstrated that there need not be only one mating strategy. In these cases, there are at least two. So, why in Darwin’s name would we assume that human beings, in all of their beautiful and incredible complexity, would only have one? Perhaps we see a diversity in types of human males (different body shapes and sizes, different intellectual gifts, etc) because there are many different ways to attract females. Maybe females see something valuable in many different kinds of males! Maybe not all females are the same!
Let’s set aside the stereotypes about men and women that media reporting on evolutionary psychology tends to reproduce and, instead, consider the possibility that human mating is at least as complex as that of dung beetles and cuttlefish.
Originally posted in 2010.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.
Signaling white supremacy.
On the heels of the Republican national convention, the notorious KKK leader David Duke announced his campaign for the Louisiana Senate. On his social media pages, he released a campaign poster featuring a young white woman with blonde hair and blue eyes wearing a gray tank top decorated with American flag imagery. She is beautiful and young, exuding innocence. Atop the image, the text reads “fight for Western civilization,” and the poster includes David Duke’s website and logo. It does not appear that she consented to being on the poster.
When I came upon the image, I was immediately reminded of pro-Nazi propaganda that I had seen in a museum in Germany, especially those depicting “Hitler youth.” Many of those posters featured fresh white faces, looking healthy and clean, in stark contrast to the distorted, darkened, bloated, and snarling faces of the targets of the Nazi regime.
It’s a different era, but the implied message of Duke’s poster is the same — the nationalist message alongside the idealized figure — so it wasn’t difficult to find a Nazi propaganda poster that drew the comparison. I tweeted it out like this:
— Lisa Wade, PhD (@lisawade) July 23, 2016
Given that David Duke is an avowed racist running on a platform to save “Western” civilization, it didn’t seem like that much of a stretch.
Provoking racist backlash.
I hashtagged it with #davidduke and #americafirst, so I can’t say I didn’t invite it, but the backlash was greater than any I have ever received. The day after the tweet, I was receiving one tweet per minute, on average.
What I found fascinating was the range of responses. I was told I looked just like her — beautiful, blue-eyed, and white — was asked if I hated myself, accused of being a race traitor, and invited to join the movement against “white genocide.” I was also told that I was just jealous: comparatively hideous thanks to my age and weight. Trolls took shots at sociology, intellectuals, and my own intelligence. I was asked if I was Jewish, accused of being so, and told to put my head in an oven. I was sent false statistics about black crime. I was also, oddly, accused of being a Nazi myself. Others, like Kate Harding, Philip Cohen, and even Leslie Jones, were roped in.
Here is a sampling (super trigger warning for all kinds of hatefulness):
It’s not news that Twitter is full of trolls. It’s not news that there are proud white supremacists and neo-Nazis in America. It’s not news that women online get told they’re ugly or fat on the reg. It’s not news that I’m a (proud) cat lady either, for what it’s worth. But I think transparency is our best bet to get people to acknowledge the ongoing racism, antisemitism, sexism, and anti-intellectualism in our society. So, there you have it.
One basic tenet of computer security is this: If you can’t vouch for a networked thing’s physical security, you cannot also vouch for its cybersecurity. That’s because in most cases, networked things really aren’t designed to foil a skilled and determined attacker who can physically connect his own devices. So you can imagine my shock and horror at seeing a Cisco switch and wireless antenna sitting exposed atop an ATM out in front of a bustling grocery store in my hometown in Northern Virginia.
I’ve long warned readers to avoid stand-alone ATMs in favor of wall-mounted and/or bank-operated ATMs. In many cases, thieves who can access the networking cables of an ATM are hooking up their own sniffing devices to grab cash machine card data flowing across the ATM network in plain text.
But I’ve never before seen a setup quite this braindead. Take a look:
An ATM in front of a grocery store in Northern Virginia.
Now let’s have a closer look at the back of this machine to see what we’re dealing with:
Need to get online in a jiffy? No problem, this ATM has plenty of network jacks for you to plug into. What could go wrong?
Daniel Battisto, the longtime KrebsOnSecurity reader who alerted me to this disaster waiting to happen, summed up my thoughts on it pretty well in an email.
“I’d like to assume, for the sake of sanity, that the admin who created this setup knows that Cisco security is broken relatively simply once physical access is gained,” said Battisto, a physical and IT security professional. “I’d also like to assume that all unused interfaces are shut down, and port-security has been configured on the interfaces in use. I’d also like to assume that the admin established a good console login.”
While it’s impossible to test the security of this setup without tampering with the devices, “considering that this was left like this in the front vestibule of a grocery store with no cameras around AND the console cable still attached, my above assumptions are likely invalid,” Battisto observed.
“In my experience, IT departments often overlook basic security practices, and double down on the oversight by not implementing proper physical security controls (you’d be surprised, maybe, at the number of server rooms that I’ve been in that had the keys to all of the racks taped to the outside of the doors),” he said.
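For what it’s worth, the basic hardening steps Battisto lists — shutting down unused interfaces, enabling port-security, and requiring a console login — look roughly like the following in Cisco IOS. This is a generic sketch; the interface numbers and limits are purely illustrative and not taken from the switch in the photo:

```
! Disable every unused port (illustrative interface range).
interface range GigabitEthernet0/2 - 24
 shutdown
! Lock the one live port to a single learned MAC address.
interface GigabitEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security mac-address sticky
 switchport port-security violation shutdown
! Require a login on the console line (the cable left dangling here).
line console 0
 login local
```

Even with all of that in place, a console cable left attached in an unmonitored vestibule undercuts the rest of the configuration, which is Battisto’s point.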
If something doesn’t look right about an ATM, don’t use it and move on to the next one. It’s not worth the hassle and risk associated with having your checking account emptied of cash. Also, it’s best to favor ATMs that are installed inside of a building or wall as opposed to free-standing machines, which may be more vulnerable to tampering.
If you liked this piece, check out my entire series on skimming devices, All About Skimmers.
"To be proud and protective of one’s country sounds like something good" not really
In his speech last week accepting the Republican nomination for President, Donald Trump said (my emphasis):
…our plan will put America First. Americanism, not globalism, will be our credo. As long as we are led by politicians who will not put America First, then we can be assured that other nations will not treat America with respect.
Donald Trump’s insistence that we put “America First” hardly sounds harmful or irrational on its face. To be proud and protective of one’s country sounds like something good, even inevitable. Americans are, after all, Americans. Who else would we put first?
But nationalism — a passionate investment in one’s country over and above others — is neither good nor neutral. Here are some reasons why it’s dangerous:
A nationalist is one who thinks solely, or mainly, in terms of competitive prestige… his thoughts always turn on victories, defeats, triumphs and humiliations.
He’s talking about inner cities as “them.”
He talked about the laid-off workers ruined by trade deals as “you.”
— Christopher Hayes (@chrislhayes) July 22, 2016
To glorify itself, nationalism generally resorts to suppositions, exaggerations, fallacious reasonings, scorn and inadmissible self-praise, and worst of all, it engages in the distortion of history, model-making and fable-writing. Historical facts are twisted to imaginary myths as it fears historical and social realism.
When Americans say “America is the greatest country on earth,” that’s nationalism. When other countries are framed as competitors instead of allies and potential allies, that’s nationalism. When people say “America first,” expressing a willfulness to cause pain and suffering to citizens of other countries if it is good for America, that’s nationalism. And that’s dangerous. It’s committing to one’s country’s preeminence and doing whatever it takes, however immoral, unlawful, or destructive, to further that goal.
We often think that religion helps to build a strong society, in part because it gives people a shared set of beliefs that fosters trust. When you know what your neighbors think about right and wrong, it is easier to assume they are trustworthy people. The problem is that this logic focuses on trustworthy individuals, while social scientists often think about the relationship between religion and trust in terms of social structure and context.
New research from David Olson and Miao Li (using data from the World Values survey) examines the trust levels of 77,405 individuals from 69 countries collected between 1999 and 2010. The authors’ analysis focuses on a simple survey question about whether respondents felt they could, in general, trust other people. The authors were especially interested in how religiosity at the national level affected this trust, measuring it in two ways: the percentage of the population that regularly attended religious services and the level of religious diversity in the nation.
These two measures of religious strength and diversity in the social context brought out a surprising pattern. Nations with high religious diversity and high religious attendance had respondents who were significantly less likely to say they could generally trust other people. Conversely, nations with high religious diversity, but relatively low levels of participation, had respondents who were more likely to say they could generally trust other people.
One possible explanation for these two findings is that it is harder to navigate competing claims about truth and moral authority in a society when the stakes are high and everyone cares a lot about the answers, but also much easier to learn to trust others when living in a diverse society where the stakes for that difference are low. The most important lesson from this work, however, may be that the positive effects we usually attribute to cultural systems like religion are not guaranteed; things can turn out quite differently depending on the way religion is embedded in social context.
Evan Stewart is a PhD candidate at the University of Minnesota studying political culture. He is also a member of The Society Pages’ graduate student board. There, he writes for the blog Discoveries, where this post originally appeared. You can follow him on Twitter.