A recent ping from a reader reminded me that I’ve been meaning to blog about the security limitations of using cell phone text messages for two-factor authentication online. The reader’s daughter had received a text message claiming to be from Google, warning that her Gmail account had been locked because someone in India had tried to access her account. The young woman was advised to expect a 6-digit verification code to be sent to her and to reply to the scammer’s message with that code.
Mark Cobb, a computer technician in Reno, Nev., said that had his daughter fallen for the ruse, her Gmail account would indeed have been completely compromised, and she really would have been locked out of her account because the crooks would have changed her password straight away.
Cobb’s daughter received the scam text message because she’d enabled 2-factor authentication on her Gmail account, selecting the option to have Google request that she enter a 6-digit code texted to her cell phone each time it detects a login from an unknown computer or location (in practice, the code is to be entered on the Gmail site, not sent in any kind of texted or emailed reply).
In this case, the thieves already had her password — most likely because she re-used it on some other site that got hacked. Cobb says he and his daughter believe her mobile number and password may have been exposed as part of the 2012 breach at LinkedIn.
In any case, the crooks were priming her to expect a code and to repeat it back to them because that code was the only thing standing in the way of their seizing control over her account. And they could control when Google would send the code to her phone because Google would do this as soon as they tried to log in using her username and password. Indeed, the timing aspect of this attack helps make it more believable to the target.
This is a fairly clever — if not novel — attack, and it’s one I’d wager would likely fool a decent percentage of users who have enabled text messages as a form of two-factor authentication. Certainly, text messaging is far from the strongest form of 2-factor authentication, but it is better than allowing a login with nothing more than a username and password, as this scam illustrates.
Nevertheless, text messaging codes to users isn’t the safest way to do two-factor authentication, even if some entities — like the U.S. Social Security Administration and Sony’s PlayStation Network — are just getting around to offering two-factor via SMS.
But don’t take my word for it. That’s according to the National Institute of Standards and Technology (NIST), which recently issued new proposed digital authentication guidelines urging organizations to favor other forms of two-factor — such as time-based one-time passwords generated by mobile apps — over text messaging. By the way, NIST is seeking feedback on these recommendations.
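The time-based one-time passwords NIST favors are simple to reason about: the app and the server share a secret key, and each independently derives a short code from the current 30-second time window, so no code ever travels over SMS. Here is a minimal sketch in Python of the underlying standards (RFC 4226 HOTP and RFC 6238 TOTP); the secret used in the comments is the RFCs' published test key, not anything tied to a real account.

```python
# Minimal sketch of the one-time-password math used by authenticator apps.
# HOTP (RFC 4226) turns a shared secret plus a counter into a short code;
# TOTP (RFC 6238) simply derives that counter from the current time window.
import hashlib
import hmac
import struct
import time


def hotp(secret, counter, digits=6):
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(secret, for_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238): counter = current 30s window."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)
```

Real authenticator apps add a few conveniences on top of this core: the secret is provisioned as a base32 string (usually via QR code), and servers typically accept codes from a window of adjacent time steps to absorb clock drift.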
If anyone’s interested, Sophos’s Naked Security blog has a very readable breakdown of what’s new in the NIST guidelines. Among my favorite highlights is this broad directive: Favor the user.
“To begin with, make your password policies user friendly and put the burden on the verifier when possible,” Sophos’s Chester Wisniewski writes. “In other words, we need to stop asking users to do things that aren’t actually improving security.” Like expiring passwords and making users change them frequently, for example.
Okay, so the geeks-in-chief are saying it’s time to move away from texting as a form of 2-factor authentication. And, of course, they’re right, because text messages are a lot like email, in that it’s difficult to tell who really sent the message, and the message itself is sent in plain text — i.e., readable by anyone who happens to be lurking in the middle.
But security experts and many technology enthusiasts have a tendency to think that everyone should see the world through the lens of security, whereas most mere mortal users just want to get on with their lives and are perfectly content to use the same password across multiple sites — regardless of how many times they’re told not to do so.
Indeed, while many more companies now offer some form of two-factor authentication than did two or three years ago, consumer adoption of this core security feature remains seriously lacking. For example, the head of security at Dropbox recently told KrebsOnSecurity that less than one percent of its user base of 500 million registered users had chosen to turn on 2-factor authentication for their accounts. And Dropbox isn’t exactly a Johnny-come-lately to the 2-factor party: It has been offering 2-factor logins for a full four years now.
I doubt Dropbox is somehow an aberration in this regard, and it seems likely that other services also suffer from single-digit two-factor adoption rates. But if more consumers haven’t enabled two-factor options, it’s probably because a) it’s still optional and b) it still demands too much caring and understanding from the user about what’s going on and how these security systems can be subverted.
Google recently went a step further along the lines of where I’d like to see two-factor headed across the board, by debuting a new “push” authentication system that generates a prompt on the user’s mobile device that users need to tap to approve login requests. This is very similar to another push-based two-factor system I’ve long used and trusted — from Duo Security [full disclosure: Duo is an advertiser on this site].
For a comprehensive breakdown of which online services offer two-factor authentication and of what type, check out twofactorauth.org. And bear in mind that even if text-based authentication is all that’s offered, that’s still better than nothing. What’s more, it’s still probably more security than the majority of the planet has protecting their accounts.
I’d just finished parking my car in the covered garage at Reagan National Airport just across the river from Washington, D.C. when I noticed a dark green minivan slowly creeping through the row behind me. The vehicle caught my attention because its driver didn’t appear to be looking for an open spot. What’s more, the van had what looked like two cameras perched atop its roof — one on each side, both pointed down and slightly off to the side.
I had a few hours before my flight boarded, so I delayed my walk to the terminal and cut through several rows of cars to snag a video of the guy moving haltingly through another line of cars. I approached the driver and asked what he was doing. He smiled and tilted the lid on his bolted-down laptop so that I could see the pictures he was taking with the mounted cameras: He was photographing every license plate in the garage (for the record, his plate was a Virginia tag number 36-646L).
A van at Reagan National Airport equipped with automated license plate readers fixed to the roof.
The man said he was hired by the airport to keep track of the precise location of every car in the lot, explaining that the data is most often used by the airport when passengers returning from a trip forget where they parked their vehicles. I checked with the Metropolitan Washington Airports Authority (MWAA), which manages the garage, and they confirmed the license plate imaging service was handled by a third-party firm called HUB Parking.
I’m accustomed to having my license plate photographed when entering a parking area (Dulles International Airport in Virginia does this), but until that encounter at Reagan National I never considered that this was done manually.
“Reagan National uses this service to assist customers in finding their lost vehicles,” said MWAA spokesperson Kimberly Gibbs. “If the customer remembers their license plate it can be entered into the system to determine what garages and on what aisle their vehicle is parked.”
What does HUB Parking do with the information its clients collect? Ilaria Riva, marketing manager for HUB Parking, says the company does not sell or share the data it collects, and that it is up to the client to decide how that information is stored or shared.
“It is true the solution that HUB provides to our clients may collect data, but HUB does not own the data nor do we have any control over what the customer does with it,” Riva said.
Gibbs said MWAA does not share parking information with outside organizations. But make no mistake: the technology used at Reagan National Airport, known as automated license plate reader or ALPR systems, is already widely deployed by municipalities, police forces and private companies — particularly those in the business of repossessing vehicles from deadbeat owners who don’t pay their bills.
It’s true that people have zero expectation of privacy in public places — and roads and parking garages certainly are public places for the most part. But according to the Electronic Frontier Foundation (EFF), the data collected by ALPR systems can be very revealing, and in many cities ALPR technology is rapidly outpacing the law.
“By matching your car to a particular time, date and location, and then building a database of that information over time, law enforcement can learn where you work and live, what doctor you go to, which religious services you attend, and who your friends are,” the EFF warns.
A 2014 ABC News investigation in Los Angeles found the technology broadly in use by everyone from the local police to repo men. The story notes that there are few or no restrictions on what private companies that collect time- and location-stamped license plate data can do with the information. As a result, they are selling it to insurers, banks, law enforcement and federal agencies.
In Texas, the EFF highlights how state and local law enforcement agencies have free access to ALPR equipment and license plate data maintained by a private company called Vigilant Solutions. In exchange, police cruisers are retrofitted with credit-card machines so that law enforcement officers can take payments for delinquent fines and other charges on the spot — with a 25 percent processing fee tacked on that goes straight to Vigilant. In essence, the driver is paying Vigilant to provide the local cops with the technology used to identify and detain the driver.
“The ‘warrant redemption’ program works like this,” the EFF wrote. “The agency is given no-cost license plate readers as well as free access to LEARN-NVLS, the ALPR data system Vigilant says contains more than 2.8-billion plate scans and is growing by more than 70-million scans a month. This also includes a wide variety of analytical and predictive software tools. Also, the agency is merely licensing the technology; Vigilant can take it back at any time.”
That’s right: Even if the contract between the state and Vigilant ends, the latter gets to keep all of the license plate data collected by the agency, and potentially sell or license the information to other governments or use it for other purposes.
I wanted to write this story not because it’s particularly newsy, but because I was curious about a single event and ended up learning a great deal that I didn’t already know about how pervasive this technology has become.
Yes, we need more transparency about what companies and governments are doing with information collected in public. But here’s the naked truth: None of us should harbor any illusions about maintaining the privacy of our location at any given moment — particularly in public spaces.
As it happens, location privacy is an expensive and difficult goal for most Americans to attain and maintain. Our mobile phones are constantly pinging cell towers, making it simple for mobile providers and law enforcement agencies to get a fix on your location to within a few dozen meters.
Obscuring the address of your residence is even harder. If you’ve ever had a mortgage on your home or secured utilities for your residence using your own name, chances are excellent that your name and address are in thousands of databases, and can be found with a free or inexpensive public records search online.
Increasingly, location privacy is the exclusive purview of two groups of Americans: those who are indigent and/or homeless, and those who are wealthy. Only the well-off can afford the substantial costs and many petty inconveniences associated with separating one’s name from one’s address, vehicle, phone records and other modern niceties that make one easy to track and find.
United Airlines has rolled out a series of updates to its Web site that the company claims will help beef up the security of customer accounts. But at first glance, the core changes — moving from 4-digit PINs to passwords and requiring customers to pick five different security questions and answers — may seem like a security playbook copied from Yahoo.com, circa 2009. Here’s a closer look at what’s changed in how United authenticates customers, and hopefully a bit of insight into what the nation’s fourth-largest airline is trying to accomplish with its new system.
United, like many other carriers, has long relied on a frequent flyer account number and a 4-digit personal identification number (PIN) for authenticating customers at its Web site. This has left customer accounts ripe for takeover by crooks who specialize in hacking and draining loyalty accounts for cash.
Earlier this year, however, United began debuting new authentication systems wherein customers are asked to pick a strong password and to choose from five sets of security questions and pre-selected answers. Customers may be asked to provide the answers to two of these questions if they are logging in from a device United has never seen associated with that account, trying to reset a password, or interacting with United via phone.
Some of the questions and answers United came up with.
Yes, you read that right: The answers are pre-selected as well as the questions. For example, to the question “During what month did you first meet your spouse or significant other,” users may select only from one of…you guessed it — 12 answers (January through December).
The list of answers to another security question, “What’s your favorite pizza topping,” had me momentarily thinking I was using a pull-down menu at Dominos.com — waffling between “pepperoni” and “mashed potato.” (Fun fact: If you were previously unaware that mashed potatoes qualify as an actual pizza topping, United has you covered with an answer to this bit of trivia on its Frequently Asked Questions page about the security changes.)
I recorded a short video of some of these rather unique questions and answers.
United said it opted for pre-defined questions and answers because the company has found “the majority of security issues our customers face can be traced to computer viruses that record typing, and using predefined answers protects against this type of intrusion.”
This struck me as a dramatic oversimplification of the threat. I asked United why they stated this, given that any halfway decent piece of malware that is capable of keylogging is likely also doing what’s known as “form grabbing” — essentially snatching data submitted in forms — regardless of whether the victim types in this information or selects it from a pull-down menu.
Benjamin Vaughn, director of IT security intelligence at United, said the company was randomizing the questions to confound bot programs that seek to automate the submission of answers, and that security questions answered wrongly would be “locked” and not asked again. He added that multiple unsuccessful attempts at answering these questions could result in an account being locked, necessitating a call to customer service.
United said it plans to use these same questions and answers — no longer passwords or PINs — to authenticate those who call in to the company’s customer service hotline. When I went to step through United’s new security system, I discovered my account was locked for some reason. A call to United customer service unlocked it in less than two minutes. All the agent asked me for was my frequent flyer number and my name.
(Incidentally, United still somewhat relies on “security through obscurity” to protect the secrecy of customer usernames by very seldom communicating the full frequent flyer number in written and digital communications with customers. I first pointed this out in my story about the data that can be gleaned from a United boarding pass barcode, because while the full frequent flyer number is masked with “x’s” on the boarding pass, the full number is stored on the pass’s barcode).
Conventional wisdom dictates that what little additional value security questions add to the equation is nullified when the user is required to choose from a set of pre-selected answers. After all, the only sane and secure way to use secret questions, if one must, is to pick answers that are not only incorrect and/or irrelevant to the question, but that also can’t be guessed or gleaned by collecting facts about you from background-checking sites or from your various social media presences online.
Google published some fascinating research last year that spoke to the efficacy and challenges of secret questions and answers, concluding that they are “neither secure nor reliable enough to be used as a standalone account recovery mechanism.”
Overall, the Google research team found the security answers are either somewhat secure or easy to remember—but rarely both. Put another way, easy answers aren’t secure, and hard answers aren’t as useable.
But wait, you say: United asks you to answer up to five security questions. So more security questions equals more layers for the bad guys to hack through, which equals more security, right? Well, not so fast, the Google security folks found.
“When users had to answer both together, the spread between the security and usability of secret questions becomes increasingly stark,” the researchers wrote. “The probability that an attacker could get both answers in ten guesses is 1%, but users will recall both answers only 59% of the time. Piling on more secret questions makes it more difficult for users to recover their accounts and is not a good solution, as a result.”
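The arithmetic behind those figures is worth making explicit: with independent questions, both the attacker’s guessing odds and the user’s recall odds multiply, and they shrink at very different rates. Here is a rough sketch in Python, where the per-question rates are assumptions chosen to reproduce Google’s headline numbers, not values taken from the study itself.

```python
# Back-of-the-envelope sketch of why stacking security questions hurts
# legitimate users more than it hurts attackers. The per-question rates
# below are illustrative assumptions, not figures from Google's study.
p_guess_one = 0.10    # assumed chance an attacker guesses one answer in 10 tries
p_recall_one = 0.77   # assumed chance a user correctly recalls one answer

# Treating the two questions as independent, the probabilities multiply.
p_guess_both = p_guess_one ** 2
p_recall_both = p_recall_one ** 2

print(f"attacker gets both answers: {p_guess_both:.0%}")   # ~1%
print(f"user recalls both answers:  {p_recall_both:.0%}")  # ~59%
```

The asymmetry is the point: squaring a small guessing probability makes it tiny, but squaring an already-imperfect recall rate locks out a large share of honest users.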
Vaughn said the beauty of United’s approach is that it uniquely addresses the problem identified by Google researchers — that so many people in the study had so much trouble remembering the answers — by providing users with a set of pre-selected answers from which to choose.
An infographic from Google’s research study on secret questions. Source: Google.
The security team at United reached out a few weeks back to highlight the new security changes, and in a conversation this week they asked what I thought about their plan. I replied that if United is getting pushback from security experts and tech publications about its approach, that’s probably because security people are techies/nerds at heart, and techies/nerds want options and stuff. Or at least the ability to add, enable or disable certain security features.
But the reality today is that almost any security system designed for use by tens of millions of people who aren’t techies is always going to cater to the least sophisticated user on the planet — and that’s about where the level of security for that system is bound to stay for a while.
So I told the United people that I was somewhat despondent about this reality, mainly because I end up having little choice but to fly United quite often.
“At the scale that United faces, we felt this approach was really optimal to fix this problem for our customers,” Vaughn said. “We have to start with something that is universally available to our customers. We can’t send a text message to you when you’re on an airplane or out of the country, we can’t rely on all of our customers to have a smart phone, and we didn’t feel it would be a great use of our customers’ time to send them in the mail 93 million secure ID tokens. We felt a powerful onus to do something, and the something we implemented we feel improves security greatly, especially for non-technical savvy customers.”
Arlan McMillan, United’s chief information security officer, said the basic system that the company has just rolled out is built to accommodate additional security features going forward. McMillan said United has discussed rolling out some type of app-based time-based one-time password (TOTP) systems (Google Authenticator is one popular TOTP example).
“It is our intent to provide additional capabilities to our customers, and to even bring in additional security controls if [customers] choose to,” McMillan said. “We set the minimum bar here, and we think that’s a higher bar than you’re going to find at most of our competitors. And we’re going to do more, but we had to get this far first.”
Lest anyone accuse me of claiming that the thrust of this story is somehow newsy, allow me to recommend some related, earlier stories worth reading about United’s security changes:
Identity thieves have perfected a scam in which they impersonate existing customers at retail mobile phone stores, pay a small cash deposit on pricey new phones, and then charge the rest to the victim’s account. In most cases, switching on the new phones causes the victim account owner’s phone(s) to go dead. This is the story of a Pennsylvania man who allegedly died of a heart attack because his wife’s phone was switched off by ID thieves and she was temporarily unable to call for help.
On Feb. 20, 2016, James William Schwartz, 84, was going about his daily routine, which mainly consisted of caring for his wife, MaryLou. Mrs. Schwartz was suffering from the end stages of endometrial cancer and wasn’t physically mobile without assistance. When Mr. Schwartz began having a heart attack that day, MaryLou went to use her phone to call for help and discovered it was completely shut off.
Little did MaryLou know, but identity thieves had the day before entered a “premium authorized Verizon dealer” store in Florida and impersonated the Schwartzes. The thieves paid a $150 cash deposit to “upgrade” the elderly couple’s simple mobiles to new iPhone 6s devices, with the balance to be placed on the Schwartzes’ account.
“Despite her severely disabled and elderly condition, MaryLou Schwartz was finally able to retrieve her husband’s cellular telephone using a mechanical arm,” reads a lawsuit (PDF) filed in Beaver County, Penn. on behalf of the Schwartzes’ two daughters, alleging negligence by the Florida mobile phone store. “This monumental, determined and desperate endeavor to reach her husband’s working telephone took Mrs. Schwartz approximately forty minutes to achieve due to her condition. This vital delay in reaching emergency help proved to be fatal.”
By the time paramedics arrived, Mr. Schwartz was pronounced dead. MaryLou Schwartz died seventeen days later, on March 8, 2016. Incredibly, identity thieves would continue robbing the Schwartzes even after they were both deceased: According to the lawsuit, on April 14, 2016 the account of MaryLou Schwartz was again compromised and a tablet device was also fraudulently acquired in MaryLou’s name.
The Schwartzes’ daughters say they didn’t learn about the fraud until after both parents had passed away. According to them, they heard about it from an employee at a local Verizon reseller who noticed that his longtime customers’ phones had been deactivated. That’s when they discovered that while their mother’s phone was inactive at the time of their father’s death, their father’s mobile had inexplicably been able to make but not receive phone calls.
Exactly what sort of identification was demanded of the thieves who impersonated the Schwartzes is in dispute at the moment. But it seems clear that this is a fairly successful and common scheme for thieves to steal (and, in all likelihood, resell) high-end phones.
Lorrie Cranor, chief technologist for the U.S. Federal Trade Commission, was similarly victimized this summer when someone walked into a mobile phone store, claimed to be her, asked to upgrade her phones and walked out with two brand new iPhones assigned to her telephone numbers.
“My phones immediately stopped receiving calls, and I was left with a large bill and the anxiety and fear of financial injury that spring from identity theft,” Cranor wrote in a blog on the FTC’s site. Cranor’s post is worth a read, as she uses the opportunity to explain how she recovered from the identity theft episode.
She also used her rights under the Fair Credit Reporting Act, which requires that companies provide business records related to identity theft to victims within 30 days of receiving a written request. Cranor said the mobile store took about twice that time to reply, but ultimately explained that the thief had used a fake ID with Cranor’s name but the impostor’s photo.
“She had acquired the iPhones at a retail store in Ohio, hundreds of miles from where I live, and charged them to my account on an installment plan,” Cranor wrote. “It appears she did not actually make use of either phone, suggesting her intention was to sell them for a quick profit. As far as I’m aware the thief has not been caught and could be targeting others with this crime.”
Cranor notes that records of identity thefts reported to the FTC provide some insight into how often thieves hijack a mobile phone account or open a new mobile phone account in a victim’s name.
“In January 2013, there were 1,038 incidents of these types of identity theft reported, representing 3.2% of all identity theft incidents reported to the FTC that month,” she explained. “By January 2016, that number had increased to 2,658 such incidents, representing 6.3% of all identity thefts reported to the FTC that month. Such thefts involved all four of the major mobile carriers.”
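Those percentages also let us back out roughly how many identity theft reports the FTC received in total in each of those months, which puts the growth of phone-account hijacking in context. A quick sketch in Python (the results are approximate, since 3.2% and 6.3% are themselves rounded figures):

```python
# Back out approximate monthly identity-theft report totals from the
# counts and percentage shares Cranor cites. Rough estimates only,
# since the reported percentages are rounded.
jan_2013_total = 1038 / 0.032   # phone-account thefts were 3.2% of all reports
jan_2016_total = 2658 / 0.063   # and 6.3% of all reports three years later

print(round(jan_2013_total))  # roughly 32,000 reports in Jan. 2013
print(round(jan_2016_total))  # roughly 42,000 reports in Jan. 2016
```

In other words, while total identity theft reports grew by about a third over those three years, phone-account hijacking reports more than doubled in the same window.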
The reality, Cranor said, is that identity theft reports to the FTC likely represent only the tip of a much larger iceberg. According to data from the Identity Theft Supplement to the 2014 National Crime Victimization Survey conducted by the U.S. Department of Justice, less than 1% of identity theft victims reported the theft to the FTC.
While dealing with diverted calls can be a hassle, having your phone calls and incoming text messages siphoned to another phone also can present new security problems, thanks to the growing use of text messages in authentication schemes for financial services and other accounts.
Perhaps the most helpful part of Cranor’s post is a section on the security options offered by the four major mobile providers in the U.S. For example, AT&T offers an “extra security” feature that requires customers to present a custom passcode when dealing with the wireless provider via phone or online.
“All of the carriers have slightly different procedures but seem to suffer from the same problem, which is that they’re relying on retail store employees to look at the driver’s license,” Cranor told KrebsOnSecurity. “They don’t use services that will check the information on the driver’s license, and so that [falls to] the store employee, who has no training in spotting fake IDs.”
Some of the security options offered by the four major providers. Source: FTC.
It’s important to note that secret passcodes often can be bypassed by determined attackers or identity thieves who are adept at social engineering — that is, tricking people into helping them commit fraud.
I’ve used a six-digit passcode for more than two years on my account with AT&T, and last summer noticed that I’d stopped receiving voicemails. A call to AT&T’s customer service revealed that all voicemails were being forwarded to a number in Seattle that I did not recognize or authorize.
Since it’s unlikely that the attackers in this case guessed my six-digit PIN, they likely tricked a customer service representative at AT&T into “authenticating” me via other methods — probably by offering static data points about me such as my Social Security number, date of birth, and other information that is widely available for sale in the cybercrime underground on virtually all Americans over the age of 35. In any case, Cranor’s post has inspired me to exercise my rights under the FCRA and find out for certain.
Vineetha Paruchuri, a master’s student in computer science at Dartmouth College, recently gave a talk at the BSides security conference in Las Vegas on her research into security at the major U.S. mobile phone providers. Paruchuri said all of the major mobile providers suffer from a lack of strict protocols for authenticating customers, leaving customer service personnel exposed to social engineering.
“As a computer science student, my contention was that if we take away the control from the humans, we can actually make this process more secure,” Paruchuri said.
Paruchuri said perhaps the most dangerous threat is the smooth-talking social engineer who spends time collecting information about the verbal shorthand or mobile industry patois used by employees at these companies. The thief then simply phones up customer support and poses as a mobile store technician or employee trying to assist a customer. This was the exact approach used in 2014, when young hooligans tricked my then-ISP Cox Communications into resetting the password for my Cox email account.
I suppose one aspect of this problem that makes the lack of strong customer authentication measures by the mobile industry so frustrating is that it’s hard to imagine a device which holds more personal and intimate details about you than your wireless phone. After all, your phone likely knows where you were last night, when you last traveled, the phone number you last called and numbers you most frequently text.
And yet, the best the mobile providers and their fleet of reseller stores can do to tell you apart from an ID thief is to store a PIN that could be bypassed by clever social engineers (who may or may not be shaving yet).
By the way, readers with AT&T phones may have received a notice this week that AT&T is making some changes to “authorized users” allowed on accounts. The notice advised that starting Sept. 1, 2016, customers can designate up to 10 authorized users per account.
“If your Authorized User does not know your account passcode or extra security passcode, your Authorized User may still access your account in a retail store using a Forgotten Passcode process. Effective Nov. 5, 2016, Authorized Users and those persons who call into Customer Service and provide sufficient account information (“Authenticated Callers”) Will have the ability to add a new line of service to your account. Such requests, whether made by you, an Authorized User, an Authenticated Caller or someone with online access to your account, will trigger a credit check on you.”
AT&T’s message this week about upcoming account changes.
I asked AT&T about what need this new policy was designed to address, and the company responded that AT&T has made no changes to how an authorized user can be added to an account. AT&T spokesman Jim Greer sent me the following:
“With this notice, we are simply increasing the number of authorized users you may add to your account and giving them the ability to add a line in stores or over the phone. We made this change since more customers have multiple lines for multiple people. Authorized users still cannot access the account holder’s sensitive personal information.”
“Over the past several years, the authentication process has been strengthened. In stores, we’re safeguarding customers through driver’s license or other government issued ID authentication. We use a two-factor authentication when you contact us online or by phone that requires a one-time PIN. We’re continuing our efforts to better protect customers, with additional improvements on the horizon.”
“You don’t have to designate anyone to become an authorized user on your account. You will be notified if any significant changes are made to your account by an authorized user, and you can remove any person as an authorized user at any time.”
The rub is what AT&T, or more specifically the AT&T customer representative, does to verify your identity when a caller claims not to remember his PIN or passcode. If the representative falls back on asking for your Social Security number, date of birth and other static information about you, ID thieves can defeat that check easily, because those facts never change and are widely traded after data breaches.
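The difference between the two approaches is worth making concrete. A static fact like a Social Security number can be replayed forever once stolen, while a random one-time PIN expires quickly and is useless a second time. The sketch below is purely illustrative (the function names, six-digit format, and five-minute window are my assumptions, not AT&T's actual system):

```python
import hmac
import secrets
import time

# Illustrative sketch only -- not AT&T's real implementation.
# Shows why a short-lived one-time PIN resists replay while static
# data (SSN, date of birth) can be reused by a thief indefinitely.

PIN_TTL_SECONDS = 300  # assumed 5-minute validity window


def issue_pin():
    """Generate a random 6-digit PIN and record when it was issued."""
    pin = f"{secrets.randbelow(1_000_000):06d}"
    return pin, time.time()


def verify_pin(submitted, issued_pin, issued_at, now=None):
    """Accept the PIN only if it matches and has not expired."""
    now = time.time() if now is None else now
    if now - issued_at > PIN_TTL_SECONDS:
        return False  # expired: a stolen PIN goes stale quickly
    # constant-time comparison avoids leaking digits via timing
    return hmac.compare_digest(submitted, issued_pin)


pin, issued_at = issue_pin()
print(verify_pin(pin, pin, issued_at))                       # fresh PIN: True
print(verify_pin(pin, pin, issued_at, issued_at + 600))      # replayed later: False
```

The same logic explains the Gmail scam described above: the crooks needed the victim to hand over the code within its validity window, which is why the attack hinged on timing.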
Has someone fraudulently ordered phone service or phones in your name? Sound off in the comments below.
If you’re wondering what you can do to shield yourself and your family against identity theft, check out these primers:
How I Learned to Stop Worrying and Embrace the Security Freeze (this primer goes well beyond security freezes and includes a detailed Q&A as well as other tips to help prevent and recover from ID theft).
I heard stories this week about dung beetles and cuttlefish. Both made me think about the typical stories we hear in the media about evolved human mating strategies. First, the stories:
Story #1: The Dung Beetle
A story on Quirks and Quarks discussed the mating strategies of the dung beetle. The picture above is of a male beetle; only the males have those giant horns. He uses it to defend the entrance to a tiny burrow in which he keeps a female. He’ll violently fight off other dung beetles who try to get access to the burrow.
So far this sounds like the typical story of competitive mating that we hear all the time about all kinds of animals, right?
There’s a twist: while only male dung beetles have horns, not all males have horns. Some are completely hornless. But if horns help you win the fight, how is hornlessness being passed down genetically?
Well, it turns out that when a big ol’ horned male is fighting with some other big ol’ horned male, little hornless males sneak into burrows and mate with the females. They get discovered and booted out, of course, and the horned male will re-mate with the female in the hope of displacing the sneaky male’s sperm.
Those little hornless males have giant testicles, way gianter than the horned males’. While the horned males are putting all of their energy into growing horns, the hornless males are making sperm. So, even though they have limited access to females, they get as much mileage out of that access as they can.
The result: two distinct types of male dung beetles with two distinct mating strategies.
Story #2: The Giant Australian Cuttlefish
The Naked Scientists podcast featured a story about Giant Australian Cuttlefish. During mating season the male cuttlefish, much larger than the females, collect “harems” and spend their time mating and defending access. Other males try to “muscle in,” but the bigger cuttlefish “throws his weight around” to scare him off. The biggest cuttlefish wins.
So far this sounds like the typical story of competitive mating that we hear all the time about all kinds of animals, right?
Well, according to The Naked Scientists story, researchers have discovered an alternative mating strategy. Small males, who are far too small to compete with large males, will pretend to be female, sneak into the defended territory, mate, and leave.
How do they do this? They change their color pattern and rearrange their tentacles in a more typical female arrangement (they didn’t specify what this was) and, well, pass. The large male thinks he’s another female. In the video below, the cuttlefish uses his ability to change the pattern on his body. He simultaneously displays a male pattern to the female and a female pattern to the large male on the other side.
So, can the crossdressing cuttlefish and dodge-y dung beetle tell us anything about evolved human mating strategies?
I do think these stories tell us something about how we should think about evolution and the reproduction of genes. If you listen to the media cover evolutionary psychological explanations of human mating, you only hear one story about the strategies that males use to try to get sex. That story sounds a lot like the one told about the horned beetle and the large male cuttlefish.
But these species have demonstrated that there need not be only one mating strategy. In these cases, there are at least two. So, why in Darwin’s name would we assume that human beings, in all of their beautiful and incredible complexity, would only have one? Perhaps we see a diversity in types of human males (different body shapes and sizes, different intellectual gifts, etc) because there are many different ways to attract females. Maybe females see something valuable in many different kinds of males! Maybe not all females are the same!
Let’s set aside the stereotypes about men and women that media reporting on evolutionary psychology tends to reproduce and, instead, consider the possibility that human mating is at least as complex as that of dung beetles and cuttlefish.
Originally posted in 2010.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.
Signaling white supremacy.
On the heels of the Republican national convention, the notorious KKK leader David Duke announced his campaign for the Louisiana Senate. On his social media pages, he released a campaign poster featuring a young white woman with blonde hair and blue eyes wearing a gray tank top decorated with American flag imagery. She is beautiful and young, exuding innocence. Atop the image, the text reads “fight for Western civilization,” and the poster includes David Duke’s website and logo. It does not appear that she consented to being on the poster.
When I came upon the image, I was immediately reminded of pro-Nazi propaganda that I had seen in a museum in Germany, especially those depicting “Hitler youth.” Many of those posters featured fresh white faces, looking healthy and clean, in stark contrast to the distorted, darkened, bloated, and snarling faces of the targets of the Nazi regime.
It’s a different era, but the implied message of Duke’s poster is the same: the nationalist message alongside the idealized figure. So it wasn’t difficult to find a Nazi propaganda poster that drew the comparison. I tweeted it out like this:
— Lisa Wade, PhD (@lisawade) July 23, 2016
Given that David Duke is an avowed racist running on a platform to save “Western” civilization, it didn’t seem like that much of a stretch.
Provoking racist backlash.
I hashtagged it with #davidduke and #americafirst, so I can’t say I didn’t invite it, but the backlash was greater than any I have ever received. The day after the tweet, I easily got one tweet per minute, on average.
What I found fascinating was the range of responses. I was told I looked just like her — beautiful, blue-eyed, and white — was asked if I hated myself, accused of being a race traitor, and invited to join the movement against “white genocide.” I was also told that I was just jealous: comparatively hideous thanks to my age and weight. Trolls took shots at sociology, intellectuals, and my own intelligence. I was asked if I was Jewish, accused of being so, and told to put my head in an oven. I was sent false statistics about black crime. I was also, oddly, accused of being a Nazi myself. Others, like Kate Harding, Philip Cohen, and even Leslie Jones, were roped in.
Here is a sampling (super trigger warning for all kinds of hatefulness):
It’s not news that Twitter is full of trolls. It’s not news that there are proud white supremacists and neo-Nazis in America. It’s not news that women online get told they’re ugly or fat on the reg. It’s not news that I’m a (proud) cat lady either, for what it’s worth. But I think transparency is our best bet to get people to acknowledge the ongoing racism, antisemitism, sexism, and anti-intellectualism in our society. So, there you have it.