Shared posts

18 Mar 14:37

Death Metal Music Inspires Joy Not Violence, Study Finds

by BeauHD
An anonymous reader quotes a report from the BBC: I've had one desire since I was born; to see my body ripped and torn. The lyrics of death metal band Bloodbath's cannibalism-themed track, Eaten, do not leave much to the imagination. But neither this song -- nor the gruesome lyrics of others of the genre -- inspire violence. That is the conclusion of Macquarie University's music lab, which used the track in a psychological test. It revealed that death metal fans are not "desensitized" to violent imagery. The findings are published in the Royal Society journal Open Science. How do scientists test people's sensitivity to violence? With a classic psychological experiment that probes people's subconscious responses; and by recruiting death metal fans to take part. The test involved asking 32 fans and 48 non-fans to listen to death metal or to pop whilst looking at some pretty unpleasant images. Lead researcher Yanan Sun explained that the aim of the experiment was to measure how much participants' brains noticed violent scenes, and to compare how their sensitivity was affected by the musical accompaniment. To test the impact of different types of music, they also used a track they deemed to be the opposite of Eaten. "We used 'Happy' by Pharrell Williams as a [comparison]," said Dr Sun. Each participant was played Happy or Eaten through headphones, while they were shown a pair of images -- one to each eye. One image showed a violent scene, such as someone being attacked in a street. The other showed something innocuous -- a group of people walking down that same street, for example. "If fans of violent music were desensitized to violence, which is what a lot of parent groups, religious groups and censorship boards are worried about, then they wouldn't show this same bias. "But the fans showed the very same bias towards processing these violent images as those who were not fans of this music."

Read more of this story at Slashdot.

18 Mar 13:41

Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical

by BeauHD
An anonymous reader quotes a report from MIT Technology Review: Algorithms are increasingly being used to make ethical decisions. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers' lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like "freedom" and "well-being," a satisfactory mathematical solution doesn't always exist. "We as humans want multiple incompatible things," says Peter Eckersley, the director of research for the Partnership on AI, who recently released a paper that explores this issue. "There are many high-stakes situations where it's actually inappropriate -- perhaps dangerous -- to program in a single objective function that tries to describe your ethics." These solutionless dilemmas aren't specific to algorithms. Ethicists have studied them for decades and refer to them as impossibility theorems. So when Eckersley first recognized their applications to artificial intelligence, he borrowed an idea directly from the field of ethics to propose a solution: what if we built uncertainty into our algorithms? Eckersley puts forth two possible techniques to express this idea mathematically. He begins with the premise that algorithms are typically programmed with clear rules about human preferences. We'd have to tell an algorithm, for example, that we definitely prefer friendly soldiers over friendly civilians, and friendly civilians over enemy soldiers -- even if we weren't actually sure or didn't think that should always be the case. The algorithm's design leaves little room for uncertainty. The first technique, known as partial ordering, begins to introduce just the slightest bit of uncertainty. You could program the algorithm to prefer friendly soldiers over enemy soldiers and friendly civilians over enemy soldiers, but you wouldn't specify a preference between friendly soldiers and friendly civilians.
In the second technique, known as uncertain ordering, you have several lists of absolute preferences, but each one has a probability attached to it. Three-quarters of the time you might prefer friendly soldiers over friendly civilians over enemy soldiers. A quarter of the time you might prefer friendly civilians over friendly soldiers over enemy soldiers. The algorithm could handle this uncertainty by computing multiple solutions and then giving humans a menu of options with their associated trade-offs, Eckersley says.
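The two techniques can be sketched in a few lines of Python. This is a minimal illustration, not code from Eckersley's paper; all names are hypothetical, and the "menu" here reports each option's probability of being chosen rather than a full trade-off analysis.

```python
# Partial ordering: encode only the comparisons we are sure about.
# Friendly soldiers and friendly civilians each beat enemy soldiers,
# but no preference is stated between the two friendly groups.
PARTIAL_ORDER = {
    ("friendly_soldier", "enemy_soldier"),
    ("friendly_civilian", "enemy_soldier"),
}

def prefers(a, b):
    """True/False if the partial order decides, None if no preference is stated."""
    if (a, b) in PARTIAL_ORDER:
        return True
    if (b, a) in PARTIAL_ORDER:
        return False
    return None  # genuine uncertainty -- the algorithm must not guess

# Uncertain ordering: several complete preference lists, each with a probability.
UNCERTAIN_ORDERS = [
    (0.75, ["friendly_soldier", "friendly_civilian", "enemy_soldier"]),
    (0.25, ["friendly_civilian", "friendly_soldier", "enemy_soldier"]),
]

def best_option(options, order):
    """Pick the option ranked highest under one complete ordering."""
    rank = {group: i for i, group in enumerate(order)}
    return min(options, key=lambda opt: rank[opt])

def menu(options):
    """Compute a solution per plausible ordering and return a menu of
    options weighted by how likely each one is to be preferred."""
    weights = {}
    for p, order in UNCERTAIN_ORDERS:
        choice = best_option(options, order)
        weights[choice] = weights.get(choice, 0.0) + p
    return weights

print(prefers("friendly_soldier", "friendly_civilian"))  # None: left to humans
print(menu(["friendly_soldier", "friendly_civilian"]))
# {'friendly_soldier': 0.75, 'friendly_civilian': 0.25}
```

The point of both sketches is the same: where the original single-objective design would silently impose an answer, these representations surface the ambiguity and hand the final call back to people.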
