For five decades, WUSF has provided an outlet for growth and experience for students, as well as informative and educational programming for its viewers. On Sunday at 11:59:59 p.m., the station will go dark, according to WUSF-TV General Manager JoAnn Urofsky.
Kim Thurman is a former student who interned at WUSF during her time at USF.
“My last semester of college I interned at WUSF,” Thurman said. “I did a little bit of everything. I was in radio for a couple of weeks and in TV production for a number of weeks as well.”
Thurman was able to take the knowledge and experience she gained during her time as an intern and transition that into a position as an employee of WUSF-TV.
“When I was graduating, they offered me a position as a part-time employee,” Thurman said. “It took a little while, probably around six months, until I was hired part-time in June as the assistant editor in TV. So I prepped everything before the actual editor got to it and made sure that he had everything that he needed.”
Thurman is not the only person to have interned at WUSF before beginning a career in journalism. Larry Goodman interned before his graduation in 1967. He said he considers his time interning and taking courses with WUSF to have been some of the most valuable moments of his academic career.
“I recall a TV Broadcasting class as a student, around 1967, in which we practiced giving news reports as well as a technical class in lighting,” Goodman said. “It will be sad for the students who never had these valuable opportunities and sad to no longer be able to broadcast ‘University Beat’ and interesting news on the latest University research, athletic advancement, extraordinary students.”
Urofsky said internship opportunities for students will remain plentiful, even without an active TV studio and station.
“Internships will not be limited moving forward,” Urofsky said. “In fact, we have more interns than ever this semester and are planning for even more to join us next semester.
“So, we are pretty enthusiastic about all of the opportunities that we are going to have for students. Even radio is not just audio anymore, it incorporates videos now too. We will need video for our websites and reports, so we are still looking for students who want video experience.”
As far as what the closure means for employees of WUSF-TV, Urofsky said some employees, such as herself, will remain on in other capacities within the station, but others are being offered training in resume building and interviewing skills for their future endeavors.
“Employees have been transitioning out on their own and we did have some layoffs earlier,” Urofsky said. “Not all of the employees are being accommodated within the university, although the university has been very accommodating. They have been able to have access to career counseling, resume writing and interviewing skills.”
As for what her future entails, Urofsky, who has been in her position since 2002, said, “We have two radio stations and we have the studios, so we will be transitioning the WUSF TV production studios into ones for radio and other projects both within and outside of the university.”
Goodman said he wishes something different could have been done to salvage the TV station.
“Even if it is going to be used as a lab, that is great, but it had so much more to gain by staying open in so many ways,” Goodman said.
For employees of the TV station, such as Thurman, there is one very important message to convey to the TV station’s audience.
“It definitely is the end of an era,” Thurman said. “The programming that we had on our channel was special and I think our viewers know that, so my message to them would just be a giant thank you. It really was all about the viewers and we could not have done it for as long as we did without them.”
Following several delays, a new range of social domestic robots is expected to enter the market at the end of this year. They are no ordinary bots. Designed to provide companionship and care, they recognise faces and voices of close family and friends, play games, tell jokes and continue to learn from each interaction.
They also have one feature in common. They are intended to be extraordinarily cute and appealing.
Our research explores how cute design influences our relationship with home robots, including our perception of the risks associated with their unprecedented access to personal information.
The power of cuteness
Social home robots are designed to manage household tasks as well as relationships.
They are networked with the internet and smart home devices, and they use cameras and voice control to provide empathetic interactions. Researchers are exploring how their cute design counteracts the possible risks associated with such companions.
In a recent publication, New York University digital media scholar Luke Stark argues that the materiality of digital devices is tied to our feelings about them. The more “warmly” we feel about our devices, the more we lower our barriers to privacy.
The context for our feelings shapes what we intuit as appropriate in terms of our desire for privacy. Yet our affective and emotional connection to the hardware and interfaces of our devices is precisely what prompts us to be less conscious, and thus less uneasy, about what is the most critical element of information privacy: our device content, our own ‘small’ trails of data and metadata.
We are tapping into an emerging field of research called cute studies to argue that the cuteness of these robots is likely intended to push our Darwinian buttons (in Sherry Turkle’s words). Cute robots appear vulnerable, weak and in need of protection.
We believe that this not only encourages the consumer to adopt the robot as a member of the family, but also appears to establish a clear power dynamic between the robot and the human user. The robot occupies a position of lesser power.
We have studied several robots, including Jibo and Kuri, and a striking similarity is that they have all been deliberately stunted or enfeebled in some way. Some of them don’t have limbs, some have no speech capacity, and some can’t move. None of the new home robots can do everything, even though it would be entirely possible, from a design and engineering perspective, to make a robot that can.
The cute face of personal data
As helpful home robots, each needs help and input from their users. Their cute aesthetic becomes even more pronounced through their design-led incapacitations, which function as visible signs of powerlessness.
There are obvious reasons why roboticists would want their creations to appear less powerful than humans. After all, if popular culture has taught us anything, it has taught us to be wary of opening our homes to robots.
A number of concerns arise in relation to the emotional engineering of these new home robots. As they will have access to an unprecedented amount of personal data, they are clear targets for hacking. There is also little contractual information available about what consumers are signing up for with a home robot.
Concerns are also ethical and moral. What is the status of the home robot? Is it a pet, child, friend, lover, servant or slave, or does it represent the evolution of a new species?
Confronting robotic mortality
The appeal of cuteness (in commodity items, at least) is notoriously short lived. Cute commodities are well known to have instant consumer appeal, and some studies have linked cuteness to impulse buying.
Jonathan Chapman has described cuteness as a kind of flirtation with the consumer, and has likened the cute object to a film with an incredible opening line, which just continues to be repeated throughout.
This might very well pose some long-term problems for human-robotic interaction, which relies on the creation of a long-term, shared narrative between the home robot and the adopting family.
We know from Chapman’s research in the area of emotionally durable design that if objects want to create long-term emotional connections with consumers, they need to be more than simply cute.
They need to intrigue, and in order to intrigue they need to show some signs of otherness. This would require home robots to surpass their limitations established by their cute aesthetic, but this has the potential to destabilise the strict user-robot hierarchy.
A further complication arises in the relationship between the technology and the permanence of these home robots. As their networked “brains” and memories outlive their material bodies, a gap will likely emerge between the “immortal” nature of the data they have collected about each individual user or family, and the “mortality” of their material technologies.
Home robots are susceptible to the obsolescence of electronic goods and the fantasies of greater intimacy - both promises of newer social technologies.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.
Geneticists have now firmly established that roughly two percent of the DNA of all living non-African people comes from our Neanderthal cousins.
It’s difficult to imagine why our early ancestors would have mated with them. Neanderthals were a different species from us, after all, and the thought of it seems distasteful to us today.
Hindsight is a wonderful thing of course, and armed with so few facts about the circumstances surrounding this interspecies dalliance, we mustn’t be too quick to judge.
Still, scientists are learning a great deal now about how active this Neanderthal DNA is in our bodies and the role that it might be playing in determining how we look and behave as well as our susceptibility to certain diseases.
One of the very first features suggested as having a Neanderthal origin was red hair. A set of Neanderthal genes responsible for both light hair and skin colour was identified by geneticists more than a decade ago and linked to human survival in high-latitude, light-poor regions like Europe.
Because the Neanderthals had lived in Europe for several hundred thousand years, it was reasoned that natural selection gave them light skin and hair colour, helping to prevent diseases like rickets from occurring.
But as is so often the case in science, the situation is far more complicated than most of us would have imagined. Red hair wasn’t inherited from Neanderthals at all. It now turns out they didn’t even carry the gene for it!
Red hair is a uniquely human feature, according to a new study by Michael Dannemann and Janet Kelso of the Max Planck Institute for Evolutionary Anthropology, published in The American Journal of Human Genetics.
It’s striking and paradoxical that half of all the Neanderthal genes in our genome play a role in determining skin and hair colour. Yet this new research shows us that Neanderthal genes have no more influence over these features than the unique human genes we carry for them.
What does all of this mean? Well, over time, tens of thousands of years in fact, natural selection has produced a fine balance between Neanderthal and human genes for these features. We might think of lightly skinned and haired people today as having the best bits of both genomes for these traits.
Some of the other skin colour genes inherited from Neanderthals include one associated with both the ease with which people tan and the incidence of childhood sunburn.
Another surprise for me in this new study was the role that Neanderthal genes play in human sleep patterns, as determined by the body’s circadian rhythms. The natural cycles of night and day, and their length, which vary enormously with latitude and season, are strong influences over our circadian rhythms.
Dannemann and Kelso searched for a link between latitude and the prevalence of a Neanderthal form of a gene (ASB1) which plays a role in determining whether you are an ‘evening person’, and is associated with the need for daytime napping as well as being tied to narcolepsy.
It turns out that indeed non-African populations living far away from the equator today show a higher prevalence of ASB1 than people living close to it.
Human circadian rhythms are medically important because of the well known 24-hour variation in blood levels of glucose, insulin and leptin, which controls our appetite. Clock variability underpins short sleep episodes, sleep deprivation and poor quality sleep, which have all been associated with diabetes, metabolic syndrome, increased appetite, and even obesity.
Some of the other newly discovered Neanderthal genes in the human genome are linked to body height in adults as well as the stature reached by children at 10 years of age, pulse rate, and the distribution of fat in the legs.
Other Neanderthal genes apparently help determine our mood, as influenced by our exposure to sunlight, or even whether we like to eat pork or not.
It’s no longer such a novelty that our ancestors interbred with archaic humans like the Neanderthals. No more lame jokes from me about ‘shagging the ancestors’!
Their decision to mate with the Neanderthals, whatever the reason, continues to reverberate after tens of thousands of years. Neanderthal genes are playing a very real role today in influencing how we look, feel and behave, including even some commonly suffered diseases often linked to a Western lifestyle and diet.
All of this reinforces once again how remarkable and surprising our evolutionary history as a species truly is. And it brings into sharp relief the very real importance of our evolution for a proper understanding of many of the challenges humankind faces globally today.
Darren Curnoe receives funding from the Australian Research Council.
Rewilding and restoration of land often rely on the reintroduction of species. But what happens when what you want to reintroduce no longer exists? What if the animal in question is not only locally extinct, but gone for good?
Yes, this might sound like the plot of Jurassic Park. But in real life this is actually happening in the case of the Aurochs (Bos primigenius). This wild ancestor of all modern cattle has not been seen since the last individual died in 1627, in today’s Poland.
Aurochs have been deep within the human psyche for as long as there have been humans, as attested by their prominence in cave art. However, the advent of agriculture and domestication put the magnificent animal on a path to extinction.
So why bring the Aurochs back today and how? And what is the likely outcome?
What is left of Aurochs, besides their depiction in cave paintings, are some fossil remains and some descriptions in the historical record. “Their strength and speed are extraordinary,” wrote the Roman general Julius Caesar in Commentarii de bello Gallico.
Despite this animal’s formerly vast range (from the Fertile Crescent to the Iberian Peninsula, from Scandinavia to the Indian subcontinent), the historical record is quite slim on exact descriptions. In all likelihood its size, behaviour and general temperament will have varied across different environments. Despite this likely variation, the Aurochs has survived into modernity as the primordial ox: powerful and enormous.
The idea around today is that the Aurochs’ characteristics have survived, genetically scattered throughout its descendants. By breeding these together and selecting offspring that show increasingly more Aurochs-like traits, the theory is that we can eventually re-create something very similar to the lost animal. This theory is known as back-breeding: literally breeding backwards.
The first attempt to revive the Aurochs was made in the 1930s in Germany by two zoo directors, the brothers Lutz and Heinz Heck, with an undeniable Nazi party affiliation.
Their creation, now known as the Heck cattle, took only 12 years to accomplish and mixed breeds of domestic cattle with fighting bulls from Spain. The brothers focused more on size and aggression than on being faithful to the anatomical description of the Aurochs. This is partly why nobody today considers Heck cattle to be actual recreations of an extinct species, something reflected in the name these animals carry.
The Heck cattle made it through World War II and have since populated pastures and zoos throughout Europe. Though certainly not Aurochs, many find that they do the Aurochs’ job just fine. This is why the famous Oostvaardersplassen nature reserve in the Netherlands uses them as one of their primary grazers.
For most of the 20th century it was assumed that the landscape in Europe before human settlement was mostly forest. Frans Vera, a Dutch biologist, changed this inherited wisdom and proposed that the primeval European landscape was a mosaic consisting of forest as well as meadows and other kinds of habitat. One of the main reasons for this, he argued, is that big animals (the Aurochs among them) would have engineered this landscape through their grazing behaviour, something now known as “natural grazing”.
The Oostvaardersplassen, founded by Vera, is his way of proving that he is right. The herds of Heck cattle were introduced to engineer the landscape, to see what happens to the land in the presence of many grazers.
The theory of natural grazing has attracted many that are eager to introduce grazing animals to new land, in the hope that they will become the engineers of a future European wilderness. This push for wild grazing animals is one of the primary factors behind the drive to recreate the Aurochs.
This changing land-use pattern across continental scales has re-energised the restoration debate. The Vera hypothesis of an original mosaic landscape is motivating others to restore and rewild by using big grazers.
What an Aurochs should look like
Since the Heck brothers conducted their hasty experiments there have been new attempts at back-breeding. Heck cattle have also become an element of this new experimentation.
There are currently projects to recreate the Aurochs in several European countries. One of the largest attempts is led by the Taurus Foundation in partnership with Rewilding Europe, a rewilding organisation that wants to introduce the new Aurochs across the continent, as ecosystem engineers. Rival projects exist in the Netherlands, Germany and Hungary, and the Heck cattle are not going anywhere.
There is no shared set of criteria that guides everyone towards the same goal. One of the obvious criteria is genetic, but it was only in 2015 that Stephen Park and his colleagues were able to sequence the first full Aurochs genome. The genetic material came from one single fossilised specimen, and much work is still to be done to understand the genetic variability of the extinct species.
It is unlikely that any one organisation will be able to impose a standard for what, in the future, will count as an Aurochs.
Some argue that bringing extinct species back has no ethical basis and is in fact impossible, while others consider it an ethical duty to do so. The likeliest result of current and past experimentation is a future full of competing Aurochs, with new genetic paths leading into an unknown future.
Functionally speaking, it makes little difference what the created animals look like, as long as they behave a certain way. But part of the drive for recreating a lost animal is undoubtedly aesthetic: people want the new to look like their idea of the old. And this, more than anything, will ensure future rivalries between competing back-breeders. In the drive to bring one species back, we are almost certain to create several.
Mihnea Tanasescu receives funding from the Research Foundation - Flanders (FWO).
In a recent RAC survey of 1,700 UK motorists, 26% reported using a handheld mobile phone while driving, despite it being illegal. In response, the road safety charity Brake argued that society’s phone “addiction” can have very serious consequences.
But psychological research shows that not only do people miss things because they are staring at their phone’s screen, they also miss things when they’re looking ahead but talking on their phone. In fact, people conversing on a phone can appear to look at something yet fail to consciously detect it.
This “inattention blindness” has been demonstrated in various ways, including the famous “invisible gorilla” experiment. By focusing on one particular task (such as counting how often a basketball is passed between team members) we can miss other, highly salient events in the scene – such as a person dressed as a gorilla.
The ability to focus our attention like this is extremely useful, as we simply couldn’t process all of the incoming visual information that we are constantly bombarded with. But in some situations, inattention blindness can have serious consequences.
Here’s research on five things you can’t do properly while on your phone:
1) Notice hazards when driving
Drivers using a hands-free phone are far less likely to notice and react to hazards, even those directly ahead of them. This leads to increased stopping distances and a four-fold increase in accident risk. Research suggests this inattention blindness is produced by the need to share limited mental resources between tasks.
Phone conversations have a visual component – you picture where your conversation partner is and what they are saying – and this mental imagery draws on resources which are needed for accurate visual perception. Consequently, someone on the phone can look at, but not see, a hazard.
2) Cross the road safely
Pedestrians talking on the phone are more likely to be injured crossing the road. They tend to take longer to decide to cross, and then walk more slowly. They also make more unsafe judgements on crossings.
In one study, phone-users successfully crossed a simulated street only 84% of the time. Compared with other distractions, including listening to music, phone use is associated with poorer decision-making, missed opportunities to cross and increased likelihood of being involved in a collision.
3) Take the most direct route
Phone-users may change the way they walk, which in turn affects the route they take and what they notice around them. One observational study found that people talking on the phone were more likely to change the direction they were walking in, were less aware of other people around them (often getting in their way), and tended to walk more slowly than people who were either listening to music or undistracted.
Even a highly practised and “automatic” task like walking can become disrupted when a person’s attention is diverted to a phone conversation. Another study looked at participants’ gait while walking to a previously learned destination. Compared to undistracted walkers, phone users walked slower and made more lateral deviations from the set route, meaning they walked further than needed.
4) Notice advertisements you pass
Phone-users are less likely to recall seeing advertisements that they have passed while on the phone. Research has shown that even though people distracted by a phone conversation look at advertisements as often as those who are undistracted, they don’t remember them when later questioned.
5) Spot a unicycling clown
One study neatly demonstrated the power of inattention blindness in phone users by observing people distracted either by a phone call, a conversation with another person, or listening to music.
Walking across a large square on a college campus, participants passed an unexpected and highly visible item – a clown on a unicycle. While those talking to another person or listening to music mostly noticed the clown, only 25% of people on the phone reported having seen him. Unsurprisingly, these phone users were quite shocked to have missed something so obviously attention grabbing.
So, it appears from the available research that people talking on their phones have diminished “situation awareness” – they are less conscious of what is happening around them, which can have important implications for their own and others’ safety. Phone users are more likely to miss important and highly visible events – and crucially are often unaware of how unaware they may be.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointment above.
You’ve just finished your weekly practice. Your heart rate is up and your stress levels are down. You’ve just got your breath back and now you and all your teammates are heading down the pub. Spirits are high. You’re all getting a bit rowdy and Dave’s started chanting the team song.
Community is at the heart of what you do: your family, friends and fans support you throughout your ups and downs and the weekly training sessions have transformed you all from a group of strangers to best mates. There’s nothing better than putting on an away-performance outside your home turf.
You are, of course, in a community choir.
Like being in a local football squad, choir members benefit physically, mentally and socially from their team activity. But the Big Choral Census found that while around 300,000 more people participate in choirs than play football, amateur football receives £30m in funding each year compared to under £500,000 a year for choirs.
“Sing ‘til you’re grinning”, part of the Manchester Science Festival will look at how being in a community choir can actually aid health and well-being. The project created a new choir to be scrutinised by scientists over a 12-week period to test whether being in a choir really does make a person healthier. The social benefits are already well documented. We know, for example, that group singing is an excellent ice-breaker and can lead to quick, effective bonding for large groups.
Like many other activities, being in a choir provides a platform for people to meet others with similar interests, which can lead to new friendships and a fuller social circle. Choir participation can also be spiritually uplifting to those who are grieving and has been shown to improve the quality of life for cancer survivors and their carers.
Of particular interest are the benefits to people who suffer from social isolation and loneliness. Half a million older people in the UK go for a week at a time without seeing or speaking to anyone and investment in choirs may provide one solution to this national social challenge.
The Creative Health report by the All-Party Parliamentary Group on Arts, Health and Well-being set out evidence and examples of arts-based practice which impacts positively on the health and well-being of participants. The report makes recommendations for the potential of arts in health to be realised, including recommending that arts and cultural organisations are involved directly in the delivery of health services.
Greater Manchester is one of the first of the city regions with a directly elected “metro-mayor” to have made arts and culture integral to its health and well-being strategy. The region – which has a devolved health and social care budget of £6 billion – is a national pilot area exploring how the arts can help improve economic performance, education, health and well-being.
The science of singing
So what is the underlying science that makes choral singing good for your health? Current research focuses on the presence of three hormones: endorphins, oxytocin and cortisol. Endorphins are the body’s “feel good” hormones, released during exercise, laughing and eating chocolate. They are released when people perform in a choir, but not when people merely listen to music. This is because choral singing is in itself a physical workout: the deep breaths taken as part of singing equate to aerobic exercise, which increases blood flow and releases the “feel-good” hormones. This is also why choral singing especially benefits people who suffer from asthma, as it helps with their breathing.
Oxytocin, the body’s “love drug” increases feelings of love and trust. Not only is oxytocin released during group singing but it is released in such significant quantities that after just one singing class, choir members feel closer to each other than they would do when participating in any other group activity. This explains the close friendships that stem from community choirs.
Cortisol is our “stress hormone” and is significantly reduced after just one hour of choir singing. Low levels of cortisol can boost the immune system and help the body fight infections.
So, with a decrease in stress hormones and an increase in feel-good and love hormones, it’s no wonder choir singers report feeling high after their training. In the future, could we see the likes of choirmaster and TV personality Gareth Malone reach the status of some of the world’s top footballers? Perhaps not. We’ve still got some way to go before choirs and football are on the same playing field.
In addition to being an academic researcher, Gary W. Kerr is a freelance science festivals consultant and undertakes various contracts in staff training, creative production, staff management and delivery of various science festivals across the UK and overseas. He is a co-producer of the event described in this article. Sing ‘til you’re grinning, co-produced by Salford Community Leisure and the University of Salford, as part of Manchester Science Festival, is taking place at Eccles Gateway and Library on Thursday 26th October 2017 from 2.30pm – 3.30pm.
Fifty-nine were killed and 527 were injured Oct. 1 in Las Vegas when 64-year-old Stephen Paddock opened fire on a crowd of concert-goers from the window of his hotel room. But these 586 human beings were not just the victims of a deranged gunman.
They were also the victims of a gun-saturated cultural, governmental and economic system that vehemently reveres the destructive capacity of firearms yet has the audacity to express shock at every grisly, heartbreaking episode of mass violence that is perpetrated by individuals with guns.
Those whose lives were taken were mothers, daughters, sons, fathers, artists, musicians, financial advisors, law enforcement officers, sorority sisters, nurses and small business owners, just to name a few. From all walks of life, they came together under the starlit Las Vegas sky to celebrate life to the tune of good music.
These 586 people were heinously aggrieved by Paddock, but to reduce the blame for this tragedy to the deranged inner workings of a single assailant would be exceedingly short-sighted.
The truth is, we can neither honor the victims of this horrific shooting nor prevent these dreadful episodes from occurring if we fail to recognize and address the grim reality that gave rise to the massacre: Guns control America.
The omnipotence of the firearm in the U.S. can be summed up by — and perhaps traced back to — the fact that the right to its possession was penned into the country’s Constitution.
But it is hard to imagine the Founding Fathers could have foreseen the development of semiautomatic rifles capable of firing up to 180 rounds per minute. The Second Amendment was written during a time when the typical firearm was a musket with an effective firing rate of three rounds per minute.
What the Founding Fathers could predict, however, was that changing times may call for a need to alter the Constitution to respond to new needs or threats. Thus, they penned Article V, which states that, “The Congress, whenever two thirds of both Houses shall deem it necessary, shall propose Amendments to this Constitution.”
But if policy-makers have the right and duty to amend the Constitution as new concerns arise — for instance, an epidemic of 521 mass shootings in the 477 days since the Pulse shooting — what is stopping them? Why do they continue to helplessly tweet their “thoughts and prayers” to those affected by mass shootings instead of passing laws that, say, prevent individuals from legally stockpiling 33 firearms in 12 months, as Paddock did?
Perhaps it has something to do with the fact that whatever weight may hold down their hearts when a white man slaughters dozens of people is not enough to offset the weight of the National Rifle Association’s (NRA) lobbying money in their pockets.
According to The Center for Responsive Politics, a nonprofit research organization that tracks the relationship between lobbying and public policy, the NRA spent $54.4 million on the 2016 election cycle alone.
The NRA has consistently encouraged politicians to loosen the U.S.’s already weak gun restrictions by spending outrageous sums of money lobbying the people who pledge to serve all citizens equally.
For instance, according to the Center for Responsive Politics, the NRA spent $37 million on lobbying 54 senators “who voted in 2015 against a measure prohibiting people on the government’s terrorist watch list from buying guns.” The only Democrat to vote against the measure was Senator Heidi Heitkamp of North Dakota.
The NRA’s attempts to deregulate the sale of guns have been so successful that there are now “more gun clubs and gun shops in America than McDonalds,” according to the New York Times, and “almost as many privately-owned firearms in this country as there are people living inside it,” according to the Washington Post.
As we have come to witness all too consistently, the pervasiveness of such powerful weapons has resulted in this country being one where catastrophes like Las Vegas are embedded into the fabric of everyday life.
Indeed, the numbers show that gun violence in the U.S. is outrageously high.
Everytown Research Center, a nonprofit group that tracks firearm violence in the U.S., reports that the U.S.’s gun homicide rate is 25 times the average of other developed countries. Everytown also reports that, on an average day, 93 Americans are killed with guns. That’s almost twice as many people as were killed in Vegas.
These numbers are by no means comparable to those of any other developed country, and that is no accident. It is the result of the domination of the U.S.'s legislative system and social imaginary by a tremendous economic power and cultural symbol.
The victims of the Las Vegas attack gathered together for a night that should have been full of joy and festivity. Instead, they were subjected to a terrible mass shooting, and the blame lies not merely with a 64-year-old gunman but also with the fundamentally defective system that enabled this attack – and countless others.
Renee Perez is a junior majoring in political science and economics.
When My Little Pony: The Movie lands in cinemas this October, producers Hasbro will be hoping that the animation proves popular with its target audience of young girls. The film will also attract a very different group of viewers: adult My Little Pony fans known as “Bronies”. Made up predominantly of young men aged 18-30, the fandom has evoked reactions ranging from bemusement to celebration since it emerged at the beginning of this decade.
Hasbro launched a new My Little Pony series, subtitled Friendship is Magic, in 2010. The new show adopted a distinctive and appealing art style while focusing on creating strong female characters and consistent character arcs.
The adult fandom initially developed via the notorious internet forum 4chan. Although users planned to mock the show, a number became regular viewers and began discussing it in dedicated forums. The term “Bronies” – a combination of “bros” and “ponies” – developed soon after. Although this initially applied to male fans, it is now generally used to describe fans of all genders.
Bronies engage in typical fan activities, including writing fan fiction and creating artwork. Fans write songs and create pony-inspired music, ranging from orchestral to dubstep and metal. Physical meet-ups and large conventions (often involving cosplay) offer opportunities to engage with other fans in offline settings.
The fandom has generated both positive and negative reactions. It has attracted praise for progressive views of gender, encouraging charitable activity and promoting creativity. At the same time, some argue that academic and media interest in male fans risks ignoring the many female “Pony” fans who are not seen as worthy of comment.
The most common criticism of the fandom has focused on its supposedly “regressive” nature – the worry that adults watching cartoons designed for young children represents an inability to deal with real life. As Anne Gilbert argued, popular press coverage “reveals a pervasive discomfort” with adult men celebrating a cartoon for young girls.
This repeats a longstanding criticism of fandom. Representatives of the Frankfurt School, such as Theodor Adorno and Max Horkheimer, attacked 1930s jazz fandom as inherently regressive, while the Tolkien Clubs that formed in 1960s America were branded “infantile” on their foundation. Critics argued that fans were living in fantasy worlds in order to avoid adult responsibilities.
Neither is adult consumption of children’s media new or particularly novel, as the continued popularity of Disney animation among adults suggests. As far back as 1908, Walford Graham Robertson’s children’s play Pinkie and the Fairies attracted such large numbers of soldiers that its audience was said to resemble a military parade ground.
What is more novel in My Little Pony fandom is its scale and visibility, facilitated by the internet and rise of social media. For Hasbro, this presents both opportunities and challenges.
Bronies might sound like a marketing dream for Hasbro’s movie. Adult fans offer a built-in audience, social media presence and disposable income to spend on merchandise. Although initially wary of Bronies, Hasbro have increasingly courted the adult fanbase. The toy range now includes a line of more expensive products designed for collection and display, alongside merchandise including coffee-table art books explicitly aimed at adult fans.
Marketing for the movie has therefore tried to capitalise on fan enthusiasm. The initial trailer launched alongside a GIF maker designed to generate memes and facilitate promotion on platforms such as Tumblr and Twitter. Exclusive designer posters were distributed to fans at San Diego Comic Con, while the movie’s Twitter account produced stylised artwork designed to appeal to adults.
At the same time, a committed fandom brings challenges. Fans criticised producers over elements of the movie’s promotion, complaining when early trailers emphasised guest stars (such as Emily Blunt and Sia) instead of allowing the TV show’s voice actors to reprise their central roles. Issues such as where the movie fits into My Little Pony continuity – which are unlikely to worry the film’s young audience – suddenly also become important considerations.
Above all, producers have to balance their knowledge of adult fandom with the requirement that the film appeals primarily to children. Ironically, some fans have expressed concerns that producers pandered to them over recent years, citing the show’s original innocence and un-selfconscious charm as responsible for its initial appeal. As Ewan Kirkland has argued, there is a risk that in the pursuit of adult fans, My Little Pony’s young female audience is getting lost in the process.
Press coverage focused on the supposed “weirdness” of Bronies misses the point that historically and culturally they aren’t doing anything overly unusual. The fandom nonetheless raises questions about fan/producer interactions. How visible do Hasbro want their involvement with a sometimes ridiculed fandom to be? Are producers celebrating fans or exploiting their enthusiasm for marketing purposes? And where does the child audience fit in all of this? The movie, and reaction to it, should offer some fascinating answers to these questions.
Andrew Crome does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
The furore that erupted when David Slater, a British wildlife photographer, released a “selfie” taken by a macaque monkey in 2015 has only just reached legal resolution. The animal rights group, PETA (“People for the Ethical Treatment of Animals”), which had filed on behalf of the macaque, allegedly named “Naruto”, withdrew its suit against Slater when he agreed to give 25% of any royalties from the selfie to animal welfare charities.
This case marks a high-profile opening salvo in a struggle that will be increasingly fought among animal rights activists, protectors of human intellectual property and defenders of the free market. The case has been generally reported as being about whether a macaque that took a selfie (and gained worldwide notoriety courtesy of Wikipedia) is entitled to copyright. While this account is fine as far as it goes, the case also hints at the profound challenges that digital and animal cultures pose to the law’s recognition of human uniqueness.
The story begins with Wikipedia, whose “open source” and “open access” approach to knowledge production makes it the ultimate free market in cyberspace. Basically anything is fair game for inclusion on its pages if it is not prohibited either by its own editors, who are largely crowdsourced, or some explicit legal ruling.
When Wikipedia’s editors decided to feature the macaque selfie, Slater claimed that it was in violation of his copyright. The selfie had been taken while his camera was active but unattended in Indonesia, where he was on assignment photographing the rare monkeys. Wikipedia replied by saying that if anyone owned the copyright, it was the macaque who actually took the selfie. At that point, PETA got involved, suing Slater on behalf of the macaque for copyright infringement.
The court had no problem dismissing the case, simply by arguing that copyright law was not designed to include animals as copyright-holders. But it also said that the law may be amended to include them in the future. In doing so, it tiptoed around the issue that PETA was keen on raising, namely, whether the monkey was morally entitled to whatever royalties might otherwise accrue to Slater as the copyright-holder. This helps to explain the out-of-court settlement, which left Slater the formal victor in the case. But that was really all that he was left with. Slater had been earning minuscule royalties from the selfie and even approached bankruptcy as PETA’s case against him dragged on.
The most striking feature of the case is not the very idea that a monkey might hold copyright, but that the internet’s relatively unregulated market environment provided the opportunity to broach the issue. The placement of a photo in virtual as opposed to physical reality radically loosens our intuitions about ownership. This became clear in the recent flurry of cases around the multiple postings of nude celebrity selfies in social media. Defendants claimed loss of control over their image in a world where image control is everything. In a more profound sense, something similar is happening to the image of the human being itself in the monkey selfie case.
The monkey selfie case managed to level the playing field between the human and the animal because the distinction between producer and consumer is largely erased in cyberspace. Unless the law intervenes, an online object can be reframed and reappropriated as the user wishes. And among these reframings and reappropriations are accounts of what makes the object what it is. In the end, only the explicit disqualification of animals from copyright law ended up saving Slater, even though some legal experts admitted that Naruto may have behaved toward the camera in a way that would make a comparably situated human eligible for copyright.
Marx and a macaque
Faced with Slater’s original claim to copyright infringement, Wikipedia interestingly gave little weight to the core of Slater’s argument, which was that had he not gone to Indonesia, photographed the macaques and even set up the camera so that they might use it, the selfie would never have been taken. (Of course, Slater was also the one who allowed the photos to go online in the first place.)
Instead Wikipedia focused on the particular monkey’s skill in arranging the camera so as to take the striking selfie. To the ears of animal rights activists, Wikipedia made Slater sound like an employer who claims ownership over his employees’ labour because he took the effort to set up the business for which they work. When only humans are involved, it’s called exploitation. Why not extend the same concept to the macaques?
Whatever may have motivated Wikipedia to pursue this framing of the situation, it certainly resonates with the history of extending human rights. Thanks to Karl Marx, we understand exploitation as a form of injustice that comes when workers are denied the full fruits of their labour. Wikipedia opened the door to revisit Marx, and PETA charged through it. The original capitalist rejoinder was that the employer is the one who takes the initial risk, invests the capital and sets up the environment which makes the work possible and so the workers, who might otherwise not be employed, should be satisfied with a steady wage, not a share of the profits. One hears echoes of Slater’s defence here, including his claim that his photography was part of an effort to save the macaques from extinction.
But bound up in this dispute is a disagreement about whether all producers are also creators. Historically, in the human sphere, Marx ultimately won this argument, largely by appealing to a conception of the human that is both universal and exceptional: all (but only) humans are both producers and creators. Like today’s copyright law, Marx recognised a clear species barrier between humans and other animals when it comes to creativity.
Cyberspace’s blurring of the producer/consumer distinction may be opening the door to reimagining “creator” more generally, as the source of whatever makes an object valuable to its user. In that case, the law may need to be adjusted to provide legal protection to “creative” animals in the same spirit as it historically provided protection to “creative” workers.
Steve Fuller does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Sitting is so culturally ingrained at work, at the wheel, in front of the TV and at the movies, it takes a great effort to imagine doing these things standing up, let alone pedalling as you work at a “bike desk”.
Abandon your chair for four hours to stay healthy, office workers are told
Stand up at your desk for two hours a day, new guidelines say
But what many media reports did not mention was the guidelines were based on limited evidence. They were also co-authored by someone with commercial links to sit-stand desks (desks you raise and lower to work at standing or sitting), a link not declared when the guidelines were first published in a journal.
Media reports also overplayed the dangers of sitting at work, incorrectly saying it wiped out the benefits of exercise.
Our new study reveals the nature of this media coverage and its role in overselling sit-stand desks as a solution to inactivity at work.
Read more: Health Check: the low-down on standing desks
Employers are also starting to see sitting as an occupational health and safety issue and sit-stand desks, standing desks and even treadmill desks are popping up at work.
To address these issues, the guidelines recommended measures including:
aiming for two hours a day of standing and light activity (slow walking) during working hours, eventually progressing to a total of four hours a day for all office workers with mainly desk-based roles
regularly breaking up sit-down work with standing using adjustable sit-stand desks or work stations
avoiding long periods of standing still, which may be as harmful as long periods sitting
changing posture and doing some light walking to alleviate possible musculoskeletal pain and fatigue, and
recommending employers warn staff about the potential dangers of too much sitting at work or at home, as part of workplace health and wellness activities.
How did the media report this?
Our team analysed news articles about the guidelines published in media outlets around the world.
We found all the articles reported the top-line recommendation to reduce sitting by two hours a day, and to replace the sitting with standing or slow walking.
Almost two-thirds of articles also noted the recommendation that people should regularly break up seated work with standing, and that this could be done with a sit-stand desk.
Even though the guidelines’ authors said the recommendations were based on the best evidence so far and more evidence was needed, these caveats did not make it into most news media reporting.
These caveats are important because the authors acknowledge the evidence quality is weak and that guidelines are likely to change.
The news media also seemed to be unaware of amendments to the journal article, including an expanded disclosure of competing interests clarifying that one author, Gavin Bradley, has a connection to the business of selling sit-stand desks.
The revised version notes Gavin Bradley is 100% owner of a website that sells sit-stand work products called Sit-Stand Trading Limited. He is also director of the Active Working Community Interest Company (CIC).
Read more: Why sitting is not the ‘new smoking’
According to the Australian arm, Get Australia Standing, these campaigns aim to raise awareness and educate the community about:
… the dangers of sedentary working and prolonged sitting time.
The website also features a range of sit-stand work products and providers.
We are not suggesting Gavin Bradley skewed the sit-stand desk evidence in the guidelines. But the initial failure to disclose his interests is a concern.
No, sitting doesn’t cancel out exercise
In our study, we also found more than one-third of articles incorrectly warned that too much sitting cancels out the benefits of exercise.
This is contrary to recent research showing high levels of moderate intensity physical activity (about 60–75 min a day) seem to eliminate the increased risk of early death associated with high levels of sitting time (eight hours a day or more).
This rigorous study, analysing data from one million adults, also found this high activity level reduces, but does not remove, the increased risk linked to high levels of TV-viewing.
Yet, this study does not appear among the research resources on the Get Australia Standing campaign website, which appears to promote the message that it doesn’t matter if you are physically active, if you sit a lot you are doing yourself harm.
How realistic are the recommendations anyway?
Regardless of the media reporting of the guidelines, we need to ask ourselves how realistic the guidelines are.
The recommendations may be premature and hard to put into practice given that studies involving motivated participants have only managed to reduce the time spent sitting by 77 minutes in an eight-hour work day.
Workers may use sit-stand desks and they may reduce sitting time but the evidence is not yet in to show this produces detectable health benefits, at least in the short term. And standing too long at work has been linked to an increased risk of heart disease.
The guidelines also contrast with recently updated Australian national physical activity guidelines.
These make general recommendations to sit less and break up periods of uninterrupted sitting because the experts conclude the evidence does not point to a specific amount of sitting time at which harm begins.
Given the evolving research field and the vested interests, we need to pay attention to sitting time, standing, and physical activity levels as well as the role of industry players and their contribution to advice on health.
Catriona Bonfiglioli has received funding from the Australian Research Council (ARCDP 1096251), the UTS Early Career Research Grant program (2010 2009001198), the Australian Centre for Independent Journalism (2009), the Public Health Education and Research Program (2000: PHERP 052), The Reuter Foundation (1997), and the University of Sydney (1999 UPA). Catriona has been a co-investigator on projects funded by The National Health and Medical Research Council Project (2010: 632840) and UTS. Catriona is a member of the Public Health Association of Australia's NSW branch committee.
Josephine Chau was supported by Postdoctoral Fellowship (#100567) from the National Heart Foundation of Australia (2015-2017); she has directly received consulting funds from the World Health Organization, and Bill Bellew Consulting Associates; and travel reimbursement from Marsh Pty Ltd.
We often think about what young people can expect to gain from university, or what universities contribute to society. But it’s not often that we talk about how higher education can change society beyond the shaping of individuals.
As tuition fees rise, and universities are cast in increasingly intense competition for students and staff, their cultural and civic role has become ever more important. Now, universities need to prove that they do more than just teach students to pass their degree courses.
We already know that individuals’ active participation in the organisations, clubs and societies which make up civil society helps foster trust and well-being – both of which are essential for the formation of a democratic and harmonious society. But what is it that UK universities do for society in this regard? We’ve been examining national surveys and speaking to graduates to find out.
We started by looking at the British Household Panel Survey and the National Child Development Study. Here we found that graduates, on the whole, are more likely to be members of associations, organisations and societies such as trade unions. They are also more likely than non-graduates to join environmental groups, residents’ associations, religious organisations and sports clubs.
These findings hold irrespective of whether graduates studied in an “elite” higher education system – before the end of the 1980s, when under 15% of the population went to university – or a “mass” system, as now, when more than 15% attend.
However, the difference in likelihood of graduates and non-graduates joining a trade union was greater for the mass cohort compared to the elite cohort. This means that the beneficial effect of going to university on a person’s likelihood of joining a trade union is stronger for people who went to university in a mass system compared to an elite system.
Meanwhile for environmental groups, religious organisations, and tenants and residents associations, the reverse is true. For the elite cohort, going to university is more important to their likelihood of joining one of these organisations compared to mass graduates. Put simply, the beneficial effect of going to university on the likelihood of joining these organisations is stronger for elite graduates compared to mass graduates. This distinction between systems matters because it changes the “effect” of being a graduate or non-graduate of either cohort, and varies what the students gained in terms of skills that would change their likelihood of participating in civil society.
We also wanted to know how important particular university experiences were for equipping graduates with the skills, knowledge or attributes needed for civic participation. To find this out, we interviewed 30-year-old graduates who had studied a range of subjects at a range of universities. This included a Russell Group institution, Oxford and Cambridge, and a Post-92 university – that is a university which was regarded as a “polytechnic” before 1992.
One of the most interesting things we learned from these interviews was the role of degree subjects in amplifying civic participation. This often occurred indirectly, through the way it intensified social and political attitudes and values.
Perhaps unsurprisingly, the effect was most striking for social science, and arts and humanities graduates. They were most likely to reflect on how their subject had given them a deeper and broader understanding about politics and social issues. Many told us that their university experience amplified pre-existing social and political attitudes and values. It also encouraged them to participate in certain social and political activities.
Distinct teaching practices also seem to play a role in fostering civic participation. The tutorial system at Oxford and Cambridge – where students meet once or twice a week with a tutor to discuss their subject – seems to provide some graduates with critiquing, debate and discussion skills. And this is especially so for former social science, and arts and humanities students. For a few, their own personal and intellectual confidence combined with these university-learned abilities gave them the abilities and confidence to take part in civic activities.
If HE provides individuals with the skills and knowledge needed for civic participation, there is strong justification for getting more students involved, far beyond an economic rationale.
Yet HE’s effect on civil society does not appear to be equal for all graduates. The skills, credentials and knowledge gained by students graduating from particular universities, and with particular degrees, may give them an advantage in their capacity to participate in civil society. This means they will have a better opportunity than other graduates to develop their social capital – their social networks with friends, neighbours and acquaintances – through participating in organisations and associations.
Overall, what we have found is that universities do far more than just teach students in a specific discipline, or increase an individual’s job prospects. The skills that are built by students have the potential to fundamentally change society for good.
Ceryn Evans receives funding from the Economic and Social Research Council (ESRC).
I was sitting on the sofa across from Christine in her home. She offered me a cup of coffee. Each time I visited, she sat in the same spot — the place where she felt most comfortable and safe. She had shared stories from the past and decided to talk about the birth of her daughters, grandchildren and great grandchildren.
For Christine, a research participant in a multi-sited study into dementia and digital storytelling, the fear dementia brings is that she won’t be able to be a part of special moments such as the celebration of birth.
As we worked together in Edmonton, creating a multimedia story from her memory, Christine started to remember new things. She became emotional when she talked about her daughters becoming mothers themselves. She pointed out that the project was so much more powerful than looking through a photo album. Like many participants, she said she recalled stories she hadn’t thought about for years.
As a post-doctoral fellow in occupational therapy under the supervision of Dr. Lili Liu at the University of Alberta, I worked with several participants in this study. One of the goals of the study, funded by the Canadian Consortium on Neurodegeneration in Aging, was to investigate quality of life and how technology affects the lived experiences of persons with dementia.
Technology and quality of life
In this research project we defined digital storytelling as using media technology — including photos, sound, music and videos — to create and present a story.
Most previous research on digital storytelling and dementia has focused on the use of digital media for reminiscence therapy, creating memory books, or enhancing conversation. Collaboratively creating personal digital stories with persons with dementia is an innovative approach, with only one similar study found in the United Kingdom.
During this project, I met with seven participants over eight weeks. Our weekly sessions included a preliminary interview to discuss demographics and past experiences with technology. Then we worked on sharing different meaningful stories, selecting one to focus on and building and shaping the story. This included writing a script, selecting music, images and photographs and editing the draft story.
Participants worked on a variety of topics. Some told stories about family and relationships, while others talked about a particular activity or event that was important to them. After all participants completed their digital stories, we had a viewing night and presented the stories to family members.
Happiness in the moment
It was an intense process. Eight sessions working one-on-one with persons with dementia required a significant amount of thinking, remembering and communicating for the participants. There were challenges, such as when participants found themselves unable to express their thoughts or remember details.
Although many participants were tired after a session, they all felt that it was a beneficial and meaningful activity. Working in their homes on a personally gratifying activity with a tangible outcome seemed to keep them motivated and eager to continue. The process was also enjoyable and gave the participants something to look forward to each week.
There was a sense of happiness in the moment. And the way that participants responded to me, along with their ability to remember who I was and the purpose of our sessions, all indicated a deeper positive connection. The participants all felt a sense of accomplishment and family members were proud to see the end product at the viewing night.
Into the future
I recently met with one of the research participants again, and she still remembers me. I would like to follow up with the others to get a sense of the long-term impact of this digital storytelling project. I am also eager to see how the findings in Edmonton line up with those from the studies in Vancouver and Toronto.
For the participants, talking about memories helped them open up about having dementia. Getting past the fear and looking ahead with optimism was the message I heard, and one that I hope to keep hearing.
Elly Park receives funding from the Canadian Consortium on Neurodegeneration in Aging (CCNA). She is affiliated with AGE-WELL, NCE.
Earlier today the U.S. Copyright Office released its long-awaited review of improvements to Section 108 of the Copyright Act, the section which grants limited, specific exceptions to copyright for libraries and archives. Over a decade ago the Office convened the Section 108 Study Group* to assess improvements to this section, and in 2008 that group produced its report. Since then (and with recent inquiries from the Office to stakeholders) we’ve been waiting to hear from the Copyright Office about its views on updates to Section 108. This Section 108 “Discussion Document” does just that.
Before getting into the document I want to start with two observations. The first is that Section 108 is horribly outdated. Most of its text is exactly the same as enacted in 1976. The piecemeal updates that have been added to address modern library and archives practices, including online uses, haven’t worked well and are awkward additions. I–and many others–have written about the need to update Section 108.
The second is that I’m leery of asking Congress to revise any part of the Copyright Act, including Section 108. As someone who thinks that copyright law already unnecessarily restricts access to lots of information in ways that have no positive effect on the copyright system’s underlying purpose–encouraging the creation and dissemination of new creative works–I don’t think Congress has a great track record on legislative revisions. Since the 1970s Congress has consistently made copyright terms longer, dramatically expanded the number of works protected, and made using those works riskier. Asking Congress to revisit Section 108 could mean that it gets much worse, rather than better.
All that said, I think many of the Office’s suggestions are pretty good. I can’t go into every detail in this blog post–the Discussion Document is around 60 pages long, and it needs every one of those pages–so, for now, I thought I’d point out the top three positives I see in this document:
1) The Office suggests in a number of places removing hard numerical limits on the number of copies allowed. For preservation purposes, for example, the proposal would allow libraries, archives, and museums to reproduce works “as many times as is reasonably necessary for preservation and security.” This is a major problem under the current statute, which generally only allows for making three preservation copies. Perhaps more significantly, the proposal would also allow eligible institutions to make incidental, temporary copies that are needed for making resulting preservation copies and for copies made for users. This is important when thinking about digital access because it would eliminate concerns about whether 108 can apply at all when incidental copies are made in the course of transfer from one machine to another.
2) It would expand the categories of works to which Section 108 applies. The current statute makes several Section 108 exceptions inapplicable to musical works, pictorial, graphic or sculptural works, and to motion picture or other audiovisual works. That restriction currently limits 108’s usefulness–and makes it all the more difficult to understand and apply–without providing a clear benefit for rightsholders of those kinds of works. This document also reframes how the Section 108 exceptions would apply to “published” versus “unpublished” works (the current Section 108 treats unpublished works differently, with the idea that unlike published works, there generally isn’t a commercial market to be harmed by the use of those materials). The new proposal opts instead to make distinctions based on whether the work was ever “disseminated to the public” by the copyright owner. “Publication” is a notoriously difficult concept, so the move away from it to something a bit broader is welcome, though I’m not sure the concept of “disseminated to the public” is going to be easier to apply in practice.
3) It suggests that institutions should be able to provide remote digital access to users, albeit in some cases limited to one user at a time, for a limited time. This most directly applies to works “not disseminated to the public” (i.e. unpublished works). For archives, this enhancement could be significant when thinking about how to provide access to preservation copies. Would an online reading room, with technology to allow for controlled digital lending, be permissible under these terms?
The Office’s 108 document also has parts that are likely to cause some controversy. One big one is a suggestion that eligible libraries, archives, and museums could be exempt from copyright liability for violating non-negotiable contract terms that prohibit institutions from engaging in preservation activities otherwise permitted under Section 108. I think this is an incredibly important suggestion, given the number of click-wrap, consumer-oriented license agreements that libraries enter into so they can provide electronic access to their patrons. Many of those contracts prohibit making copies necessary for preservation purposes, but if libraries aren’t saving copies there is a great risk that in the long term, those works may one day become entirely inaccessible to everyone.
Another part of the document likely to cause some controversy is the requirement that eligible institutions implement reasonable digital security measures. I understand the desire for such a limitation, but this is an area where the devil is going to be in the details. Who decides what is reasonable is an open question, and how compliance with that provision is monitored and assessed could be extremely burdensome for some institutions.
Overall, I have to say that I’m impressed. I think the Office did good work in pulling together the results of the Section 108 Study Group report as well as feedback from stakeholders in creating this document. As proposed, the Section 108 envisioned in this document still wouldn’t provide all or even most of what libraries, archives, and museums need to fulfill their missions, and fair use would remain an important and probably overriding consideration when making uses of copyrighted works. But, as a sort of safe harbor for institutions seeking certainty for activities that they commonly engage in, the types of improvements outlined in this document would be welcome and a great help in facilitating modern (as opposed to 1970s-era) libraries, archives, and museums.
* The 108 Study Group was jointly convened by the National Digital Information Infrastructure and Preservation Program of the Library of Congress and the Copyright Office.
We’re all familiar with Nikola Tesla and his brilliant work helping to invent the electric technologies that we live with today. But did you know that there’s a guy named Thomas Edison who also helped invent some of that tech? If you didn’t, you should check out the new trailer for the upcoming movie, The Current War.
The Confederate flag continues to loom over I-75 in Tampa. SPECIAL TO THE ORACLE
Tampa natives have probably witnessed the infamous Confederate flag looming over I-75, unavoidably consuming the view of all who speed by it. It’s known by various names, including the Rebel flag, but depending upon who is asked, its name may be uttered with either pride or disgust.
The Confederate flag debate is between those who argue that the flag is an emblem of the valiant cause their ancestors defended during the Civil War and those who say the flag is a hateful reminder of the deplorable practice of slavery, which the Confederacy fought to preserve.
However, many don’t know the Confederate flag was never the official banner of the South, but was one of dozens of Civil War battle flags.
According to John M. Coski – historian and vice president of research and publications at the Museum of the Confederacy in Virginia – Confederate Gen. Pierre Gustave Toutant-Beauregard requested that every Confederate military regiment uniformly fly the flag during battle, but his request was largely dismissed. Instead, many southern regiments continued to fly their own distinct flags.
After the Civil War, the flag was almost obsolete until the 1930s, when the KKK and various fascist groups began using the Confederate flag in ceremonies. By the 1950s it became common to see the combined celebration of the Confederate flag and the Nazi regime’s swastika, according to Coski’s Harvard University Press book The Confederate Battle Flag: America’s Most Embattled Emblem. It is no coincidence the two symbols resurfaced together at Charlottesville’s “Unite the Right” rally. The partnership between Nazism and the Confederate flag is decades old.
The flag’s history and use are inextricably connected to hate and discrimination.
But does this necessarily mean every single person who proudly displays the Confederate flag as a part of their southern heritage is an evil racist? No, it doesn’t. Many people are simply misinformed about the flag’s past and fly it with the sole intention of displaying their pride as rebellious southerners.
Regardless of whether its supporters hoist the Confederate flag out of malice or pride, research indicates opinions on the flag differ starkly depending upon the extent of racial oppression one has endured.
According to a recent poll conducted by CNN’s Polling Director, Jennifer Agiesta, 72 percent of African-Americans perceive the Confederate flag as a symbol of racism, while only 25 percent of white Americans do.
An additional poll concentrating solely on the South’s general perception of the flag revealed that 75 percent of African-Americans deem the Confederate flag racist, while a mere 11 percent consider it a symbol of pride. In sharp contrast, 75 percent of white southerners view the flag as an icon of pride, while just 18 percent associate it with racism.
There is a racial divide in how the flag is perceived, particularly in the South, the region with the most work to do to combat the institutional and social remnants of slavery.
This discrepancy suggests that people flaunting the Confederate flag are in a position of privilege due to their race: They can celebrate it because they cannot relate to racial oppression. They aren’t reminded of their ancestors being dehumanized, nor are they still faced with tremendous acts of discrimination the way people of color are in the U.S.
Although the majority of white southerners do not associate the Confederate flag with racism, the flag’s historical use — in defense of slavery during sporadic Civil War battles, in partnership with Nazis and the KKK, and most recently at Unite The Right — demonstrates that the flag is deeply rooted in white supremacy and hate. Perhaps it is time the South let the flag go and adopt an emblem with a less malicious history.
The violent scenes in Charlottesville, Virginia, that led to the death of one woman and left many more injured began as a dispute over a statue of General Robert E. Lee, which sits in a local public park. However, the controversy feeds into a much wider debate that is as old as the United States itself. So who was Lee and why does a memorial to him trouble so many people?
The meeting of white supremacists in Charlottesville was originally held under the pretext of demonstrating against plans to remove the statue. The Charlottesville city council voted in February for it to be removed from the recently renamed Emancipation Park (formerly Lee Park). The decision came as part of a movement to challenge the ubiquity of Confederate symbols in the South.
These statues, for their opponents, signify the oppression of African Americans under slavery and the Jim Crow segregation laws. They serve as daily reminders of the vulnerability of black people. The message of such monuments is the same to many of their defenders, even if their interpretation is different. To the white supremacists who gathered on the streets of Charlottesville, the statue of Lee represents white military and political power.
In the decades after the Civil War, memorials celebrating the south’s valiant effort and glorious defeat appeared all over the region. They embodied the myth of the “lost cause” – the idea that the war had been fought to defend states’ rights, rather than slavery. In this interpretation, the south only lost because of the industrial might of the northern “aggressor”.
This doctrine came to prominence during the Jim Crow era when whites implemented racial segregation through violent, extra-legal and then legal means. The lost cause memory was used to justify and enforce white supremacy.
For many, the Confederate memorials continue to represent this repression. They are a celebration of southern identity as white. Since the rise of the Black Lives Matter movement, the slogan “BLM”, as well as the names of victims of police shootings have appeared on memorials around the country. “Black Lives Matter” was sprayed on Charlottesville’s Lee statue in the days following the Charleston church shootings in 2015. Along with calls for the removal of Confederate flags from civic buildings there have been increasingly vocal, and successful, demands to reconsider the place of monuments in public spaces.
Who was Robert E. Lee?
The statue in Charlottesville is of Lee atop his horse, Traveller. The Confederate general and native of Virginia holds a hat in one hand and his horse’s reins in the other, his sword ready at his side.
It was unveiled by Lee’s great-granddaughter at a ceremony in May 1924. As was the custom on these occasions it was accompanied by a parade and speeches. In the dedication address, Lee was celebrated as a hero, who embodied “the moral greatness of the Old South”, and as a proponent of reconciliation between the two sections. The war itself was remembered as a conflict between “interpretations of our Constitution” and between “ideals of democracy.”
Here was the states’ rights argument. The south fought a noble war over its right to self-determination, rather than an effort to keep millions enslaved. Lee, claimed one of the speakers, “abhorred slavery”. In his position as commander of the Confederate Army of North Virginia, Lee represented military endeavour rather than a political struggle to uphold human bondage.
This has made Lee a powerful symbol of the Confederacy. He allowed white southerners to ignore the central role of slavery in the war. They could forget that the southern states seceded in order to uphold slavery and that their defeat meant freedom to millions of enslaved people. There was no space for black memories of emancipation in the south’s public spaces. Confederates were white and their monuments were celebrations of whiteness.
Lee is one of the most frequent figures to be memorialised in statues, aside from the common soldier. In 2016, the Southern Poverty Law Center catalogued all publicly supported spaces dedicated to the Confederacy, finding 203 examples named for Robert E. Lee. Streets, highways, counties, cities, parks, monuments and 52 public schools are named for the Confederate general. Up until 2015, Lee’s birthday was officially marked in five states.
The same year as Lee’s statue appeared in Charlottesville, Virginia passed laws which strengthened definitions of who was “colored” and who was “white”, and which reinforced the law prohibiting interracial marriage. Then, two years later, the state passed a law to enforce racial segregation in places of public entertainment.
The monument to Lee served the same purpose as the legislation – to remind African Americans of their perceived place and inferiority. White nationalists gathered to protect the statue in 2017 because they wanted to celebrate its message. Like the original creators and supporters of the Lee monument, they sought to celebrate a white supremacist vision not just of the past, but of the present.
Jenny Woodley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Like a B-movie for a post-Brexit era, consumers in Britain may soon be unwillingly cast in the 2019 blockbuster, Attack of the Chlorine Chickens. If news headlines are to be believed, flocks of toxic fowl are waiting to storm Britain’s shores like mini featherless zombies as part of a US-UK trade deal.
But before getting into a flap about the health risks of chlorine, we should maybe pause to consider why you would bleach a chicken in the first place. It is actually primarily to mitigate the disease risks from raising nearly 9 billion chickens in overcrowded environments with low standards of animal welfare.
However, the failure to frame the chickens’ welfare as anything other than a side issue poses important questions about the nature of our interactions with animals. Why are chickens so far down the pecking order for moral concern? Would our reaction have been the same if the animal in question were a mammal? The moral outrage sparked when horsemeat was found in beef burgers in Britain and Ireland in 2013 would suggest not.
Despite the widespread symbolism of the cockerel across cultures, history shows that we’ve never really been concerned about the welfare of chickens. Until the late 18th century, cock-throwing – tying a chicken to a stake and pelting it with objects until it felt the sweet release of death – was an extremely popular pastime in Britain. The practice was eventually outlawed on grounds of cruelty, but research has drawn parallels between cock-throwing and the widespread appearance of chickens in modern video games, where they are customarily killed or used in chicken-kicking competitions. I doubt there are many video games in which players beat up dogs for kicks.
So what is it about our attitude to chickens that encourages us to disregard their widespread maltreatment? Psychological research into people’s beliefs repeatedly throws up the common perception that chickens are close to the bottom of the pile when it comes to cognitive abilities.
Yet this assumption flies in the face of scientific evidence. Alongside characteristics associated with sentience in other species – such as pain perception or emotions – chickens communicate, show sensitivity to differing contexts and display personalities. This disconnect between our perception of chickens and the reality of their mental lives is undoubtedly important. The more we see an animal as “minded”, the more likely we are to believe its welfare should be protected.
Psychologists used to believe that which animals we consider to have minds was determined mainly by social factors such as cultural background. However, we now know that a range of factors, such as our age and sex, affect our willingness to attribute mental capacities to animals. For most animals it also appears that simple familiarity helps – owning a pet typically increases the mental faculties we associate with that particular species.
This is logical since the greater our contact with an animal, the more likely our chances of observing behaviour that we recognise as intelligent. And yet having a chicken in our clutches doesn’t appear to help their plight. One study showed that, in a group of students, keeping chickens had no effect on the mental characteristics participants associated with them. Only by actively training the chickens in cognitive tasks did the students’ attitudes begin to change.
But why doesn’t general contact with chickens alter our views on their brainpower? Our latest paper, published in Trends in Cognitive Sciences, argues that we should also consider how our own cognitive mechanisms influence our judgements about how intelligent an animal is. We are currently in the process of looking at how consistent people are when making attributions of mind to other species.
Research already tells us that context and behavioural similarity between animals and humans are central factors in our psychological interpretation of animals’ actions. We also know that mirror neurons – a type of brain cell that fires both when we perform an action and when we watch others perform the same action – are automatically activated when we watch both humans and other animals carry out similar actions to achieve an assumed goal. This means when we see a rat reach out to grasp a food item, our brain is activated using similar mechanisms to those we would use to interpret the behaviour of a human doing the same thing.
These findings lend weight to the theory that humans attribute cognitive abilities across species based on how they view specific behavioural events, such as grasping food or chewing.
Moving like a chicken might therefore be a major disadvantage when you’re being compared to other farmyard inhabitants such as cows or pigs. Despite spending time observing them, it would be harder for our brains to automatically “see” their behaviour and use it as a basis for assuming some semblance of brainpower.
So next time you read stories about “frankenchicken”, maybe attempt to avoid snap judgements – your perceptions of chickens aren’t based on their lack of brains, but rather on the constraints of your own.
Click here to take part in Queen Mary University of London’s survey investigating people’s attitudes to the animal mind.
Caroline's work is funded by the ESRC and supported by World Animal Protection.
Editor’s note: Under a trade deal concluded in May, China has begun exporting chicken to the United States. Critics have pointed to China’s record of food safety issues and argued the deal prioritizes commerce over public health. Here Maurice Pitesky, a poultry extension specialist at the University of California, Davis School of Veterinary Medicine with a focus on poultry health and food safety epidemiology, answers five questions about importing Chinese chicken.
Why is the United States importing chicken from China? Do we have a shortage?
Hardly. The United States is the largest poultry producer in the world, and the second-largest poultry exporter after Brazil. However, as part of a recent bilateral trade deal, China has agreed to accept imports of beef and liquefied natural gas from the United States. In exchange, the United States is allowing China to export cooked poultry meat to the United States.
Why can China send us only cooked chicken?
This is most likely in response to concerns over avian influenza transmission from raw poultry to the United States. Viable avian influenza viruses could potentially infect U.S. poultry or birds and spread these novel viruses in North America. Some of these viruses can infect humans.
South and Southeast Asia have dense human populations, with numerous poultry producers, vendors and markets where people are exposed to live birds – all conditions that contribute to the spread of avian flu. Since 2013 China has confirmed 1,557 human cases of H7N9 flu and 370 deaths.
Given China’s history of food safety problems, should US consumers be worried about eating chicken processed there?
China is already the third leading supplier of food and agricultural imports to the United States. U.S. consumers are eating imported Chinese fish, shellfish, juices, canned fruits and vegetables.
If poultry is cooked properly, there is no food safety risk from viruses or bacteria. However, if the poultry is not cooked properly, or if there is some type of cross-contamination – for example, if raw chicken or feathers come into contact with cooked product or packaging material – then zoonotic bacteria like salmonella and campylobacter can cross the species barrier and sicken humans.
Most cases of salmonellosis and campylobacteriosis are thought to be associated with eating raw or undercooked poultry meat, or with cross-contamination of other foods by these items. There are no publicly available data on rates of salmonellosis and campylobacteriosis in China. In the United States, infections from these two bacteria sickened nearly 14,000 people in 2014. Of this group, 3,221 were hospitalized and 41 died.
Poultry meat can also contain contaminants, such as heavy metals, and antibiotic residues if birds are treated with antibiotics inappropriately. Specifically, when poultry farmers misuse antibiotics – in quantity, type or timing – residues can persist in muscle, organs and eggs, building up to toxic and harmful levels in the birds. These risks are probably greater for poultry raised and processed in China than for poultry raised and processed in the United States.
Here in the United States there are strict rules requiring growers to stop giving birds antibiotics for periods of days or weeks before they are processed, and we have a National Residue Program that is designed to test for these compounds in eggs and meat.
China has similar rules, but they are not robustly enforced, and many poultry farmers are not well-informed about them. The Chinese government recently announced a plan to increase surveillance, oversight and monitoring of poultry, livestock and aquatic products to decrease the presence of antibiotic residues by 2020.
Heavy metals in Chinese poultry products may also be an issue. This is a worldwide concern, but it’s especially serious in China because they still burn huge quantities of coal, which releases lead, mercury, cadmium and arsenic. High levels of lead and cadmium have been reported in agricultural areas near Chinese coal mines. These heavy metals can contaminate soil and end up in animal feed and animal meat and eggs.
We really don’t understand how widespread these problems are in China and the Chinese government isn’t very transparent about food safety. That’s starting to change, but there’s nothing like the publicly available data that we have in the United States at the processing plant and retail level.
What will US inspectors do to determine whether Chinese chicken is safe?
The U.S. Department of Agriculture’s Food Safety Inspection Service is responsible for determining whether other countries have meat and poultry safeguards that are equivalent to ours. Chinese poultry processing plants cannot ship cooked poultry to the United States unless they meet that test.
When a foreign program is approved by the USDA, the Food Safety Inspection Service relies on that country’s government to certify that its plants are eligible and conduct regular inspections of the exporting plants. The Food Safety Inspection Service conducts on-site audits of the plants at least annually to verify that they are still meeting the required standards. It will be interesting to see whether the U.S. National Residue Program is involved in those inspections.
Where will chicken processed in China show up in US markets?
This is the million-dollar question. Cooked poultry is considered a processed food item, so it is excluded from the country-of-origin labeling requirements that apply to raw chicken. This means that U.S. consumers will not know they are consuming chicken grown and processed in China. Restaurants are also excluded from country-of-origin labeling, so the cooked poultry could be sold to restaurants without consumers knowing. The first Chinese exporter did not specify the brand name under which its cooked chicken will be sold.
The key issue is cost competitiveness. If China can sell cooked poultry at a competitive price point, there will most likely be a U.S. market for it. At this point, though, the Chinese poultry industry is not as integrated (that is, organized so that one company owns breeder birds, hatcheries, grow-out farms and processing plants) or technologically advanced as the U.S. poultry industry. In the short run this makes it difficult for China to compete with the U.S. poultry industry at any appreciable level, even though Chinese labor costs are lower.
Maurice Pitesky does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
After over 50 years on air, the WUSF-TV station was sold and is getting ready to shut down Oct. 15. SPECIAL TO THE ORACLE
WUSF-TV will officially be going dark Oct. 15.
The official date was announced Friday by Alisa Carmichael, executive administrative specialist with the station.
After 50 years of broadcasting, it was announced in February that the station would be sold in the Federal Communications Commission spectrum auction for $18.7 million.
In February, university spokeswoman Lara Wade said the sale was because the station didn’t “align with our resources and mission and vision. The broadcast TV license was not part of our education mission to continue student success.”
Some of the programming will continue at WEDU.org, according to Carmichael.
This does not impact radio stations such as WUSF 89.7, the local NPR station, Classical WSMR 89.1 and 103.9.
The space the studio occupies was one source of discussion when the idea of selling came about. The Board of Trustees discussed in October 2015 the possibility of renting it to a local station.
However, according to Carmichael’s email, the film studios will be used for video production projects and serve as a learning environment for students guided by WUSF professionals.
On 29 March 1951, shortly after 5 p.m., a hand-grenade-sized pipe bomb exploded in the landmark Grand Central Terminal in New York City. Ordinarily, the detonation of a pipe bomb in a busy commuter terminal at rush hour would be cause for grave public concern, yet the local news media barely acknowledged the event. It […]
Now that the CJEU has confirmed that linking to protected content can in some cases amount to copyright infringement (see here, here and here), a slew of practical questions arises. Not least: how much do copyright owners lose from online platforms hosting links to unauthorized copies of their works uploaded by third parties? In other words, how much in damages should courts award, and on what basis? Two euros per protected work, according to the Paris Court of Appeal's latest decision on the matter. Multiply this by the number of copyright works infringed, itself multiplied by the number of views each file received, divided by two, and you obtain the flat rate going for damages in relation to the secondary liability of platforms indexing links to files "ready for download" on their website. Confused much? Let's simplify matters: here is how the formula would read in pseudo-mathematical terms:
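The original post rendered the formula as an image, which has not survived. As a rough sketch of the court's arithmetic reconstructed from the description above (the function and variable names are illustrative, not the court's):

```python
# Sketch of the Paris Court of Appeal's damages formula, reconstructed
# from the description in this post; names are illustrative only.
RATE_PER_WORK_EUR = 2  # flat rate: two euros per protected work

def estimate_damages(views_per_work):
    """Sum, over each infringed work, two euros times that work's
    recorded view count halved (the court's proxy for likely downloads)."""
    return sum(RATE_PER_WORK_EUR * views / 2
               for views in views_per_work.values())

# For instance, two works whose links were viewed 1,000 and 500 times
# would yield 1,000 + 500 = 1,500 euros.
```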
In this case, the total amounted to 13 million euros, and followed a one-year prison sentence handed down by the Paris Criminal Court. (Paris Court of Appeal, Pol 5, Ch 13, 7 June 2017, D.M. v APP, Microsoft, Sacem and others, available in French here; Paris Criminal Court of First Instance, 2 April 2015 [unreported]).
This 13-million-euro liability (and prison sentence) fell on the owner and manager of the website "wawa-mania.eu". Wawa-mania.eu offered a forum platform allowing members to index links redirecting internet users to servers hosting infringing content that they could then download. Wawa-mania also offered downloadable circumvention tools to remove anti-piracy locks shielding Windows software from copying.
In 2009, the forum-like website "wawa-mania" was subject to investigations in France by the Information Technology Fraud Investigations Brigade (in French, BEFTI, short for 'Brigade d'enquêtes sur les fraudes aux technologies de l'information'), which revealed mass infringement of protected material ranging from videos and music to computer software. Some of the forum members, known as "uploaders", would obtain infringing copies of copyright works from at least four servers, including "rapidshare.com", "Megaupload", "Gigaup.com" and "Free". Once the content was downloaded from these servers, uploaders would re-upload it online and share links to the files on "wawa-mania". The forum-like website generated revenues from advertising spaces on its pages.
Legal proceedings & appeal decision
D.M. first faced litigation on the basis of counterfeiting before the Paris Criminal Court of First Instance ("tribunal correctionnel") on 2 April 2015. The criminal court handed down a one-year prison sentence, together with a range of fines, to sanction D.M.'s continuous provision of means and infrastructure enabling the infringement of protected content. In the same judgment, the criminal Bench also ordered the take-down of the website for a period of two years, and required Google and Yahoo to display the following notice in results for searches of the name D.M. or "wawa-mania": "On 2 April 2015, the Paris Criminal Court of First Instance sentenced D.M. to one year in prison for the counterfeiting of copyright works carried out by uploading links on the forum wawa-mania that enabled the downloading of illegally obtained protected works". The criminal decision will stand, as the right of appeal has now lapsed.
...Two euros for your links.
Civil litigation followed, brought by a number of claimants including Microsoft, SACEM, Disney, Universal City Studios and Twentieth Century Fox, to name only a few. On 2 July 2015, the Paris (civil) Tribunal awarded damages for a range of harms (economic, moral, procedural) to each claimant. The main bill, 13 million euros, concerned the infringement of copyright's economic rights through the provision of links to downloadable material exclusively, and was calculated as per the formula described above: two euros, multiplied by the number of views divided by two, for each copyright work listed on the website. The Paris Tribunal halved the number of views recorded for each work because it recognised that users "viewing", or accessing, the files may not have downloaded them. Halving the number of views before applying it as a coefficient in the damages equation was regarded as a fair account of the "likely" level of infringement. Nothing in the decision explains why the likely level of infringement would be accurately captured by dividing the number of views by two. Is this a rough guess on the part of the court? Was expert evidence submitted to support this calculation? Was it based on the Court's own experience of downloading and streaming, or perhaps that of a reasonable person?
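The court's arithmetic is easy to sketch. In the snippet below, only the two-euro rate and the halving of views come from the decision; the per-work view counts are invented for illustration:

```python
def damages(views_per_work, rate_eur=2.0):
    """Estimate damages as: rate × (views / 2), summed over works.

    The halving reflects the court's assumption that only about
    half of the recorded "views" led to an actual download.
    """
    return sum(rate_eur * (views / 2) for views in views_per_work)

# Hypothetical view counts for three works listed on the forum
print(damages([10_000, 250_000, 1_000_000]))  # → 1260000.0
```

Scaled across the thousands of works indexed on a forum of this kind, even modest per-work view counts compound quickly into a multi-million euro figure.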
The Paris Court of Appeal approved the calculation adopted at first instance. It too regarded the halving of "views" as appropriate, since the claimants had not submitted evidence that viewing or accessing the files consistently led to their download. For this reason, the fixed amount of damages had to account for a probable, not a proven, prejudice. The rest of the tribunal's decision was confirmed in every respect; the Court of Appeal only added further damages to cover litigation costs, totalling an additional 2,200 euros, a relatively small sum beside the multi-million euro sanction.
The decision confirms two things. First, and more generally, it shows that intellectual property enforcement measures do, or can, have 'teeth' in the context of secondary liability, provided that claimants and public prosecutors are in a position to gather the necessary evidence of infringement and take legal action. These remain big "ifs". Second, the decision shows that the "linking" defence that website owners may be tempted to argue will not hold for platforms dedicated to enabling illegal downloads.
The caveats in the liability rules carved out by regulation and case law for file-sharing or link-sharing platforms have limits, and this is one illustration. French courts appear to be stiffening their position on illegal online practices. Reading this decision together with a recent case heard by the French Supreme Court on 6 July 2017 (see here), that is the conclusion one is inclined to draw. Indeed, the highest civil court recently held that internet service and browser providers ought to bear the costs of blocking and filtering injunctions, regardless of their lack of liability. This line of case law is in tune with the EU's 2016 copyright reform proposal, which identifies internet intermediaries as key allies in the fight against infringement and the 'value gap' (see Article 13, Recital 38).
The International Federation of Library Associations and Institutions (IFLA) has called on the World Wide Web Consortium (W3C) to reconsider its decision to incorporate digital locks into official HTML standards. Last week, W3C announced its decision to publish Encrypted Media Extensions (EME)—a standard for applying locks to web video—in its HTML specifications.
IFLA urges W3C to consider the impact that EME will have on the work of libraries and archives:
While recognising both the potential for technological protection measures to hinder infringing uses, as well as the additional simplicity offered by this solution, IFLA is concerned that it will become easier to apply such measures to digital content without also making it easier for libraries and their users to remove measures that prevent legitimate uses of works.
Technological protection measures […] do not always stop at preventing illicit activities, and can often serve to stop libraries and their users from making fair uses of works. This can affect activities such as preservation, or inter-library document supply. To make it easier to apply TPMs, regardless of the nature of activities they are preventing, is to risk unbalancing copyright itself.
IFLA’s concerns are an excellent example of the dangers of digital locks (sometimes referred to as digital rights management or simply DRM): under the U.S. Digital Millennium Copyright Act (DMCA) and similar copyright laws in many other countries, it’s illegal to circumvent those locks or to provide others with the means of doing so. That provision puts librarians in legal danger when they come across DRM in the course of their work—not to mention educators, historians, security researchers, journalists, and any number of other people who work with copyrighted material in completely lawful ways.
Of course, as IFLA’s statement notes, W3C doesn’t have the authority to change copyright law, but it should consider the implications of copyright law in its policy decisions: “While clearly it may not be in the purview of the W3C to change the laws and regulations regulating copyright around the world, they must take account of the implications of their decisions on the rights of the users of copyright works.”
EFF is in the process of appealing W3C’s controversial decision, and we’re urging the standards body to adopt a covenant protecting security researchers from anti-circumvention laws.
How often do you see this when you’re online, whether downloading a new app or software or signing up for some new service?
Click Agree to accept our Terms and Conditions.
Most of us simply click. That is what happened recently to more than 20,000 people in the UK when they accepted the terms and conditions for free Wi-Fi that included a commitment to clean public toilets, hug stray dogs, and paint snails’ shells to brighten up their existence.
Thankfully the Wi-Fi provider, Purple, says it is not going to enforce its “Community Service Clause”.
But it makes a good point. Purple says it added the spoof clause to its terms and conditions for a two-week period to see if anyone would notice. It said in a statement:
The real reason behind our experiment is to highlight the lack of consumer awareness when signing up to use free Wi-Fi.
All users were given the chance to flag up the questionable clause in return for a prize, but remarkably only one individual, which is 0.000045% of all Wi-Fi users throughout the whole two weeks, managed to spot it.
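Taking the statement’s numbers at face value invites a quick sanity check. A sketch, assuming roughly 22,000 sign-ups (the “more than 20,000 people” reported above):

```python
users = 22_000    # assumed sign-ups over the two-week period
spotters = 1      # the lone user who flagged the clause
share = spotters / users * 100
print(f"{share:.4f}%")  # → 0.0045%
```

On those assumptions the share is closer to 0.0045%: still vanishingly small, though some orders of magnitude larger than the figure quoted in the statement.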
Read on, if you dare
We want free online service and free software, and we want it now. So we readily agree to the terms and conditions despite having little idea what we are agreeing to, and the service provider is in no hurry to tell us.
That’s a concern for everyone who readily accepts free Wi-Fi connections in places such as shopping centres, cafes, restaurants, hotels, bars, or any other public Wi-Fi hotspot. The Australian Communications and Media Authority said that as of June 30, 2015, an average of 4.23 million people in Australia had used a public Wi-Fi hotspot, either free or paid.
The same concerns apply when it comes to downloading free software and apps which can sometimes come bundled with other software or extensions, often referred to as Potentially Unwanted Programs. If people don’t read the terms and conditions then they won’t know what else they are agreeing to install.
We have been warned about these problems for years and yet the recent Purple example shows that people still haven’t learned.
Earlier this year the consumer group Choice raised the issue of licence agreements, terms-of-use agreements, and terms and conditions that people never read.
It gave the example Amazon’s Kindle Voyage e-reader, which it said had a minimum of eight documents that needed to be read and agreed to when buying the device, as well as documents to be read to use any subscription service.
The total word count is more than 73,000, which Choice said would take about nine hours to read. Choice even tasked someone with reading the lot, but here’s the abridged version.
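Choice’s nine-hour estimate checks out with simple arithmetic; the 135-words-per-minute pace below is an assumed reading speed for dense legal text:

```python
words = 73_000   # Kindle Voyage terms and conditions, per Choice
wpm = 135        # assumed reading pace for legalese
minutes = words / wpm
print(f"{minutes / 60:.1f} hours")  # → 9.0 hours
```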
Properly informed consent
While the great majority of tech companies operate lawfully, if not ethically, the process of getting actual informed consent remains problematic. At present, just clicking Agree will do, regardless of what lies buried deep in the many words of those terms and conditions.
One survey in Britain found that only 7% of people read the terms and conditions carefully when signing up for an online service or product.
These documents are typically written in legalese, meaning that only a trained lawyer would be able to understand them properly. Yet the simple act of clicking on a check-box constitutes informed consent in the legal sense.
That same survey found that one in five people said they had suffered as a result of agreeing to terms and conditions without having read them carefully. One in ten had been locked into a contract for longer than expected because they didn’t read the small print.
Choice says “lengthy and overly complex contracts” should be considered unfair and has called for reform of the Australian Consumer Law (ACL) to protect people from such agreements.
A readable solution
With billions of dollars at stake, IT companies need to make it clearer just what the consequences of using that product or service will be, including any potential dangers.
If users can give genuinely informed consent, it’s a win-win situation.
For example, if we know we’re agreeing that an online product can use some of our personal information – and we know what that information is – we could receive targeted advertising that might be useful to us, and even be a good fit for our lifestyle.
So what can we do to make sure people are properly informed, in plain language, about the consequences of using a product or service?
One solution that already works well is the way Creative Commons includes a human-readable summary of its licensing conditions. It breaks the licence down to the basics, then highlights anything out of the ordinary.
It’s not difficult to do this, and if you have nothing to hide, the user is unlikely to be scared off by it.
David Tuffley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
Hokusai’s The great wave off Kanagawa remains the enduring image of Japanese art. The print depicts a giant wave with unmistakable frothing tentacles poised to smash a boat below. The boat’s occupants toil uncaring or unaware of the hovering deluge - the curve of their vessel matching the lines of the heaving sea around them. With the intense drama unfolding in the foreground, the central image of the work - the white-capped Mount Fuji - is easily missed, or mistaken for another ocean crest.
Although diminutive in scale, the importance of Hokusai’s “Great Wave” cannot be overstated. The work profoundly influenced the French Impressionist movement, which in turn shaped the course of European Modernism, the artistic and philosophical movement that would define the early 20th century. As such, this small print, exhibited at the National Gallery of Victoria from July, provides a valuable link to the gallery’s recent Van Gogh exhibition.
The most immediate and attractive aspect of Hokusai’s wave is its colour. At 70 years old, Hokusai was a master and created the image using four printing blocks. The astounding power of the work belies its restrictive palette – it’s essentially a study in blue.
The story of this blue pigment highlights the role of cultural exchange at the heart of creative discovery and ranks among the more contradictory tales in the history of art. The vibrant hue, long considered to be quintessentially Japanese, was actually a European innovation.
In truth, it had been invented half a world away, 130 years before Hokusai’s wave broke, in an accident involving one of Europe’s most colourful figures: Johann Conrad Dippel. Born in the actual “Castle Frankenstein” in Germany in 1673, the enigmatic theologian and passionate dissector believed the souls of the living could be funnelled from one corpse to another, thus becoming the rumoured inspiration for Mary Shelley’s masterpiece, Frankenstein.
In his thirties, Dippel had become captivated by the proto-science of alchemy, but like so many in the profession, had failed to convert base metals into gold. He instead settled on the apparently easier task of inventing an elixir of immortality. The consequence was Dippel’s oil, a compound so toxic that two centuries later it would be deployed as a chemical weapon in World War II.
To cut costs in his Berlin laboratory, Dippel shared lab space with the Swiss pigment maker Johann Jacob Diesbach, a fellow scientist engaged in the lucrative business of producing colours. One fateful evening around 1705, when Diesbach was preparing a batch of crushed insects, iron sulphate and potash in a reliable recipe for a deep red pigment, he accidentally used one of Dippel’s implements contaminated with the noxious oil.
The following morning the pair found not the expected red, but a deep blue. The immense value of the substance was immediately clear. The recipe for Egyptian blue used by the Romans had been lost to history some time in the middle ages. Its substitute, lapis lazuli, consisting of crushed Afghan gemstones, sold at astronomical rates. So the discovery of a stable blue colour was literally more valuable than gold. Adding further worth, the pigment could be blended to produce entirely new colours, a process that the costly lapis lazuli did not allow.
The discovery sparked “blue fever” in Europe. Dippel, suddenly forced to flee legal action in Berlin for his controversial theological positions, failed to commercialise the newly named “Prussian blue”, but his dazzling co-invention was a secret too big to keep.
Within a few short years, the recipe had gone into factory production. It was used extensively in painting, wallpaper, flags and postage stamps, and became the official uniform colour of the Prussian Army. People seemed drunk on the stuff. Indeed, they were actually drinking it: by mid-century, the British East India Company was dyeing Chinese tea Prussian blue to increase its exotic appeal back in Europe.
Blue arrives in Asia
In the early 1800s, a Guangzhou entrepreneur deciphered the recipe and began manufacturing the pigment in China at a much lower cost. Despite Japan’s strict ban on all imports and exports, the colour found its way to the printmaking industry in Osaka, Japan where it was trafficked as “bero”, a derivation from the Dutch “Berlyns blaauw” (“Berlin blue”). Its vivid hue, tonal range and foreignness saw it explode in popularity just as it had in Europe.
Hokusai was one of the first Japanese printmakers to boldly embrace the colour, a decision that would have major implications in the world of art. He used it extensively in his series Thirty-Six Views of Mount Fuji (1830), of which the Great Wave was the first; the pigment lent itself especially well to expressing depth in water and distance, atmospheric qualities crucial for rendering land- and seascapes.
Hokusai and his contemporary Hiroshige became renowned for their depictions of pure landscape form. But although extremely popular in mainstream society, these woodblock prints were seen as vulgar by the Japanese literati and beneath consideration for artistic merit.
When Japan’s isolationist policies finally ended under threat of war from the US Navy in 1853, the prints were used as wrapping paper for more worthy trade trinkets.
Following Paris’s International Exposition of 1867, their value shifted dramatically. A showcase at the inaugural Japanese Pavilion elevated the artistic status of woodblock prints, and a craze for collecting them quickly followed. Among the most prized were the striking blue landscapes, particularly those by Hokusai and Hiroshige, which led European artists to deem the colour, incorrectly, idiosyncratically Japanese.
It wasn’t just the colour, style and execution of Hokusai’s prints that made them so radically influential, but the subject matter too. His collection of “manga” sketches elevated everyday street life into the realm of art, ideas that were a revelation for Edgar Degas and Henri de Toulouse-Lautrec. Both borrowed heavily from Hokusai’s depictions of marginal society and the bodies of women in repose.
Claude Monet was so seduced by the “Japonism” aesthetic he acquired 250 Japanese prints, including 23 by Hokusai. The obsession bled from Monet’s art to his life and the painter modelled his garden after a Japanese print while his wife sported a kimono around the house.
Perhaps the single most vividly identifiable influence on the European modernist founders is Van Gogh’s celebrated Starry Night, which owes everything to Hokusai’s blue wave, from its colour to the shape of its sky. In letters to his brother, Van Gogh professed the deep emotional impact the Japanese master had left on him.
Hokusai’s European influence
The importance of Hokusai to the early European modernist movement is both immense and well mapped. Much less known is the extent to which Hokusai had himself borrowed from European image culture. Although in the artist’s lifetime Japan was subject to Sakoku, the 250-year policy that forbade exchange with the outside world on penalty of death, a clandestine group of Japanese artists and scientists had dedicated themselves to studying the exotic mysteries of Western representation.
Hokusai drew influence from a particular “Rangakusha” (scholar of Dutch texts) painter named Shiba Kokan, who experimented with European principles of composition. In The Great Wave, Hokusai abandoned traditional Japanese isometric view, where motifs were scaled according to importance, and instead adopted the dynamic style of Western perspective featuring intersecting lines of sight.
This lent the work its dramatic sense of a wave about to break on top of the viewer. Europeans’ embrace of his final works is due in part to Hokusai’s use of a familiar compositional style.
Yet this historical truth lay dormant for decades as it deeply contradicted the European vision of Japan. In the Western imagination, Japan was a land preserved in amber, a pure and innocent people in close communion with nature whose isolation had sealed them from the horrors that industrialisation had wrought upon Europe.
In reality, Hokusai had skillfully blended European colour and structure with Japanese motifs and techniques into a seamless work of international appeal. Certainly, without Hokusai’s striking print, the great wave of European Modernism might never have happened.
The art of Hokusai will be showing at the National Gallery of Victoria until October 15 2017.
Hugh Davies has been funded to undertake creative research into game cultures in Japan through Asialink.
Imagine you wanted to find books or journal articles on a particular subject. Or find manuscripts by a particular author. Or locate serials, music or maps. You would use a library catalog that includes facts – like title, author, publication date, subject headings and genre.
That information and more is stored in the treasure trove of library catalogs.
It is hard to overstate how important this library catalog information is, particularly as the amount of information expands every day. With this information, scholars and librarians are able to find things in a predictable way. That’s because of the descriptive facts presented in a systematic way in catalog records.
But what if you could also experiment with the data in those records to explore other kinds of research questions – like trends in subject matter, semantics in titles or patterns in the geographic source of works on a given topic?
Now it is possible. The Library of Congress has made 25 million digital catalog records available for anyone to use at no charge. The free data set includes records from 1968 to 2014.
This is the largest release of digital catalog records in history. These records are part of a data ecosystem that crosses decades and parallels the evolution of information technology.
In my research about copyright and library collections, I rely on these kinds of records for information that can help determine the copyright status of works. The data in these records already are embodied in library catalogs. What’s new is the free accessibility of this organized data set for new kinds of inquiry.
The decision reflects a fresh attitude toward shared data by the Library of Congress. It is a symbolic and practical manifestation of the library’s leadership aligned with its mission of public service.
To understand the implications of this news, it helps to know a bit about the history of library catalog records.
Today, search engines let us easily find books we want to borrow from libraries or purchase from any number of sources. Not long ago, this would have seemed magical. Search engines use data about books – like the title, author, publisher, publication date and subject matter – to identify particular books. That descriptive information was gathered over the years in library catalog records by librarians.
The library’s action sheds light on this unseen but critical network. This infrastructure is invisible to most of us as we use libraries, buy books or use search engines.
For many, the idea of a library catalog conjures up the image of card catalogs. The descriptions contained in catalog records are “metadata” – information about information. Early catalog records date back to 1791, just after the French Revolution. The revolutionary government used playing cards to document property seized from the church. The idea was to make a national bibliography of library holdings confiscated during the Revolution.
For many years, library collections were organized individually. For example, when the Library of Congress purchased Thomas Jefferson’s personal library in 1815, it arranged its collections around Jefferson’s personal system, organized around the themes of memory, reason and imagination. (Jefferson based this on Francis Bacon’s own model.) The library continued to arrange its collections on that model into the 19th century.
As the number of books and libraries grew, a more systematic approach was needed. The Dewey Decimal System appeared in 1876 to tackle this challenge. It combined consistent numbers (“classes”) with particular topics. Each class can be further divided for more specific descriptions.
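The prefix structure of Dewey numbers can be sketched in a few lines. The three class labels below are real Dewey classes; the lookup helper is purely illustrative:

```python
# Dewey numbers are hierarchical by prefix: 500 → 510 → 516
dewey = {
    "500": "Natural sciences and mathematics",
    "510": "Mathematics",
    "516": "Geometry",
}

def broader(number):
    """Walk from a specific Dewey class up through its broader parents."""
    chain = [number, number[:2] + "0", number[:1] + "00"]
    labels = []
    for n in chain:
        if n in dewey and dewey[n] not in labels:
            labels.append(dewey[n])
    return labels

print(broader("516"))
# → ['Geometry', 'Mathematics', 'Natural sciences and mathematics']
```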
In the 1890s, the library developed the Library of Congress Classification System. It is still used today to predictably manage millions of items in libraries worldwide.
Catalogs, cards and computers
By the 1960s, systematic descriptions made the transition from analog cards to online catalog systems a natural step. Machine-Readable-Cataloging (or MARC) records were developed to electronically read and interpret the data in bibliographic cataloging records. The structured categorization coincided naturally with the use of computers.
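A MARC bibliographic record is, at heart, a set of numbered fields containing lettered subfields. A minimal sketch, using the real MARC tags 100 (author), 245 (title) and 260 (publication) but an invented record:

```python
# Each MARC field has a numeric tag and lettered subfields.
record = {
    "100": {"a": "Melville, Herman,"},                        # main entry: personal name
    "245": {"a": "Moby Dick;", "c": "by Herman Melville."},   # title statement
    "260": {"b": "Harper,", "c": "1851."},                    # publication info
}

def title(rec):
    """Pull the title proper (245 $a) from a record, if present."""
    return rec.get("245", {}).get("a")

print(title(record))  # → Moby Dick;
```

This numbered, machine-readable structure is what let catalog data move so naturally from cards to computers.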
The Library of Congress remains a primary, but not the only, source of catalog records. Individual libraries produce catalog records that are compiled and circulated through organizations like OCLC, which connects libraries around the globe and offers an online catalog, WorldCat, coordinating catalog records from many libraries into a cohesive online resource. Groups like these charge libraries membership fees for access to the compiled data. Libraries themselves typically do not charge for the catalog records they produce, instead working cooperatively through organizations like OCLC. This may evolve as shared effort and crowdsourced resources are combined with the library’s data in ways that improve search and inquiry. Examples include SHARE and Wikipedia.
One month later
In the short time since the Library of Congress’ data release, we see inklings of what may come. At a Hack-to-Learn event in May, researchers showed off early experiments with the data, including a zoomable list of nine million unique titles and a natural language interface with the data.
For my part, I am considering how to use the library’s data to learn more about the history of publishing. For example, it might be possible to see if there are trends in dates of publication, locations of publishers and patterns in subject matter. It would be fruitful to correlate copyright information retained by the U.S. Copyright Office to see if one could associate particular works with their copyright information, such as registration, renewal and ownership changes. However, those records are in formats that remain difficult to search or manipulate. Records prior to 1978 are not yet available online at all from the U.S. Copyright Office.
Colleagues at the University of Michigan Library are studying the recently released records as a way to practice map-making and explore geographic patterns with visualizations based on the data. They are thinking about gleaning locations from subject metadata and then mapping how those locations shift through time.
There’s a growing expectation that this kind of data should be freely available. This is evidenced by the expanding number of open data initiatives, from institutional repositories such as Deep Blue Data here at the University of Michigan Library to the U.S. government’s data.gov. The U.K.‘s Open Research Data Task Force just released a report discussing technical, infrastructure, policy and cultural matters to be addressed to support open data.
The Library of Congress’ action demonstrates an overarching shift in use of technology to meet historical research missions and advance beyond. Because the data are freely available, anyone can experiment with them.
Melissa Levine has received funding from the Institute for Museum and Library Services.
Drinking beverages containing low-calorie sweeteners may not help you lose weight and may even be bad for your health, according to new research published in the Canadian Medical Association Journal. The researchers, who reviewed a number of studies, found that people who regularly consume low-calorie sweeteners (such as aspartame, sucralose and stevioside) tend to have a higher risk of long-term weight gain, obesity, type 2 diabetes, hypertension and heart disease than those who don’t.
Obesity is a growing, global problem, and excess sugar consumption is suspected of being a major factor in this grim trajectory. In an effort to avoid the health consequences of consuming too much sugar, people have been switching to low-calorie sweeteners. Consumption of these sweeteners has increased significantly in recent years, and the trend is expected to continue.
However, a number of recent studies – including animal studies – have suggested that low-calorie sweeteners may not be that good for your health. For example, there is some evidence that they may negatively affect glucose metabolism, gut microbes and appetite control. But small, individual studies don’t always give a complete picture. In order to get a better idea of what low-calorie sweeteners are doing to our health, the researchers in this latest study conducted a systematic review with a meta-analysis. This means they pooled data from the best studies they could find and re-analysed the combined data in order to generate more reliable statistics.
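The pooling at the heart of a meta-analysis can be sketched in a few lines. The fixed-effect, inverse-variance method below is one standard approach, and the effect sizes and standard errors are invented, not taken from the review:

```python
import math

def pool(effects, ses):
    """Fixed-effect inverse-variance pooling.

    Each study is weighted by 1/SE², so precise (low-variance)
    studies count more. Returns the pooled effect and its SE.
    """
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies: effect estimates and standard errors
effect, se = pool([0.20, 0.35, 0.15], [0.10, 0.20, 0.05])
print(f"{effect:.3f} ± {se:.3f}")  # → 0.169 ± 0.044
```

The pooled standard error is smaller than any single study’s, which is exactly why combining studies yields more reliable statistics than reading them one at a time.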
For their review, the researchers analysed data from seven randomised controlled trials (the “gold standard” for clinical studies) and 30 observational studies. In all, the studies included 400,000 participants.
The trials, which included 1,003 participants who were followed for an average of six months, showed no consistent evidence that low-calorie sweeteners helped people manage their weight. The observational studies, which followed people for an average of ten years, found that people who regularly drank low-calorie sweetened drinks (one or more drinks a day) had an increased risk of moderate weight gain, hypertension, metabolic syndrome, type 2 diabetes and heart disease (including stroke).
Randomised controlled trials can prove that one thing causes another, but observational studies can only show an association between things. So we need to treat the findings from observational studies with more caution, as there may be other factors (known as “confounders”) that explain the associations. For example, it is possible that heavier people consume drinks with low-calorie sweeteners to try and lose weight, rather than that the consumption of these drinks cause an increase in weight.
The review also only looked at beverages that contained low-calorie sweeteners, not at the intake of these sweeteners in other foods. Nowadays, low-calorie sweeteners are found in a range of foods, including yogurts, sauces, baked goods and “health bars”. Many brands of toothpaste also contain them. It’s possible that the people in the observational studies who declared that they never drank beverages containing low-calorie sweeteners consumed these sweeteners in other foods.
How does it compare with sugar?
Unfortunately, none of the studies in the review compared consuming low-calorie sweetened drinks with sugary drinks. What is clear from earlier studies is that there is a close parallel between the rise in sugar consumption and increases in global obesity. And other studies have found that consuming sugar-sweetened beverages is associated with an increased risk of type 2 diabetes and coronary heart disease, independent of weight.
It has become increasingly clear: sugar is very bad for your health. So it seems logical to assume that if high calorie sugar is replaced with low-calorie sweetener then we reduce the risk of weight gain, type 2 diabetes and heart disease. What this latest study suggests is that this may not be the case.
The evidence against low-calorie sweeteners may not be watertight, but, if the latest review is correct, it might be best if we avoided them. Maybe we’ll just have to wean ourselves off sweet-tasting foods altogether, regardless of what they’re sweetened with.
Rachel Adams does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no relevant affiliations beyond her academic appointment.