Shared posts

15 Aug 19:35

Charlottesville, Virginia: the history of the statue at the centre of violent unrest

by Jenny Woodley, Lecturer in Modern American History, Nottingham Trent University
The statue of Robert E. Lee in Charlottesville. Shutterstock

The violent scenes in Charlottesville, Virginia, that led to the death of one woman and left many more injured began as a dispute over a statue of General Robert E. Lee, which sits in a local public park. However, the controversy feeds into a much wider debate that is as old as the United States itself. So who was Lee and why does a memorial to him trouble so many people?

The meeting of white supremacists in Charlottesville was originally held under the pretext of demonstrating against plans to remove the statue. The Charlottesville city council voted in February for it to be removed from the recently renamed Emancipation Park (formerly Lee Park). The decision came as part of a movement to challenge the ubiquity of Confederate symbols in the South.

These statues, for their opponents, signify the oppression of African Americans under slavery and the Jim Crow segregation laws. They serve as daily reminders of the vulnerability of black people. The message of such monuments is the same to many of their defenders, even if their interpretation is different. To the white supremacists who gathered on the streets of Charlottesville, the statue of Lee represents white military and political power.

In the decades after the Civil War, memorials celebrating the south’s valiant effort and glorious defeat appeared all over the region. They embodied the myth of the “lost cause” – the idea that the war had been fought to defend states’ rights, rather than slavery. In this interpretation, the south only lost because of the industrial might of the northern “aggressor”.

This doctrine came to prominence during the Jim Crow era when whites implemented racial segregation through violent, extra-legal and then legal means. The lost cause memory was used to justify and enforce white supremacy.

For many, the Confederate memorials continue to represent this repression. They are a celebration of southern identity as white. Since the rise of the Black Lives Matter movement, the slogan “BLM”, as well as the names of victims of police shootings, has appeared on memorials around the country. “Black Lives Matter” was sprayed on Charlottesville’s Lee statue in the days following the Charleston church shooting in 2015. Along with calls for the removal of Confederate flags from civic buildings, there have been increasingly vocal, and successful, demands to reconsider the place of such monuments in public spaces.

Who was Robert E. Lee?

The statue in Charlottesville is of Lee atop his horse, Traveller. The Confederate general and native of Virginia holds a hat in one hand and his horse’s reins in the other, his sword ready at his side.

It was unveiled by Lee’s great-granddaughter at a ceremony in May 1924. As was the custom on these occasions it was accompanied by a parade and speeches. In the dedication address, Lee was celebrated as a hero, who embodied “the moral greatness of the Old South”, and as a proponent of reconciliation between the two sections. The war itself was remembered as a conflict between “interpretations of our Constitution” and between “ideals of democracy.”

Here was the states’ rights argument. The south fought a noble war over its right to self-determination, rather than an effort to keep millions enslaved. Lee, claimed one of the speakers, “abhorred slavery”. In his position as commander of the Confederate Army of Northern Virginia, Lee represented military endeavour rather than a political struggle to uphold human bondage.

This has made Lee a powerful symbol of the Confederacy. He allowed white southerners to ignore the central role of slavery in the war. They could forget that the southern states seceded in order to uphold slavery and that their defeat meant freedom to millions of enslaved people. There was no space for black memories of emancipation in the south’s public spaces. Confederates were white and their monuments were celebrations of whiteness.

Aside from the common soldier, Lee is one of the most frequently memorialised figures in Confederate statuary. In 2016, the Southern Poverty Law Center catalogued all publicly supported spaces dedicated to the Confederacy, finding 203 examples named for Robert E. Lee. Streets, highways, counties, cities, parks, monuments and 52 public schools are named for the Confederate general. Up until 2015, Lee’s birthday was officially marked in five states.

The same year as Lee’s statue appeared in Charlottesville, Virginia passed laws which strengthened definitions of who was “colored” and who was “white”, and which reinforced the law prohibiting interracial marriage. Then, two years later, the state passed a law to enforce racial segregation in places of public entertainment.

The monument to Lee served the same purpose as the legislation – to remind African Americans of their perceived place and inferiority. White nationalists gathered to protect the statue in 2017 because they wanted to celebrate its message. Like the original creators and supporters of the Lee monument, they sought to celebrate a white supremacist vision not just of the past, but of the present.

The Conversation

Jenny Woodley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

14 Aug 19:35

Why aren't we more outraged about eating chicken?

by Caroline Spence, PhD Candidate, Biological and Experimental Psychology, Queen Mary University of London
Chickens have personalities, too. Pixabay

Like a B-movie for a post-Brexit era, consumers in Britain may soon be unwillingly cast in the 2019 blockbuster, Attack of the Chlorine Chickens. If news headlines are to be believed, flocks of toxic fowl are waiting to storm Britain’s shores like mini featherless zombies as part of a US-UK trade deal.

But before getting into a flap about the health risks of chlorine, we should perhaps pause to consider why anyone would bleach a chicken in the first place. The practice exists primarily to mitigate the disease risks of raising nearly 9 billion chickens in overcrowded environments with low standards of animal welfare.

However, the failure to frame the chickens’ welfare as anything other than a side issue raises important questions about the nature of our interactions with animals. Why are chickens so far down the pecking order for moral concern? Would our reaction have been the same if the animal in question were a mammal? The moral outrage sparked when horsemeat was found in beef burgers in Britain and Ireland in 2013 would suggest not.

Despite the widespread symbolism of the cockerel across cultures, history shows that we’ve never really been concerned about the welfare of chickens. Until the late 18th century, cock-throwing – tying a chicken to a stake and pelting it with objects until it felt the sweet release of death – was an extremely popular pastime in Britain. The practice was eventually outlawed on grounds of cruelty, but research has drawn parallels between cock-throwing and the widespread appearance of chickens in modern video games, where they are customarily killed or used for chicken-kicking competitions. I doubt there are many video games in which players beat up dogs for kicks.

So what is it about our attitude to chickens that encourages us to disregard their widespread maltreatment? Psychological research into people’s beliefs repeatedly throws up the common perception that chickens are close to the bottom of the pile when it comes to cognitive abilities.

Yummy? Pixabay

Yet this assumption flies in the face of scientific evidence. Alongside characteristics associated with sentience in other species – such as pain perception or emotions – chickens communicate, show sensitivity to differing contexts and display personalities. This disconnect between our perception of chickens and the reality of their mental lives is undoubtedly important. The more we see an animal as “minded”, the more likely we are to believe its welfare should be protected.

Psychologists used to believe that which animals we consider to have minds was determined mainly by social factors such as cultural background. However, we now know that a range of factors, such as our age and sex, affect our willingness to attribute mental capacities to animals. For most animals it also appears that simple familiarity helps – owning a pet typically increases the mental faculties we associate with that particular species.

This is logical since the greater our contact with an animal, the more likely our chances of observing behaviour that we recognise as intelligent. And yet having a chicken in our clutches doesn’t appear to help their plight. One study showed that, in a group of students, keeping chickens had no effect on the mental characteristics participants associated with them. Only by actively training the chickens in cognitive tasks did the students’ attitudes begin to change.

New perspective

But why doesn’t general contact with chickens alter our views on their brainpower? Our latest paper, published in Trends in Cognitive Sciences, argues that we should also consider how our own cognitive mechanisms influence our judgements about how intelligent an animal is. We are currently in the process of looking at how consistent people are when making attributions of mind to other species.

Research already tells us that context and behavioural similarity between animals and humans are central factors in our psychological interpretation of animals’ actions. We also know that mirror neurons – a type of brain cell that fires both when we perform an action and when we watch others perform the same action – are automatically activated when we watch both humans and other animals carry out similar actions to achieve an assumed goal. This means that when we see a rat reach out to grasp a food item, our brain is activated using similar mechanisms to those we would use to interpret the behaviour of a human doing the same thing.

These findings lend weight to the theory that humans attribute cognitive abilities across species based on how they view specific behavioural events, such as grasping food or chewing.

Moving like a chicken might therefore be a major disadvantage when you’re being compared with other farmyard inhabitants such as cows or pigs. Even when we spend time observing chickens, it is harder for our brains to automatically “see” their behaviour and use it as a basis for assuming some semblance of brainpower.

So next time you read stories about “frankenchicken”, maybe attempt to avoid snap judgements – your perceptions of chickens aren’t based on their lack of brains, but rather on the constraints of your own.

Click here to take part in Queen Mary University of London’s survey investigating people’s attitudes to the animal mind.

The Conversation

Caroline's work is funded by the ESRC and supported by World Animal Protection.

14 Aug 19:28

How safe is chicken imported from China? 5 questions answered

by Maurice Pitesky, Lecturer and Assistant Specialist in Cooperative Extension, University of California, Davis
Cooked chicken meat imported from China could end up in U.S. restaurant meals without information about its origin. Jacek Chabraszewski/Shutterstock

Editor’s note: Under a trade deal concluded in May, China has begun exporting chicken to the United States. Critics have pointed to China’s record of food safety issues and argued the deal prioritizes commerce over public health. Here Maurice Pitesky, a poultry extension specialist at the University of California, Davis School of Veterinary Medicine with a focus on poultry health and food safety epidemiology, answers five questions about importing Chinese chicken.

Why is the United States importing chicken from China? Do we have a shortage?

Hardly. The United States is the largest poultry producer in the world, and the second-largest poultry exporter after Brazil. However, as part of a recent bilateral trade deal, China has agreed to accept imports of beef and liquefied natural gas from the United States. In exchange, the United States is allowing China to export cooked poultry meat to the United States.

Why can China send us only cooked chicken?

This is most likely in response to concerns over avian influenza transmission from raw poultry to the United States. Viable avian influenza viruses in raw poultry could potentially infect U.S. poultry or wild birds and spread these novel viruses in North America. Some of these viruses can infect humans.

South and Southeast Asia have dense human populations, with numerous poultry producers, vendors and markets where people are exposed to live birds – all conditions that contribute to the spread of avian flu. Since 2013 China has confirmed 1,557 human cases of A(H7N9) flu and 370 deaths.

A man selects chickens at a wholesale market in Shanghai, Jan. 21, 2014. AP

Given China’s history of food safety problems, should US consumers be worried about eating chicken processed there?

China is already the third leading supplier of food and agricultural imports to the United States. U.S. consumers are eating imported Chinese fish, shellfish, juices, canned fruits and vegetables.

If poultry is cooked properly, there is no food safety risk from viruses or bacteria. However, if the poultry is not cooked properly, or if there is some type of cross-contamination – for example, if raw chicken or feathers come into contact with cooked product or packaging material – then zoonotic bacteria like salmonella and campylobacter can cross the species barrier and sicken humans.

Most cases of salmonellosis and campylobacteriosis are thought to be associated with eating raw or undercooked poultry meat, or with cross-contamination of other foods by these items. There are no publicly available data on rates of salmonellosis and campylobacteriosis in China. In the United States, infections from these two bacteria sickened nearly 14,000 people in 2014. Of this group, 3,221 were hospitalized and 41 died.

Poultry meat can also contain contaminants, such as heavy metals, and antibiotic residues if birds are treated with antibiotics in an inappropriate fashion. Specifically, when poultry farmers use antibiotics inappropriately (in quantity, type or timing), residues can persist in muscle, organs and eggs, building up to toxic and harmful levels in the birds. These risks are probably greater for poultry raised and processed in China than for poultry raised and processed in the United States.

Here in the United States there are strict rules requiring growers to stop giving birds antibiotics for periods of days or weeks before they are processed, and we have a National Residue Program that is designed to test for these compounds in eggs and meat.

U.S. Department of Agriculture Food Safety Inspection Service inspectors check the temperature of chicken carcasses at various control points in a processing plant to compare them with those measured and recorded by the plant to prevent multiplication of pathogenic bacteria, August 10, 2012. Lester Shepherd, USDA

China has similar rules, but they are not robustly enforced, and many poultry farmers are not well-informed about them. The Chinese government recently announced a plan to increase surveillance, oversight and monitoring of poultry, livestock and aquatic products to decrease the presence of antibiotic residues by 2020.

Heavy metals in Chinese poultry products may also be an issue. This is a worldwide concern, but it’s especially serious in China because they still burn huge quantities of coal, which releases lead, mercury, cadmium and arsenic. High levels of lead and cadmium have been reported in agricultural areas near Chinese coal mines. These heavy metals can contaminate soil and end up in animal feed and animal meat and eggs.

We really don’t understand how widespread these problems are in China and the Chinese government isn’t very transparent about food safety. That’s starting to change, but there’s nothing like the publicly available data that we have in the United States at the processing plant and retail level.

What will US inspectors do to determine whether Chinese chicken is safe?

The U.S. Department of Agriculture’s Food Safety Inspection Service is responsible for determining whether other countries have meat and poultry safeguards that are equivalent to ours. Chinese poultry processing plants cannot ship cooked poultry to the United States unless they meet that test.

When a foreign program is approved by the USDA, the Food Safety Inspection Service relies on that country’s government to certify that its plants are eligible and conduct regular inspections of the exporting plants. The Food Safety Inspection Service conducts on-site audits of the plants at least annually to verify that they are still meeting the required standards. It will be interesting to see whether the U.S. National Residue Program is involved in those inspections.

Demand for meat in China is rising along with incomes. U.S. beef producers are eager to export to China. USDA

Where will chicken processed in China show up in US markets?

This is the million-dollar question. Cooked poultry is considered a processed food item, so it is excluded from the country-of-origin labeling requirements that apply to raw chicken. This means that U.S. consumers will not know they are consuming chicken grown and processed in China. Restaurants are also excluded from country-of-origin labeling, so the cooked poultry could be sold to restaurants without consumers knowing. The first Chinese exporter did not specify the brand name under which its cooked chicken is being sold.

The key issue is cost competitiveness. If China can sell cooked poultry at a competitive price point, there will most likely be a U.S. market for it. At this point, though, the Chinese poultry industry is not as integrated (that is, organized so that one company owns breeder birds, hatcheries, grow-out farms and processing plants) or technologically advanced as the U.S. poultry industry. In the short run this makes it difficult for China to compete with the U.S. poultry industry at any appreciable level, even though Chinese labor costs are lower.

The Conversation

Maurice Pitesky does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

14 Aug 11:18

WUSF-TV goes dark in October

by oracleeditor@gmail.com (Miki Shine, Editor in Chief)

After over 50 years on air, the WUSF-TV station was sold and is getting ready to shut down Oct. 15. SPECIAL TO THE ORACLE

WUSF-TV will officially be going dark Oct. 15.

The official date was announced Friday by Alisa Carmichael, executive administrative specialist with the station.

After 50 years of broadcasting, it was announced in February that the station would be sold in the Federal Communications Commission spectrum auction for $18.7 million.

In February, university spokeswoman Lara Wade said the sale was because the station didn’t “align with our resources and mission and vision. The broadcast TV license was not part of our education mission to continue student success.”

Some of the programming will continue at WEDU.org, according to Carmichael.

The sale does not affect WUSF’s radio stations: WUSF 89.7, the local NPR station, and Classical WSMR 89.1 and 103.9.

The space the studio occupies was one source of discussion when the idea of selling came about. The Board of Trustees discussed in October 2015 the possibility of renting it to a local station.

However, according to Carmichael’s email, the film studios will be used for video production projects and serve as a learning environment for students guided by WUSF professionals.

31 Jul 16:48

Ghoulish Acts & Dastardly Deeds

by Alan Bellows

Ghoulish Acts & Dastardly Deeds:

On 29 March 1951, shortly after 5 p.m., a hand-grenade-sized pipe bomb exploded in the landmark Grand Central Terminal in New York City. Ordinarily, the detonation of a pipe bomb in a busy commuter terminal at rush hour would be cause for grave public concern, yet the local news media barely acknowledged the event. It […]

31 Jul 11:17

France: 13 million in damages awarded for linking to downloadable copyright works

by noreply@blogger.com (Mathilde Pavis)

Now that the CJEU has confirmed that linking to protected content can in some cases amount to copyright infringement (see here, here and here), a slew of practical questions arises. Not least: how much do copyright owners lose from online platforms hosting links to unauthorized copies of their works uploaded by third parties? In other words, how much in damages should courts award, and on what basis? Two euros per protected work, according to the Paris Court of Appeal's latest decision on the matter. Multiply this by the number of copyright works infringed, itself multiplied by the number of views each file received, divided by two, and you obtain the flat rate going for damages in relation to the secondary liability of platforms indexing links to files "ready for download" on their website. Confused much? To simplify matters, here is how the formula would read in pseudo-mathematical terms:
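The graphic showing the formula has not survived in this copy, but the calculation can be sketched from the description above. In this rough reconstruction, the €2 rate and the halving of view counts come from the decision as reported; the function name and the sample figures are purely illustrative:

```python
def award_damages(views_per_work, rate_eur=2.0):
    """Damages as described by the Paris Court of Appeal:
    2 euros per copyright work, multiplied by that work's view
    count dividedted by two (on the court's assumption that only
    around half of the views led to an actual download)."""
    return sum(rate_eur * (views / 2) for views in views_per_work)

# Illustrative figures only -- not the actual case data:
# two infringed works, viewed 1,000 and 250 times respectively.
print(award_damages([1000, 250]))  # 2*500 + 2*125 = 1250.0
```

Applied across every work listed on the site, with each work's recorded view count, this flat-rate approach is what produced the headline figure in this case.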

In this case, the total amounted to 13 million euros, and followed a one-year prison sentence handed down by the Paris Criminal Court (Paris Court of Appeal, Pol 5, Ch 13, 7 June 2017, D.M. v APP, Microsoft, Sacem and others, available in French here; Paris Criminal Court of First Instance, 2 April 2015 [unreported]).

This 13-million liability (and prison sentence) fell on the owner and manager of the website "wawa-mania.eu". Wawa-mania.eu offered a forum platform allowing members to index links redirecting internet users to servers hosting infringing content they could then download. Wawa-mania also offered downloadable circumvention tools to remove anti-piracy locks shielding Windows software from copying. 

Facts 
In 2009, the forum-like website "wawa-mania" was subject to investigations in France by the Information Technology Fraud Investigations Brigade (in French, BEFTI, short for 'Brigade d'enquêtes sur les fraudes aux technologies de l'information'), which revealed mass infringement of protected material ranging from videos and music to computer software. Some of the forum members, known as "uploaders", would obtain infringing copies of copyright works from at least four servers, including "rapidshare.com", "Megaupload", "Gigaup.com" and "Free". Once the content was downloaded from these servers, uploaders would re-upload it online and share links to the files on "wawa-mania". The forum generated revenues from advertising space on its pages.

A penny for your thoughts...
The owner of "wawa-mania", known as 'D.M.', admitted to creating the website in 2006 and managing it ever since, renting server space from a third-party host for this purpose. He explained in preliminary hearings that, as "super administrator" of the website, he handled content editing, user management and updates to the operating software, and held special rights of access to the server. D.M. also admitted to knowing that most of the content shared infringed intellectual property rights, but stressed that his website hosted no files and merely listed links to files located on different servers.

Legal proceedings & appeal decision
D.M. first faced proceedings for counterfeiting (copyright infringement) before the Paris Criminal Court of First Instance ("tribunal correctionnel") on 2 April 2015. The criminal court handed down a one-year prison sentence, together with a range of fines, to sanction D.M.'s continuous provision of the means and infrastructure enabling the infringement of protected content. In the same judgment, the criminal bench also ordered the take-down of the website for a period of two years, and required Google and Yahoo to display the following notice in results for searches on the name D.M. or "wawa-mania": "On 2 April 2015, the Paris Criminal Court of First Instance sentenced D.M. to a one-year prison sentence for the counterfeit of copyright works carried out by uploading links on the forum wawa-mania that enabled the downloading of protected works illegally obtained". The criminal decision will stand, as the right of appeal has now lapsed.
...Two euros for your links.

Civil litigation followed, brought by a number of claimants including Microsoft, SACEM, Disney, Universal City Studios and Twentieth Century Fox – to name only a few. On 2 July 2015, the Paris (civil) Tribunal awarded each claimant damages under several heads (economic, moral, procedural). The main bill, 13 million euros, concerned the infringement of copyright's economic rights through the provision of links to downloadable material exclusively, and was calculated as per the formula described above, i.e. two euros per copyright work listed on the website, multiplied by that work's number of views divided by two. The Paris Tribunal halved the number of views recorded for each work because it recognised that users "viewing", or accessing, the files may not have downloaded them. Halving the number of views before applying it as a coefficient in the damages calculation was regarded as a fair account of the "likely" levels of infringement. Nothing in the decision explains why the likely level of infringement would be accurately obtained by dividing the number of views by two. Is this a rough guess on the part of the court? Was expert evidence submitted to support this calculation? Was it based on the court's own experience of downloading and streaming – or perhaps that of a reasonable person?

The Paris Court of Appeal approved the calculation put forward at first instance. It too regarded the division of "views" by two as appropriate, since the claimants did not submit evidence that viewing or accessing the files consistently led to their download. For this reason, the fixed amount of damages had to account for a probable, rather than proven, prejudice. The rest of the tribunal's decision was confirmed in every respect; the Court of Appeal only added further damages to cover litigation costs, totalling an additional 2,200 euros – a relatively small sum in comparison to the multi-million-euro sanction.

The decision confirms two things. First, and more generally, it shows that intellectual property enforcement measures do, or can, have 'teeth' in the context of secondary liability – provided that claimants and public prosecutors are in a position to gather the necessary evidence of infringement and take legal action. These remain big "ifs". Second, the decision shows that the "linking" defence website owners may be tempted to raise will not hold in the context of platforms dedicated to enabling illegal downloads.

The caveats in the liability rules carved out by regulation and jurisprudence for file-sharing or link-sharing platforms have limits, and this is one illustration. It appears that the French courts are stiffening their position with regard to illegal online practices. Looking at this decision together with a recent case heard by the French Supreme Court on 6 July 2017 (see here), that is the conclusion one is inclined to come to. Indeed, the highest civil court recently held that internet service and browser providers ought to cover the costs of blocking and filtering injunctions, regardless of their lack of liability. This line of jurisprudence is in tune with the EU's 2016 copyright reform proposal, which identifies internet intermediaries as key allies in the fight against infringement and the 'value gap' (see Article 13, Recital 38).

28 Jul 19:40

Librarians Call on W3C to Rethink its Support for DRM

by elliot

The International Federation of Library Associations and Institutions (IFLA) has called on the World Wide Web Consortium (W3C) to reconsider its decision to incorporate digital locks into official HTML standards. Last week, W3C announced its decision to publish Encrypted Media Extensions (EME)—a standard for applying locks to web video—in its HTML specifications.

IFLA urges W3C to consider the impact that EME will have on the work of libraries and archives:

While recognising both the potential for technological protection measures to hinder infringing uses, as well as the additional simplicity offered by this solution, IFLA is concerned that it will become easier to apply such measures to digital content without also making it easier for libraries and their users to remove measures that prevent legitimate uses of works.

[…]

Technological protection measures […] do not always stop at preventing illicit activities, and can often serve to stop libraries and their users from making fair uses of works. This can affect activities such as preservation, or inter-library document supply. To make it easier to apply TPMs, regardless of the nature of activities they are preventing, is to risk unbalancing copyright itself.

IFLA’s concerns are an excellent example of the dangers of digital locks (sometimes referred to as digital rights management or simply DRM): under the U.S. Digital Millennium Copyright Act (DMCA) and similar copyright laws in many other countries, it’s illegal to circumvent those locks or to provide others with the means of doing so. That provision puts librarians in legal danger when they come across DRM in the course of their work—not to mention educators, historians, security researchers, journalists, and any number of other people who work with copyrighted material in completely lawful ways.

Of course, as IFLA’s statement notes, W3C doesn’t have the authority to change copyright law, but it should consider the implications of copyright law in its policy decisions: “While clearly it may not be in the purview of the W3C to change the laws and regulations regulating copyright around the world, they must take account of the implications of their decisions on the rights of the users of copyright works.”

EFF is in the process of appealing W3C’s controversial decision, and we’re urging the standards body to adopt a covenant protecting security researchers from anti-circumvention laws.

24 Jul 17:58

How not to agree to clean public toilets when you accept any online terms and conditions

by David Tuffley, Senior Lecturer in Applied Ethics and Socio-Technical Studies, Griffith University
Be careful what you agree to, you could be cleaning the public toilets. Flickr/Ewan Munro, CC BY-SA

How often do you see this when you’re online, whether downloading a new app or software or signing up for some new service?

Click Agree to accept our Terms and Conditions.

You click on it, but then discover you’ve just agreed to give up your future first-born child or clean public toilets for 1,000 hours.

This is what happened recently to more than 20,000 people in the UK when they accepted the terms and conditions for free Wi-Fi that included a commitment to clean public toilets, hug stray dogs, and paint snails’ shells to brighten up their existence.

Thankfully the Wi-Fi provider, Purple, says it is not going to enforce its “Community Service Clause”.

But it makes a good point. Purple says it added the spoof clause to its terms and conditions for a two-week period to see if anyone would notice. It said in a statement:

The real reason behind our experiment is to highlight the lack of consumer awareness when signing up to use free Wi-Fi.

All users were given the chance to flag up the questionable clause in return for a prize, but remarkably only one individual, which is 0.000045% of all Wi-Fi users throughout the whole two weeks, managed to spot it.

Read on, if you dare

We want free online services and free software, and we want them now. So we readily agree to the terms and conditions despite having little idea what we are agreeing to, and the service provider is in no hurry to tell us.

That’s a concern for everyone who readily accepts free Wi-Fi connections in places such as shopping centres, cafes, restaurants, hotels, bars, or any other public Wi-Fi hotspots. The Australian Communications and Media Authority said that as of June 30, 2015, an average of 4.23 million people in Australia had used a public Wi-Fi hotspot, either free or paid.

The same concerns apply when it comes to downloading free software and apps which can sometimes come bundled with other software or extensions, often referred to as Potentially Unwanted Programs. If people don’t read the terms and conditions then they won’t know what else they are agreeing to install.

We have been warned about these problems for years and yet the recent Purple example shows that people still haven’t learned.

Earlier this year the consumer group Choice raised the issue of licence agreements, terms-of-use agreements, and terms and conditions that people never read.

It gave the example of Amazon’s Kindle Voyage e-reader, which it said had a minimum of eight documents that needed to be read and agreed to when buying the device, as well as documents to be read to use any subscription service.

The total word count is more than 73,000, which Choice said would take about nine hours to read. It even tasked someone with reading the lot, but here’s the abridged version.

A shorter version… thankfully.
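
Choice’s nine-hour estimate is easy to sanity-check. At a reading speed of roughly 135 words per minute (an assumed figure; dense legalese is usually read more slowly than ordinary prose), 73,000 words works out to about nine hours. A minimal sketch:

```python
# Estimate reading time for a body of legal text.
# The 73,000-word count comes from Choice's Kindle Voyage example;
# the words-per-minute rate is an assumption for illustration.
def reading_time_hours(word_count: int, words_per_minute: float = 135) -> float:
    """Return the estimated reading time in hours."""
    return word_count / words_per_minute / 60

hours = reading_time_hours(73_000)
print(f"{hours:.1f} hours")  # prints "9.0 hours" at 135 wpm
```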

Properly informed consent

While the great majority of tech companies operate lawfully, if not ethically, the process of getting actual informed consent remains problematic. At present, just clicking Agree will do, regardless of what lies buried deep in the many words of those terms and conditions.

One survey in Britain found that only 7% of people read the terms and conditions carefully when signing up for an online service or product.

These documents are typically written in legalese, meaning that only a trained lawyer would be able to understand them properly. Yet the simple act of clicking on a check-box constitutes informed consent in the legal sense.

That same survey found that one in five people said they had suffered as a result of agreeing to terms and conditions without having read them carefully. One in ten had been locked into a contract for longer than expected because they didn’t read the small print.

Choice says “lengthy and overly complex contracts” should be considered unfair and has called for reform of the Australian Consumer Law (ACL) to protect people from such agreements.

A readable solution

With billions of dollars at stake, IT companies need to make it clearer just what the consequences of using that product or service will be, including any potential dangers.

If users can give genuinely informed consent, it’s a win-win situation.

For example, if we know we’re agreeing that an online product can use some of our personal information – and we know what that information is – we could receive targeted advertising that might be useful to us, and even be a good fit for our lifestyle.

So what can we do to make sure people are properly informed, in plain language, about the consequences of using a product or service?

One solution that already works well is the way Creative Commons includes a human-readable summary of its licensing conditions. It breaks the licence down to the basics, then highlights anything out of the ordinary.

It’s not difficult to do this, and if you have nothing to hide, the user is unlikely to be scared off by it.

The Conversation

David Tuffley does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

21 Jul 18:11

Friday essay: from the Great Wave to Starry Night, how a blue pigment changed the world

by Hugh Davies, Senior Lecturer in Media, La Trobe University
Detail from Katsushika Hokusai, The great wave off Kanagawa (Kanagawa oki namiura), (1830–34), from the Thirty-six views of Mt Fuji (Fugaku-sanjū-rokkei) National Gallery of Victoria, Melbourne Felton Bequest, 1909 (426-2)

Hokusai’s The great wave off Kanagawa remains the enduring image of Japanese art. The print depicts a giant wave with unmistakable frothing tentacles poised to smash a boat below. The boat’s occupants toil uncaring or unaware of the hovering deluge - the curve of their vessel matching the lines of the heaving sea around them. With the intense drama unfolding in the foreground, the central image of the work - the white-capped Mount Fuji - is easily missed, or mistaken for another ocean crest.

Although the work is diminutive in scale, its importance cannot be overstated. It profoundly influenced the French Impressionist movement, which in turn shaped the course of European Modernism, the artistic and philosophical movement that would define the early 20th century. As such, this small print, exhibited at the National Gallery of Victoria from July, provides a valuable link to the gallery’s recent Van Gogh exhibition.

The most immediate and attractive aspect of Hokusai’s wave is its colour. At 70 years old, Hokusai was a master and created the image using four printing blocks. The astounding power of the work belies its restrictive palette – it’s essentially a study in blue.

The story of this blue pigment highlights the role of cultural exchange at the heart of creative discovery and ranks among the more contradictory tales in the history of art. The vibrant hue, long considered to be quintessentially Japanese, was actually a European innovation.

Detail from Katsushika Hokusai, The great wave off Kanagawa (Kanagawa oki namiura), (1830–34), from the Thirty-six views of Mt Fuji (Fugaku-sanjū-rokkei). 25.7 × 37.7 cm. National Gallery of Victoria, Melbourne Felton Bequest, 1909 (426-2)

Colourful figures

In truth, it had been invented half a world away, 130 years before Hokusai’s wave broke, in an accident involving one of Europe’s most colourful figures: Johann Conrad Dippel. Born in the actual “Castle Frankenstein” in Germany in 1673, the enigmatic theologian and passionate dissector believed the souls of the living could be funnelled from one corpse to another, thus becoming the rumoured inspiration for Mary Shelley’s masterpiece, Frankenstein.

In his thirties, Dippel had become captivated by the proto-science of alchemy, but like so many in the profession, had failed to convert base metals into gold. He instead settled on the apparently easier task of inventing an elixir of immortality. The consequence was Dippel’s oil, a compound so toxic that two centuries later it would be deployed as a chemical weapon in World War II.

To cut costs in his Berlin laboratory, Dippel lab-shared with the Swiss pigment maker Johann Jacob Diesbach, a fellow scientist engaged in the lucrative business of producing colours. One fateful evening around 1705, when Diesbach was preparing a batch of crushed insects, iron sulphate and potash in a reliable recipe for a deep red pigment, he accidentally used one of Dippel’s implements, contaminated with the noxious oil.

The following morning the pair found not the expected red, but a deep blue. The immense value of the substance was immediately clear. The recipe for Egyptian blue used by the Romans had been lost to history some time in the Middle Ages. Its substitute, lapis lazuli, consisting of crushed Afghan gemstones, sold at astronomical rates. So the discovery of a stable blue colour was more valuable than gold. Adding further worth, the pigment could be blended to produce entirely new colours, a process that the costly lapis lazuli did not allow.

The discovery sparked “blue fever” in Europe. Dippel, suddenly forced to flee legal action in Berlin for his controversial theological positions, failed to commercialise the newly named “Prussian blue”, but his dazzling co-invention was a secret too big to keep.

Within a few short years, the recipe had gone into factory production. It was used extensively in painting, wallpaper, flags and postage stamps, and became the official uniform colour of the Prussian Army. People seemed drunk on the stuff. Indeed, they were actually drinking it. By mid-century, the British East India Company was dyeing Chinese tea Prussian blue to increase its exotic appeal back in Europe.

Blue arrives in Asia

In the early 1800s, a Guangzhou entrepreneur deciphered the recipe and began manufacturing the pigment in China at a much lower cost. Despite Japan’s strict ban on all imports and exports, the colour found its way to the printmaking industry in Osaka, where it was trafficked as “bero”, a derivation of the Dutch “Berlyns blaauw” (“Berlin blue”). Its vivid hue, tonal range and foreignness saw it explode in popularity just as it had in Europe.

Katsushika Hokusai, The Amida Falls in the far reaches of the Kisokaidō Road (Kiso no oku Amida-ga-taki) (1834–35) from the A tour to the waterfalls in various provinces (Shokoku taki meguri) series. The Japan Ukiyo-e Museum, Matsumoto

Hokusai was one of the first Japanese printmakers to boldly embrace the colour, a decision that would have major implications in the world of art. He used it extensively in his series Thirty-six Views of Mount Fuji (1830), of which the Great Wave was the first; the pigment especially lent itself to expressing depth in water and distance, atmospheric qualities crucial for rendering land and seascapes.

Hokusai and his contemporary Hiroshige became renowned for their depictions of pure landscape form. But although extremely popular in mainstream society, these woodblock prints were seen as vulgar by the Japanese literati and beneath consideration for artistic merit.

When Japan’s isolationist policies finally ended under threat of war from the US Navy in 1853, the prints were used as wrapping paper for more worthy trade trinkets.

Following Paris’s International Exposition of 1867, their value dramatically shifted. A showcase at the inaugural Japanese Pavilion elevated the artistic status of woodblock prints and a craze for their collection quickly followed. Among the most prized were the striking blue landscapes, particularly by Hokusai and Hiroshige, which led European artists to incorrectly deem the colour idiosyncratically Japanese.

It wasn’t just the colour, style and execution of Hokusai’s prints that made them so radically influential, but the subject matter too. His collection of “manga” sketches elevated everyday street life into the realm of art, ideas that were a revelation for Edgar Degas and Henri de Toulouse-Lautrec. Both borrowed heavily from Hokusai’s depictions of marginal society and the bodies of women in repose.

Claude Monet was so seduced by the “Japonism” aesthetic he acquired 250 Japanese prints, including 23 by Hokusai. The obsession bled from Monet’s art to his life and the painter modelled his garden after a Japanese print while his wife sported a kimono around the house.

Perhaps the single most vividly identifiable influence upon the European modernist founders is Van Gogh’s celebrated Starry Night, which owes everything to Hokusai’s blue wave, from its colour to the shape of its sky. In letters to his brother, Van Gogh professed that the Japanese master had left a deep emotional impact on him.

Van Gogh’s Starry Night.

Hokusai’s European influence

The importance of Hokusai to the early European modernist movement is both immense and well mapped. Much less known is the extent to which Hokusai had himself borrowed from European image culture. Although Japan in the artist’s lifetime was subject to Sakoku, the 250-year policy that forbade exchange with the outside world on penalty of death, a clandestine group of Japanese artists and scientists had dedicated themselves to studying the exotic mysteries of Western representation.

Hokusai drew influence from a particular “Rangakusha” (scholar of Dutch texts) painter named Shiba Kokan, who experimented with European principles of composition. In The Great Wave, Hokusai abandoned the traditional Japanese isometric view, where motifs were scaled according to importance, and instead adopted the dynamic style of Western perspective featuring intersecting lines of sight.

This lent the work the dramatic sense of the wave about to break on top of the viewer. Europeans’ embrace of his final works is due in part to Hokusai’s use of a familiar compositional style.

Yet this historical truth lay dormant for decades as it deeply contradicted the European vision of Japan. In the Western imagination, Japan was a land preserved in amber, a pure and innocent people in close communion with nature whose isolation had sealed them from the horrors that industrialisation had wrought upon Europe.

In reality, Hokusai had skillfully blended European colour and structure with Japanese motifs and techniques into a seamless work of international appeal. Certainly, without Hokusai’s striking print, the great wave of European Modernism might never have happened.


The art of Hokusai will be showing at the National Gallery of Victoria until October 15 2017.

The Conversation

Hugh Davies has been funded to undertake creative research into game cultures in Japan through Asialink.

20 Jul 17:25

The Library of Congress opened its catalogs to the world. Here's why it matters

by Melissa Levine, Lead Copyright Officer, Librarian, University of Michigan
The Library of Congress is in Washington, D.C. Valerii Iavtushenko/Shutterstock.com

Imagine you wanted to find books or journal articles on a particular subject. Or find manuscripts by a particular author. Or locate serials, music or maps. You would use a library catalog that includes facts – like title, author, publication date, subject headings and genre.

That information and more is stored in the treasure trove of library catalogs.

It is hard to overstate how important this library catalog information is, particularly as the amount of information expands every day. With this information, scholars and librarians are able to find things in a predictable way. That’s because of the descriptive facts presented in a systematic way in catalog records.

But what if you could also experiment with the data in those records to explore other kinds of research questions – like trends in subject matter, semantics in titles or patterns in the geographic source of works on a given topic?

Now it is possible. The Library of Congress has made 25 million digital catalog records available for anyone to use at no charge. The free data set includes records from 1968 to 2014.

This is the largest release of digital catalog records in history. These records are part of a data ecosystem that crosses decades and parallels the evolution of information technology.

In my research about copyright and library collections, I rely on these kinds of records for information that can help determine the copyright status of works. The data in these records already are embodied in library catalogs. What’s new is the free accessibility of this organized data set for new kinds of inquiry.

The decision reflects a fresh attitude toward shared data by the Library of Congress. It is a symbolic and practical manifestation of the library’s leadership aligned with its mission of public service.

Some history

To understand the implications of this news, it helps to know a bit about the history of library catalog records.

Today, search engines let us easily find books we want to borrow from libraries or purchase from any number of sources. Not long ago, this would have seemed magical. Search engines use data about books – like the title, author, publisher, publication date and subject matter – to identify particular books. That descriptive information was gathered over the years in library catalog records by librarians.

Card catalog at the Library of Congress. Rich Renomeron/flickr, CC BY-NC-ND

The library’s action sheds light on this unseen but critical network. This infrastructure is invisible to most of us as we use libraries, buy books or use search engines.

For many, the idea of a library catalog conjures up the image of card catalogs. The descriptions contained in catalog records are “metadata” – information about information. Early catalog records date back to 1791, just after the French Revolution. The revolutionary government used playing cards to document property seized from the church. The idea was to make a national bibliography of library holdings confiscated during the Revolution.

For many years, library collections were organized individually. As the number of books and libraries grew, the increased complexity demanded a more consistent approach. For example, when the Library of Congress purchased Thomas Jefferson’s personal library in 1815, it adopted Jefferson’s personal system, organized around the themes of memory, reason and imagination. (Jefferson based this on Francis Bacon’s own model.) The library sought to arrange its collections on that model into the 19th century.

Books on my shelf, marked with KF and HB. The K indicates that the book relates to law, the H that it relates to social science. The second letter indicates a subcategory. Melissa Levine, CC BY

The Dewey Decimal System, which appeared in 1876, tackled this challenge by combining consistent numbers (“classes”) with particular topics. Each class can be further divided for more specific descriptions.

In the 1890s, the library developed the Library of Congress Classification System. It is still used today to predictably manage millions of items in libraries worldwide.

Catalogs, cards and computers

By the 1960s, systematic descriptions made the transition from analog cards to online catalog systems a natural step. Machine-Readable-Cataloging (or MARC) records were developed to electronically read and interpret the data in bibliographic cataloging records. The structured categorization coincided naturally with the use of computers.
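
The idea behind MARC is simple even if the standard is dense: each fact about a work lives in a numbered field, so software can read it predictably. A minimal illustrative sketch in plain Python (not a real MARC parser; the tags 100, 245, 260 and 650 are genuine MARC 21 field numbers for author, title, publication information and subject headings, but the record itself is invented):

```python
# A toy bibliographic record keyed by real MARC 21 field tags:
# 100 = main author, 245 = title, 260 = publication info, 650 = subjects.
record = {
    "100": "Melville, Herman",
    "245": "Moby Dick",
    "260": "New York : Harper & Brothers, 1851",
    "650": ["Whaling -- Fiction", "Sea stories"],
}

def subjects(rec: dict) -> list[str]:
    """Return the record's subject headings (field 650), if any."""
    value = rec.get("650", [])
    return value if isinstance(value, list) else [value]

print(subjects(record))  # prints ['Whaling -- Fiction', 'Sea stories']
```

Because every record uses the same numbered fields, the same lookup works across millions of records, which is exactly what makes machine reading possible.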

Now, MARC records too are on the way out, making room for more modern and flexible standards.

The Library of Congress remains a primary – but not the only – source for catalog records. Individual libraries produce catalog records that are compiled and circulated through organizations like OCLC. OCLC connects libraries around the globe and offers an online catalog. WorldCat coordinates catalog records from many libraries into a cohesive online resource. Groups like these charge libraries through membership fees for access to the compiled data. Libraries, though, typically do not charge for the catalog records they produce, instead working cooperatively through organizations like OCLC. This may evolve as more shared effort and crowdsourced resources can be combined with the library’s data in ways that improve search and inquiry. Examples include SHARE and Wikipedia.

One month later

In the short time since the Library of Congress’ data release, we see inklings of what may come. At a Hack-to-Learn event in May, researchers showed off early experiments with the data, including a zoomable list of nine million unique titles and a natural language interface with the data.

For my part, I am considering how to use the library’s data to learn more about the history of publishing. For example, it might be possible to see if there are trends in dates of publication, locations of publishers and patterns in subject matter. It would be fruitful to correlate copyright information retained by the U.S. Copyright Office to see if one could associate particular works with their copyright information like registration, renewal and ownership changes. However, those records are held in formats that remain difficult to search or manipulate. The records prior to 1978 are not yet available online at all from the U.S. Copyright Office.
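
Probing “trends in dates of publication” can be as simple as bucketing records by year. A hedged sketch (the flattened record structure below is invented for illustration; the real 25-million-record data set is distributed in MARC formats and would need parsing first):

```python
from collections import Counter

# Invented mini-sample standing in for parsed catalog records.
records = [
    {"title": "A", "year": 1970},
    {"title": "B", "year": 1970},
    {"title": "C", "year": 1985},
    {"title": "D", "year": 2014},
]

# Count how many records were published in each year.
per_year = Counter(r["year"] for r in records)
print(per_year.most_common())  # prints [(1970, 2), (1985, 1), (2014, 1)]
```

The same pattern scales to the full data set; the interesting work is in cleaning the year and place fields, not in the counting itself.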

Colleagues at the University of Michigan Library are studying the recently released records as a way to practice map-making and explore geographic patterns with visualizations based on the data. They are thinking about gleaning locations from subject metadata and then mapping how those locations shift through time.

There’s a growing expectation that this kind of data should be freely available. This is evidenced by the expanding number of open data initiatives, from institutional repositories such as Deep Blue Data here at the University of Michigan Library to the U.S. government’s data.gov. The U.K.‘s Open Research Data Task Force just released a report discussing technical, infrastructure, policy and cultural matters to be addressed to support open data.

The Library of Congress’ action demonstrates an overarching shift in use of technology to meet historical research missions and advance beyond. Because the data are freely available, anyone can experiment with them.

The Conversation

Melissa Levine has received funding from the Institute for Museum and Library Services.

19 Jul 18:52

More evidence that low-calorie sweeteners are bad for your health

by Rachel Adams, Senior Lecturer in Biomedical Science, Cardiff Metropolitan University
Monika Wisniewska/Shutterstock

Drinking beverages containing low-calorie sweeteners may not help you lose weight and may even be bad for your health, according to new research published in the Canadian Medical Association Journal. The researchers, who reviewed a number of studies, found that people who regularly consume low-calorie sweeteners (such as aspartame, sucralose and stevioside) tend to have a higher risk of long-term weight gain, obesity, type 2 diabetes, hypertension and heart disease than those who don’t.

Obesity is a growing, global problem, and excess sugar consumption is suspected of being a major factor in this grim trajectory. In an effort to avoid the health consequences of consuming too much sugar, people have been switching to low-calorie sweeteners. Consumption of these sweeteners has increased significantly in recent years, and the trend is expected to continue.

However, a number of recent studies – including animal studies – have suggested that low-calorie sweeteners may not be that good for your health. For example, there is some evidence that they may negatively affect glucose metabolism, gut microbes and appetite control. But small, individual studies don’t always give a complete picture. In order to get a better idea of what low-calorie sweeteners are doing to our health, the researchers in this latest study conducted a systematic review with a meta-analysis. This means they pooled data from the best studies they could find and re-analysed the combined data in order to generate more reliable statistics.

For their review, the researchers analysed data from seven randomised controlled trials (the “gold standard” for clinical studies) and 30 observational studies. In all, the studies included 400,000 participants.

The trials, which included 1,003 participants who were followed for an average of six months, showed no consistent evidence that low-calorie sweeteners helped people manage their weight. The observational studies, which followed people for an average of ten years, found that people who regularly drank low-calorie sweetened drinks (one or more drinks a day) had an increased risk of moderate weight gain, hypertension, metabolic syndrome, type 2 diabetes and heart disease (including stroke).

Randomised controlled trials can prove that one thing causes another, but observational studies can only show an association between things. So we need to treat the findings from observational studies with more caution, as there may be other factors (known as “confounders”) that explain the associations. For example, it is possible that heavier people consume drinks with low-calorie sweeteners to try to lose weight, rather than the drinks causing the weight gain.

The review also only looked at beverages that contained low-calorie sweeteners, not at the intake of these sweeteners in other foods. Nowadays, low-calorie sweeteners are found in a range of foods, including yogurts, sauces, baked goods and “health bars”. Many brands of toothpaste also contain them. It’s possible that the people in the observational studies who declared that they never drank beverages containing low-calorie sweeteners consumed these sweeteners in other foods.

Yogurts often contain low-calorie sweeteners. meaofoto/Shutterstock

How does it compare with sugar?

Unfortunately, none of the studies in the review compared consuming low-calorie sweetened drinks with sugary drinks. What is clear from earlier studies is that there is a close parallel between the rise in sugar consumption and increases in global obesity. And other studies have found that consuming sugar-sweetened beverages is associated with an increased risk of type 2 diabetes and coronary heart disease, independent of weight.

It has become increasingly clear: sugar is very bad for your health. So it seems logical to assume that if high-calorie sugar is replaced with a low-calorie sweetener, we reduce the risk of weight gain, type 2 diabetes and heart disease. What this latest study suggests is that this may not be the case.

The evidence against low-calorie sweeteners may not be watertight, but, if the latest review is correct, it might be best if we avoided them. Maybe we’ll just have to wean ourselves off sweet-tasting foods altogether, regardless of what they’re sweetened with.

The Conversation

Rachel Adams does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond her academic appointment.

17 Jul 13:20

Explainer: what is 'fair dealing' and when can you copy without permission?

by Nicolas Suzor, Associate professor, Queensland University of Technology
Fair dealing allows Australians to use copyrighted content for news and reporting. antb/Shutterstock

Copyright law sometimes allows you to use someone else’s work - as long as it’s fair. In Australia this is called “fair dealing”, and it’s different to the law in the US, which is called “fair use”.

These exceptions are safety valves in copyright law – they allow lots of beneficial uses that society has agreed copyright owners should not be able to charge for, or worse, prevent.

There’s a serious ongoing debate about whether Australia should update its copyright laws and introduce fair use. The current law is not easy to understand – our research shows that Australian creators are often confused about their rights – and many think we already have fair use.

Fair dealing: What can you do in Australia?

The key difference between “fair use” and “fair dealing” is that Australia’s “fair dealing” laws set out defined categories of acceptable uses. As we will see, “fair use” in the US is much more flexible.

Australian copyright law sets out five situations where use of copyrighted material without permission may be allowed:

  • research or study
  • criticism or review
  • parody or satire
  • reporting the news
  • provision of legal advice.

We’ll explain the first four, as they’re most useful to the average Australian.

Research or study

You do not need permission to copy a reasonable portion of copyrighted material if you are studying it or using it for research. You do not have to be enrolled in school or a university course to rely on the research or study exception.

For example:

  • you can make a copy of a chapter of a book to study it
  • you can print or take screenshots of content you find on the web for your research
  • you can include quotes or extracts of other work when you publish your research.

The main thing to watch out for is how much you copy. It’s fair to photocopy a book chapter but not the whole book.

Criticism or review

It is lawful to use a work without permission in order to critique or review it.

Criticism or review involves making an analysis or judgement of the material or its underlying ideas. It may be expressed in an entertaining way, or with strong opinion, and does not need to be a balanced expression to be fair.

For example, a film critic does not need permission to play a short clip from a film they are reviewing. They may also use film clips from other movies to compare or contrast.

Ozzy Man Reviews runs a popular channel that reviews existing material, relying on the fair dealing exceptions.

It’s also legal to quote an excerpt of a book or song lyrics, or to reference a photograph in another publication as part of a review or critique of the work.

You need to be really critiquing your source material. So, for example, a review video that is really just the highlights of a film or show probably won’t be fair.

This is something that tripped up Channel 10 in its clip show, The Panel. When the panellists discussed and critiqued the clips they showed, it was generally fair dealing. But when they just showed clips that were funny, a court found them liable for copyright infringement.

Reporting the news

You don’t need permission to use existing copyrighted material while reporting on current or historic events. The law is designed to ensure that people can’t use copyright to stifle the flow of information on matters of public interest.

The key issue to check here is whether a work has been used in a way that is necessary to report the news. If the material is just used incidentally, to illustrate a story or provide entertainment, it won’t count as fair dealing.

Parody or satire

It is legal to use another person’s copyrighted material without their permission to make fun of them, or to make fun of another person or issue.

Making something funny is not sufficient to rely on this exception. The use must be part of some commentary (express or implied) on the material or some broader aspect of society.

FriendlyJordies is known for his satirical videos that comment on and criticise politics and everyday life in Australia.

When is a use ‘fair’?

Fair dealing only applies when the use is “fair”.

When assessing fairness in Australia, there are a number of relevant considerations, including:

  • how important copying is to your work (“nature and purpose of the use”)
  • the type of work being copied (less original works may not be protected as strongly as more creative works)
  • whether it is easily possible to get a licence within a reasonable time at an ordinary commercial price
  • the effect of your copying on the potential market for the original
  • the amount taken from the original work
  • whether attribution has been given to the original author.

Generally, a use will be fair if you are copying for a valid reason, you don’t copy more than you need, you give attribution where possible, and your work is not directly competing in the market against the original.

Things to remember:

  • Is copying necessary? Copying has to be necessary for one of the purposes above. This means that it might be fair to copy part of a song to review it, but it won’t be fair if you’re just using the song as background music.
  • Copy no more than you need. Sometimes you need to copy the entirety of an existing work – if you’re critiquing a photograph, for example. Usually, though, you should only copy the parts that are necessary. You can’t get away with showing a whole TV episode in order to critique one scene.
  • It’s usually not fair if you’re competing with the original. This is often the most important factor. When you copy existing material for your own study, to report on the news, or to create a parody, you usually won’t be undercutting the market for the original. But if you’re just repackaging the original material in a way that might substitute for it – a consumer might be satisfied with your work instead of the original – then your use probably won’t be fair.

How is ‘fair use’ different – what can’t you do with fair dealing?

In the United States, the law is more flexible, because it can adapt to allow fair use for purposes that lawmakers hadn’t thought of in advance.

Some of the things that are legal without getting permission in the US but not in Australia include:

Adapting to new technologies: Fair use is flexible enough to adapt to change, but fair dealing is not. For example, in the US, fair use made it legal to use a VCR to record television at home in 1984. In Australia, this wasn’t legal until parliament created a specific exception in 2006 – just about the time VCRs became obsolete.

Artistic use: In Australia, it’s legal to create a parody or a critique, but not to use existing works for purely artistic purposes. For example, Australian law makes it largely unlawful for a collage artist to reuse existing copyright material to create something new.

Machinima – which uses game environments to create new stories – is likewise not legal in Australia without permission from the game’s publisher.

Uses that document our experiences: Media forms a big part of our lives, and when we share our daily experiences, we will often include copyright material in some way. Without fair use, even capturing a poster on a wall behind you when you take a selfie could infringe copyright.

In a famous example, Stephanie Lenz originally had an adorable 29-second clip of her baby dancing to a Prince song removed from YouTube, due to her use of the song. She was able to get it put back up under US fair use law – but an Australian wouldn’t have that right.

Stephanie Lenz’s “dancing baby” video is legal under US “fair use”, but would likely infringe copyright in Australia.

Technical and non-consumptive uses: The internet we love today is built on fair use. When search engines crawl the web, making a copy of every page they can in order to help us find relevant information, they’re relying on fair use.
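The copying step described above can be sketched in a few lines of Python. This is a toy illustration, not how any real search engine works: it "crawls" an in-memory dictionary of pages instead of fetching over HTTP, but it shows the principle – indexing relies on keeping a copy of every page the crawler can reach.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags, as a crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(pages, start):
    """Visit every reachable page, storing a full copy of each one --
    the 'non-consumptive' copying that search indexing depends on."""
    copies, queue = {}, [start]
    while queue:
        url = queue.pop()
        if url in copies or url not in pages:
            continue
        copies[url] = pages[url]  # the crawler's cached copy of the page
        parser = LinkExtractor()
        parser.feed(pages[url])
        queue.extend(parser.links)  # follow links to discover more pages
    return copies

# A tiny in-memory 'web' standing in for real HTTP fetches.
web = {
    "/home": '<p>Welcome</p><a href="/about">About</a>',
    "/about": '<p>About us</p><a href="/home">Home</a>',
    "/hidden": "<p>Never linked, so never copied</p>",
}
index = crawl(web, "/home")
```

Every page in the resulting `index` is a verbatim copy of the original – which is exactly why, without a flexible exception like fair use, this everyday infrastructure would sit on shaky legal ground.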

Under Australian law, even forwarding an email without permission could be an infringement of copyright.

The copyright reform debate

Two recent government reports, from the Australian Law Reform Commission and the Productivity Commission, have recommended that Australia simplify its copyright law by introducing fair use.

Many of us copyright academics have written here extensively in support of fair use over the past few years, but there are still many myths about what the law would do.

It’s been suggested that introducing fair use here would provoke a “free for all” use of copyrighted work, but that hasn’t happened in the US. In fact, some of the same major studios that oppose fair use in Australia are at pains to point out that they support fair use in the US because it is vital to commercial production that happens there.

The Motion Picture Association of America, for example, says that “Our members rely on the fair use doctrine every day when producing their movies and television shows”.

To put it simply: we don’t think that fair use will harm creators.

The “fair” in fair use means that it’s not about ripping off creators – it mainly allows uses that are not harmful. But we do think that fair use would provide an important benefit for ordinary Australians – both creators and users.

Katherine Gough, a musician and law student at Queensland University of Technology, co-authored this article.

The Conversation

Nicolas Suzor is the recipient of an Australian Research Council DECRA Fellowship (project number DE160101542) and receives other project funding from the ARC. He also leads projects funded by industry groups, including the Australian Communications Consumer Action Network (ACCAN) and the Australian Digital Alliance. Nic is also the Legal Lead of the Creative Commons Australia project and the deputy chair of Digital Rights Watch, an Australian non-profit organisation whose mission is to ensure that Australian citizens are equipped, empowered and enabled to uphold their digital rights.

30 Jun 19:44

Top Canadian Court Permits Worldwide Internet Censorship

by vera

A country has the right to prevent the world’s Internet users from accessing information, Canada’s highest court ruled on Wednesday.

In a decision that has troubling implications for free expression online, the Supreme Court of Canada upheld a company’s effort to force Google to de-list entire domains and websites from its search index, effectively making them invisible to everyone using Google’s search engine.

The case, Google v. Equustek, began when British Columbia-based Equustek Solutions accused Morgan Jack and others, known as the Datalink defendants, of selling counterfeit Equustek routers online. It claimed California-based Google facilitated access to the defendants’ sites. The defendants never appeared in court to challenge the claim, allowing default judgment against them, which meant Equustek effectively won without the court ever considering whether the claim was valid.

Although Google was not named in the lawsuit, it voluntarily took down specific URLs that directed users to the defendants’ products and ads under the local (Canadian) Google.ca domain. But Equustek wanted more, and the British Columbia Supreme Court ruled that Google had to delete the entire domain from its search results, including from all other domains such as Google.com and Google.co.uk. The British Columbia Court of Appeal upheld the decision, and the Supreme Court of Canada decision followed the analysis of those courts.

EFF intervened in the case, explaining [.pdf] that such an injunction ran directly contrary to both the U.S. Constitution and statutory speech protections. Issuing an order that would cut off access to information for U.S. users would set a dangerous precedent for online speech. In essence, it would expand the power of any court in the world to edit the entire Internet, whether or not the targeted material or site is lawful in another country. That, we warned, is likely to result in a race to the bottom, as well-resourced individuals engage in international forum-shopping to impose one country’s restrictive laws regarding free expression on the rest of the world.

The Supreme Court of Canada ignored those concerns. It ruled that because Google was subject to the jurisdiction of Canadian courts by virtue of its operations in Canada, courts in Canada had the authority to order Google to delete search results worldwide. The court further held that there was no inconvenience to Google in removing search results, and Google had not shown the injunction would offend any rights abroad.

Perhaps even worse, the court ruled that before Google can modify the order, it has to prove that the injunction violates the laws of another nation, thus shifting the burden of proof from the plaintiff to a non-party. An innocent third party to a lawsuit shouldn’t have to shoulder the burden of proving whether an injunction violates the laws of another country. Although companies like Google may be able to afford such costs, many others will not, meaning many overbroad and unlawful orders may go unchallenged. Instead, once the issue has been raised at all, it should be the job of the party seeking the benefit of an order, such as Equustek, to establish that there is no such conflict. Moreover, numerous intervenors, including EFF, provided ample evidence of such a conflict in this case.

Beyond the flaws of the ruling itself, the court’s decision will likely embolden other countries to try to enforce their own speech-restricting laws on the Internet, to the detriment of all users. As others have pointed out, it’s not difficult to see repressive regimes such as China or Iran use the ruling to order Google to de-index sites they object to, creating a worldwide heckler’s veto.

The ruling largely sidesteps the question of whether such a global order would violate foreign law or intrude on Internet users’ free speech rights. Instead, the court focused on whether or not Google, as a private actor, could legally choose to take down speech and whether that would violate foreign law. This framing results in Google being ordered to remove speech under Canadian law even if no court in the United States could issue a similar order.

The Equustek decision is part of a troubling trend around the world of courts and other governmental bodies ordering that content be removed from the entirety of the Internet, not just within their own borders. On the same day the Supreme Court of Canada’s decision was issued, a court in Europe heard arguments on whether to expand the right to be forgotten worldwide.

EFF was represented at the Supreme Court of Canada and the British Columbia Court of Appeal by David Wotherspoon of MacPherson Leslie & Tyerman and Daniel Byma of Fasken Martineau DuMoulin.

22 Jun 13:13

Washington Office at Annual 2017: Copyright

by Carrie Russell

Copyright is just as complex as always and librarians are expected to be knowledgeable and manage their institutions’ copyright issues. The Office for Information Technology Policy (OITP) presents three programs at Annual 2017 designed to help librarians new to the copyright specialist role find professional guidance:

Life-size female character with glasses at a Peanuts-style “Copyright Help 5 cents” booth at Annual 2017.

At Annual 2017, look for the life-size “Lola Lola” character in the McCormick Place Exhibit Hall to find the “Ask A Copyright Question” booth. Source: Carrie Russell

“Ask a Copyright Question” Booth (Saturday and Sunday, June 24-25, 10:00 AM – 4:00 PM)

Is copyright getting in the way of effective teaching and learning? Do you have rogue teachers copying textbooks? Can you show a Netflix film in the classroom? Copyright experts at the “Ask a Copyright Question” Booth in the McCormick Place Exhibit Hall will be on hand to respond to the questions you need answered when addressing copyright issues at your public, school, college or university library. Stop on by while you’re checking out the exhibits and pick up new (and free!) copyright education tools. Our experts are ready to give you an opinion on anything.

You Make the Call Copyright Game (Sunday, June 25, 1:00-2:30 PM)

Don’t miss this interactive copyright program in game show format, where panelists will respond to fair use questions, pop culture and potent potables. Panelists include Sandra Enimil, director of the Copyright Resources Center at Ohio State University; Eric Harbeson, music special collections librarian at the University of Colorado; and Lindsey Weeramuni, from the Office of Digital Learning at MIT. Kyle Courtney from the Office for Scholarly Communication at Harvard University will keep us laughing, and Marty Brennan, copyright and licensing librarian at UCLA, will be channeling bygone game show hosts like Gene Rayburn of the Match Game. Remember those long microphones? And yes, attendees will have their own buzzers!

Another view from the swamp: copyright policy update (Monday, June 26, 10:30-11:30 AM)

The U.S. Copyright Office is a unit of the Library of Congress, and now that a librarian has been appointed Librarian of Congress, rights holders have some concerns. Why are they so worried, and how is Congress going to respond? A panel of policy experts will address the House Judiciary copyright review, including legislation that would make the Copyright Office Register more independent from the Library of Congress – and proposes that President Trump appoint the next Register instead of the Librarian of Congress. (Really?!?) Adam Eisgrau, managing director of ALA’s Office of Government Relations, and Krista Cox, director of Public Policy Initiatives at the Association of Research Libraries, will discuss U.S. copyright policy with special guest and international copyright expert Stephen Wyber from the International Federation of Library Associations. (What in the world is the EU up to?!?)

All of this brought to you by the OITP Copyright Education Subcommittee, which strives to make copyright entertaining.

The post Washington Office at Annual 2017: Copyright appeared first on District Dispatch.

22 Jun 13:13

Colorado copyright conference turns five

by Carrie Russell
University of Colorado's campus in Colorado Springs.

University of Colorado’s campus in Colorado Springs.

I had the great honor of being asked to speak about federal copyright at the Kraemer Copyright Conference at the University of Colorado (UCCS) in beautiful Colorado Springs. This locally produced and funded conference is now in its fifth year and has grown in popularity. No longer a secret, registration maxes out at 200, making the conference friendly, relaxed and relevant for all who attend.

And who attends? A growing number of primarily academic librarians responsible for their respective institutions’ copyright programs and education efforts. This job responsibility for librarians has grown significantly – in 1999 there were only two librarians in higher education with a copyright title. Now there are hundreds of copyright librarians, which is great because who other than librarians—devoted to education, learning, the discovery and sharing of information—should lead copyright efforts? (Remember! The purpose of the copyright law is to advance learning through the broad dissemination of copyright protected works.)

The conference is the brainchild of Carla Myers, a former winner of the Robert L. Oakley copyright scholarship and the scholarly communications librarian at Miami University, who previously worked at UCCS. Funded by the Kraemer Family, conference registration is free – you just have to get there. (For the last two years, ALA has been one of the co-sponsors.)

My assignment was to provide an update on copyright issues currently receiving attention in Congress. Registrants were most interested in pending legislation regarding the appointment of the next Copyright Register, which would move that responsibility from the Librarian of Congress to the President. The new administration’s budget proposals for the Institute of Museum and Library Services (IMLS), the National Endowment for the Humanities (NEH) and the National Endowment for the Arts (NEA) have directed more attention than ever to the political environment of the nation’s capital, and librarians have been more active in library advocacy.

There are rumors afoot that another regionally based copyright conference is being planned, which would be a welcome addition and contribution to the administration and study of libraries and the copyright law.

The post Colorado copyright conference turns five appeared first on District Dispatch.

21 Jun 20:00

Academics fear the value of knowledge for its own sake is diminishing

by Jennifer Chubb, Doctoral Researcher, University of York
Is “useful” knowledge the only knowledge worth knowing? Shutterstock

A climate of “anti-intellectualism”, faltering levels of trust in “experts” and an era of “post-truth” provides a rather dreary depiction of the state of academia today.

Compound this with the reorganisation of higher education – where universities are run more like businesses – along with the politics of austerity, and it may be little surprise that the sector is said to be in crisis.

This is all coming at a time when there is an increased expectation for academics to be more accountable for their research by evidencing its economic and societal benefits – known as impact.

This expectation has received mixed responses from many people working in universities. At first, some academics crudely dismissed impact as a nasty government idea. Many researchers could not see how their work could align with it and, fearing a loss of freedom, some claimed “science is dead”. This was even accompanied by the arrival of a hearse outside the offices of the Engineering and Physical Sciences Research Council in the UK – sending out the message loud and clear that the impact agenda was problematic and unwelcome. All of which reflected deep emotional and moral concerns within academia about the over-management and politicisation of knowledge.

But on the flip side, impact has been welcomed by others for the opportunity it provides academics to make their work more visible and accessible.

The impact agenda

To find out more, our research looked at academics’ emotions in response to the impact agenda – both in the UK and Australia. As part of this, we carried out interviews with 51 professors and senior career-level academics.

Our findings confirm that while pockets of the academic community are deeply concerned about an impact agenda – both in terms of funding and assessment – these reactions do not reflect a lack of willingness or sense of duty. Rather, academics want to see disciplinary diversity respected, and this reflected in research policy.

What makes research valuable? Pexels

The academics we spoke to expressed a range of emotions regarding this increased focus on impact. These ranged from distrust to acceptance, and from excitement, to love and hate. For every academic who spoke of despair, a balance of commitment and even love for their work (and its potential for impact) was also expressed.

As one politics lecturer said:

It’s sort of where my heart lies – quite deliberately and specifically working to apply the research that you are doing to real world political and social challenges across domains of theory and practice.

An archaeology professor also expressed similar sentiments:

We are paid from the public purse and we should be doing research – we are ridiculously privileged to work on whatever we like and it’s wonderful.

To bend your mind a little bit to the fact that some of the stuff you do does have benefits outside the academy, and to put measures in place to make that happen, it’s a minor tax.

Justifying your job

But despite these positive sentiments, academics we spoke to also expressed concerns over their workloads and career security.

Others feared losing credibility and worried about being “exposed” or losing control of their work through public engagement. Though this is perhaps indicative of a lack of skill and confidence in this area, as well as a greater need for academics to understand how to communicate their research appropriately rather than “tokenistically”.

Academics also felt the impact agenda challenged them to justify their existence and their academic freedom – something which was felt on a very personal level. A music professor explained:

I don’t feel happy with it, and do I need to justify my job? How many levels do I have to justify it?

Useful knowledge

During the interviews, words such as “scary”, “threat”, “nervousness” and “worry” were used, as many spoke of their “frustrations”, “suspiciousness” and even “resentment” of the focus on impact.

Academics reported feeling sad, unhappy, jealous, anxious, demoralised and disillusioned by the impact agenda. And this sense of vulnerability seemed to be further exacerbated by risks of professional penalisation from their academic peers.

Who gets to decide what’s useful knowledge and what isn’t? Pexels

But it was clear from talking with these academics that these criticisms do not come from a place of entitlement or frustration at having to account for their work. Instead, this was in response to fears about the changing nature of their role and concerns for those whose work does not naturally align with what’s considered to be “useful”. And this increasing pressure to focus on impact at all costs could well damage academic morale, as one theatre professor described:

This agenda reinforces the notion that the only valuable thing in life is money. That is deeply worrying.

Ultimately, our research shows that most academics feel a duty to share their work: they want to make a difference, and they want to communicate their findings to wider audiences. But many are still uncomfortable with this idea of having to “sell” their work, as well as the preoccupation with what is “useful” – because, after all, how do you really decide what is or isn’t useful to society?

The Conversation

Jennifer Chubb consults for FastTrack Impact

21 Jun 19:56

Ancient DNA reveals how cats conquered the world

by Janet Hoole, Lecturer in Biology, Keele University
Shutterstock

Humans may have had pet cats for as long as 9,500 years. In 2004, archaeologists in Cyprus found a complete cat skeleton buried in a Stone Age village. Given that Cyprus has no native wildcats, the animal (or perhaps its ancestors) must have been brought to the island by humans all those millennia ago.

Yet despite our long history of keeping pet cats and their popularity today, felines aren’t the easiest of animals to domesticate (as anyone who’s felt a cat’s cold shoulder might agree). There is also little evidence in the archaeological record to show how cats became our friends and went on to spread around the world.

Now a new DNA study has suggested how cats may have followed the development of Western civilisation along land and sea trade routes. This process was eventually helped by a more concerted breeding attempt in the 18th century, creating the much-loved domestic short-haired or “tabby” cat we know today.

While the origin of the domesticated cat is still a mystery, it seems likely the process of becoming pets took a very long time. It seems that, because cats are so independent, territorial and, at times, downright antisocial, they were not so easy to domesticate as the co-operative, pack-orientated wolf. It’s likely that cats lived around humans for many centuries before succumbing to the lure of the fire and the cushion, and coming in from the cold to become true companions to humans.

The ancestors of today’s domestic cats encountered and interbred with various wildcat species. Ottoni et al., 2017/Nature, Author provided

The cat found in Cyprus corresponds to the Neolithic period of around 10,000 BC to 4,000 BC and the agricultural revolution. This was when people were beginning to settle down and become farmers instead of carrying on the nomadic hunter-gatherer existence that humans had followed for the previous 200,000 years or so. An earlier DNA study of other ancient remains confirms that domestic cats first emerged in what archaeologists call the Near East, the land at the eastern end of the Mediterranean where some of the first human civilisations emerged.

Of course, farming brings its own problems, including infestations of rats and mice, so perhaps it’s not surprising that it is at this time that we see the first occurrence of a cat buried in a human grave. It’s not hard to imagine that early farmers might have encouraged cats to stay around by helping them out with food during lean times of the year, and allowing them to come into their houses.

The gaps in the archaeological record mean that, after the Cyprus remains, evidence for domestic cats doesn’t appear again for thousands of years. More cat graves then start to appear among ancient Egyptian finds (although there is also evidence for tame cats in Stone Age China). It was in Egypt that cats really got their furry paws under the table and became not just part of the family but objects of religious worship.

To track the spread of the domestic cat, the authors of the new study, published in Nature Ecology & Evolution, examined DNA taken from bones and teeth of ancient cat remains. They also studied samples from the skin and hair of mummified Egyptian cats (and you thought emptying the litter tray was bad enough).

They found that all modern cats have ancestors among the Near Eastern and Egyptian cats, although the contributions of these two groups to the gene pool of today’s cats probably happened at different times. From there, the DNA analysis suggests domestic cats spread out over a period of around 1,300 years to the 5th century AD, with remains recorded in Bulgaria, Turkey and Jordan.

‘Blotched’ tabby cat genes became more common alongside striped ‘mackerel’ patterns in the later Middle Ages. Ottoni et al., 2017/Nature, Ashmolean Museum, University of Oxford, Author provided

Over the next 800 years, domestic cats spread further into northern Europe. But it wasn’t until the 18th century that the traditional “mackerel” coat of the wildcat began to change in substantial numbers to the blotched pattern that we see in many modern tabbies. This suggests that, at that time, serious efforts to breed cats for appearance began – perhaps the origin of modern cat shows.

Another interesting finding is that domestic cats from earliest times, when moved around by humans to new parts of the world, promptly mated with local wildcats and spread their genes through the population. And, in the process, they permanently changed the gene pool of cats in the area.

This has particular relevance to today’s efforts to protect the endangered European wildcat, because conservationists often think interbreeding with domestic cats is one of the greatest threats to the species. If this has been happening all over the old world for the past 9,000 or so years, then perhaps it’s time to stop worrying about wildcats breeding with local moggies. This study suggests that none of the existing species of non-domesticated cats is likely to be pure. In fact, cats’ ability to interbreed has helped them conquer the world.

The Conversation

Janet Hoole does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

21 Jun 17:32

Peer Coaching for Professional Learning

by acrlguest

ACRLog welcomes a guest post from Marisa Méndez-Brady, Science Librarian, and Jennifer Bonnet, Social Sciences & Humanities Librarian, at the University of Maine.

Finding the time and resources to devote to professional learning can be a challenge, especially at institutions that are less geographically proximate to the broader library community. The University of Maine is a land and sea grant institution in the rural town of Orono, where opportunities to engage with peers at other colleges and universities take a concerted effort and may require additional financial resources to participate. While these constraints limit our ability to go to as many conferences as we would like, one day a year our department attends a gathering of Maine academic librarians where colleagues across the state present ideas that generate excitement and lead to further exploration.

During the 2016 Maine Academic Libraries Day, Bowdoin College librarian Beth Hoppe made a strong case for using the ACRL Framework to embrace non-prescriptive practices in our teaching, as part of a critical pedagogical approach to working with students.

Following this talk, we couldn’t stop thinking: how might we enhance the delivery of information literacy concepts in our own library instruction by more deliberately incorporating critical pedagogy? Motivated to improve our teaching techniques and extend our professional learning, the two of us embarked on a peer coaching project. Over the course of three months we used a study group model to brainstorm, design, and implement a suite of lesson plans that centered the diversity of student voices and experiences in our instruction sessions.

Peer coaching is commonly used in K-12 learning environments, and is a technique lauded by the instructional design community for its broad applicability. It is a non-evaluative, professional learning model in which two or more colleagues work collaboratively to: design curricula, create assessments, develop lesson plans, brainstorm ideas, problem solve, and reflect on current pedagogical practices (Robbins, 2015).

Although peer coaching can be formalized within a department or unit, we participated in an informal method known as the study group model, where two or more people engage in collaborative professional development for learning (PDL) around a subject of interest. We chose this model because it offers flexibility when it comes to constraints on time or finances, providing a sustainable method for professional development during the hectic instruction schedule of a typical semester. The graphic below illustrates different approaches to utilizing peer coaching for professional learning.

From https://www.polk-fl.net/staff/professionaldevelopment/documents/Chapter16-PeerCoaching.pdf

To shape our peer coaching project, we consulted instructional design literature, which (1) emphasizes the importance of creating professional learning that is individualized to the specific learning context and audience for the learning, and (2) focuses on content, pedagogy, or both (Guskey, 2009). We also integrated the three key components of effective peer coaching: a pre-conference to establish the goals for PDL; the learning process; and a post-conference to assess the PDL process.

The pre-conference in the context of peer coaching consists of meeting to establish PDL goals based on participant interest and applicability to one’s praxis. Our pre-conferencing took a two-pronged approach. First, we established an overarching goal to use the ACRL Framework to develop learner-centered teaching outcomes. Then, we held individual pre-conferences focused on the following Frames: (1) research as inquiry, (2) scholarship as conversation, and (3) searching as strategic exploration. We selected three upcoming instruction sessions (i.e., already scheduled in the library) that would be opportune for trying out new pedagogical approaches.

After we set each agenda, we turned from pre-conferencing to the learning process, which involved three study group meetings to design our lesson plans. In advance of each meeting, we selected relevant articles to read and reviewed two to three corresponding lesson plans in the Community of Online Research Assignments. The lesson plans we chose not only engaged with the Framework but revolved around students’ interests and experiences, which helped us consider teaching techniques that were non-prescriptive in practice and drew on critical pedagogical concepts. We then used the scheduled meeting time to adapt these lesson plans to fit the goals of our upcoming instruction sessions.

“When everyone in the classroom, teacher and students, recognizes that they are responsible for creating a learning community together, learning is at its most meaningful and useful.” – bell hooks, Teaching Critical Thinking: Practical Wisdom

The first lesson plan involved a teach-in that asked students to share their decision-making process when searching for information in both open and licensed resources (ACRL frame: research as inquiry), and was targeted at an upper-level undergraduate communications and marketing course. The second lesson plan focused on deconstructing citations and reverse engineering bibliographies, and was designed for an upper-level undergraduate wildlife policy class (ACRL frame: scholarship as conversation). The third lesson plan used one piece of information from a vaguely-worded news article as a jumping-off point for finding related information across various media, which we co-taught for a student club on campus (ACRL frame: searching as strategic exploration). Although these lesson plans were designed for specific contexts, they are broadly applicable across disciplines and academic levels.

We further engaged with critical pedagogy in a post-conference that followed each study group meeting. In the peer coaching context, the post-conference acts as an assessment of the study group experience for us (the learners) and emphasizes the role of self-reflection in gauging our own learning. Building on the work we started in the classroom (via each lesson plan), we took a feminist pedagogical perspective on self-reflection that involved open-ended questions about process and practice, and addressed our own PDL outcomes.

“Feminist assessment is inherently reflective, and reflection itself is a feminist act.” – Maria Accardi, Feminist Pedagogy for Library Instruction

We hope to continue using peer coaching in other areas of our praxis. Peer coaching offers a low stakes, low-cost option for professional development that leverages existing resources, draws on the interests and skills of colleagues, and allows for higher frequency contact among participant learners (versus a traditional yearly conference). We also found that the informal structure of the study group model supports flexible implementation and facilitates home-grown continuing education opportunities that are targeted to specific issues we face at our library.

So often, we absorb ideas at conferences, webinars, or through informal conversations. Yet, actualizing these ideas in our own institutional environments can be challenging due to issues like time, motivation, and support. Next time you discover a novel approach or way of thinking about your praxis, we encourage you to try peer coaching! We’d love to hear from you about how you use this professional learning strategy in your own environment.

09 Jun 12:38

It's Out! This is What a Librarian Looks Like

by birdie
From The Huffington Post news of the publication of This Is What a Librarian Looks Like by Kyle Cassidy.

Kudos to the authors and the participants! Tell us your thoughts about participating and the finished product in the comments below.

09 Jun 12:31

How TV cultivates authoritarianism – and helped elect Trump

by James Shanahan, Dean of the Media School, Indiana University

Many gallons of ink (and megabytes of electronic text) have been devoted to explaining the surprise victory of Donald Trump.

Reasons range from white working-class resentment, to FBI Director James Comey’s decision to reopen the Hillary Clinton email investigation, to low turnout. All likely played some role. It would be a mistake to think the election turned on one single factor.

However, a study we conducted during the campaign – just published in the Journal of Communication – suggests an additional factor that should be added into the mix: television.

We’re not talking about cable news or the billions in free media given to Trump or political advertising.

Rather, we’re talking about regular, everyday television – the sitcoms, cop shows, workplace dramas and reality TV series that most heavy viewers consume for at least several hours a day – and the effect this might have on your political leanings.

An authoritarian ethos

Studies from the past 40 years have shown that regular, heavy exposure to television can shape your views on violence, gender, science, health, religion, minorities and more.

Meanwhile, 20 years ago, we conducted studies in the U.S. and Argentina that found that the more you watch television, the more likely you’ll embrace authoritarian tendencies and perspectives. Heavy American and Argentinian television viewers have a greater sense of fear, anxiety and mistrust. They value conformity, see the “other” as a threat and are uncomfortable with diversity.

There’s probably a reason for this. Gender, ethnic and racial stereotypes continue to be prevalent in many shows. Television tends to distill complex issues into simpler forms, while the use of violence as an approach to solving problems is glorified. Many fictional programs, from “Hawaii Five-O” to “The Flash,” feature formulaic violence, with a brave hero who protects people from danger and restores the rightful order of things.

In short, television programs often feature an authoritarian ethos when it comes to how characters are valued and how problems are solved.

Viewing habits and Trump support

Given this, we were intrigued when, during the campaign, we saw studies suggesting that holding authoritarian values was a powerful predictor of support for Trump.

We wondered: If watching television contributes to authoritarianism, and if authoritarianism is a driving force behind support for Trump, then might television viewing – indirectly, by way of cultivating authoritarianism – contribute to support for Trump?

About two months before the party conventions were held, we conducted an online national survey with over 1,000 adults. We asked people about their preferred candidate. (At the time, the candidates in the race were Clinton, Sanders and Trump.)

We then questioned them about their television viewing habits – how they consumed it, and how much time they spent watching.

We also asked a series of questions used by political scientists to measure a person’s authoritarian tendencies – specifically, which qualities are more important for a child to have: independence or respect for their elders; curiosity or good manners; self-reliance or obedience; being considerate or being well-behaved. (In each pair, the second answer is considered to reflect more authoritarian values.)
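The four-item measure described above can be scored mechanically: one point for each authoritarian choice (the second option in each pair), averaged to a 0–1 scale. A minimal sketch, with a scoring helper that is illustrative and not the researchers' actual instrument:

```python
# Child-rearing value pairs from the article; in each pair the second option
# is conventionally treated as the more authoritarian choice.
PAIRS = [
    ("independence", "respect for elders"),
    ("curiosity", "good manners"),
    ("self-reliance", "obedience"),
    ("being considerate", "being well-behaved"),
]

def authoritarianism_score(choices):
    """Score one respondent: choices is one picked option per pair.

    Returns a 0-1 scale (fraction of authoritarian choices)."""
    points = 0
    for (non_auth, auth), choice in zip(PAIRS, choices):
        if choice == auth:
            points += 1
        elif choice != non_auth:
            raise ValueError(f"unrecognised choice: {choice!r}")
    return points / len(PAIRS)

# Example: a respondent picking good manners and obedience, but also
# independence and being considerate, lands in the middle of the scale.
score = authoritarianism_score(
    ["independence", "good manners", "obedience", "being considerate"]
)
# score == 0.5
```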

Confirming our own earlier studies, heavy viewers scored higher on the authoritarian scale. And confirming others’ studies, more authoritarian respondents strongly leaned toward Trump.

More importantly, we also found that authoritarianism “mediated” the effect of watching a lot of television on support for Trump. That is, heavy viewing and authoritarianism, taken together in sequence, had a significant relationship with preference for Trump. This was unaffected by gender, age, education, political ideology, race and news viewing.
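The mediation claim can be illustrated with a toy linear model: in ordinary least squares, the total effect of viewing on candidate preference decomposes exactly into a direct effect plus the indirect effect routed through authoritarianism. Everything below (data, effect sizes, helper functions) is synthetic and purely illustrative, not the study's analysis:

```python
import random

# Synthetic data: TV viewing (x) influences authoritarianism (m),
# which in turn influences candidate preference (y).
random.seed(0)
n = 1000
x = [random.gauss(0, 1) for _ in range(n)]                 # hours of viewing
m = [0.4 * xi + random.gauss(0, 1) for xi in x]            # authoritarianism
y = [0.5 * mi + 0.1 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]

def slope(xs, ys):
    """OLS slope of ys on xs (single predictor)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

def ols2(x1, x2, ys):
    """Coefficients (b1, b2) of ys ~ x1 + x2, via the 2x2 normal equations."""
    k = len(ys)
    m1, m2, my = sum(x1) / k, sum(x2) / k, sum(ys) / k
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, ys))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, ys))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

a = slope(x, m)             # viewing -> authoritarianism
c = slope(x, y)             # total effect of viewing on preference
c_prime, b = ols2(x, m, y)  # direct effect, and authoritarianism -> preference
indirect = a * b            # effect of viewing carried through authoritarianism
# For OLS the decomposition c == c_prime + indirect holds exactly.
```

The identity `c = c' + a*b` is an algebraic property of linear regression, which is what makes the "mediated effect" interpretable as a share of the total association.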

We’re not the first to note that entertainment can have political consequences. In a Slate article shortly after the election, writer David Canfield argued that prime-time television is filled with programming that is “xenophobic,” “fearmongering,” “billionaire-boosting” and “science-rejecting.” What we think of “harmless prime-time escapism,” he continued, actually “reinforces the exclusionary agenda put forth by the Trump campaign.” Our data reveal that this was not simply speculation.

None of this means that television played the decisive role in the triumph of Donald Trump. But Trump offered a persona that fit perfectly with the authoritarian mindset nurtured by television.

What we think of as “mere entertainment” can have a very real effect on American politics.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.

08 Jun 19:26

SugarHero and the Snow Globe Cupcakes - Copyright and Food Videos

by noreply@blogger.com (Mathilde Pavis)
A Californian court may soon hear a copyright infringement claim over a "Facebook-style" cooking video featuring a cupcake recipe. Last week, internet chef Elizabeth LaBau, aka ‘SugarHero’, filed a copyright infringement claim against Television Food Network before the Central District Court of California. The crux of the dispute: LaBau's cooking video showcasing her flagship recipe “Snow Globe Cupcakes” (the complaint can be found here in full).

Snow Globe Cupcakes by SugarHero (shot of SugarHero's 'how to' video: www.sugarhero.com)
LaBau’s complaint is clear: it does not seek copyright protection for the recipe embodied in the film, but for the film itself (here). This avenue may be one of the last left open to passionate foodies who dedicate their lives to creating new recipes, as the US Copyright Office and US courts have refused to extend copyright protection to recipes (here) or even recipe books (see Publications International v Meredith Corporation 88 F 3d 473, at 480 (7th Circuit 1996) per Kanne J., contra. see Belford v Scribner 144 US 488). Consequently, the current craze for 'how-to' cooking videos trending on social media may be more than just a marketing tool; it could also be a strategic legal move (depending on the Court's forthcoming decision).

The complaint filed on LaBau’s behalf depicts a dispute in the mould of David and Goliath. The plaintiff is described as having dedicated “years of hard work and late nights” to her project ‘SugarHero’, “competing with numerous corporate food websites, often backed by large companies with deep pockets” in an “oversaturated food market place”. To break through, LaBau worked endlessly to develop “eye-catching” recipes paired with an effective social media strategy (para 16). This is where the “Facebook-style video” at the crux of this dispute comes in.

And one for the road...
(photo by Serenitbee)
In 2014, LaBau developed her “Snow Globe Cupcake” recipe, which was first published on the SugarHero website (www.SugarHero.com) (para 21). Though instantly popular, the recipe only went viral when LaBau published it on Facebook using the regular post format (photo and text) and linked it to the original recipe on her website (para 22). In 2016, LaBau produced a short “Facebook-style” cooking video of the recipe to increase traffic to her website further (para 24). The complaint states that production took two months, from shooting to editing, all of which LaBau performed herself (para 25). Shortly after posting her clip online, SugarHero came across a similar video produced by Food Network, which had been published on December 22, 2016. LaBau contacted the Network to request “credit and attribution for her work, providing a link to her video for reference” (para 30).

The complaint claimed that the “Food Network video” copied LaBau’s work “shot-for-shot” (para 35) by reproducing protected elements of her clip such as “the choice of shots, camera angles, colors, lighting, textual descriptors, and other artistic and expressive elements of [LaBau]’s work” (para 28). SugarHero’s complaint accused Food Network of free-riding on her “hard work”: “the time commitment of recipe development and the cost of ingredients, coupled with the time cost of photographing and videoing the cupcakes”, which all in all amounted to a significant investment on her part (para 35).


SugarHero's shot (left), Food network's shot (right)
Source: BBC
The complaint covers the expected elements of copyright infringement for a cinematographic work, and as such does not put forward anything out of the ordinary. However, the decision may set an interesting precedent for the food industry - should the case reach this stage. If the Californian court sides with LaBau, the decision may re-open the doors of copyright protection in cases of highly-visual or unique recipes recorded in ‘how-to’ videos by their authors. Would that be a welcome development? At any rate, it is a timely conversation to hold as a case on sensory copyright (in the taste of cheese) is making its way to the CJEU, unearthing with it the legitimacy of intellectual property protection for culinary works (here). 

08 Jun 17:44

Major change at work can trigger loss and grief. Organisations must accept this

by Mias de Klerk, Professor: Organisational behaviour, human capital management, leadership development, Stellenbosch University
Employees are often unsettled by change in their organisations. Shutterstock

There is hardly an organisation in the world – big or small – that doesn’t have to adapt to changing circumstances. The pace of development in technology, the quick pace at which new rivals come on the scene, even the rapid turnover of leaders, all require shifts in the way things are done.

But it’s never easy to steer people through change. And, inevitably, there’s resistance. So how can organisations manage it in a way that gets them the outcomes they want?

The default when things don’t go well is to blame employees for being resistant to change. This may be convenient, but it doesn’t deal with the real issues.

Over the last few decades organisations around the world have been pushed into large-scale changes, such as downsizing, outsourcing, mergers and acquisitions, or restructuring. The success rate of such large-scale changes is around 20%.

Change is inevitable. But forced change is emotionally more intimidating and disturbing than is generally assumed, and this predisposes employees to be negative about it. Yet when organisations announce major change, they very often fail to recognise this. In fact they should be concerned with issues such as loss, emotional trauma, grief and mourning.

Leaders, managers and change consultants have a great deal to learn about the ways in which employees experience change and the sense of loss they suffer. Change has little chance of success unless the severity of loss is acknowledged, grief is enfranchised and mourning is encouraged.

Loss

Work is central to many people’s lives and their identities. Therefore forced changes to jobs or work structures are experienced particularly intensely.

People become emotionally attached to things; the more important these things are, the more individuals want to hold onto them. The awareness of loss is therefore much more profound and creates more anxiety.

Any change involves some sort of loss. There are tangible losses like loss of income when a person is retrenched or downgraded. And there are abstract losses such as loss of control, status or self-worth.

For the most part, the deeply felt emotional losses are ignored when dealing with change or in debates about resistance to change. Most studies of corporate rationalisation focus mainly on costs and the performance of the survivors.

Where emotions from change are studied, the focus tends to be on the loss of a job. But the subjective losses and subsequent emotional experiences of individuals tend to be underplayed.

Grieving

Profound loss is associated with grief – a deep sorrow that causes piercing distress. Although the experience of grief is common, there are marked differences in how intensely and for how long people grieve. It’s more intense when there’s a greater degree of attachment to what was lost. The rational size of the loss isn’t relevant – merely the emotional intensity with which the individual experiences the loss.

Organisations tend to be indifferent and reluctant to acknowledge the intensity of loss felt by individuals. Often, demonstrating or talking about emotions is taboo, and when it happens it’s interpreted as resistance to change. The indifference and carelessness of executives can compound the experience of emotional trauma. In the minds of many, grief is associated with weakness, cowardice or even hysterical exaggeration.

As a result, many employees fear that they’ll be seen as weaklings or disloyal if they show their hurt and pain.

When grieving is denied or discouraged, repression or suppression is the only alternative. This leaves individuals unable to engage with change, and can even cause other pathologies. Research has shown that restructuring, especially downsizing, instils in affected people intense fear, anxiety, distrust, and perceptions of betrayal and rejection. These tend to translate into a lack of focus and higher rates of absenteeism and turnover. And occupational injuries and illnesses are much higher at workplaces that go through transformations.

A study titled “Healing emotional trauma in organizations” describes how a group of executives were negatively affected after going through a restructuring that logically should have caused no distress. They were unable to look ahead and plan their strategy because they remained stuck in the emotional trauma of the restructuring.

What can be done

Executives can’t expect employees to leave their emotions at the door when they come to work. They must embrace people’s sense of loss and help them adapt to it if they want change to be successful.

Organisations must build systems that ensure grieving and mourning are allowed so that employees can heal and move on through and past the change.

To ease the pain that comes from change, loss and pain must be publicly acknowledged and mourned in the organisation. Sharing destigmatises the loss and grief as the bereaved employees find validation from peers and managers through their narratives.

This must happen in a safe space, without logical explanations, platitudes or superficial suggestions. In the case study, a group of executives felt healed and prepared for the future after the opportunity to tell and share their stories. The anomaly is that nothing about their situation had changed rationally or logically, but psychologically they were able to move on.

If safe and constructive environments are created, employees won’t find it necessary to vent their emotions in the passages, around the water cooler or in tea rooms.

The Conversation

Mias de Klerk does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

07 Jun 19:46

To smiley face or not: the complexity of email etiquette

by Ken Tann, Lecturer in Communication Management, The University of Queensland
Experts over time continue to disagree on what is the best email etiquette. www.shutterstock.com
The Conversation, CC BY-SA

Emails are ubiquitous in a modern, globalised workforce. A well-crafted email can make the sender appear approachable and competent, while a poorly constructed one is less persuasive and leaves recipients less willing to comply with the request.

Alongside making requests and providing information, emails help us build rapport in the workplace and long-term business relationships. So it’s unsurprising that there’s a sizable market for help with email etiquette.

An internet search for “email etiquette” generates 433,000 results, while a search for books on email etiquette fetches 76 titles (on Amazon.com). However, the advice we get is often hazy, lacking justification, and may even be contradictory at times.

A 2003 study suggested that these different opinions on what to write in an email will converge over time, and that rules will emerge. But 14 years later, we still haven’t gone very far in producing or sticking to a standard.

Why there is no standard when it comes to email etiquette

The problem is that emails are written for very different purposes, including personal messages and invitations, advertising and customer inquiries, team announcements and company newsletters, among other things. The setting also changes: what is acceptable in an academic’s emails is different from what is acceptable in business emails.

The norms in emails also vary between internal and external communication, according to profession and across cultures.

It doesn’t help either that the conventions of email communication are constantly evolving. If there isn’t a one-size-fits-all template that we can apply, what can we rely on to guide email writing?

It’s all about the context

Looking at the context of each email can guide what you write. Take, for example, greeting and closing an email.

A common point of disagreement between commentators is the need for proper greetings and closings. On the one hand, our guidebooks tell us we should always include an appropriate greeting, while on the other, the emails we often see in the workplace seem to contain no greetings at all.

Openings and closings in emails are used to establish the relationship between the sender and the recipient, so this should be the first consideration. Employees who are addressing a distant colleague or someone with higher authority, for example, are more likely to include a greeting.

However, the relationship between sender and recipient develops as emails form a chain of exchanges. Chances are that most of the internal emails we send are linked to an earlier phone call or other emails. In that case the relationship and context would have been well-established already.

These forms of quick exchanges without a greeting or closing resemble an ongoing spoken conversation stretched over a few emails. So any further greetings would seem repetitive and excessive.

However, a sender may still choose to include greetings and closing expressions such as “Dear Mr X” and “Kind regards” to emphasise or downplay the difference in power or to put some distance between themselves and their recipient. Conversely, inclusive salutations such as “Hi Team” or “Hi everyone” and empathetic closes such as “Well done” could help to invoke solidarity among your coworkers and trigger them to act.

It’s also useful to consider the role that the email plays in the overall activity. Internal emails are often short and succinct, because they are sent to provide instructions or information as part of the daily workflow.

Once the purpose of such messages is identified, details can be summarised in point form and abbreviations, so your team members can easily retrieve the information they need without wading through lines of pleasantries. Keeping the message brief also makes it easier to read on smartphones.

However, when it comes to conducting delicate negotiations with customers, when it is necessary to build trust, assert power and establish relationships, emails would have to be longer. Senders would have to couch sensitive messages in formal language and stock phrases, to strengthen their authority by making the message sound impersonal.

The Conversation, CC BY

To smiley face or not?

Guidebooks also disagree on the use of newer features that deviate from more traditional forms of writing, such as emoticons. However, emoticons provide a useful way to manage solidarity by softening requests and minimising impositions at the workplace.

So let’s face it – regardless of what the guidebooks say, people are going to continue using them. This is because emails lack the non-verbal cues that we rely on in face-to-face interactions, such as facial expressions, and this creates the potential for ambiguity and uncertainty in how messages are interpreted. Emoticons are used to disambiguate the tone of a message where there is more than one way to interpret it.

However, senders may also appropriate and exploit the digital features in emails to pursue complex agendas. While the “CC” function ostensibly provides accountability and allows monitoring of work processes, senders may choose to copy in a colleague in a power-play to strengthen their authority and put pressure on the recipient.

It’s worth mentioning that in these examples, the email senders are not simply observing a set of rules for each context; they are actively shaping their context with each choice they make.

The things we choose to do or not do with emails are social actions. As a part of human interaction, emails are as nuanced and complex as the social world we write them in. It is unlikely that we can rely on a checklist or quick-fix rules to get them right, as appealing as that may sound.

The Conversation

Ken Tann does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.

07 Jun 19:34

Modern humans evolved 100,000 years earlier than we thought – and not just in east Africa

by Matthew Skinner, Senior Lecturer in Evolutionary Anthropology, University of Kent
Jean-Jacques Hublin, MPI-EVA, Leipzig

According to the textbooks, all humans living today descended from a population that lived in east Africa around 200,000 years ago. This is based on reliable evidence, including genetic analyses of people from around the globe and fossil finds from Ethiopia of human-like skeletal remains from 195,000–165,000 years ago.

Now a large scientific team that I was part of has discovered new fossil bones and stone tools that challenge this view. The new studies, published in Nature, push back the origins of our species by 100,000 years and suggest that early humans likely spanned across most of the African continent at the time.

View looking south of the Jebel Irhoud site in Morocco, where the fossils were found. Shannon McPherron, MPI EVA Leipzig

Across the globe and throughout history, humans have been interested in understanding their origins – both biological and cultural. Archaeological excavations and the artefacts they recover shed light on complex behaviours – such as tool making, symbolically burying the dead or making art. When it comes to understanding our biological origins, there are two primary sources of evidence: fossil bones and teeth. More recently, ancient genetic material such as DNA is also offering important insights.

The findings come from the Moroccan site of Jebel Irhoud, which has been well known since the 1960s for its human fossils and sophisticated stone tools. However, the interpretation of the Irhoud fossils has long been complicated by persistent uncertainties surrounding their geological age. In 2004, evolutionary anthropologists Jean-Jacques Hublin and Abdelouahed Ben-Ncer began a new excavation project there. They recovered stone tools and new Homo sapiens fossils from at least five individuals – primarily pieces of skull, jaw, teeth and some limb bones.

Composite reconstruction of the earliest known Homo sapiens fossils from Jebel Irhoud (Morocco) based on micro computed tomographic scans of multiple original fossils.

Dating the fossils

Some of the Middle Stone Age stone tools from Jebel Irhoud (Morocco). Mohammed Kamal, MPI EVA Leipzig

To provide a precise date for these finds, geochronologists on the team used a thermoluminescence dating method on the stone tools found at the site. When ancient tools are buried, radiation begins to accumulate from the surrounding sediments. When they are heated, this radiation is removed. We can therefore measure accumulated radiation to determine how long ago the tools were buried. This analysis indicated that the tools were about 315,000 years old, give or take 34,000 years.
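The arithmetic behind this is simple: burial age equals the accumulated (equivalent) dose divided by the annual dose rate from the surrounding sediments. A minimal sketch, with hypothetical numbers chosen only to reproduce an age of roughly 315,000 years:

```python
def tl_age(equivalent_dose_gy, dose_rate_gy_per_yr):
    """Thermoluminescence age in years: accumulated dose / annual dose rate."""
    return equivalent_dose_gy / dose_rate_gy_per_yr

# Illustrative placeholder values, not the study's measurements:
# an equivalent dose of 630 Gy at 0.002 Gy/yr implies ~315,000 years.
age = tl_age(630, 0.002)
# age == 315000.0
```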

Researchers also applied electron spin resonance dating, which is a similar technique but in this case the measurements are made on teeth. Using data on the radiation dose, the age of one tooth in one of the human jaws was estimated to be 286,000 years old, with a margin of error of 32,000 years. Taken together, these methods indicate that Homo sapiens – modern humans – lived in the far northwestern corner of the African continent much earlier than previously known.
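For a rough sense of how two independent dates like these can be pooled, a standard inverse-variance weighted mean (a generic textbook formula, not the geochronologists' actual analysis) combines the thermoluminescence estimate (315 ± 34 kyr) and the electron spin resonance estimate (286 ± 32 kyr) into roughly 300,000 ± 23,000 years:

```python
# The two age estimates quoted above, as (age in years, 1-sigma error).
estimates = [(315_000, 34_000), (286_000, 32_000)]

def weighted_mean(pairs):
    """Inverse-variance weighted mean of (value, sigma) pairs."""
    weights = [1 / sigma ** 2 for _, sigma in pairs]
    mean = sum(w * age for (age, _), w in zip(pairs, weights)) / sum(weights)
    sigma = (1 / sum(weights)) ** 0.5
    return mean, sigma

combined_age, combined_sigma = weighted_mean(estimates)
# combined_age is about 300,000 years; combined_sigma about 23,000 years
```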

But how can one be sure that these fossils belonged to a member of our species rather than some older ancestor? To address this question, the anatomists on the team used high-resolution computed tomography (CAT scans) to produce detailed digital copies of the precious and fragile fossils.

They then used virtual techniques to reconstruct the face, brain case and lower jaw of this group – and applied sophisticated measurement techniques to determine that these fossils possessed modern human-like facial morphology. In this way, they could be distinguished from all other fossil human species known to be in Africa at the time.

Virtual palaeoanthropology is able to correct distortions and fragmentations of fossil specimens.

The high-resolution scans were also used to analyse hidden structures within the tooth crowns, as well as the size and shape of the tooth roots hidden within the jaws. These analyses, which were the focus of my contribution, revealed a number of dental characteristics that are similar to other early fossil modern humans.

And although more primitive than the teeth of modern humans today, they are indeed clearly different from, for example, Homo heidelbergensis and Homo neanderthalensis. The discovery and scientific analyses confirm the importance of Jebel Irhoud as the oldest site documenting an early stage of the origin of our species.

Archaeology versus genetics

As a palaeoanthropologist who focuses on the study of fossil bones and teeth, I am often asked why we don’t simply address these questions of human origins using genetic analyses. There are two main reasons for this. Although incredibly exciting advances have been made in the recovery and analysis of genetic material from fossils that are several hundreds of thousands of years old, it seems that this is only likely to be possible under particular (and unfortunately rare) conditions of burial and fossilisation, such as a low and stable temperature.

That means there are fossils we may never be able to get genetic data from and we must rely on analyses of their morphology, as we do for other very interesting questions related to the earliest periods of human evolutionary history.

The fossils as they were found. Steffen Schatz, MPI EVA Leipzig

Also, understanding the genetic basis of our anatomy only tells us a small part of what it means to be human. Understanding, for example, how behaviour during our lives can alter the external and internal structure of hand bones can help reveal how we used our hands to make tools. Similarly, measuring the chemical composition and the cellular structure of our teeth can tell us what we were eating and our rate of development during childhood. It is these types of factors that help us really understand in what ways you and I are both similar and different to the first members of our species.

And of course, we should not forget that it is the archaeological record that is identifying when we started to make art, adorn our bodies with jewellery, make sophisticated tools and access a diverse range of plant and animal resources. There have been some intriguing suggestions that human species even older than Homo sapiens may have displayed some of these amazing behaviours.

More such research will reveal how unique we actually are in the evolutionary history of our lineage. So let’s encourage a new generation of young scientists to go in search of new fossils and archaeological discoveries that will finally help us crack the puzzle of human evolution once and for all.

The Conversation

Matthew Skinner was asked by The Conversation, as a member of the scientific team that conducted the study of the Jebel Irhoud remains, to summarise the main findings and their significance.

05 Jun 19:52

Canadian Jem and The Holograms Artist Gisele Lagace Denied Entry to the U.S. on Her Way to C2E2

by Charles Pulliam-Moore

While New Brunswick-based artist Gisele Lagace has worked on comics for Dynamite, IDW, Archie, and Marvel, she’s also a freelance illustrator. Like many freelancers, she saw Chicago’s C2E2 as an opportunity to do some important self-promotion and potentially sign on to do some commission work.

Read more...

14 Apr 12:46

The Reconstruction of Ulysses S. Grant

by Michael Durbin


In the second half of the 19th century, few Americans were better known–and revered–than the man whose face looks out today from the $50 bill. Ulysses S. Grant led Union troops to victory in the American Civil War, then thwarted attempts by President Andrew Johnson to suppress fundamental civil rights of newly freed black Americans. […]

12 Apr 12:42

The Private Memoirs and Confessions of a Justified Sinner (1824)

by Adam Green
James Hogg's masterpiece — part-gothic novel, part-psychological mystery, part-metafiction, part-satire, part-case study of totalitarian thought.
01 Mar 20:31

Future of Libraries – Need First Sale for ebooks

by Mary Minow

How will libraries hold onto ebooks and other digital files like mp3s so that readers and scholars in the future can still read them? The current state of affairs relies on license agreements with publishers, who in turn license to vendors, who in turn license to libraries. Hardly sustainable, when files can and do disappear once either the publisher or the vendor no longer offers them.

Libraries rely on the right of first sale to lend print books, and need an analogous right in the world of ebooks and digital music. To that end, the American Library Association, the Association of College and Research Libraries, the Association of Research Libraries and the Internet Archive filed a brief on Feb. 14, 2017 in support of ReDigi, a company that sells used mp3 files to music customers. The brief argues that an evaluation of Fair Use should consider the rationale of the First Sale doctrine, and other specific exceptions. It argues that enabling the transfer of the right of possession should be favored under Fair Use.

It is essential to libraries (and existential would not be too strong a term) to be able to own digital files and care for them via preservation and library lending (e.g. to one person at a time), just as they do with print. Can readers count on books being available a year or two or five after publication? The existence of libraries has made this possible from their inception until now.

The flexibility of digital content allows for an endless array of licensing opportunities (e.g. multiple simultaneous users), which is mutually beneficial to both publishers and users. It is not practical to rely only on first sale for library delivery of econtent. The two modes by which libraries acquire ebooks, licensing and first sale, are not mutually exclusive but mutually dependent.

 

The post Future of Libraries – Need First Sale for ebooks appeared first on Stanford Copyright and Fair Use Center.

27 Feb 19:56

This Week's Big New Exoplanet Discovery Is Becoming Science Nerd Fanfic

by Rae Paoletta

On Wednesday, Earthlings were shocked—and certainly relieved—to finally get a push notification about planetary discovery, not political corruption. News broke that an international team of scientists had spied seven Earth-sized planets orbiting the nearby star TRAPPIST-1. Three of those planets are located in the…

Read more...

15 Feb 21:05

Ursula K. Le Guin Wants Everyone to Know the Huge Difference Between 'Alternative Facts' and Fiction

by Katharine Trendacosta

The word “alternative” appears both in the fun new craze sweeping the government (“alternative facts”) and in a few science fiction staple ideas (“alternate history” and “alternate universe,” for example). Despite that superficial similarity, legendary scifi author Ursula K. Le Guin wants to make sure no one confuses…

Read more...