Many countries around the world have a policy of reciprocal border treatment -- I once traveled to Uganda and the visa payment demanded at the border varied by citizenship, based on what your country charged Ugandans to travel there; likewise, after the US started fingerprinting visitors, Brazil started fingerprinting Americans (and only Americans!) at the border.
The EU, though, is a slightly different matter. Its 28 member-states, with 500,000,000 residents, are bound together in a (sometimes frayed and imperfect) solidarity pact. The USA has decided that citizens of Bulgaria, Croatia, Poland, Romania and Cyprus -- some of the EU's poorest states (mostly ex-Eastern Bloc states) -- need visas, and as of 2021 all US visitors to the core group of 26 "Schengen" EU states will need to apply for a visa before they are allowed to enter.
To apply for the ETIAS, US citizens will need a valid passport, an email account and a credit or debit card, the EU said. Minors, the ETIAS website said, will still only need their normal passports to travel after the visas go into effect.
The Union said that the ETIAS visa is valid for three years and allows Americans to enter the Schengen Area as many times as necessary.
On the ETIAS website, the European Union said it "has recently decided to improve their security level to avoid any further problems with illegal migration and terrorism."
Last week, House Democrats introduced the Save the Internet Act, to enact the Net Neutrality protections favored by 83% of Americans; in response, Rep Greg Walden (R-OR, @repgregwalden, +1 (541) 776-4646) has proposed legislation rescinding Section 230 of the Communications Decency Act, "the most important law protecting internet speech", which says that online services are not required to pro-actively censor user postings that might contain illegal speech -- a vital protection that made it possible for sites like this one to have comment sections, and also enabled sites like Youtube and Snapchat to accept photos and videos from the public.
But as bad as SESTA/FOSTA was, Walden's proposal would be much, much worse. Eliminating Section 230 protection for wider classes of bad speech would make every public utterance subject to extremely broad, error-prone algorithmic censorship, while cementing the dominance of today's digital monopolists, because only the largest tech companies would be able to afford to run these buggy algorithms.
Thelma Galaxia said her friend, Michelle Cardenas, was driving each of their two children from Tijuana, where they live, to their schools in San Ysidro Monday morning, as they do nearly every day.
Galaxia's 9-year-old daughter, Julia Isabel Amparo Medina, attends fourth grade at Nicoloff Elementary School and her 14-year-old son, Oscar Amparo Medina, attends ninth grade at San Ysidro High School. Both are passport-holding U.S. citizens.
When they got in line at the border at 4 a.m. Monday, traffic was moving slowly. Cardenas told the four children to walk across the border instead. She was going to call them an Uber so they could make it to school on time.
But Oscar and Julia Medina never made it across, according to their mother.
Galaxia says CBP officers accused her daughter of lying about her identity. Officers told the girl she didn’t look like the girl in her passport card picture.
Julia Medina told NBC 7 that CBP officers accused her of being someone else, her cousin Melanie. The children said officers also accused Oscar Medina of smuggling and other crimes which he said he didn’t understand.
“My daughter told her brother that the officer told her that if she admitted that she was her cousin, she would be released soon so she could see her mom,” Galaxia said.
“I was scared. I was sad because I didn't have my mom or my brother. I was completely by myself,” Julia Medina said. She said she woke up several times throughout the night, sad because she wasn’t with her family.
Galaxia said officers made Oscar Medina sign a document that said his little sister was his cousin.
“That is not true,” Galaxia said. “She is my daughter. He was told that he would be taken to jail and they were going to charge him for human trafficking and sex trafficking.”
Oscar told NBC 7 he felt terrible for signing the document. He said he just wanted to see his sister.
When CBP officers told Galaxia that Oscar and Julia Medina were detained, she got the Mexican consulate involved.
And after the consulate got involved, the children were released and the family was reunited.
CBP was asked by the San Diego NBC news reporters “why officers detained a 9-year-old U.S. citizen and kept her from her mother for 36 hours,” and a spokesperson said CBP “would respond to questions when it had more details on the case.”
The Mexican consulate told NBC 7 it will also provide more information as to how they were able to reunite the family.
Timothy Geigner has published a detailed analysis of the judgment on Techdirt. It's quite an amazing read: the judge is very clear that no one is going to mistake Comicmix's parody for the Dr Seuss original, nor would they buy the parody as a substitute for Seuss, and the court is especially down on the Seuss estate's theory that the (terrible) decision in Oracle v Google means that mashups are illegal.
Examining the cover of each work, for example, Plaintiff may claim copyright protection in the unique, rainbow-colored rings and tower on the cover of Go! Plaintiff, however, cannot claim copyright over any disc-shaped item tilted at a particular angle; to grant Plaintiff such broad protection would foreclose a photographer from taking a photo of the Space Needle just so, a result that is clearly untenable under—and antithetical to—copyright law.
But that is essentially what Plaintiff attempts to do here. Instead of replicating Plaintiff’s rainbow-ringed disc, Defendants drew a similarly-shaped but decidedly non-Seussian spacecraft—the USS Enterprise—at the same angle and placed a red-and-pink striped planet where the larger of two background discs appears on the original cover. See Duvdevani Decl. Ex. 31, ECF No. 115-11, at 450. Boldly’s cover also features a figure whose arms and hands are posed similarly to those of Plaintiff’s narrator and who sports a similar nose and eyes, but Boldly’s narrator has clearly been replaced by Captain Kirk, with his light, combed-over hair and gold shirt with black trim, dark trousers, and boots. Id. Captain Kirk stands on a small moon or asteroid above the Enterprise and, although the movement of the moon evokes the tower or tube pictured on Go!’s cover, the resemblance is purely geometric. Id. Finally, instead of a Seussian landscape, Boldly’s cover is appropriately set in space, prominently featuring stars and planets. Id. In short, “portions of the old work are incorporated into the new work but emerge imbued with a different character.” See Mattel, Inc. v. Walking Mountain Prods., 353 F.3d 792, 804 (9th Cir. 2003).
During the week of March 25, the European Parliament will hold the final vote on the Copyright Directive, the first update to EU copyright rules since 2001; normally this would be a technical affair watched only by a handful of copyright wonks and industry figures, but the Directive has become the most controversial issue in EU history, literally, with the petition opposing it attracting more signatures than any other petition in change.org’s history.
How did we get here?
European regulations are marathon affairs, and the Copyright Directive is no exception: it had been debated and refined for years, and as of spring 2017, it was looking like all the major points of disagreement had been resolved. Then all hell broke loose. Under the leadership of German Member of the European Parliament (MEP) Axel Voss, acting as "rapporteur" (a sort of legislative custodian), two incredibly divisive clauses in the Directive (Articles 11 and 13) were reintroduced in forms that had already been discarded as unworkable after expert advice. Voss's insistence that Articles 11 and 13 be included in the final Directive has been a flashpoint for public anger, drawing criticism from the world's top technical, copyright, journalistic, and human rights experts and organizations.
Why can no one agree on what the Directive actually means?
"Directives" are rules made by the European Parliament, but they aren't binding law—not directly. After a Directive is adopted at the European level, each of the 28 countries in the EU is required to "transpose" it by passing national laws that meet its requirements. The Copyright Directive has lots of worrying ambiguity, and much of the disagreement about its meaning comes from different assumptions about what the EU nations do when they turn it into law: for example, Article 11 (see below) allows member states to ban links to news stories that contain more than a word or two from the story or its headline, but it only requires them to ban links that contain more than "brief snippets"—so one country might set up a linking rule that bans news links that reproduce three words of an article, and other countries might define "snippets" so broadly that very little changes. The problem is that EU-wide services will struggle to present different versions of their sites to people based on which country they're in, and so there's good reason to believe that online services will converge on the most restrictive national implementation of the Directive.
What is Article 11 (The "Link Tax")?
Article 11 seeks to give news companies a negotiating edge with Google, Facebook and a few other Big Tech platforms that aggregate headlines and brief excerpts from news stories and refer users to the news companies' sites. Under Article 11, text that contains more than a "snippet" from an article is covered by a new form of copyright, and must be licensed and paid for by whoever quotes the text. While each country can define "snippet" however it wants, the Directive does not stop countries from requiring a license for the use of as few as three words from a news story.
What's wrong with Article 11/The Link Tax?
Article 11 has a lot of worrying ambiguity: it has a very vague definition of "news site" and leaves the definition of "snippet" up to each EU country's legislature. Worse, the final draft of Article 11 has no exceptions to protect small and noncommercial services, including Wikipedia but also your personal blog. The draft doesn’t just give news companies the right to charge for links to their articles—it also gives them the right to ban linking to those articles altogether (where such a link includes a quote from the article), so sites can threaten critics writing about their articles. Article 11 will also accelerate market concentration in news media, because giant companies will license the right to link to each other but not to smaller sites, who will not be able to point out deficiencies and contradictions in the big companies' stories.
What is Article 13 ("Censorship Machines")?
Article 13 is a fundamental reworking of how copyright works on the Internet. Today, online services are not required to check everything that their users post to prevent copyright infringement, and rightsholders don't have to get a court order to remove something they view as a copyright infringement—they just have to send a "takedown notice" and the services have to remove the post or face legal jeopardy. Article 13 removes the protection for online services and relieves rightsholders of the need to check the Internet for infringement and send out notices. Instead, it says that online platforms have a duty to ensure that none of their users infringe copyright, period. Article 13 is the most controversial part of the Copyright Directive.
What's a "copyright filter?"
The early versions of Article 13 were explicit about what online service providers were expected to do: they were supposed to implement "copyright filters" that would check every tweet, Facebook update, shared photo, uploaded video, and every other upload to see if anything in it was similar to items in a database of known copyrighted works, and block the upload if they found anything too similar. Some companies have already made crude versions of these filters, the most famous being YouTube's "ContentID," which blocks videos that match items identified by a small, trusted group of rightsholders. Google has spent $100m on ContentID so far.
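To make the mechanism concrete, here is a toy sketch of the matching step a copyright filter performs. Real systems like ContentID fingerprint audio and video; this illustration stands in for that with a crude text comparison, matching uploads against a database of registered works by word-shingle overlap. Every work, name, and threshold below is invented for illustration.

```python
# Toy illustration of a "copyright filter": compare each upload against a
# database of registered works and block anything that looks too similar.
# Real fingerprinting (e.g. for video) is far more sophisticated; this uses
# simple word-shingle overlap on text, with made-up works and thresholds.

def shingles(text, n=3):
    """Break text into overlapping n-word chunks ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def filter_upload(upload, known_works, threshold=0.3):
    """Block the upload if it resembles any registered work too closely."""
    upload_shingles = shingles(upload)
    for title, work in known_works.items():
        if similarity(upload_shingles, shingles(work)) >= threshold:
            return ("blocked", title)
    return ("allowed", None)

known = {"Registered Poem": "the quick brown fox jumps over the lazy dog"}
print(filter_upload("the quick brown fox jumps over a lazy dog", known))
print(filter_upload("an entirely original sentence about cats", known))
```

The near-copy is blocked while the unrelated sentence passes; the `threshold` parameter is exactly where the trouble lives, since setting it low enough to catch paraphrases is also what catches quotation, parody, and incidental use.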
Why do people hate filters?
Copyright filters are very controversial. All but the crudest filters cost so much that only the biggest tech companies can afford to build them—and most of those are US-based. What's more, filters are notoriously inaccurate, prone to overblocking legitimate material—and lacking in checks and balances, making it easy for censors to remove material they disagree with. Filters assume that the people who claim copyrights are telling the truth, encouraging laziness and sloppiness that catches a lot of dolphins in the tuna-net.
Does Article 13 require "filters?"
Axel Voss and other proponents of Article 13 removed explicit references to filters from the Directive in order to win a vote in the European Parliament. But the new text of Article 13 still demands that the people who operate online communities somehow examine and make copyright assessments about everything: hundreds of billions of social media posts and forum posts and video uploads. Article 13 advocates say that filters aren't required, but when challenged, not one has been able to explain how to comply with Article 13 without using filters. Put it this way: if I pass a law requiring you to produce a large African mammal with four legs, a trunk, and tusks, we definitely have an elephant in the room.
Will every online service need filters?
Europe has a thriving tech sector, composed mostly of "small and medium-sized enterprises" (SMEs), and the politicians negotiating the Directive have been under enormous pressure to protect these Made-In-Europe firms from a rule that would wipe them out and turn over permanent control over Europe's Internet to America's Big Tech companies. The political compromise that was struck makes a nod to protecting SMEs but ultimately dooms them. The new rules grant partial limits on copyright liability only for the first three years of an online service's existence, and even these limits are mostly removed once a firm attains over 5 million unique visitors (an undefined term) in a given month; once a European company hits annual revenues (not profits!) of €10m, it has all the same obligations as the biggest US platforms. That means that the 10,000,001st euro a company earns comes with a whopping bill for copyright filters. There are other, vaguer exemptions for not-for-profit services, but no clear description of what they would mean. As with the rest of the law, it will depend on how each individual country implements the Directive. France’s negotiators, for example, made it clear that they believe no Internet service should be exempted from the Article’s demands, so we can expect their implementation to provide for the narrowest possible exemption. Smaller companies and informal organizations will have to prepare to lawyer up in these jurisdictions because that’s where rightsholders will seek to sue. A more precise, and hopefully equitable, solution could finally be decided by the European Court of Justice, but such suits will take years to resolve.
Both the major rightsholders and Big Tech will strike their own compromise license agreements outside of the courts, and both will have an interest in limiting these exceptions, so it will come down to those same not-for-profit services and small companies to bear the costs of winning those cases and to live in legal uncertainty until they have been decided.
What about "licenses" instead of "filters"?
Article 13 only requires companies to block infringing uses of copyrighted material: Article 13 advocates argue that online services won't need to filter if they license the catalogues of big entertainment companies. But almost all creative content put online (from this FAQ to your latest tweet) is instantly and automatically copyrighted. Despite what EU lawmakers believe, we don’t live in a world where a few large rightsholders control the copyright of the majority of creative works. Every Internet user is a potential rightsholder. All three billion of them. Article 13 doesn't just require online services to police the copyrights of a few giant media companies; it covers everyone, meaning that a small forum for dog fanciers would have to show it had made "best efforts" to license photos from other dog fancier forums that their own users might repost—every copyright holder is covered by Article 13. Even if an online platform could license all the commercial music, books, comics, TV shows, stock art, news photos, games, and so on (and assuming that media companies would sell them these licenses), it would still somehow have to make "best efforts" to license other users' posts or stop its users from reposting them.
Doesn't Article 13 say that companies shouldn't overblock?
Article 13 has some language directing European countries to make laws that protect users from false copyright takedowns, but while EU copyright sets out financial damages for people whose copyrights are infringed, you aren't entitled to anything if your legitimate posts are censored. So if a company like Facebook, which sees billions of posts a day, accidentally blocked one percent of those posts, it would have to screen and rule on millions of users' appeals every single day. If Facebook makes those users wait for days or weeks or months or years for a ruling, or if it hires moderators who make hasty, sloppy judgments, or both, Article 13 gives those users no rights to demand better treatment, and even the minimal protections under Article 13 can be waved away by platforms through a declaration that users' speech was removed because of a "terms of service violation" rather than a copyright enforcement.
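The scale problem can be sketched with back-of-the-envelope arithmetic. All of the figures below (daily post volume, error rate, review time) are illustrative assumptions, not platform data:

```python
# Back-of-the-envelope arithmetic: what a 1% false-positive rate means
# at social-media scale. Every number here is an assumption chosen for
# illustration, not a measured figure.

posts_per_day = 3_000_000_000       # assumed daily post volume
false_positive_rate = 0.01          # assume 1% of posts wrongly blocked
seconds_per_review = 30             # assumed time per human appeal review

wrongly_blocked = posts_per_day * false_positive_rate
reviewer_hours = wrongly_blocked * seconds_per_review / 3600
reviewers_needed = reviewer_hours / 8   # 8-hour shifts

print(f"{wrongly_blocked:,.0f} wrongful blocks per day")
print(f"{reviewers_needed:,.0f} full-time moderators needed just for appeals")
```

Even under these charitable assumptions, that is tens of millions of wrongful blocks a day and a moderation workforce the size of a small city, which is why appeals queues stretch to weeks or months.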
Do Article 13's opponents only want to "save the memes?"
Not really. It's true that filters—and even human moderators—would struggle to figure out when a meme crosses the line from "fair dealing" (a suite of European exceptions to copyright for things like parody, criticism and commentary) into infringement, but "save the memes" is mostly a catchy way of talking about all the things that filters struggle to cope with, especially incidental use. If your kid takes her first steps in your living room while music is playing in the background, the "incidental" sound could trigger a filter, meaning you couldn't share an important family moment with your loved ones around the world. Or if a news photographer takes a picture of police violence at a demonstration, or the aftermath of a terrorist attack, and that picture captures a bus-ad with a copyrighted stock-photo, that incidental image might be enough to trigger a filter and block this incredibly newsworthy image in the days (or even weeks) following an event, while the photographer waits for a low-paid, overworked moderator at a big platform to review their appeal. It also affects independent creators whose content is used by established rightsholders. Current filters frequently block original content, uploaded by the original creator, because a news service or aggregator subsequently used that content, and then asserted copyright over it. (Funny story: MEP Axel Voss claimed that AI can distinguish memes from copyright infringement on the basis that a Google image search for "memes" displays a bunch of memes.)
What can I do?
Please contact your MEP and tell them to vote against the Copyright Directive. The Copyright Directive vote is practically the last thing MEPs will do before they head home to start campaigning for EU elections in May, so they're very sensitive to voters right now! And on March 23, people from across Europe are marching against the Copyright Directive. The pro-Article 13 side has the money, but we have the people!
It’s no secret that the 1980 comedy movie Airplane! is a parody. But for some reason, I’ve spent my entire life believing that Airplane! was a broad parody of 1970s disaster movies. And while the movie clearly took inspiration from some of those films, it’s actually a very specific parody of a movie from 1957 called …
by Robert Chilcott, Professor, Centre for Research into Topical Drug Delivery and Toxicology, University of Hertfordshire
Diners in France recently got more than they bargained for when poppy seed baguettes were found to contain a dose of opium that could take postprandial napping to a new extreme. Other than narcotics, there are a host of surprises lurking in everyday foodstuffs that you might not be aware of. Here are some of the less palatable ones. Bon appétit.
When it comes to food, “natural” is usually a byword for “good”. But some natural products are a bit disgusting. For example, a natural flavouring called castoreum is a thick, odorous secretion obtained from the anal glands of beavers. It is used to give a vanilla flavour to some dairy products and desserts.
Towards the end of the 19th century, beavers were nearly hunted to extinction to acquire this highly desirable food additive and fragrance. Fortunately, German chemists discovered that vanillin (one of the chemicals responsible for the taste of vanilla) could be extracted from the humble conifer.
Today, synthetic vanillin accounts for about 94% of all vanilla flavouring used in the food industry (37,286 tons), with natural vanilla extract accounting for most of the remaining 6%. Beavers can heave a sigh of relief. Their contribution to the food industry now accounts for a tiny fraction of natural vanilla flavouring and tends to be limited to luxury foods and beverages.
Another natural ingredient that might make you retch is rennet. It traditionally came from the mucous membrane of the fourth stomach (abomasum) of young ruminants, such as calves, lambs and goats. The enzymes separate milk into curds and whey – a key stage in the manufacturing process.
Traditional rennet is still used today, although alternatives (derived from mould, bacterial fermentation and plants such as nettles and ivy) are increasingly common, if not slightly more palatable.
Allowable food defects
We live in an era of unprecedented hygiene and expect our food to contain only the ingredients labelled on the packaging. But anyone who has foraged in the wild will know that nature likes to share its rich bounty. There is nothing surprising about taking a bite out of a freshly picked apple to find the remaining half of a (presumably very upset) insect.
Our basic foodstuffs are not grown in sterile conditions and so our diet is peppered with a variety of unintended side dishes, including soil, rodent hairs, faeces, mould, parasites and, of course, insects. The earthy nature of food production is acknowledged in the US through the publication of the Defect Levels Handbook that defines acceptable (non-hazardous) levels of these undisclosed morsels.
For example, two cupfuls of cornmeal may legitimately contain up to five whole insects, ten insect fragments, ten rodent hairs and five rodent poop fragments. It certainly puts that half-eaten apple into perspective.
Pollution – heavy metal
Lewis Carroll’s fictional Mad Hatter character may have been inspired by an occupational disease of milliners (hat makers) caused by exposure to mercury and its salts during a process called “carroting”.
This was commonly used on the pelts of small animals, such as beavers, to make the fur softer. Beavers clearly didn’t have a good time in the 19th century, but the effects of mercury on milliners were equally devastating, with up to half the working population afflicted by erethism, or “mad hatter’s disease”, the signs and symptoms of which included irritability and excitability, muscle spasms, loss of teeth, nails and hair, lack of coordination, confusion, memory loss and death.
While phased out from most industrial processes, mercury remains a significant air and water pollutant. Indeed, the release of industrial waste into the sea off the south coast of Japan resulted in the local population eating seafood containing methylmercury, the most toxic form of mercury. Because of this, several thousand people became victims of Minamata disease.
How did the seafood become so poisonous? The answer lies in an effect called bioaccumulation, the process whereby the concentration of a substance can substantially increase with each step up the food chain (see illustration). So next time you tuck into a tuna steak, try not to get too irritable or excitable about the hidden mercury.
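The effect described above can be expressed as simple arithmetic: if each trophic level concentrates a substance by some fixed factor, concentration grows exponentially with each step up the chain. The tenfold factor, the starting concentration, and the food chain below are invented purely to illustrate the shape of the effect:

```python
# Simple model of bioaccumulation: concentration multiplies by a fixed
# factor at each step up the food chain. The 10x factor, the starting
# concentration, and the chain itself are illustrative assumptions.

def bioaccumulate(base_concentration, magnification_per_step, steps):
    """Concentration after `steps` trophic levels of biomagnification."""
    return base_concentration * magnification_per_step ** steps

water_ppb = 0.001   # assumed methylmercury in seawater, parts per billion
chain = ["plankton", "small fish", "mackerel", "tuna"]

for level, species in enumerate(chain, start=1):
    conc = bioaccumulate(water_ppb, 10, level)
    print(f"{species}: {conc:g} ppb")
```

With these made-up numbers, the top predator ends up four orders of magnitude above the surrounding water, which is the essence of why long-lived predatory fish like tuna carry the highest mercury burden.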
Although pollutants like mercury, lead, cadmium and arsenic often make headlines as food contaminants, nature’s larder can accidentally contain a whole host of toxins. Many members of the rhododendron genus of flowering plants secrete grayanotoxins in their nectar. These neurotoxic substances are dutifully collected by bees who proceed to make honey, consumption of which can cause “mad honey disease” in humans. This rather unusual form of contamination can cause hallucinations, nausea and vomiting.
When we think of food poisoning, flowers rarely spring to mind, but rhododendron has been indirectly responsible for incapacitating entire armies. True flower power!
Natural born killers
Pickles and preserves have been used for centuries to extend the shelf life of food through the winter months. Unfortunately, badly preserved food can encourage the growth of Clostridium botulinum, which produces the world’s most toxic substances, collectively known as botulinum toxin, some of which can be fatal at a dose of 2ng – that’s two billionths of a gram. To put that in perspective, the average lethal dose of potassium cyanide is about a tenth of a gram.
Eating contaminated food will cause botulism, which stops the nervous system functioning properly. Accordingly, the condition is characterised by general muscle weakness and, eventually, paralysis and death.
Spores of C. botulinum are often found in honey. While relatively harmless to most people, the immune system of young infants is relatively ineffective against these bacteria, which can lead to a related condition known as infantile botulism. Indeed, this is why many government agencies advise against giving honey to children under a year old.
Robert Chilcott does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
I’m happy to share Flickr’s announcement today that all CC-licensed and public domain images on the platform will be protected and exempted from upload limits. This includes images uploaded in the past, as well as those yet to be shared. In effect, this means that CC-licensed images and public domain works will always be free on Flickr for any users to upload and share.
Flickr is one of the most important repositories of openly-licensed content on the web, with over 500M images in their collection, shared by millions of photographers, libraries, archives, and museums around the world. The company was an early adopter of CC licenses, and was bought by Yahoo! and later sold to Verizon. Last year, Flickr was sold again, this time to a family-owned photo service called SmugMug. Many were justifiably concerned about the future of Flickr, an essential component of the digital Commons.
Once the sale of Flickr was announced, CC began working closely with Don and Ben MacAskill of SmugMug, Flickr's new owners, to protect the works that users have shared. Last November, Flickr posted that they were moving to a new paid service model that would restrict the number of free uploads to 1,000 images. Many, including Creative Commons, were concerned this could cause millions of works in the Commons to be deleted. We continued to work with Flickr, and a week later, they announced that CC-licensed images that had already been shared on the platform would be exempted from upload limits.
Today’s announcement takes that commitment one step further, and ensures that every CC-licensed or public domain image shared on Flickr is protected for all to use and re-use. It’s a significant commitment. Don and Ben MacAskill and the whole Flickr team have been supportive of CC and Flickr’s responsibility to steward the Commons from day one, and have been open and collaborative with Creative Commons all along.
For users of Flickr (and no doubt also for Flickr staff) it’s been a tumultuous time. Migrating to new business models is difficult, and will undoubtedly anger some users, especially those used to getting things for free. However, we’ve seen how unsustainable and exploitative free models can be, and I’m glad that Flickr hasn’t turned to surveillance capitalism as the business model for its sustainability plan – but that does mean they’ll have to explore other options.
Choosing to allow all CC-licensed and public domain works to be uploaded and shared without restrictions or limits comes at a real financial cost to Flickr, which is paid in part by their Pro users. We believe that it’s a valuable investment in the global community of free culture and open knowledge, and it’s a gift to everyone. We’re grateful for the ongoing investment and enthusiasm from the entire Flickr team, and their commitment to support users who choose to share their works. We will continue to work together to help educate Flickr’s users about their options when sharing works online, and to support the communities contributing to the growth and preservation of a vibrant collection of openly-licensed and public domain works.
The National Socialist Movement is one of America's oldest and most influential Holocaust denial/neo-Nazi movements, proprietors of one of the world's most prominent Holocaust denial websites and defendants in a case over members who participated in racist violence at the Charlottesville "Unite the Right" rally.
James Hart Stern is a Black civil rights activist who has previously -- through very unlikely circumstances -- become head of a prominent KKK organization, which he took over in order to dissolve it. Stern was able to do this because he became a confidant of former Ku Klux Klan Grand Wizard Edgar Ray Killen while they were cellmates -- Killen imprisoned for murder, Stern for mail fraud. Killen ended up giving Stern power of attorney, which Stern used to take over the Klan and shut it down.
Jeff Schoep was the longtime president of the NSM, credited with reinvigorating and growing it. He became acquainted with Stern through Stern's connection to Killen. He claims that he sought Stern's advice on his legal exposure from the Unite the Right suits, that Stern convinced him he should sign over control of the NSM in order to protect himself, and that Stern tricked him into signing over control.
According to Stern, Schoep was worried that NSM was riddled with "the most vulnerable, the most loose-cannon members that they had ever had in the organization" and that "somebody was going to commit a crime, and he was going to be held responsible for it." Stern says Schoep also felt unappreciated by the membership.
Once Stern took over the organization, he sought advice from Jewish leaders as to how best to use his new powers.
Stern has now acted: he has intervened in the pending litigation against NSM to ask that the judge find the group and its members guilty and punish them all accordingly (including Schoep). Next, he says he will transform the group's Holocaust-denial clearinghouse into a resource center for Holocaust remembrance. And he's offered control over the logins and passwords for NSM's social media accounts to the attorneys for the anti-racist activists who are suing NSM.
Though Schoep is no longer legally affiliated with NSM, he still faces the lawsuit because he is listed as a defendant in an individual capacity.
“It’s definitely not good for him, and it shouldn’t be good for him,” Stern said. “You spend 25 years terrorizing people, you can’t rebrand overnight. It doesn’t work like that.”
From California, where he runs Racial Reconciliation Outreach Ministries, Stern is still sorting through the legal intricacies his new leadership entails. He is currently listed as the attorney representing NSM in court filings, but a judge ruled Friday that he cannot be NSM’s lawyer because corporations are not legally authorized to represent themselves in court.
Stern said he is working on hiring an outside lawyer to refile his motion for a summary judgment on the lawsuit. He has also offered the plaintiffs' attorneys full access to NSM social media accounts, he said — because he claims to own those now, too.
“Say what you want about me,” Stern said. “But I’ve done this twice now.”
SpicyIP is arguably the leading blog for experts on India's copyright system, but links to it disappeared from Google's search index following a fraudulent claim of copyright infringement filed by Saregama, India's oldest record label.
The label claimed that an expert report from 2010 on the history of a Bollywood song called "Apni To Jaise Taise" infringed on the copyright to the song. It did not.
Luckily for SpicyIP, they have no shortage of copyright experts who can argue their case with Google's takedown system, and they were reinstated.
However, this is a bad omen for the future of free expression in India, which is contemplating legislation similar to Europe's catastrophic Article 13, which would automate censorship of anything claimed as copyrighted, without any requirement that these claims be truthful or made in good faith.
Regardless of the form of notice-and-takedown, it is apparent that placing an obligation to police copyright infringement on intermediaries creates perverse economic incentives for private parties like Google or YouTube to over-comply and take down legal content. This is not solely attributable to the intermediaries’ practices themselves, but to the policy and legal decisions which are created and are supposed to strike a balance between access to knowledge and copyright protection in the digital age.
While we hope SaReGaMa has a stern word with its lawyers, it’s funny and (perhaps on the balance) appropriate that this takedown notice came to a copyright law blog, where we can discuss and dissect such procedures. Yet, had it happened to a non-lawyer, or even someone who had ceased to take interest in their old blog, as often happens, it would result in the permanent removal of public information from an index which serves as the gateway to the internet, due to the ‘mistakes’ of private parties whose interests do not coincide with public access. What does this mean for the future of access to knowledge? These are hard questions which Indian lawyers and policy makers need to grapple with, particularly when we have senseless obligations like the mechanism for ‘automated takedown of unlawful content’ being proposed for intermediaries by the Ministry of IT.
Many blacks and whites also fail to see eye to eye regarding the use of blackface, which dominated the news cycle during the early part of 2019 due to a series of scandals involving the highest elected leaders in Virginia, where I teach.
The donning of blackface happens throughout the country, particularly on college campuses. Recent polls indicate that 42 percent of white American adults either think blackface is acceptable or are uncertain as to whether it is.
One of the most recent blackface scandals has involved Virginia Gov. Ralph Northam, whose yearbook page from medical school features someone in blackface standing alongside another person dressed in a Ku Klux Klan robe. Northam has denied being either person. The more Northam has tried to defend his past actions, the clearer it has become to me how little he appears to know about fundamental aspects of American history, such as slavery. For instance, Northam referred to Virginia’s earliest slaves as “indentured servants”. His ignorance has led to greater scrutiny of how he managed to ascend to the highest leadership position in a racially diverse state with such a profound history of racism and white supremacy.
Ignorance is pervasive
The reality is Gov. Northam is not alone. Most Americans are largely uninformed of our nation’s history of white supremacy and racial terror.
As a scholar who researches racial discrimination, I believe much of this ignorance is due to negligence in our education system. For example, a recent study found that only 8 percent of high school seniors knew that slavery was the central cause of the Civil War. There are ample opportunities to include much more about white supremacy, racial discrimination and racial violence into school curricula. Here are three things that I believe should be incorporated into all social studies curricula today:
1. The Civil War was fought over slavery and one of its offshoots – the convict-lease system – did not end until the 1940s
The Civil War was fought over the South’s desire to maintain the institution of slavery in order to continue to profit from it. It is not possible to separate the Confederacy from a pro-slavery agenda and curriculums across the nation must be clear about this fact.
After the end of the Civil War, southern whites sought to keep slavery through other means. Following a brief post-Civil War period known as Reconstruction, white southerners created new laws that gave them legal authority to arrest blacks over the most minor offenses, such as not being able to prove they had a job, and to lease them out as convict labor — a system that persisted into the 1940s.
2. Blackface minstrelsy grew out of Jim Crow-era violence and stereotypes
While students may be taught about segregation and laws preventing blacks from voting, they often are not taught about the extreme violence whites enacted upon blacks throughout the Jim Crow era, which took place from 1877 through the 1950s. Mob violence and lynchings were frequent occurrences – and not just in the South – throughout the Jim Crow era.
During this same time, white society created negative stereotypes about blacks as a way to dehumanize blacks and justify the violence whites enacted upon them. These negative stereotypes included that blacks were ignorant, lazy, cowardly, criminal and hypersexual.
Blackface minstrelsy refers to whites darkening their skin and dressing in tattered clothing to perform the negative stereotypes as part of entertainment. This imagery and entertainment served to solidify negative stereotypes about blacks in society. Many of these negative stereotypes persist today.
3. Racial inequality was preserved through housing discrimination and segregation
During the early 1900s, a number of policies were put into place in our country’s most important institutions to further segregate and oppress blacks. For example, in the 1930s, the federal government, banks and the real estate industry worked together to prevent blacks from becoming homeowners and to create racially segregated neighborhoods.
This process, known as redlining, served to concentrate whites in middle-class suburbs and blacks in impoverished urban centers. Racial segregation in housing has consequences for everything from education to employment. Moreover, because public school funding relies so heavily on local taxes, housing segregation affects the quality of schools students attend.
All of this means that even after the removal of discriminatory housing policies and school segregation laws in the 1950s and 1960s, the consequences of this intentional segregation in housing persist in the form of highly segregated and unequal schools. All students should learn this history to ensure that they do not wrongly conclude that current racial disparities are based on individual shortcomings – or worse, black inferiority – as opposed to systematic oppression.
Americans live in a starkly unequal society where health and economic outcomes are largely influenced by race. We cannot begin to meaningfully address this inequality as a society if we do not properly understand its origins. The white supremacists responsible for sanitizing our history lessons understood this. Their intent was clearly to keep the country ignorant of its racist past in order to stymie racial equality. To change the tide, we must incorporate a more accurate depiction of our country’s racist history in our K-12 curricula.
Noelle Hurd does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Césares de Roma is a project to make hyperrealist sculptures of Roman emperors using existing portraits and sculptures as references. The latest creation is a silicone bust of Nerón Claudio César Augusto Germanicus, aka Nero, who wreaked havoc from 54 to 68 AD.
In the Smithsonian Magazine online comments, I encountered these opinions:
“I think the Smithsonian should not have published such an extreme postmodernist and anti-science article.”
“This was an astoundingly bad article that a good science editor should have blocked. The author is clearly knowledgeable about his field but lacks a clear understanding of the scientific method … a series of anti-science and postmodernist rants have been passed off as fact …”
“Without the unnecessary anti-science it would have been a good article.”
“The Smithsonian has gone new-age and the anti-science, regressive Left is apparently thriving there …”
Criticism in academia is healthy. But there was nothing “anti-science” about my article, which asserted that Traditional Knowledge and western science are often complementary; indeed, my own work as an archaeologist is heavily informed by science.
To these critics, it seems, only western science can be championed as objective, reliable and neutral.
Emerging from the Enlightenment in the late 17th century, science has provided us with a powerful suite of tools — from quantum mechanics to astrophysics, from chemistry to geology — with which to understand the world and everything in (and outside) it. Broadly framed, science is a method or means to systematically study the world, including the smallest bits of it, through observation and experimentation to find the best explanation. This description holds true regardless of the culture or beliefs of the scientist.
As an archaeologist, I research the intersection of western and Indigenous ways of knowing the world. I have found that these seemingly different knowledge systems sometimes complement and sometimes contradict each other. I have learned that Indigenous people’s understandings of the world include knowledge gained through scientific methods.
Anti-science attitudes extend to my own field: the television series Ancient Aliens (now in season 13) explains ancient technologies and places with complete disregard for scientific evidence.
Good science should yield many new insights, and even reverse established theories. Medical ideas have changed over the years as to whether salt, eggs, coffee, alcohol, etc. are bad or good for you. Such shifts can be explained by new evaluative techniques or larger and longer studies.
Does what has been called the “Replication Crisis” mean that science is not reliable? Of course not. Occasionally, experiments are methodologically flawed or sample sizes too small. These findings reiterate that science is a human enterprise, sometimes prone to personal bias and political motivations.
It is also easy to neglect how quickly new understandings of our world replace old ones.
The methods and goals of western science have been challenged by Indigenous peoples, who have often been the unwilling focus of scientific research (especially in areas like genetics and archaeology). Academics have also challenged scientific methods and goals. However, a critique of science is not a rejection of science.
Indigenous knowledge often complements, but sometimes contradicts the results of archaeology. Why should different methods and different results be shunned when science by design is meant to be challenged? Hypotheses are proposed, tested, accepted or rejected in order to produce reliable and replicable results.
Indigenous knowledge can aid in achieving this in three ways:
1) It strengthens the scientific process by making it less homogeneous in terms of its practitioners’ values and interests, thus increasing objectivity.
2) It offers alternative ideas that serve as multiple working hypotheses (a central concept in science) and move research towards unanticipated results.
3) It helps to affirm that both “scientific explanation” and “oral histories” are products of historical circumstance and cultural context, and subject to controls that ensure accuracy.
Science requires multiple perspectives
Were some of the readers who objected to my article misreading what I was saying about Traditional Knowledge? Or are they against the idea of Indigenous Knowledge systems?
Do those readers perceive Traditional Knowledge to be an attack on science or western society? Or might some of them be reflecting racist attitudes towards non-Western peoples — even when Traditional Knowledge includes essential aspects of science, such as empirical observation and rigorous testing?
Ultimately, science is a dynamic enterprise that progresses through failure. The late evolutionary biologist and historian of science Stephen Jay Gould wrote: “How many current efforts, now commanding millions of research dollars and the full attention of many of our best scientists, will later be exposed as full failures based on false premises?”
Science is a multicultural enterprise that benefits from and indeed requires competing views. Indigenous observations, perspectives and values enrich, not threaten, our collective knowledge of the world.
George Nicholas received funding from the Social Sciences and Humanities Research Council to support the research conducted by the Intellectual Property Issues in Cultural Heritage (IPinCH) project (2008-2016).
The program followed 15 ordinary Australians who were seeking to deal with conditions including chronic pain, stress and anxiety. At the end of the experiment, many of the participants had shown improvement.
But if you’re considering dipping a toe into practising mindfulness, or taking the full plunge, there are several things you should consider first.
The origins of mindfulness can be found in Eastern traditions. One definition suggests it’s a way of orienting attention and awareness to the present, reminding oneself to stay present when the mind wanders, and carefully discerning those behaviours that are helpful from those that are not.
Contrary to popular belief, mindfulness is not a way to relax or manage emotions. During practice, you will most likely experience unrest, have unpleasant thoughts and feelings, and learn unexpected and unsettling things about yourself.
While relaxation can and does occur, it’s not always as expected and it’s not really the goal.
Mindfulness is not a quick fix
Problems that have developed over weeks, months, or years cannot be fixed overnight. Behaviour change is hard. The patterns we most want to change (such as addictive behaviours, dysfunctional relationships, anxious thinking) require the investment of serious time and effort.
Instructor Timothea Goddard championed the practice of Mindfulness Based Stress Reduction in Australia and facilitated the Catalyst participants’ mindfulness journey. She acknowledges doing up to an hour of practice a day can seem demanding. But if the challenges a person is dealing with are significant, this may be what’s required.
She adds that, just as with physical fitness, courses involving sustained daily practice may be more likely to produce deeper transformation.
You may imagine mindfulness to be like a beach holiday where you leave all the stress, pressure, and deadlines behind. It’s not.
Mindfulness practice creates awareness around the issues that most need our attention. Often we’re drawn to emotional and physical pain we’ve been avoiding.
One participant in The Mindfulness Experiment, Sam, found this difficult. “I want to forget about the areas that are painful, not concentrate on them,” she said.
Mindfulness provides a method, not to escape, but to explore pain or hardship with acceptance, curiosity, and emotional balance.
Mindfulness is not a panacea
Despite suggestions it will fix everything, there are many circumstances and conditions for which mindfulness is simply not effective or appropriate.
If your main reason for seeking out mindfulness is for mental illness or another medical condition, speak first to a medical professional. Meditation is not meant as a replacement for traditional medicine.
An individual session with a skilled instructor can help you work out whether mindfulness is going to be right for you generally, and which approach specifically might help you.
Mindfulness is not one size fits all. Personal attention before and during practice can make a huge difference, especially in a group. We know from psychotherapy research that individual adjustments must be made.
Who created the program?
Perhaps this seems like a strange question; few therapy clients or surgery patients know who created the method being used and they often get better. But unlike therapy or medical procedures, meditation is not overseen by any regulatory agency.
Consider what you want to get from the program and whether there is evidence the program and instructor can help you to achieve those goals.
Those who do not have a regular mindfulness practice themselves may struggle to teach others to cultivate a practice effectively.
Programs that train people to provide structured meditation programs (such as Mindfulness Based Stress Reduction and Mindfulness Based Cognitive Therapy) require professional training, supervision, and extensive personal practice. While we don’t know if personal practice is necessary, it seems likely it is helpful in guiding others.
Nicholas T. Van Dam has received research funding from the Mind & Life Institute and the University of Melbourne. He is affiliated with Confluence (an alliance sponsored by the University of Divinity).
by Hany Farid, Professor of Computer Science, Dartmouth College
One month before the 2016 U.S. presidential election, an “Access Hollywood” recording of Donald Trump was released in which he was heard lewdly talking about women. The then-candidate and his campaign apologized and dismissed the remarks as harmless.
At the time, the authenticity of the recording was never questioned. Just two years later, the public finds itself in a dramatically different landscape in terms of believing what it sees and hears.
Fake images, videos and audio all contribute to a climate in which it is increasingly more difficult to believe what you see and hear online.
As the author of the upcoming book “Fake Photos,” to be published in August, I’d like to offer a few tips to protect yourself from falling for a hoax.
1. Check if the image has already been debunked
Many fake images are recirculated and have previously been debunked. A reverse image search is a simple and effective way to see how an image has previously been used.
Unlike a typical internet search in which keywords are specified, a reverse image search on Google or TinEye can search for the same or similar images in a vast database.
Reverse image search engines cannot exhaustively index the vast, ever-changing content on the internet. So, even if the image is on the internet, there is no guarantee that the site will have found it. In this regard, not finding an image doesn’t mean it’s real – or fake.
You can improve the likelihood of a match by cropping the image to contain only the region of interest. Because this search requires you to upload images to a commercial site, take care when uploading any sensitive images.
2. Check the metadata
Digital images often contain rich metadata that can provide clues as to their provenance and authenticity.
Metadata is data about data. The metadata for a digital image includes the camera make and model; camera settings like aperture size and exposure time; the date and time when the image was captured; the GPS location where the image was captured; and much more.
The importance of the date, time and location tags is self-evident. Other tags may have a similarly straightforward interpretation. For example, photo-editing software may introduce a tag that identifies the software, or date and time tags that are inconsistent with other tags.
Several tags provide information about camera settings. A gross inconsistency between the image properties implied by these settings and the actual properties of the image provides evidence that the image has been manipulated. For example, the exposure time and aperture size tags provide a qualitative measure of the light levels in the photographed scene. A short exposure time and small aperture suggest a scene with high light levels taken during the day, while a long exposure time and large aperture suggest a scene with low light levels taken at night or indoors.
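The exposure-time and aperture tags can be reduced to a single number photographers call the exposure value, EV = log2(N²/t), where N is the f-number and t is the exposure time in seconds; higher EV implies a brighter scene. The sketch below illustrates that consistency check in Python. The EV thresholds are rough rules of thumb for illustration, not part of any forensic standard.

```python
import math

def exposure_value(f_number, exposure_time_s):
    """Exposure value (at base ISO): EV = log2(N^2 / t).
    Bright daylight is roughly EV 12-16; dim interiors and
    night scenes sit much lower."""
    return math.log2(f_number ** 2 / exposure_time_s)

def plausible_scene(f_number, exposure_time_s):
    """Rough, illustrative classification of the scene brightness
    implied by the camera-settings tags (thresholds are assumptions)."""
    ev = exposure_value(f_number, exposure_time_s)
    if ev >= 12:
        return "bright daylight"
    if ev >= 8:
        return "overcast / bright indoor"
    return "dim indoor or night"
```

For example, tags of f/11 at 1/250 s imply an EV near 15 — a sunlit outdoor scene. If the image itself looks like a dim interior, the tags and the pixels disagree, which is a reason to look more closely.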
The metadata is stored in the image file and can be readily extracted with various programs. However, some online services strip out much of an image’s metadata, so the absence of metadata is not uncommon. When the metadata is intact, however, it can be highly informative.
3. Recognize what can and can’t be faked
When assessing if an image or video is authentic, it is important to understand what is and what is not possible to fake.
For example, an image of two people standing shoulder to shoulder is relatively easy to create by splicing together two images. So is an image of a shark swimming next to a surfer. On the other hand, an image of two people embracing is harder to create, because the complex interaction is difficult to fake.
While modern artificial intelligence can produce highly compelling fakes – often called deepfakes – this is primarily restricted to changing the face and voice in a video, not the entire body. So it is possible to create a good fake of someone saying something that they never did, but not necessarily performing a physical act that they never did. This, however, will surely change in the coming years.
4. Beware of sharks
After more than two decades in digital forensics, I’ve come to the conclusion that viral images with sharks are almost always fake. Beware of spectacular shark photos.
I believe that it’s critical for the technology sector to make broad and deep changes to content moderation policies. The titans of tech can no longer ignore the direct and measurable harm that has come from the weaponization of their products.
What’s more, those who are developing technology that can be used to easily create sophisticated fakes must think more carefully about how their technology can be abused and how to put some safeguards in place to prevent abuse. And, the digital forensic community must continue to develop tools to quickly and accurately detect fake images, videos and audio.
Lastly, everyone must change how they consume and spread content online. When reading stories online, be diligent and consider the source; the New York Evening (a fake news site) is not the same as The New York Times. Always be cautious of the wonderfully satirical stories from The Onion that often get mistaken for real news.
Check the date of each story. Many fake stories continue to recirculate years after their introduction, like a nasty virus that just won’t die. Recognize that many headlines are designed to grab your attention – read beyond the headline to make sure that the story is what it appears to be. The news that you read on social media is algorithmically fed to you based on your prior consumption, creating an echo chamber that exposes you only to stories that conform to your existing views.
Finally, extraordinary claims require extraordinary evidence. Make every effort to fact-check stories with reliable secondary and tertiary sources, particularly before sharing.
Ever dreamed of escaping the desk job life and jumping into a more hands-on, labor-intensive career? Mike Rowe, the former host of the eight-season-long Dirty Jobs and outspoken advocate for skilled laborers, can help. For the past 11 years, he's been offering scholarships to wannabe tradespeople to formally learn the ropes of blue-collar work. Training for jobs in such trades as plumbing, automotive technology, construction, and lots more is covered.
Slackers need not apply:
We’re looking for the next generation of aspiring workers who will work smart and hard. This program doesn’t focus on test scores, grades, or grammar. It’s about the people who share our values and understand the importance of work ethic, personal responsibility, delayed gratification, and a positive attitude.
Applicants must first enroll in a trade program and sign his S.W.E.A.T. Pledge ("Skills and Work Ethic Aren’t Taboo"):
Here is a compilation of our blog series, the Secret History of Copyright. The series will unlock some of the mysteries of the copyright world – including little-known laws, influencers, cases and much more!
Authors' rights: the argument I’ve been waiting for
The most interesting ResearchGate filing isn’t its factual answer to the complaint, but rather the motion that ResearchGate made accompanying its answer. That motion, with the inconspicuous title of “Motion for Notice Under 17 U.S.C. § 501(b)” asks the court to open the door for something big: communicating about the litigation with the actual authors of the articles posted to ResearchGate. Imagine that!
ResearchGate begins its argument by pointing out the unusual nature of the case, and why it is so important to clearly sort out who owns rights (authors versus publishers) in the articles underlying the lawsuit:
A typical copyright infringement lawsuit about copyrighted material appearing online involves a content creator suing a website owner when an unauthorized third party has posted the creator’s work to the website without the creator’s permission. But here, [the publishers] are suing . . . ResearchGate for allowing scientists to share their own work. . . . Under Plaintiffs’ infringement theories, if ResearchGate is infringing Plaintiffs’ copyrights in the articles at issue here, so are those articles’ authors. Accordingly, a finding that the appearance of those articles on the ResearchGate site was infringing would necessarily mean that the people who conducted the research and wrote the articles did not have the right to share them.
The motion goes on to argue that many authors of these articles (almost all of which were co-authored) still hold a valid copyright interest in them that would allow those authors to legally post the articles to ResearchGate. Even assuming that the publishers obtained valid transfers of exclusive rights from the corresponding authors, ResearchGate argues that there is no evidence that the publishers also obtained a valid transfer of exclusive rights from co-authors of the papers. Thus, those co-authors are free to make what uses they want with their papers, including posting to ResearchGate.
Given that these authors may hold rights, ResearchGate argues that § 501(b) of the Copyright Act allows (and may even require) the court to order notification of those authors as third parties who have a “claim or interest” in the copyrighted works at issue. Section 501(b) provides that the court:
may require written notice of the action with a copy of the complaint provided to “any person shown . . . to have or claim an interest in the copyright,” and
shall require that such notice be served upon any person whose “interest is likely to be affected by a decision in the case.”
In addition to notification, the statute also provides for a way to actually bring third-parties into the lawsuit. It says that the court “may require the joinder, and shall permit the intervention of any person having or claiming an interest in the copyright” (emphasis mine).
ResearchGate is, for now, just asking the court to order the plaintiffs to notify other potential copyright owners about the lawsuit. Specifically, ResearchGate is asking the court to order Plaintiffs “to serve ‘written notice of the action with a copy of the complaint upon’ each co-author of each journal article at issue in the lawsuit who is not a corresponding author. . . .” I don’t know exactly how many authors that is (as I’ve said previously, there are over 3,000 articles), but it’s probably a lot.
Procedure, procedure, procedure
You may think I’m getting all worked up over a little bit of civil procedure. Maybe. But I think it is important because over and over again we’ve seen large-scale copyright infringement suits fought between the large organizations (e.g., Authors Guild v. Google, Authors Guild v. HathiTrust, Elsevier v. SciHub, Cambridge University Press v. Becker (Ga. State)) without much input at all from the actual authors of the works that form the basis of those lawsuits. When those authors have been allowed to have a say, such as in the Google Books class action certification process, their input has meaningfully altered the outcome.
For the ResearchGate litigation, it seems like a good start to at least require the Plaintiffs to notify authors that their work is being used as the basis for a copyright infringement lawsuit. I would hope, once authors are notified, that the court would also allow those same authors to intervene, as the statute allows, to have their own say in how their works are shared with the world.
Last month, Democratic presidential hopeful Elizabeth Warren proposed an annual tax on the largest fortunes in America, with some of the cash generated by the tax being funneled into the IRS to catch dodgers who move or hide their money to escape the tax.
The Warren proposal was modelled on the work of French economist Thomas Piketty, who set out the case for a global wealth tax as an engine for economic growth and political stability in his blockbuster Capital in the 21st Century.
Now, Piketty has published an editorial endorsing Warren's proposal and connecting it to the history of US tax policy, arguing (with data to support his position) that historic tax rates -- which were much higher than they are now -- were the key to US growth and avoiding the malaise, instability and horrific wars that rocked Europe for a century.
He says that the Reagan/Bush tax-cuts (which kicked off the current race to a state of massively unstable and unfair inequality) "turned their backs on the egalitarian origins of the country, by counting on historical amnesia and by fuelling identity-based divisions."
To understand this, let’s look back. Between 1880 and 1910, while the concentration of industrial and financial wealth was gaining momentum in the United States, and the country was threatening to become almost as unequal as old Europe, a powerful political movement in favour of an improved distribution in wealth was developing. This led to the creation of a federal tax on income in 1913 and on inheritances in 1916.
Between 1930 and 1980, the rate applied on the highest incomes was on average 81% in the United States, and the rate applied to the highest inherited estates was 74%. Clearly this did not destroy American capitalism, far from it. It made it more egalitarian and more productive, at a time when the United States had not forgotten that it was their level of educational advancement and their investment in training and skills that was the backbone of their prosperity, and not the religion of property and inequality.
Reagan, then Bush and Trump subsequently endeavoured to destroy this heritage. They turned their backs on the egalitarian origins of the country, by counting on historical amnesia and by fuelling identity-based divisions. With the hindsight we have today, it is obvious that the outcome of this policy is disastrous. Between 1980 and 2020, the rise in per capita national income was halved in comparison with the period 1930-1980. What little growth there was, was swept up by the richest, the consequence being a complete stagnation in income for the poorest 50%. There is something obvious about the movement of return to progressive taxation and greater justice which is emerging today and which is long overdue.
More than 200 teen journalists have come together to write Since Parkland, which profiles each of the more than 1,200 children killed by guns in the USA since the Parkland shooting (not including suicides, kids killed by cops, or shooters who were themselves killed while committing shootings): "The reporting you will read in 'Since Parkland' is journalism in one of its purest forms — revealing the human stories behind the statistics — carried out on an exhaustive scale." A reminder that we do more to keep kids from getting their shots than we do to keep them from getting shot. (via Kottke)
The majority of Americans support net neutrality protections. Based on the comments in the most recent Congressional hearing on net neutrality, you could come away with the idea that most of Congress and most of the giant Internet Service Providers do, too. But if you read between the lines, what we really heard today was a definition of net neutrality designed to leave loopholes for companies to slither through.
When we talk about net neutrality protections, we often talk about the explicit bans on blocking, throttling, and paid prioritization. Those three terms got a lot of play at the hearing in the House. But net neutrality is not just those three things.
Even in trying to define net neutrality this way, people on the side of ISPs like Comcast, AT&T, and Verizon can’t give up the dream of paid prioritization. Once again, the industry trotted out the myth that a ban on paid prioritization prevents ISPs from giving special treatment to first responders or telemedicine. Joseph Franell of Eastern Oregon Telecom said that if he was in an accident, he would want his medical treatment given priority as Internet traffic, which the 2015 rules absolutely allowed. What they banned was prioritization for the purposes of making money, not for public safety or for specialized services.
Net neutrality is the principle that all Internet service providers (ISPs) should treat all traffic coming over their networks without discrimination. Violations of that principle include, but are not limited to, blocking, throttling, and paid prioritization. The value of the FCC’s 2015 Open Internet Order was not just in the banning of those specific practices, but also in giving the FCC ability to investigate actions that violate net neutrality but don't fall neatly into one of those three buckets.
Threats to public safety, for example, don’t necessarily present as clear cases of blocking, throttling, or paid prioritization. The case of Verizon throttling California firefighters during a wildfire might have not met the definition of throttling in the 2015 Order, but the FCC under Pai has likely given up the authority to investigate what happened there.
Public safety is tied to net neutrality, as Congressman Jerry McNerney of California noted when he said that during disasters, people go online to “check evacuation routes, see if their loved ones are safe, and find out if it’s even safe to breathe outside.” And if ISPs have made deals and decisions to make it faster to get to places with wrong or unhelpful information, that is a problem.
Interconnection points—places where two different networks exchange traffic with each other—are another place where ISPs can get around simple bans on blocking, throttling, and paid prioritization. Those bans cover what happens on an ISP's own network, but they don't stop a provider from charging extra at the point where content enters its network—in effect, charging for access to the customers on the other side. That's why California's S.B. 822, written to replicate the full protections of the 2015 Open Internet Order and not just the three bans, prohibits that practice. Discriminatory zero-rating—exempting certain types of traffic from counting against a data cap, often under an agreement between the ISP and a web platform—is another practice that can fall outside the three bans. Zero-rating has been shown to harm low-income broadband users, and it raises the cost of data for all users.
Another claim made constantly was that net neutrality and the 2015 Open Internet Order made it difficult for ISPs to invest in Internet access for rural and other underserved communities. Cable lobby head Michael Powell's written testimony asserted that the 2015 order had "depressing effects on the market," and Franell testified that he has received more investment offers since the repeal. But it's simply not true that net neutrality hurt rural investment. We know from the mouths of ISPs themselves, talking to their own investors, that the order didn't affect their planned investment at all. Besides, the real challenges to investment are physical barriers, not protections for consumers: now that the net neutrality rules no longer apply, ISPs have still not deployed in rural markets.
As former FCC Chairman Tom Wheeler said during the hearing, legislation focused only on blocking, throttling, and paid prioritization "ignores the future and doesn’t even protect today. It doesn’t protect today because it says you are free to discriminate, just don’t do it this way.”
Net neutrality is a principle. It is not just a set of rules against specific discriminatory practices. Treating all Internet traffic equally is why we have the modern Internet. It’s not a new idea, but it’s in danger today.
The stories of questionable, even outright false, copyright claims and notices are legion. Nearly every YouTuber has experienced them, myself included, and the problem only seems to be getting worse despite repeated promises from YouTube to address the issue, including a push in 2016.
So what is going on? We may never really know because much of what is happening can only be known by YouTube. However, there are a few things we can say for certain and those can provide clues as to what is going on and what YouTube needs to do to correct its copyright chaos.
A Dual System
One of the reasons that copyright enforcement on YouTube seems so inconsistent is because it’s actually two systems in one, something I explained in a video I made in 2016.
The first system is the standard DMCA notice-and-takedown regime that YouTube is required to operate under U.S. law.
Under this system, rightsholders are able to file a DMCA notice with YouTube regarding allegedly infringing video. YouTube then removes the video and the user has the option of filing a counter notice, which will either result in the rightsholder filing a lawsuit or the video being restored.
This is also the system through which YouTubers can earn copyright “strikes”. These are the strikes that can cause YouTube to restrict accounts with active strikes and ban those that get too many. These can quickly endanger an account, such as what is currently ongoing with YouTuber Jameskii.
The second system is Content ID, YouTube's automated matching tool. It allows rightsholders and YouTube to automatically spot duplicated content and flag it. Rightsholders can then take a wide variety of steps, including blocking the video, monetizing it or tracking it.
Content ID claims do not result in a copyright strike as they aren’t considered DMCA notices. However, the loss of monetization and blocking of videos is a burden unto itself and can cripple channels.
YouTubers have sought ways to avoid such claims but have largely been unsuccessful. Some use a "Copyright Deadlock" system pioneered by Jim Sterling that tries to get a video claimed by multiple entities so it remains ad-free and unmonetized. However, that only works with a secondary income, such as a Patreon, and doesn't allow the uploader to profit from the video.
Though bad actors who knowingly file false notices have been a problem on YouTube, the lion’s share of the anger is directed at Content ID and its questionable matches.
So why is Content ID causing so many problems? The answer is fairly straightforward.
The Problems with Content ID
Whenever you’re checking for matching content, whether it’s a traditional plagiarism check or a service like Content ID, your results are only as good as these two things:
The Matching Algorithm
The Database of Content You’re Matching Against
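To see why both elements matter, here's a toy sketch in Python—purely illustrative, since YouTube's actual fingerprinting is proprietary and far more sophisticated—of window-hash matching. The window size, the hash function, and the database contents are all assumptions; a bad choice for any one of them produces bad matches:

```python
# Illustrative only: Content ID's real algorithm is not public.
# This toy matcher hashes fixed-size windows of a "signal" (a list of
# ints standing in for audio samples) and checks them against a database
# of reference fingerprints.

WINDOW = 4  # hypothetical window size, in samples

def fingerprint(signal):
    """Hash every overlapping window of the signal."""
    return {hash(tuple(signal[i:i + WINDOW]))
            for i in range(len(signal) - WINDOW + 1)}

def find_matches(upload, database):
    """Return names of reference works sharing any window hash."""
    upload_prints = fingerprint(upload)
    return [name for name, prints in database.items()
            if upload_prints & prints]

# A tiny "database" of reference works.
db = {"song_a": fingerprint([1, 2, 3, 4, 5, 6]),
      "song_b": fingerprint([9, 9, 9, 9, 9, 9])}

print(find_matches([0, 1, 2, 3, 4, 5], db))  # overlaps song_a
```

Note that a false positive can come from either side: a hash scheme that is too loose (the algorithm) or a reference file that contains material the claimant doesn't own (the database).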
With Content ID, both elements seem to have issues. Sometimes it will falsely flag things as matching that can’t actually be matches, such as SmellyOctopus’ video, or match against things that shouldn’t be in the database, such as the Family Guy/Tecmo Bowl incident from May 2016.
And that is one of the key problems YouTube has: Quantity. With so much content being uploaded and so much in its Content ID database, finding matches is a significant challenge, especially when you have to look at both video and audio content.
While it’s clear that YouTube has a lot of room to improve, even the best system is going to make mistakes and, when you’re dealing with these kinds of numbers, even a low failure rate is going to make a large number of errors.
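To put "even a low failure rate" in perspective, here's a back-of-the-envelope calculation. Every figure below is hypothetical—YouTube does not publish its upload volumes or Content ID error rates:

```python
# All numbers are assumptions for illustration only.
hours_uploaded_per_minute = 500            # assumed upload volume
minutes_of_video_per_minute = hours_uploaded_per_minute * 60
avg_video_minutes = 10                     # assumed average video length
videos_per_day = minutes_of_video_per_minute / avg_video_minutes * 60 * 24

false_positive_rate = 0.001                # an optimistic 0.1% error rate
bad_claims_per_day = videos_per_day * false_positive_rate
print(f"{bad_claims_per_day:,.0f} mistaken claims per day")
```

Even with an error rate of just one in a thousand, that rough sketch yields thousands of mistaken claims every single day.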
Fixing this problem isn’t easy, but there are a few things that YouTube can do.
The Path to Improvement
No system is ever going to be perfect and, though YouTube has been working to hone Content ID, there’s only so much it can do from a tech standpoint.
Instead, it may be past time to make some adjustments to the system to prevent or reduce some of the more common problems with Content ID.
Here are three simple suggestions to consider:
No Retroactive Content ID Claims: The Family Guy case should never have happened because the claimed videos were published well before the work in the Content ID database. It should be obvious that no Content ID claim should be made on videos uploaded before the source content was published. There are obvious exceptions, such as when new providers join Content ID, but, in general, retroactive Content ID claims should not happen.
Clean Up the Database: The Content ID database clearly has issues. YouTube depends upon rightsholders to provide the content it checks against, but rightsholders often include content that either they don't own, such as licensed works they used, or items that may unintentionally trigger false Content ID matches. Works that generate false matches need to be removed, edited and re-uploaded, and organizations that produce a large number of false claims may need to have their access paused pending review.
Adjust the Minimum Time Threshold: Many of the most frustrating matches are for very short periods of time, usually just a few seconds. These not only raise fair use questions, but are often very difficult to match accurately. Ignoring very short matches would help avoid many of the most egregious matches.
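The third suggestion is easy to express in code. Here's a minimal sketch of the rule, with a hypothetical 10-second cutoff—the field names and threshold are illustrative, not anything YouTube actually uses:

```python
# Sketch of the suggested rule: ignore matches shorter than a minimum
# duration. The 10-second threshold is a hypothetical value.
MIN_MATCH_SECONDS = 10

def filter_short_matches(matches, min_seconds=MIN_MATCH_SECONDS):
    """Keep only matches whose (start, end) span meets the threshold.

    `matches` is a list of dicts like {"work": ..., "start": s, "end": e},
    with times in seconds.
    """
    return [m for m in matches if m["end"] - m["start"] >= min_seconds]

claims = [
    {"work": "song_x", "start": 12.0, "end": 15.5},   # 3.5 s -- dropped
    {"work": "film_y", "start": 40.0, "end": 95.0},   # 55 s  -- kept
]
print(filter_short_matches(claims))
```

The point of such a filter is that very short matches are both the hardest to identify accurately and the most likely to be fair use, so dropping them trades a small amount of enforcement for a large reduction in bogus claims.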
The great thing about all of these changes is that none rely on YouTube magically making Content ID better. The question is whether YouTube’s partners would accept them, especially when they’re being asked to do more work preparing their files.
Ultimately, YouTube has to keep those rightsholders happy with Content ID. The alternative, a tidal wave of DMCA notices, is worse for everyone (including YouTubers) and as YouTube pushes to make itself a mainstream movie rental and music streaming site, it needs those partnerships.
In the end, while Content ID likely can’t get much smarter than it is, it may still be possible for YouTube to be smarter about how it uses it and that may help reduce the number of false claims.
To be clear, most of the false claims on YouTube are not the work of bad actors, but rather, mistakes. While some rightsholders are behaving downright recklessly, on the whole the problem is rightsholders putting too much faith in YouTube’s system and not doing enough to ensure their provided content doesn’t generate false matches.
Still, there are changes that YouTube can and should make to the way it handles copyright. Such changes could drastically reduce the false claims without significantly hurting the system’s ability to fight piracy.
A piracy-filled YouTube benefits no one (save pirates) but neither does one where original YouTubers have to deal with questionable copyright claims on their work.
If YouTube is to continue to thrive, it must find a way to solve this problem and it will have to work with rightsholders if it hopes to do it.
For the first time in twenty years, published works in the U.S. expired into the public domain. This anomaly was the direct result of the Copyright Term Extension Act that extended the length of copyright for works still in their renewal term at the time of the Act to 95 years. This effectively froze the replenishing of the public domain for twenty years. I remember giving copyright workshops with pictures of frozen ice, thinking the year 2019 was some futuristic date. The future is finally here.
But an important note to remember amidst the rejoicing: the length of copyright has not shrunk back. We’ve just finally waited it out long enough for those 1923 works to join their brethren in the public domain. The works published in 1922 joined the public domain back twenty years ago. Hm.
Back at the party: the Internet Archive celebrated Public Domain Day in style last Friday, with flappers from the 1920s, treats made from 1920s recipes and an impressive list of speakers (below). Cory Doctorow gave a rousing closing keynote in which he spoke about grifters who use paperwork to somehow turn your stuff into the grifter's stuff, giving many examples from the world of intellectual property.
We tweeted the Larry Lessig portion of the event, and he was joined by many other speakers captured in the livestream:
Lawrence Lessig – Harvard Law Professor
Cory Doctorow – Author & Co-editor, Boing-Boing
Pam Samuelson – Berkeley Law Professor
Paul Soulellis – Artist & Rhode Island School of Design Professor
Jamie Boyle – Duke Law Professor & Founder, Center for the Study of the Public Domain
Brewster Kahle – Founder & Digital Librarian, Internet Archive
Corynne McSherry – Legal Director, Electronic Frontier Foundation
Ryan Merkley – CEO, Creative Commons
Jennifer Urban – Berkeley Law Professor
Joseph C. Gratz – Partner, Durie Tangri
Jane Park – Director of Product and Research, Creative Commons
Cheyenne Hohman – Director, Free Music Archive
Ben Vershbow – Director, Community Programs, Wikimedia
Jennifer Jenkins – Director, Center for the Study of the Public Domain
Rick Prelinger – Founder, Prelinger Archives
Amy Mason – LightHouse for the Blind and Visually Impaired
Paul Keller – Communia Association
Michael Wolfe – Duke Lecturing Fellow, Center for the Study of the Public Domain
Daniel Schacht – Co-chair of the Intellectual Property Practice Group, Donahue Fitzgerald LLP
Guest Post by Arnetta Girardeau, Duke University Libraries, Copyright & Information Policy Consultant
As you may have already heard, January 1, 2019 marked a very, very special “Public Domain Day.” When Congress extended the term of copyright in 1998 through the Copyright Term Extension Act, it set off a long, cold public-domain winter. For twenty years, no work first published in the United States entered the public domain. But now, spring is here! On January 1, 2019, works first published in 1923 became free to use. And in 2020, works first published in 1924 will enter the public domain, and so on and so on! It’s exciting stuff. What does that mean to us as creators, makers, teachers, or writers? It means that we suddenly have access to more materials to rework, reuse, and remix! Works such as Charlie Chaplin’s The Pilgrim, Agatha Christie’s Murder on the Links, and “The Charleston.”
At Duke, we’re celebrating this introduction of new materials into the Public Domain with a competition to showcase what our community can do with the public domain. We want to see how Duke faculty, staff, and students can use items from 1923 and earlier, all of which are now in the Public Domain! We have provided a few representative images along with this post, but feel free to create with any works that you find that are in the public domain (if you have questions about what is and isn’t in the public domain, you can contact us and we’d be happy to talk!) Selected entries will be posted on the blog and on Library social media. We have a small number of giveaways to thank you for participating.
What can you do?
Write new lyrics to a song
Create a wallpaper for your mobile phone
Make a work of art
Create a score for a silent movie made in 1923.
What else do you need to know?
Any member of the Duke community may enter. Faculty, staff, students, and retirees are all welcome.
Multiple entries are allowed;
Send in entries between January 9 and January 31 at midnight;
You can read more about the Public Domain in this article by the Duke Law Center for the Study of the Public Domain.
Thanks to the David M. Rubenstein Rare Book & Manuscript Library for providing many of the images!
If you have any questions about entering the showcase, or about how to incorporate other people’s work into your own, contact Arnetta Girardeau, Copyright and Information Policy Consultant, at firstname.lastname@example.org.
The Museum began this immersive project by collecting and sharing hundreds of interviews, quotes, and existing archival footage from the prolific artist. GS&P used these extensive materials to train an AI algorithm to “learn” aspects of Dali’s face, then looked for an actor with the same general physical characteristics of Dali’s body. The AI then generates a version of Dali’s likeness to match the actor’s face and expressions. To educate visitors while engaging with “Dali Lives,” the Museum used authentic writings from Dali himself – coupled with dynamic present-day messages – reenacted by the actor.
I appreciate all the work Guy Jones puts into restoring old movies. He replaces the herky-jerky motion with a more natural looking motion and adds sound that matches the action. Here's a short film of High Street in Marseille, France as it looked on April 11, 1896. There's an advertisement on a horse-driven tram for "Chocolat Russe Du Bebe" but when I google it, the only results are for a "Polar Bear Milk Hat" and "Pregnant Dwarf Hamster Behavior."
Researchers at MIT's Lincoln Laboratory, a United States Department of Defense research facility, developed laser systems that can "transmit various tones, music and recorded speech at a conversational volume" to specific people without the recipient wearing any special equipment. Basically, the operator points a laser at someone from a distance and that individual hears the transmitted audio even though others in the area don't. Conspiracy theorists, start your engines. From the Optical Society of America:
"Our system can be used from some distance away to beam information directly to someone's ear," said research team leader Charles M. Wynn. "It is the first system that uses lasers that are fully safe for the eyes and skin to localize an audible signal to a particular person in any setting..."
The new approaches are based on the photoacoustic effect, which occurs when a material forms sound waves after absorbing light. In this case, the researchers used water vapor in the air to absorb light and create sound...
One unique aspect of this laser sweeping technique is that the signal can only be heard at a certain distance from the transmitter. This means that a message could be sent to an individual, rather than everyone who crosses the beam of light. It also opens the possibility of targeting a message to multiple individuals.
I had the honor of delivering the closing keynote, after a roster of astounding speakers. It was a big challenge and I was pretty nervous, but on reviewing the saved livestream, I'm pretty proud of how it turned out.
On January 1, 2019, something happened in the United States that hadn’t happened in 20 years. However, it was easy to miss, especially if you don’t follow copyright news, and its importance is still the subject of much debate.
With the new year, new works entered the public domain.
However, since the works lapsing were published in 1923, it’s not exactly a gangbuster list. Highlights include Safety Last!, the famous silent film starring Harold Lloyd that features the clock-tower scene; The Ten Commandments by Cecil B. DeMille (the 1923 silent one, not the much more famous 1956 version); the composition for “Charleston,” the song that accompanies the dance of the same name; and various lesser-known works by Agatha Christie, E. E. Cummings, Virginia Woolf and more.
But what does the first public domain day in two decades really mean? The answer for most creators is not much. However, there are some potentially interesting implications for the larger copyright picture, especially as we look down the road.
How Did We Get Here?
Under normal circumstances, new works lapse into the public domain every year. As the clock strikes midnight on January 1st, works that now fall outside of their copyright term become public domain.
However, in 1998, Congress passed and President Bill Clinton signed the Sonny Bono Copyright Term Extension Act (SBCTEA). The act did one very simple thing: it added 20 years to the copyright term. This meant works of individual authorship would be protected for 70 years after the creator’s death instead of 50, and works of corporate authorship would be protected for 95 years instead of 75.
While that term is actually one of the most common globally, the act was still highly controversial at the time. It is often derisively called the Mickey Mouse Copyright Extension Act because one of the works that saw a reprieve was Steamboat Willie, which was set to lapse in 2004 but now has protection until 2024.
But one effect of the act was that, for 20 years, no new works would enter the public domain. Any works that were preparing to enter in 1999 would have to wait 20 years. Those 20 years have now expired and works are entering the public domain as they would have 20 years ago without the extension.
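The arithmetic behind those dates is straightforward. Here's a rough sketch for works published before 1978 under the flat 95-year term discussed above—ignoring the many real-world edge cases (renewal, notice requirements, foreign works), so treat it as a sketch, not legal advice:

```python
# Back-of-the-envelope term arithmetic for pre-1978 published works
# under the flat 95-year term. Protection runs through December 31 of
# the year (published + term); the work enters the public domain on
# January 1 of the following year.

def public_domain_year(published, term=95):
    """Year on whose January 1 a flat-term work enters the public domain."""
    return published + term + 1

print(public_domain_year(1923))  # 2019: the first Public Domain Day in 20 years
print(public_domain_year(1928))  # 2024: Steamboat Willie
```

Run the same function on 1924 and you get 2020, which is why the public domain will now replenish every January 1 as it did before the extension.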
How big of a deal you feel this is likely depends on your views of the current copyright term and rules. However, for creators it really doesn’t mean that much. Though thousands of works from 1923 are entering the public domain, the copyright term on newer creations remains unchanged and very little changes for living creators.
But that doesn’t mean changes aren’t coming; they just may not be what many expect.
The Actual Impacts
The crop of 1923 works entering the public domain isn’t particularly exciting. Very few of the works involved are still commercially viable. However, that won’t be the case for much longer.
The most-discussed example is Steamboat Willie, the cartoon that marked Mickey Mouse’s first appearance and was the work that many allege the SBCTEA was meant to protect. Though Disney’s role in passing the SBCTEA is greatly exaggerated, there’s not much doubt that they were a beneficiary of the law, this cartoon being the prime example.
However, January 1, 2024 won’t be the day that Mickey Mouse becomes public domain. Barring another change in the law (which is unlikely) the cartoon will lapse, but the character will be very much protected.
First and foremost, that’s because the Mickey of Steamboat Willie is a very different character from the one we know today. As we see with Sherlock Holmes, when some of a character’s works are in the public domain and some remain protected, it can get very complicated as to what others can use.
Though the Supreme Court ruled in 2003 that you can’t use trademark to extend a copyright past its expiration, that only applies to the work itself. Still, one can easily predict significant legal battles over the Mickey Mouse character and just where the boundaries lie regarding its copyright status (or lack thereof).
And Mickey Mouse is, in many ways, just the tip of the iceberg. As more classic but still commercially viable works enter the public domain, there are going to be more and more legal battles over them. As we saw with the long, protracted battle over Happy Birthday to You, it can be difficult to prove something is in the public domain, even if there are multiple arguments for why it should be.
In short, as new works enter the public domain, what we’re actually going to be looking at is a lot of litigation. This isn’t necessarily a bad thing as it may help clarify areas of copyright law that are currently muddled, but it means that the reopening of the public domain isn’t likely to be smooth or quiet.
So brace yourself, not for a radical change in copyright law, but for new legal battles. Though we’ve had our share of public domain-related legal battles over the past two decades, it’s about to get a lot more heated.
Regardless of what you think about the current term of copyright, the new admissions to the public domain don’t really have a major impact for living creators.
However, as more and more works do lapse we are going to be entering somewhat uncharted territory. Though most works cease being commercially viable after a few years or decades, many very lucrative works have held on to their value for much longer.
Couple that with modern licensing and rights tracking, and these works have owners who are still exploiting them and are highly motivated not to lose them.