Shared posts

22 Apr 15:33

Winning Korea is winning the Silk Road: Empires of the Silk Road: 2 of 2: A History of Central Eurasia from the Bronze Age to the Present, by Christopher I. Beckwith

by The John Batchelor Show
Tom Roche




U.S. Army soldiers assigned to the 24th Infantry Regiment move up to the firing line in Korea, July 18, 1950. (U.S. Army photo)

Twitter: @BatchelorShow

Winning Korea is winning the Silk Road: Empires of the Silk Road: 2 of 2: A History of Central Eurasia from the Bronze Age to the Present, by Christopher I. Beckwith

22 Apr 15:33

Winning Korea is winning the Silk Road: Empires of the Silk Road: 1 of 2: A History of Central Eurasia from the Bronze Age to the Present, by Christopher I. Beckwith

by The John Batchelor Show
Tom Roche




Vought F4U-4 Corsair fighters of U.S. Navy fighter squadrons VF-114 Executioners (V-4XX) and VF-113 Stingers (V-3XX, on the catapult), assigned to Carrier Air Group 11 (CVG-11), on the aircraft carrier USS Philippine Sea (CV-47) during the Korean War in 1951/52. Another Essex-class carrier of Task Force 77 is visible in the background.

Date: 1951/52

Source: U.S. DefenseImagery photo, VIRIN: HA-SN-98-06975

Author: USN

Twitter: @BatchelorShow

Winning Korea is winning the Silk Road: Empires of the Silk Road: 1 of 2: A History of Central Eurasia from the Bronze Age to the Present, by Christopher I. Beckwith

20 Apr 17:24

How Shoddy Reporting and Anti-Russian Propaganda Coerced Ecuador to Silence Julian Assange

by M.C. McGrath
Tom Roche

excellent on several levels:

* on Twitter analytics, e.g., retweet time lags (temporal analysis), like-to-tweet ratios

* detailed analysis of a specific anti-dissent campaign (against Catalan independence by Spanish media)

* detailed takedown of Hamilton 68, which ...

* ... links to very excellent Taibbi article on Russiagate and the US new cold war as targeting dissent (just like the old US cold war) @ (archived @ )

Julian Assange has been barred from communicating with the outside world for more than three weeks. On March 27, the government of Ecuador blocked Assange’s internet access and barred him from receiving visitors other than his lawyers. Assange has been in the Ecuadorian embassy in London since 2012, when Ecuador granted him asylum due to fears that his extradition to Sweden as part of a sexual assault investigation would result in his being sent to the U.S. for prosecution for his work with WikiLeaks. In January of this year, Assange formally became a citizen of Ecuador.

As a result of Ecuador’s recent actions, Assange — long a prolific commentator on political debates around the world — has been silenced for more than three weeks, by a country that originally granted him political asylum and of which he is now a citizen. While Ecuador was willing to defy Western dictates to hand over Assange under the presidency of Rafael Correa — who was fiercely protective of Ecuadorian sovereignty even if it meant disobeying Western powers — his successor, Lenín Moreno, has proven himself far more subservient, and that mentality — along with Moreno’s increasingly bitter feud with Correa — are major factors in the Ecuadorian government’s newly hostile treatment of Assange.

Yet many of the recent media claims about Assange that have caused this standoff — which have centered on the alleged role of Russia in the internal Spanish conflict over Catalan independence — range from highly dubious to demonstrably false. The campaign to depict Catalan unrest as a plot fueled by the Kremlin, Assange, and even Edward Snowden has largely come from fraudulent assertions in the Spanish daily El País and highly dubious data claims from the so-called Hamilton 68 dashboard. The consequences of these false and misleading claims — this actual “fake news” — have been multifaceted and severe, not just for Assange, but for diplomatic relations among multiple countries.

The Guardian reported last week that doctors who recently visited Assange concluded his health condition has become “dangerous.” The journalist Stefania Maurizi of La Repubblica yesterday confirmed that Assange “is still in the Ecuadorian Embassy in London and unable to access the internet and to receive visitors,” while the official WikiLeaks account provided further details about the restrictions Assange faces:

Ordinarily, Western commentators would be lining up to denounce a country like Ecuador for blocking the communications and internet access of one of its own citizens. But because the person silenced here is Assange, whom they hate, their heartfelt devotion to the sacred principles of free speech and a free press vanishes.

(Since Ecuador first granted asylum to Assange, both the Ecuadorian government and Assange’s lawyers have said that Assange would board the next flight to Stockholm if the Swedish government gave assurances it would not extradite him to the U.S. Although Swedish prosecutors last year dropped the sex assault investigation into Assange, Trump CIA Director Mike Pompeo has vowed to do everything possible to destroy WikiLeaks and prevent it from publishing further, while the U.K. government — an ally of the Trump administration — has vowed to arrest Assange on bail charges if he leaves the embassy.)

Evidence has now emerged that the cutting off of Assange’s communications with the outside world is the byproduct of serious diplomatic pressure being applied to the new Ecuadorian president, pressure that may very well lead, perhaps imminently, to Assange being expelled from the embassy altogether. The pressure is coming from the Spanish government in Madrid and its NATO allies, furious that Assange has expressed opposition to some of the repressive measures used to try to crush activists in support of Catalan independence.

The day after blocking Assange’s communications with the outside world, Ecuador issued a statement alleging that Assange’s public statements are “putting at risk” Ecuador’s relations with other states. The Ecuadorian government has previously expressed dissatisfaction with some of Assange’s political activities and statements, but the breaking point appears to have been a series of tweets from Assange about the arrest in Germany earlier this month of former President of Catalonia Carles Puigdemont.

Beginning in September, Assange had been tweeting regularly about the referendum for independence in Catalonia. Back then, Ecuador released a statement criticizing these tweets and emphasizing that “Ecuadorian authorities have reiterated to Mr. Assange his obligation not to make statements or activities that could affect Ecuador’s international relations.”

But why did these tweets about Catalonia, of all of Assange’s tweets about politics in other countries and the role he played in the 2016 U.S. election, lead — after five years — to such a response? And why now? And why and how did the West succeed in convincing so many of its citizens that the movement for Catalan independence — which has been a source of internal conflict in Spain for years — was now suddenly fueled by a Kremlin plot with Putin as the puppet master?

The answers provide a vivid example of how claims about “fake news,” Western propaganda, and disinformation can be used as a tactic for political manipulation. The circumstances leading to recent events extend beyond Julian Assange and understanding them requires background on political pressures in Spain, Ecuador, the United States, and the United Kingdom, and how these intersect with Assange’s case.

The tensions between Ecuador and Assange center on the debate in Spain over Catalan independence. On October 1, 2017, the autonomous region of Catalonia held a referendum for independence. The Spanish government declared this referendum illegal. Protests and arrests of Catalan activists ensued, as well as the seizure of ballots and raids on polling stations by the government in Madrid.

In the midst of this crisis, former Spanish Prime Minister Felipe González reportedly requested that Spain’s most powerful media conglomerate, Grupo PRISA, which owns El País, “offer a firm response” to the independence movement in Catalonia. The media corporation complied, devoting its full resources to opposing Catalan secession.

El País, days later, began depicting Catalan activists as a tool of the Kremlin. The paper published an article alleging that not only Assange but also Edward Snowden was helping Russian propaganda networks spread “fake news” about Catalonia. El País repeated these claims in subsequent stories, which were echoed in reports from other anti-separatist organizations, such as the Spanish think tank Elcano Royal Institute, the Atlantic Council’s Digital Forensics Research Lab, and NATO’s StratCom.

El País, Sept. 25, 2017

Once El País endorsed these fantastical allegations — that Assange and Snowden were helping to lead a Kremlin campaign to promote Catalan separatism — they were cited by or brought before legislative bodies around the world, in Spain, the U.K., and the U.S. But while these accusations are being taken seriously, they — like many claims about “fake news” and foreign online propaganda campaigns — are not being critically scrutinized or journalistically verified, and have little evidentiary support.

When pressed, many of those advancing these claims admit they have no definitive evidence of Russian government interference during the referendum in Catalonia, capable of citing only what they regard as biased reporting from RT and Sputnik. This exchange, from a House of Commons hearing on “fake news,” illustrates this point:

Yet — as they do with most Western problems these days — they continue to depict the conflict in terms of Russian propaganda because, they themselves acknowledge, they cannot comprehend the tensions in Spain except as a Kremlin operation. As Mira Milosevich-Juaristi from Elcano said when testifying before the Comisión Mixta de Seguridad Nacional in Spain: “The complexity and combination and coordination, which all occur at the same time, need an actor either governmental or near the government to coordinate it.”

Accusations that online support for Catalan independence was a Russian plot led by Assange and Snowden became a widespread belief. From the start, it was based overwhelmingly on a crude “dashboard” calling itself “Hamilton 68” that is maintained by the Alliance for Securing Democracy. The “dashboard” purports to track “the activities of 600 Twitter accounts linked to Russian influence efforts online.”

That Catalonia was trending among the supposedly pro-Kremlin accounts monitored by the dashboard was offered by El País as what the paper called “definitive proof” — definitive proof — “that those who mobilize the army of pro-Russian bots have chosen to focus on the Catalan independence movement.”

But from its inception, the dubiousness of Hamilton 68 was self-evident. As The Intercept reported when the group was first formed, it was a brand-new foreign policy advocacy organization created by the exact people with the worst records of lying and militarism in Washington: Bill Kristol, former CIA officials, GOP hawks, and Democratic Party neocons.

Even worse, Hamilton 68 was, and remains, incredibly opaque about its methodology, refusing even to identify which accounts they designate as “promoting Russian influence online.” These marked accounts not only include “accounts that clearly state they are pro-Russian or affiliated with the Russian government,” but also “accounts run by people around the world who amplify pro-Russian themes either knowingly or unknowingly” (which often includes any dissent from the U.S. foreign policy orthodoxies endorsed by the neocons and CIA officials who created the group, now branded as “pro-Russian”).

Despite all of these glaring reasons for skepticism, U.S. media outlets repeatedly ingested the claims of this brand-new, sketchy group about what “Russian bots” are saying without an iota of critical thought or questioning, constructing one headline after the next based exclusively on the claims of this murky, shady group. As Rolling Stone’s Matt Taibbi recently documented, “More and more often now, the site’s pronouncements turn into front-page headlines.”

But in recent months, the credibility of Hamilton 68 has been widely challenged. Journalists and researchers have identified numerous inaccurate stories that were based on Hamilton 68 data, examined the involvement of highly ideological actors in the development of the tool, and questioned the “secret methodology” of Hamilton 68 and specifically its bizarre refusal to disclose the list of 600 accounts on which it bases its data. Some additionally note that the narrative about Russian bots and trolls is increasingly used as a tool to discredit a wide variety of legitimate political movements around the world.

Often, these media stories based on Hamilton 68, which fuel hysteria over Russian control of the West, are founded on a poor understanding of what can and cannot be affirmed based on the data. The U.S. media’s abuse of this data finally caused even one of the creators of Hamilton 68 to express frustration with these inaccurate conclusions based on the most superficial understanding of the data, saying: “It’s somewhat frustrating because sometimes we have people make claims about it or whatever — we’re like, that’s not what it says, go back and look at it.”

Misunderstanding social media analytic tools is a common problem in fake news reporting and can lead to erroneous conclusions with real political consequences. One illustrative example is the claim by El País — in its seminal article depicting Catalan independence as fueled by the Kremlin — that “a detailed analysis of 5,000 of Assange’s followers on Twitter provided by TwitterAudit, reveals that 59% are false profiles.”

El País’s claim is a demonstrable, obvious fraud. This assertion was entirely inaccurate because the data was from an inactive account with no tweets. Assange created a personal account years ago, but only started using it to tweet for the first time on February 14, 2017. But the Twitter Audit data used by El País to make this extraordinary claim — that Assange’s following is mostly composed of bots — is from February 2014, three years before anything was tweeted from the account.

To put it mildly, the archaic data used by El País regarding Assange’s account is wildly inaccurate for claims about his current followers. After reassessing @JulianAssange’s followers on November 24, 2017, Twitter Audit now shows that 92% of his followers are real.
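The error described above reduces to a simple date comparison: a follower audit taken before an account was ever active says nothing about its current followers. A minimal sketch, using the dates reported in the article (the audit month is given as February 2014; the exact day below is an assumption for illustration):

```python
# Sanity check for stale audit data: the audit must postdate the
# account's first activity for its figures to be meaningful.
from datetime import date

AUDIT_DATE = date(2014, 2, 1)    # Twitter Audit sample El País relied on (month from the article)
FIRST_TWEET = date(2017, 2, 14)  # first tweet from @JulianAssange, per the article

def audit_is_stale(audit_date, first_activity):
    """An audit taken before the account ever tweeted says nothing about
    the followers it attracted once active."""
    return audit_date < first_activity

print(audit_is_stale(AUDIT_DATE, FIRST_TWEET))  # True: the 59% figure is moot
```

The same check passes for the November 24, 2017 reassessment, which is why that 92%-real figure is at least measuring an active account.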

Twitter Audit still claims that 8% of Assange’s followers are bots or otherwise fake, but this is relatively low considering that recent scientific studies estimate that “between 9% and 15% of active Twitter accounts are bots.”

It is also low relative to the results for other accounts with many followers. For example, Twitter Audit estimates that 25% of the @el_pais followers, 17% of @BarackObama’s followers, 27% of @RealDonaldTrump’s followers, and 48% of @EmmanuelMacron’s followers (as of nine months ago) may be fake accounts. Social media tools like Twitter Audit generally only provide rough heuristics, which are meaningless in isolation.

Correctly interpreting the results of social media analytics tools requires not only closely examining the data and understanding the limitations of the tools, but also comparing to known benchmarks from scientific studies and controls.
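The benchmarking step urged above can be sketched in a few lines. The audit figures and the 9-15% baseline are the ones quoted in the article; Twitter Audit exposes no API for this, so the numbers are hand-entered:

```python
# Place each account's fake-follower share against the published baseline
# that "between 9% and 15% of active Twitter accounts are bots".
BOT_BASELINE = (0.09, 0.15)

audit_fake_share = {
    "JulianAssange": 0.08,
    "BarackObama": 0.17,
    "el_pais": 0.25,
    "RealDonaldTrump": 0.27,
    "EmmanuelMacron": 0.48,
}

def flag_vs_baseline(share, baseline=BOT_BASELINE):
    """Classify a fake-follower share relative to the scientific baseline."""
    lo, hi = baseline
    if share < lo:
        return "below baseline"
    return "within baseline" if share <= hi else "above baseline"

for handle, share in audit_fake_share.items():
    print(f"@{handle}: {share:.0%} flagged fake -> {flag_vs_baseline(share)}")
```

On these numbers, Assange's 8% is the only figure below the baseline — the opposite of what the original El País claim implied.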

One example of this methodological failure is the claim from both El País and DFRLab that a particular tweet by Julian Assange spread suspiciously quickly. On September 15, Assange tweeted: “I ask everyone to support Catalonia’s right to self-determination. Spain cannot be permitted to normalize repressive acts to stop the vote.”

El País argued, in the excerpt provided above, that “in the case of the tweet from Assange, as with many of his messages on the social media platform, it received 2,000 retweets in an hour and obtained its maximum reach – 12,000 retweets, in less than a day. The fact that the tweet went viral so quickly is evidence of the intervention of bots, or false social media profiles, programmed simply to automatically echo certain messages.”

The claim that retweet rates should gradually accelerate over time may make intuitive sense, but neither El País nor DFRLab provided any citations to research on these dynamics. In fact, at least one study has found that these supposedly intuitive hypotheses about retweet rates are wrong, and instead “half of retweeting occurs within an hour, and 75% under a day.” In the case of this particular tweet from Assange, only one-sixth of the retweets occurred in the first hour.
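The temporal check the study describes is straightforward to compute when retweet timestamps are available. A minimal sketch; the timestamps below are synthetic illustrations, not data from the actual Assange tweet:

```python
# Given the original tweet time and its retweets' timestamps, compute
# what share of retweeting happened within an hour and within a day.
from datetime import datetime, timedelta

def retweet_shares(tweet_time, retweet_times):
    """Return (share within 1 hour, share within 24 hours)."""
    deltas = [rt - tweet_time for rt in retweet_times]
    hour = sum(d <= timedelta(hours=1) for d in deltas) / len(deltas)
    day = sum(d <= timedelta(days=1) for d in deltas) / len(deltas)
    return hour, day

t0 = datetime(2017, 9, 15, 12, 0)
# Synthetic retweet offsets in minutes: 2 of 6 land in the first hour
rts = [t0 + timedelta(minutes=m) for m in (5, 40, 90, 300, 600, 2000)]
hour_share, day_share = retweet_shares(t0, rts)
print(f"{hour_share:.0%} within an hour, {day_share:.0%} within a day")
```

Against the cited finding that roughly half of retweeting normally occurs within an hour, the Assange tweet's one-sixth-in-the-first-hour profile is, if anything, slower than typical.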

The overall spread of the tweet is also normal relative to Assange’s other tweets. Each of Julian Assange’s tweets between August 1 and December 12, 2017 was seen by 232,249.63 people on average. The specific tweet that El País scrutinized received 761,410 impressions (that is, this particular tweet was shown to other Twitter users 761,410 times). This is a bit higher than normal, but not disproportionately so; Assange’s most popular tweets regularly receive 3 or 4 million impressions, 4 to 6 times as many as the tweet that El País said must have been amplified by bots.

Shoddy claims of this sort could be avoided if journalists reporting on “fake news” attempted to follow reliable empirical methods in even a basic way — or at least the commands of basic rationality — by verifying whether the behavior that is claimed to be unusual actually is out of the ordinary. It is especially important to use valid methodology when reporting on allegedly “fake news” because misleading the public due to mistakes or exaggerated claims can lead to escalation of political tensions; ironically, deceitful attempts to identify and warn of the menace of Fake News, such as those peddled here by El País, can easily transform into a dissemination of Fake News.

(A full report with more detailed data regarding these empirical claims was recently submitted by one of the authors of this article, McGrath, to the U.K. Committee on Fake News.)

Reciting claims that match a commonly stated narrative, such as Russian interference in various Western problems, is no substitute for fact-based analysis. One of the more methodologically unsound tactics used is to depict not only RT and Sputnik, but anyone who is quoted or even retweeted by them, as assisting in the spread of Russian state propaganda. During his testimony for the U.K. fake news committee, David Alandete from El País stated that “RT and Sputnik are at the center of this. Assange and Snowden are a very handy source for them; anything that Assange says is a quote and a headline.”

Assange was mentioned multiple times in RT and Sputnik’s coverage about Catalonia, but the stories quoting Assange comprised only a small minority of their discussions of these political events. Analysis of Sputnik and RT’s stories based on both Media Cloud’s data and their tweets reveal that only 1% to 3% of RT and Sputnik’s stories about Catalonia also mention Assange. Additionally, most of these references to Assange are centered around a few isolated quotes and events, in contrast to RT and Sputnik’s continuous coverage of the situation in Catalonia more generally.
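The co-mention measurement described above is a simple proportion: of an outlet's stories about Catalonia, what share also mention Assange? The headline list below is a synthetic stand-in; the article's 1% to 3% figure was computed from Media Cloud data and the outlets' own tweets:

```python
# Share of topic-matching headlines that also mention a given name.
def co_mention_share(headlines, topic="catalonia", name="assange"):
    topical = [h for h in headlines if topic in h.lower()]
    if not topical:
        return 0.0
    return sum(name in h.lower() for h in topical) / len(topical)

sample = [
    "Catalonia referendum: Madrid moves to suspend autonomy",
    "Protests continue in Catalonia after police raids",
    "Assange tweets support for Catalonia's right to vote",
]
print(co_mention_share(sample))  # 1 of 3 topical headlines mentions Assange
```

A real run would substitute the full corpus of RT and Sputnik stories for `sample`, but the arithmetic is the same.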

Rather ironically (given the claims about bots and trolls promoting messages about independence in Catalonia), there is clear evidence of Twitter bots spreading messages about the crisis in Spain — but those were non-Russian bots and they were spreading propaganda that was opposed to Catalan independence. This is hardly the first time that Western governments and their allies have been caught using fake online activity to spread Western propaganda, but few Western commentators care except when Russia or other U.S. adversaries do it.

Whatever else is true, it is likely that bot manipulation has been used to inflame the crisis in Spain — but it has been used to spread anti-Catalan messaging in line with Madrid authorities. On September 11, 2017, the user with the name @marilena_madrid tweeted a link to a story that Spain’s ABC had published several months previously, which emphasized Puigdemont’s lack of legitimacy with EU institutions, a key anti-independence theme from Madrid authorities:

This @marilena_madrid tweet was retweeted over 15,000 times, but “liked” only 99 times. Researchers working on Twitter bot detection have discovered that bots often have low likes-to-tweets ratios of exactly this type:

In contrast to Assange’s tweets, which receive 1.14 “likes” per retweet on average — reflecting that they are spread overwhelmingly by actual humans — this anti-Catalan tweet from @marilena_madrid has a “likes” per retweet ratio of only 0.0062. A large number of the retweeters have random gibberish usernames such as @M9ycMppdvp5AhJb, @hdLrUNkGitXyghQ, and @fQq96ayN3rikTw, which also indicates that they may be bots.

Most of the accounts that retweeted @marilena_madrid, as well as @marilena_madrid’s account itself, now appear to be suspended by Twitter. Unfortunately, it is not possible to view data about suspended accounts except on pages previously archived or cached, so there is limited data for studying this particular set of bots in more detail. Thus, it is not possible to assess whether this was an individual who had their tweets promoted by bots, a social media strategy by ABC to spread its stories with bots and trolls, or a state-sponsored propaganda campaign. Nevertheless, this case exemplifies the need for more multisided analysis of who is really using bot campaigns to propagandize the public.

It is certainly likely that one could find some actual Russian bots posting messages supportive of Catalan independence, but the magnitude has been wildly exaggerated by El País and many other Western institutions to the point of fabrication.

If, as appears to be true, these unsupported allegations about spreading disinformation during the referendum in Catalonia are being used as a tool for political manipulation in the case of Julian Assange, it is working. The escalation of tensions with Spain, which has strong diplomatic ties to Ecuador, threatens Assange’s asylum in a way that the longstanding pressure from the United States and United Kingdom could not. The intersection of these issues has led to a rapidly deteriorating diplomatic situation, founded in part on exaggerated and inaccurate claims of disinformation during the referendum in Catalonia, in which Ecuador is being forced to choose between maintaining its relations with other states and upholding Assange’s asylum.

The pattern of events seen here is not specific to Julian Assange, Ecuador, Spain, or any one country. It is a global, systemic problem. This situation is an illustrative example of what happens when political tensions around internal divisions, in this case Spain and Catalonia, build and break. In the aftermath, people struggle to explain what they see as an injustice, and conclude that some foreign person or group interfered to bring about a problematic situation by spreading propaganda or disinformation.

This narrative grows and shifts the focus away from internal problems and divisions by unifying people against this new external enemy, much the way patriotism surges during a war. This sentiment can then be exploited as a tactic of manipulation for those seeking to support their own agendas, and leveraged to pressure external parties. It is also an extremely powerful tool for stigmatizing any internal, domestic dissent aligned with, if not controlled by, the foreign villain. Meanwhile, internal tensions continue to build and conflicts escalate, with their actual causes ignored in favor of pleasing, simplistic, self-vindicating storylines about foreign interference.

Correction: April 20, 2018
A previous version of this article incorrectly stated that the Spanish newspaper ABC was owned by Grupo PRISA. It is owned by Vocento.

Top photo: People gather to protest against the National Court’s decision to imprison civil society leaders without bail in Barcelona, Spain on Oct. 17, 2017.

The post How Shoddy Reporting and Anti-Russian Propaganda Coerced Ecuador to Silence Julian Assange appeared first on The Intercept.

19 Apr 14:34

The Constitutional Issues of the US Attack Against Syria

19 Apr 14:30

The Vicious Cycle Undermining the Common Good of the US Society. Then, The Science and Spirit of the Ocean. 

19 Apr 14:21

Behemoth: A History of the Factory and the Making of the Modern World

19 Apr 14:15

Dealing with Russia: Are sanctions and expulsions enough?

Tom Roche

Daniel Fried from Atlantic Council "is concerned that the West is not doing enough to counter Russian aggression."

With the expulsion of diplomats and new sanctions the West is taking a harder line against Russia, but Cold War history suggests that more needs to be done to achieve a real change in momentum.
19 Apr 01:44

Before the Syrian crisis: The Fall of the Ottomans: 1 of 6: The Great War in the Middle East. by Eugene Rogan

by The John Batchelor Show
Tom Roche



(Photo: Battle of Nicopolis in 1396. Painting from 1523.)

Twitter: @BatchelorShow

Before the Syrian crisis: The Fall of the Ottomans: 1 of 6: The Great War in the Middle East. by Eugene Rogan

"To have written a page-turner as well as an accurate and comprehensive history of the Ottoman struggle for survival is a remarkable achievement." --Wall Street Journal

By 1914 the powers of Europe were sliding inexorably toward war, and they pulled the Middle East along with them into one of the most destructive conflicts in human history. In The Fall of the Ottomans, award-winning historian Eugene Rogan brings the First World War and its immediate aftermath in the Middle East to vivid life, uncovering the often ignored story of the region's crucial role in the conflict. Unlike the static killing fields of the Western Front, the war in the Middle East was fast-moving and unpredictable, with the Turks inflicting decisive defeats on the Entente in Gallipoli, Mesopotamia, and Gaza before the tide of battle turned in the Allies' favor. The postwar settlement led to the partition of Ottoman lands, laying the groundwork for the ongoing conflicts that continue to plague the modern Arab world. A sweeping narrative of battles and political intrigue from Gallipoli to Arabia, The Fall of the Ottomans is essential reading for anyone seeking to understand the Great War and the making of the modern Middle East.

19 Apr 01:44

Before the Syrian crisis: The Fall of the Ottomans: 1 of 6: The Great War in the Middle East. by Eugene Rogan

by The John Batchelor Show
Tom Roche



(Photo: Sultan Mehmed II's entry into Constantinople; painting by Fausto Zonaro (1854–1929))

Twitter: @BatchelorShow

Before the Syrian crisis: The Fall of the Ottomans: 1 of 6: The Great War in the Middle East. by Eugene Rogan


18 Apr 03:05

Peter Calthorpe Is Still Fighting Sprawl—With Software

by Richard Florida

The architect and urban designer Peter Calthorpe was an advocate of transit-oriented development (TOD) and smart growth long before those concepts were buzzwords. In fact, as one of the founders of the Congress for the New Urbanism and the author of the first TOD guidelines (as well as numerous influential books), Calthorpe has done as much as anyone to re-focus American urbanism on walkable, dense, sustainable, transit-rich environments.

Now, in an attempt to spread this knowledge even more widely, Calthorpe has released a new urban-planning software, UrbanFootprint, which will soon be available to virtually every city and government agency in California. (The company behind the software, which is co-led by Calthorpe and Joe DiStefano, is separate from the planning firm Calthorpe Associates.) Calthorpe compares UrbanFootprint to Sim City because it allows non-experts to model the impacts of different urban planning scenarios, such as zoning changes and road reconfigurations. He hopes it will help planners, politicians, and citizens’ groups communicate the benefits of the kind of urban environments he has spent his career advocating for and creating.

In the interview that follows, Calthorpe discusses some of the most pressing challenges in urbanism today, including NIMBYism, the emergence of autonomous vehicles, and the urbanization of the developing world.

What originally got you interested in urbanism?

I was born in London and I spent the first five years of my life amidst rubble and coal-laden air, kind of a nightmare domain. Then we moved to Florida and it was like the Wizard of Oz, you know—I landed, and the world became Technicolor.

We moved to Palo Alto in the early ‘60s. In those days, it was a landscape filled with orchards. As I grew up, I watched it erode into subdivisions and office parks, and it became clear that it was just being degraded. I guess I grew up being sensitive to the physical environment and the kinds of ways communities shape people and their sense of identity.

I became a radical teenager and I looked at what was going on from an environmental standpoint, and felt that there was really great harm at work in the way people were living. I just kept on from there. I fell in with Buckminster Fuller’s idea of whole systems: You just can’t think of one issue at a time.

I really quickly realized that urban design had a much bigger impact on all the important outcomes than individual buildings. So I started the Congress for the New Urbanism with Andrés Duany. What we proposed was a viable alternative to sprawl.

That gathered a huge amount of momentum, and as we made the case for the alternative to sprawl, we realized that we had to be able to speak in many languages. There was energy and carbon and climate change, or land or fiscal impacts to cities; mobility; air quality; health. The amazing thing about the city is that it touches everything.

The Bay Area when you were growing up was one of the most interesting places in the world: the explosion of the tech scene, the early hackers, the birth of the personal computer. And on top of that there was the counterculture movement and the music scene. How did those things influence you?

They totally influenced me. It was a community that connected the dots all the time. Stewart Brand was the guy who started the Whole Earth Catalog, but he also started The WELL [an acronym for “Whole Earth ‘Lectronic Link”], which was [one of] the very first email [services]. Stewart was a really good friend and mentor to me. The community of people who were willing to think in a broad way, but also to absolutely reject norms and posit completely different approaches, was ubiquitous. It was all over the place.

You had the Human Potential Movement in its heyday there. You had environmentalists who were really beginning to think about things as an ecology for the first time. And you had political activism, sometimes really extreme, like the Black Panthers.

The underlying principle was to question the norms, and when you question norms, you get innovation. That’s the foundation of innovation.

What do you think of what’s happening now in the Bay Area—its crisis of housing affordability and displacement, and the great shift in how the tech industry is perceived?

I don’t think technology and the city are at odds. I think that there’s a deeper phenomenon: We really still have not shaken the suburban sprawl paradigm that was born after World War II. The environmental community, some dimensions of it, are just anti-growth, which inhibits housing. Since the recession, jobs have grown in the Bay Area 11 times faster than housing. We’ve added over 600,000 jobs, and only 56,000 housing units.

But I also think that there’s something beyond that. It’s what we’re watching with Trump and all the rest of that tribalism, which is people being fearful and marking their territory and wanting to protect it. You know, a NIMBY is a little bit like a miniature version of Trump and his Mexico wall. It’s like he’s got a neighborhood and he wants to build a wall around it, and nobody else should be able to come now. We all have to realize this is a very powerful human emotion.

The only way you overcome that kind of feeling is with powerful coalitions. If enough groups can see common cause, and identify co-benefits and get behind a change, they can overcome that negative impulse to say, “Change is bad and I’ve got mine.”

You recently introduced a new urban planning software called UrbanFootprint. It’s got the backing of venture capitalists and is being used by the State of California. What makes it unique, and what kinds of projects will it be used for?

UrbanFootprint is a cloud-based software built to help planners, designers, architects, and advocates create sustainable, resilient communities. It supports a broad range of stakeholders to enhance cities with the agility of data science and scenario-building. [They can use] UrbanFootprint’s extensive data library anywhere in the United States to assess existing environmental, social, and economic conditions in just a few minutes.

Once users get a sense of current conditions, they can lead community input to create alternative land-use and policy scenarios. Then they can evaluate their impacts across a range of key community metrics, including emissions, water use, energy use, land consumption, [pedestrian] and transit accessibility, and more.

We recently announced our partnership with the State of California, which will bring UrbanFootprint free of charge to over 500 cities, counties, and regional agencies. We’re thrilled to support California’s public planning efforts with this technology.

Land-use patterns in Madison, Wisconsin, visualized on UrbanFootprint (UrbanFootprint)

You’ve done a lot of work on transit and how it shapes urban form, and you have a growing interest in autonomous rapid transit. Tell us more about that.

The really exciting thing is that we’re going to have really affordable transit, in the form of autonomous rapid transit, coming online much quicker, quite frankly, than private autonomous vehicles. In dedicated rights of way, these things are safe, because they’re not mixing with ordinary vehicles. And because they’re driverless, the operation costs are so small that we can afford to build many more miles of this stuff.

We can basically grid our cities and suburbs with high-quality transit. The moment you do that, you have the armature for infill and redevelopment. So all the strict commercial environments—the six-lane roads lined with parking lots and single-story buildings, which, by the way, are losing value rapidly as Amazon steals all the economic activity from them—are these ribbons of opportunity throughout our metropolitan regions. By adding autonomous rapid transit and zoning for a range of housing types, we can solve the housing problem and the transportation problem.

But if you just let autonomous vehicles be privately owned or run as taxis, they’re going to make the situation much worse: anywhere from 30 percent to a doubling of vehicle miles traveled. It’s just like any amazing technology. If you use it well, you benefit, and if you use it poorly it can be catastrophic.

What do you think of this bill in California, SB 827, that would upzone all the areas near high-quality mass transit?

I don’t like the law as it’s written now, because it entitles redevelopment on single-family lots. I think that’s just a political formula for disaster. It won’t go anywhere. Disrupting existing single-family neighborhoods is really the last thing you want to do politically, and, quite frankly, [that you] need to do. The amount of land in ... single-story parking-lot environments, that nobody is in love with and that many people would actually like to see change, is more than enough to satisfy housing needs.

You’ve done a lot of work in China and the rapidly urbanizing parts of the world. What are some of the opportunities and challenges for good urbanism in these places?

In China, at the highest levels of government, the policy has completely shifted. Mixed-use, small blocks, [and] transit-oriented development are all now the norm and required. They appreciate that they can’t continue to add more ring roads. I think Beijing’s up to six of them now, and they’re all still in gridlock. And, by the way, in Beijing only 35 percent of households own cars. Just imagine if there was more auto ownership.

It’s just an impossible proposition, a high-density city whose mobility is dependent on private cars. It’s an oxymoron. It never works. And I think the Chinese really recognize that now.

Jane Jacobs, who had a very big influence on both of us, and who was so optimistic when she was young, called her last book Dark Age Ahead. Would you say you’re optimistic about the future of our cities in the United States and elsewhere? Or are you pessimistic?

I’m both. I think you can’t be intelligent and not be both. I am optimistic because cities offer the best, least-cost solution to so many of our challenges. Better than just technological fixes, healthy urban forms can resolve multiple issues simultaneously. For example, housing [units] placed in walkable mixed-use areas near transit and jobs cost no more than scattered subdivisions, but they are more affordable to homeowners, need less energy and water, cost less for cities to service, generate less carbon, and create stronger communities. These are the co-benefits I was talking about, co-benefits that can generate new, powerful coalitions.

I am pessimistic when I realize most cities don’t have the tools or processes to uncover these synergies—and too often default to piecemeal planning driven by professional and agency silos.

16 Apr 15:38

Democracy Now! 2018-04-13 Friday

Tom Roche

segment 2/3 is Lori Wallach, who is excellent as usual

Democracy Now! 2018-04-13 Friday

  • Headlines for April 13, 2018
  • Syrian Researcher: Focus on Alleged Chemical Attack Ignores War's Ongoing Deaths by Airstrikes, Bullets
  • As Trump Reconsiders TPP Stance, Fair Trade Advocates Say Real Fight Is over NAFTA Renegotiation
  • Nearly 4 People Are Evicted Every Minute: New Project Tracks U.S. Eviction Epidemic & Effects

Download this show

16 Apr 15:37

Syria, Trump’s Tweets, and the History of the US-Russia Relations

Tom Roche

guests (from )
Anthony D’Agostino is a professor of History at San Francisco State University and an expert on the history of Russia; he is the author of many books including The Russian Revolution and The Rise of Global Powers: International Politics in the Era of World Wars.

Timothy Snyder is the Richard C. Levin Professor of History at Yale University; he is the author of several books including On Tyranny and his latest, The Road to Unfreedom: Russia, Europe, America.

16 Apr 03:11

Two minutes to midnight: did the US miss its chance to stop North Korea’s nuclear programme? – podcast

An unprecedented US mission to Pyongyang in 1999 promised to defuse Kim’s nuclear threat. But it all came to nothing – and then the hawks took power • Read the text version here
15 Apr 05:27

#137 - Song 5: A Song of Rice and Flour

Tom Roche

very excellent. Chris Stewart is consistently good in this podcast, but this was a particularly excellent episode.

"Rice is great if you're really hungry and want to eat 2,000 of something." - Mitch Hedberg

15 Apr 05:15

AskHistorians Podcast 108 - Poor Whites in the Antebellum American South

Tom Roche

very excellent--one of the best AHPs--just too short

Today we chat with Dr. Keri Leigh Merritt about the topic of her new book, Masterless Men: Poor Whites and Slavery in the Antebellum South (Cambridge University Press, 2017).


Dr. Merritt is on Twitter as @KeriLeighMerrit and her professional website is


You can join the discussion on the subreddit here.

13 Apr 22:10

Is There a Culture War Against Populism?

Tom Roche


Is it a positive wave or a troubling pattern? In this age of anxiety over joblessness and immigration, populist leaders in Hungary, Poland, Turkey, Sweden and the Philippines are tapping in. Is populism, as the 1960s American historian Richard Hofstadter called it, "a paranoid style of politics"? Or is it what others describe as "the essence of democratic politics"?
12 Apr 19:55

Jacobin Radio w/ Suzi Weissman: Walmart Workers

by Jacobin magazine
Tom Roche


Imagine you walk into a warehouse where the workers are on break, and you stumble into a vigorous, nuanced discussion of Marx’s notion of surplus value, how it relates to organizing on the shop floor, and how it applies to flexible and often female labor. Then the conversation turns to Gramsci and workers' councils in Turin. This is exactly what Carolina Bank Muñoz found when she visited a warehouse in Chile to study how unions responded to Walmart’s entry into Chile. The majority of Walmart's workers in Chile are unionized, and we talk to Carolina about her book, Building Power from Below: Chilean Workers Take On Walmart, and how Chilean retail and warehouse workers organized rank-and-file-led unions and won real economic gains along with respect and dignity on the job.

Then Nelson Lichtenstein joins the discussion on Walmart and organizing retail workers in the US. Nelson has several books on Walmart — as the face of capitalism in the twenty-first century, transforming American politics and business. Nelson anticipated a day of reckoning for Walmart as challenges to its "business model" grow at home and abroad — as the Chilean case shows. We'll also get Nelson's take on the "Amazon threat" and the state of US labor today in the Trump era.

12 Apr 19:55

The Dig: Petro-Capitalism with Timothy Mitchell Part I

by Jacobin magazine
Tom Roche

very excellent

Historian and political theorist Timothy Mitchell joins Dan for the first of a two-part interview on his book Carbon Democracy: Political Power in the Age of Oil, published in 2011 by Verso. In this first episode, we talk about how the rise of coal made both industrial capitalism and newly powerful worker resistance possible; and how the shift to oil then facilitated the persistence of imperialism in a decolonizing world while thwarting worker organizing. On the next show, we'll discuss a lot more, including how oil companies and Western governments made autocratic governments and conservative Islamists key partners in creating the very global order that we now find in such profound crisis. Thanks to Verso Books. Check out The End of Policing by Alex S. Vitale and Police: A Field Guide by David Correia and Tyler Wall. And support this podcast with $ at, where you can also check out the first edition of our new weekly newsletter.

12 Apr 19:50

Roman Slavery

Tom Roche


Melvyn Bragg and guests discuss the role of slavery in the Roman world, from its early conquests to the fall of the Western Empire. The system became so entrenched that no-one appeared to question it, following Aristotle's view that slavery was a natural state. Whole populations could be marched into slavery after military conquests, and the freedom that Roman citizens prized for themselves, even in poverty, was partly defined by how it contrasted with enslavement. Slaves could be killed or tortured with impunity, yet they could be given great responsibility and, once freed, use their contacts to earn fortunes. The relationship between slave and master informed early Christian ideas of how the faithful related to God, informing debate for centuries. With Neville Morley, Professor of Classics and Ancient History at the University of Exeter; Ulrike Roth, Senior Lecturer in Ancient History at the University of Edinburgh; and Myles Lavan, Senior Lecturer in Ancient History at the University of St Andrews. Producer: Simon Tillotson.
11 Apr 04:56

Democracy Now! 2018-04-09 Monday

Tom Roche

Glenn Greenwald for the hour

Democracy Now! 2018-04-09 Monday

  • Headlines for April 09, 2018
  • Glenn Greenwald on Syria: U.S. & Israel Revving Up War Machine Won't Help Suffering Syrian Civilians
  • Glenn Greenwald: Brazil's Right Wing Jailed Ex-President Lula Because They Couldn't Win at the Polls
  • "Apartheid, Rogue, Terrorist State": Glenn Greenwald on Israel's Murder of Gaza Protesters, Reporter

Download this show

10 Apr 21:29

William R. Polk, “Crusade and Jihad: The Thousand-Year War Between the Muslim World and the Global North” (Yale UP, 2018)

by Charles Coutinho
Tom Roche

rather shallow, but still an interesting listen

Crusade and Jihad: The Thousand-Year War Between the Muslim World and the Global North (Yale University Press, 2018) is an ambitious attempt to cover, in one volume, the entire history of the relationship between the ‘Global North’—China, Russia, Europe, Britain,…
09 Apr 13:12

The First Frontier: 2 of 2: The Forgotten History of Struggle, Savagery, and Endurance in Early America by Scott Weidensaul

by The John Batchelor Show
Tom Roche



(Photo: Map of the New York tribes before European arrival: Iroquoian and Algonquian tribes)

Twitter: @BatchelorShow

The First Frontier: 2 of 2: The Forgotten History of Struggle, Savagery, and Endurance in Early America by Scott Weidensaul

Frontier: the word carries the inevitable scent of the West. But before Custer or Lewis and Clark, before the first Conestoga wagons rumbled across the Plains, it was the East that marked the frontier - the boundary between complex Native cultures and the first colonizing Europeans. Here is the older, wilder, darker history of a time when the land between the Atlantic and the Appalachians was contested ground - when radically different societies adopted and adapted the ways of the other, while struggling for control of what all considered to be their land.

The First Frontier traces two and a half centuries of history through poignant, mostly unheralded personal stories - like that of a Harvard-educated Indian caught up in seventeenth-century civil warfare, a mixed-blood interpreter trying to straddle his white and Native heritage, and a Puritan woman wielding a scalping knife whose bloody deeds still resonate uneasily today. It is the first book in years to paint a sweeping picture of the Eastern frontier, combining vivid storytelling with the latest research to bring to life modern America's tumultuous, uncertain beginnings.

09 Apr 13:12

The First Frontier: 1 of 2: The Forgotten History of Struggle, Savagery, and Endurance in Early America by Scott Weidensaul

by The John Batchelor Show
Tom Roche



(Photo: Iroquois painting of Tadodaho receiving two Mohawk chiefs)

Twitter: @BatchelorShow

The First Frontier: 1 of 2: The Forgotten History of Struggle, Savagery, and Endurance in Early America by Scott Weidensaul

Frontier: the word carries the inevitable scent of the West. But before Custer or Lewis and Clark, before the first Conestoga wagons rumbled across the Plains, it was the East that marked the frontier - the boundary between complex Native cultures and the first colonizing Europeans. Here is the older, wilder, darker history of a time when the land between the Atlantic and the Appalachians was contested ground - when radically different societies adopted and adapted the ways of the other, while struggling for control of what all considered to be their land.

The First Frontier traces two and a half centuries of history through poignant, mostly unheralded personal stories - like that of a Harvard-educated Indian caught up in seventeenth-century civil warfare, a mixed-blood interpreter trying to straddle his white and Native heritage, and a Puritan woman wielding a scalping knife whose bloody deeds still resonate uneasily today. It is the first book in years to paint a sweeping picture of the Eastern frontier, combining vivid storytelling with the latest research to bring to life modern America's tumultuous, uncertain beginnings.

08 Apr 17:35

Why I’m suing over my dream internship – podcast

It’s time to end a system that excludes the less privileged from the arts, media and politics • Read the text version here
06 Apr 13:46

The Saudi-US Relationship at 75

Tom Roche

unexpectedly candid given guest is from Brookings

The relationship between Saudi Arabia and the United States has had many near death moments. With unprecedented challenges facing both nations the alliance is as precarious as ever.
06 Apr 03:06

A quick history of France

Tom Roche

very shallow ... don't waste your time

Historian and author John Julius Norwich reflects on some of the key moments in France’s history and relates a few of the more unusual and scandalous stories he uncovered while researching his latest book.

02 Apr 07:39

Roger Lowenstein, F**k Your Stock Portfolio

by (Dean Baker)
Tom Roche

excellent comparison of asset bubbles to

> some master counterfeiter who, along with his conspirators, is able to slip trillions of dollars of phony money into circulation.

> As long as this gang of counterfeiters is able to get away with it, they are creating trillions of dollars of wealth. This money is generating demand in the economy, although the money is going first and foremost to meet their needs and desires.

> When the counterfeiters get uncovered and their money is destroyed, the economy has lost trillions of dollars of what it had considered wealth.

I realize it would be too much to ask that people who write on economics for major news outlets have any clue about how the economy works. I say that seriously; I have been commenting on economic reporting for more than two decades. Being a writer on economics is not like being a custodian or bus driver where you have to meet certain standards. The right family or friends can get you the job and there is virtually no risk of losing it as a result of inadequate performance.

But Roger Lowenstein performs a valuable service for us in the Washington Post this morning when he unambiguously equates the value of the stock market with the country’s economic well-being. It seems that Mr. Lowenstein is unhappy that Donald Trump’s recent tariff proposals sent the market plummeting. The piece is titled “When the President Tanks Your Stock Portfolio.” It holds up Trump’s tariff plans as a uniquely irresponsible act because of their impact on stock prices.

Okay, let’s step back for a moment and ask what the stock market is supposed to be telling us. The stock market is not a measure of economic well-being even in principle. It is ostensibly a measure of the value of future corporate profits, nothing more.

Suppose the successful teacher strike in West Virginia spills over into strikes in other states, as now appears likely. Suppose this increased labor militancy spills over to the private sector and organized workers are able to gain back some of the money lost to capital in the last dozen years. That would not be good news for Mr. Lowenstein’s stock portfolio, but it would certainly be good news for the vast majority of the people in the country.

But that would be the result of private actors; Lowenstein is upset about a president’s actions tanking the stock market. Well, let’s consider another action that would likely have an even larger negative impact on Mr. Lowenstein’s stock portfolio.

Suppose the next president announces that she will raise the corporate income tax rate back to 35 percent from its current 21 percent level. Any bets on what this does to stock prices?

Read More ...

29 Mar 15:41

Reuven Lerner: My new course, “Understanding and Mastering Git,” is now available

Tom Roche

> preview a number of the course videos for free from the course sales page.

Ah, Git.  It’s one of the best and most important tools I use as a software developer.   Git is everything I want in a version-control system: It’s fast. It lets me collaborate. I can work without an Internet connection.  I can branch and merge easily, using a variety of techniques.  I can take a personal project and turn it into a large, collaborative one with minimal effort. And it’s cross platform, meaning that I know my clients and colleagues will be able to use it.

So, what’s the problem?  Git’s learning curve is extremely steep.  Until you understand what Git is doing, and how it works, you cannot use it effectively.  Moreover, until you understand what Git is doing, you will likely be puzzled and frustrated by its commands, messages, and documentation.

I’m thus delighted to unveil my latest online course: Understanding and mastering Git.  This course, which I have taught to numerous companies all around the world for more than a decade, includes:

  • Nearly 80 video lectures, for a total of more than 7 hours of videos (preview a number of videos here, on the course page)
  • Dozens of exercises, to help you practice and understand working with Git
  • 11 slide decks (in PDF format), the same ones I use when teaching.

If you have been frustrated by Git, or consider the commands you’ve been using to be a form of black magic, then this course is for you.  It walks you through Git’s commands, objects, and methods for collaboration.

This course has been battle-tested for a decade at some of the world’s best-known companies.  If you want to get the most out of Git, I’m sure that my course will help.  And if it doesn’t?  E-mail me, and I’ll give you a 100% refund.

Don’t let Git frustrate you any more.  Understand it.  Master it.  Tame it.   Learn from my “Understanding and mastering Git” course, today:

Not sure if this course is for you?  That’s fine: You can preview a number of the course videos for free from the course sales page.

Also: If you’re a student or pensioner, then you qualify for a discount on the course.  Just e-mail me, and I’ll send you a special discount code.

And finally: If you live in a country outside of the top 30 per-capita GDP countries in the world, then e-mail me and I’ll send you a special discount code to make the course more affordable to you.

You can and should learn Git, and I want to help you to learn it.  Try my course, and discover why so many software engineers won’t even think about using something else.

The post My new course, “Understanding and Mastering Git,” is now available appeared first on Lerner Consulting Blog.

28 Mar 04:14

Davide Moro: Test automation framework thoughts and examples with Python, pytest and Jenkins

Tom Roche

Pytest + Jenkins

In this article I'll share some personal thoughts about test automation frameworks; you can take inspiration from them if you are going to evaluate different test automation platforms or assess your current test automation solution (or solutions).

Although this is a generic article about test automation, you'll find many examples showing how to address common needs using the Python-based test framework pytest and the Jenkins automation server. Use the information contained here as a comparison, and feel free to comment with alternative methods or ideas coming from different worlds.

It contains references to some well (or less) known pytest plugins or testing libraries too.

Before talking about automation and test automation framework features and characteristics let me introduce the most important test automation goal you should always keep in mind.

Test automation goals: ROI

You invest in automation for a future return on investment.
Simpler approaches let you start more quickly, but in the long term they don't perform well in terms of ROI, and vice versa: the initial complexity of a higher level of abstraction may produce better results in the medium or long term, with a better ROI and some benefits for non-technical testers too. Have a look at the ISTQB Test Automation Engineer certification syllabus for more information:

So what I mean is that test automation is not easy: it doesn't mean just recording some actions or writing some automated test procedures, because how you decide to automate things affects the ROI. Your test automation strategy should consider your testers' technical skills now and their future evolution, how to improve your system's testability (is your software testable?), good test design, and architecture/system/domain knowledge. In other words, beware of vendors selling "silver bullet" solutions promising smooth test automation for everyone, especially rec&play solutions: there are no silver bullets.

Test automation solution features and characteristics

A test automation solution should be generic and flexible enough; otherwise you risk having to adopt different, and maybe incompatible, tools for different kinds of tests. Try to imagine the mess of the following situation: one rec&play-based tool or commercial service for browser tests only, one tool for API testing only, performance test frameworks that don't let you reuse existing scenarios, one tool for BDD-only scenarios, different Jenkins jobs with different settings for each tool, no test management tool integration, etc. A single solution, if possible, would be better: something that lets you choose the level of abstraction rather than forcing one on you, that lets you start simple, and that follows your future needs and the skill evolution of your testers.
That's one of the reasons why I prefer pytest over a hyper-specialized solution like behave, for example: if you combine pytest with pytest-bdd you can write BDD scenarios too, without being forced into a BDD-only test framework (and without giving up pytest's flexibility and tons of additional plugins).

And now, after this preamble, an unordered list of features or characteristics that you may consider when selecting your test automation software:
  • a fine-grained test selection mechanism that lets you be very selective about which tests you are going to launch
  • parametrization
  • high reuse
  • test execution logs easy to read and analyze
  • easy target environment switch
  • block on first failure
  • repeat your tests for a given amount of times
  • repeat your tests until a failure occurs
  • support parallel executions
  • provide integration with third party software like test management tools
  • integration with cloud services or browser grids
  • execute tests in debug mode or with different log verbosity
  • support random tests execution order (the order should be reproducible if some problems occur thanks to a random seed if needed)
  • versioning support
  • integration with external metrics engine collectors
  • support different levels of abstraction (e.g., keyword driven testing, BDD, etc)
  • rerun last failed
  • integration with platforms that let you test against a large combination of OS and browsers if needed
  • the ability to extend your solution by writing or installing third-party plugins
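As a sketch of how pytest covers a few of the items above (fine-grained selection, blocking on first failure, terse execution logs), the snippet below writes a throwaway test module to a temporary directory and runs a name-filtered selection against it; the test names are hypothetical, and pytest is assumed to be installed:

```python
import pathlib
import tempfile

import pytest

# A throwaway test module, so the selection has something to run against.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "test_demo.py").write_text(
    "def test_login_ok():\n"
    "    assert True\n"
    "\n"
    "def test_report_slow():\n"
    "    assert True\n"
)

# -k selects tests by name expression, -x blocks on first failure,
# -q keeps the execution log terse: only test_login_ok is run here.
exit_code = pytest.main(["-k", "login", "-x", "-q", str(tmp)])
```

The same expressions work from the command line (e.g. pytest -k "login and not slow"), which is what an automation server like Jenkins will eventually invoke.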
Typically a test automation engineer will drive automated test runs using the framework's command line interface (CLI) during test development, but you'll find out very soon that you need an automation server for long-running tests, scheduled builds, and CI; this is where Jenkins comes in. Jenkins can also be used by non-technical testers to launch test runs or to initialize an environment with some test data.


What is Jenkins? From the Jenkins website:
Continuous Integration and Continuous Delivery. As an extensible automation server, Jenkins can be used as a simple CI server or turned into the continuous delivery hub for any project.
So thanks to Jenkins, everyone can launch a parametrized automated test session using just a browser: no command line, and nothing installed on your personal computer. More power to non-technical users!

With Jenkins you can easily schedule recurring automated test runs, start parametrized test runs remotely via external software, implement CI, and many other things. In addition, as we will see, Jenkins is quite easy to configure and manage thanks to its web-based configuration and/or Jenkins pipelines.

Basically Jenkins is very good at starting builds and jobs in general. In our case, Jenkins will be in charge of launching our parametrized automated test runs.

And now let's talk a little bit of Python and the pytest test framework.

Python for testing

I don't know if there are any articles on the net with statistics about the correlation between Test Automation Engineer job offers and the Python programming language, compared with other programming languages. If you find such a resource, please share it with me!

My personal feeling, from watching many Test Automation Engineer job offers (or any similar QA job with some automation flavor) for a while, is that the word Python is very common. Most of the time it is one of the nice-to-have requirements; other times it is mandatory.

Let's see why Python is the programming language of choice for many QA departments, even in companies that are not using Python to build their products or solutions.

Why Python for testing

Why is Python becoming so popular for test automation? Probably because it is more approachable for people with little or no programming knowledge than other languages. In addition, the Python community is very supportive and friendly, especially with newcomers, so if you are planning to attend any Python conference, be prepared to fall in love with this fantastic community and make new friends (friends, not only connections!). For example, at the time of writing you are still in time to attend PyCon Nove 2018 in beautiful Florence (even better if you like history, good wine, good food and meeting great people): 
You can just compare the most classical hello world, for example with Java:
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
and compare it with the Python version now:
print("Hello, World!")
Do you see any difference? If you are trying to explain to a non-programmer how to print a line in the terminal window with Java, you'll have to introduce public, static, void, class, System, installing a runtime environment (choosing from different versions), installing an IDE, running javac, etc., and only at the end will you be able to see something printed on the screen. With Python, which most of the time comes preinstalled in many distributions, you just focus on what you need to do. Requirements: a text editor and Python installed. If you are not experienced, you can start with a simple approach and progressively learn more advanced testing approaches later.

And what about test assertions? Compare, for example, a JavaScript-style assertion:
expect(b).not.toEqual(c);
with the Python version:
assert b != c
So no expect(a).not.toBeLessThan(b), expect(c >= d).toBeTruthy() or expect(e).toBeLessThan(f): with Python you just write assert b != c or assert a >= 0, so there is nothing to remember for assertions!

Python is a big, fat and very powerful programming language, but it follows a "pay only for what you eat" approach.

Why pytest

If Python is the language of your choice, you should consider the pytest framework and its high quality community plugins; I think it is a good starting point for building your own test automation solution.

The pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries.

Most important pytest features:
  • simple assertions instead of inventing assertion APIs (.not.toEqual or self.assert*)
  • auto discovery of test modules and functions
  • an effective CLI for controlling what is going to be executed or skipped using expressions
  • fixtures: easy management of the fixture lifecycle for long-lived test resources; parametrized fixtures make it easy and fun to implement what you found hard and boring with other frameworks
  • fixtures as function arguments, a dependency injection mechanism for test resources
  • overriding fixtures at various levels
  • framework customizations thanks to pluggable hooks
  • a very large third party plugins ecosystem
I strongly suggest having a look at the pytest documentation, but I'd like to show some examples of fixtures, code reuse, test parametrization and improved test maintainability. If you are not a technical reader you can skip this section.

Let me explain fixtures with practical examples, in question and answer form:
  • When should a new instance of our test resource be created?
    You can control that with the fixture scope (session, module, class, function or more advanced options like autouse). Session means that your test resource will live for the entire session; module/class means for all the tests contained in that module or class; with function you'll get a fresh instance of your test resource for each test
  • How can I define teardown actions at the end of the test resource life?
    You can add a sort of fixture finalizer after the yield line that will be invoked at the end of the test resource lifecycle. For example you can close a connection, wipe out some data, etc.
  • How can I execute all my existing tests once for each of my fixture configurations?
    You can do that with params. For example you can reuse all your existing tests to verify the integration with different real databases or SMTP servers. Or, if you have a web application offering the same features but deployed with a different look&feel for different brands, you can reuse all your existing functional UI tests thanks to pytest's fixture parametrization and a page objects pattern, where by different look&feel I mean not only different CSS but different UI components (e.g., completely different datetime widgets or navigation menus), component disposition in the page, etc.
  • How can I decouple test implementation from test data? Thanks to parametrize you can decouple them and write your test implementation just once. Your test will be executed as many times as you have sets of test data
Here you can see an example of fixture parametrization (test_smtp will be executed twice because there are 2 different fixture configurations):
import pytest
import smtplib

@pytest.fixture(scope="module",
                params=["", ""])
def smtp(request):
    smtp = smtplib.SMTP(request.param, 587, timeout=5)
    yield smtp
    print("finalizing %s" % smtp)

def test_smtp(smtp):
    # use the smtp fixture (e.g., smtp.sendmail(...))
    # and make some assertions.
    # The same test will be executed twice (2 different params)
    assert smtp is not None

And now an example of test parametrization:
import pytest
@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42), ])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
Note that the third case fails on purpose (eval("6*9") is 54, not 42), which is a handy way to see pytest's failure report in action. For more info see:
This is only the pytest core; as we will see, there are many pytest plugins that extend its features.

Pytest plugins

There are hundreds of pytest plugins, the ones I am using more frequently are:
  • pytest-bdd, BDD library for the pytest runner
  • pytest-variables, plugin for pytest that provides variables to tests/fixtures as a dictionary via a file specified on the command line
  • pytest-html, plugin for generating HTML reports for pytest results
  • pytest-selenium, plugin for running Selenium with pytest
  • pytest-splinter, a pytest-selenium alternative based on Splinter; pytest, splinter and selenium integration for anyone interested in browser interaction in tests
  • pytest-xdist, a py.test plugin for test parallelization, distributed testing and loop-on-failures testing modes
  • pytest-testrail, pytest plugin for creating TestRail runs and adding results on the TestRail test management tool
  • pytest-randomly, a pytest plugin to randomly order tests and control random seed (but there are different random order plugins if you search for "pytest random")
  • pytest-repeat, plugin for pytest that makes it easy to repeat a single test, or multiple tests, a specific number of times. You can repeat a test or group of tests until a failure occurs
  • pytest-play, an experimental rec&play pytest plugin that lets you execute a set of actions and assertions using commands serialized in JSON format. It makes test automation more affordable for non-programmers or non-Python programmers for browser, functional, API, integration or system testing, thanks to its pluggable architecture and many plugins that let you interact with the most common databases and systems. It also provides some facilitations for writing browser UI actions (e.g., implicit waits before interacting with an input element) and asynchronous checks (e.g., wait until a certain condition is true)
Python libraries for testing:
  • PyPOM, python page object model for Selenium or Splinter 
  • pypom_form, a PyPOM abstraction that extends the page object model applied to forms thanks to declarative form schemas
Scaffolding tools:
  • cookiecutter-qa, generates a test automation project ready to be integrated with Jenkins and with the TestRail test management tool, providing working hello world examples. It is shipped with all the above plugins and provides examples based on raw splinter/selenium calls, a BDD example and a pytest-play example
  • cookiecutter-performance, generates a tox based environment built on Taurus bzt for performance tests, BlazeMeter ready for distributed/cloud performance testing. Thanks to the bzt/Taurus pytest executor you will be able to reuse all your pytest based automated tests for performance testing

Pytest + Jenkins together

We've discussed Python, pytest and Jenkins, the main ingredients for our cocktail recipe (shaken, not stirred). Optional ingredients: integration with external test management tools and Selenium grid providers.

Thanks to pytest and its plugins you have a rich command line interface (CLI); with Jenkins you can schedule automated builds, set up CI, and let non-technical users or other stakeholders execute parametrized test runs or build always fresh test data on the fly for manual testing. You just need a browser, with nothing installed on your computer.

Here is how our recipe looks:

Now let's go through all the features provided by the Jenkins "build with parameters" graphical interface, explaining option by option when and why they are useful.

Target environment (ENVIRONMENT)

In this article we are not talking about regular unit tests, the basis of your testing pyramid. Instead we are talking about system, functional, API, integration and performance tests to be launched against a particular instance of an integrated system (e.g., dev, alpha or beta environments).

You know, unit tests are good but they are not sufficient: it is important to verify that the integrated system (sometimes several complex systems developed by different teams, under the same or third party organizations) works as it is supposed to. It is important because it might happen that 100% unit tested systems don't play well together after integration, for many different reasons. So with unit tests you take care of your code quality; with higher test levels you take care of your product quality. Thanks to these tests you can confirm an expected product behavior or criticize your product.

So thanks to the ENVIRONMENT option you will be able to choose one of the target environments. It is important to be able to reuse all your tests and launch them against different environments without having to change your testware code. Under the hood the pytest launcher switches between different environments thanks to pytest-variables parametrization using the --variables command line option, where each available option in the ENVIRONMENT select element is bound to a variables file (e.g., DEV.yml, ALPHA.yml, etc.) containing what the testware needs to know about the target environment.

Generally speaking you should be able to reuse your tests without any modification thanks to a parametrization mechanism. If your test framework doesn't let you change the target environment and forces you to modify your code, change framework.

Browser settings (BROWSER)

This option makes sense only if you are going to launch browser based tests; it will be ignored for other types of tests (e.g., API or integration tests).

You should be able to select a particular browser version (latest or a specific one) if any of your tests require a real browser (not needed for API tests, to give just one example), and preferably you should be able to integrate with a cloud system that lets you use any combination of real browsers and OS versions (not only a minimal subset of versions, and not only Firefox and Chrome like several online test platforms do). Thanks to the BROWSER option you can choose which browser and version to use for your browser based tests. Under the hood the pytest launcher will use the --variables command line option provided by the pytest-variables plugin, where each option is bound to a file containing the browser type, version and capabilities (e.g., FIREFOX.yml, FIREFOX-xy.yml, etc.). Thanks to pytest, or any other code based testing framework, you will be able to combine browser interactions with non-browser actions and assertions.

A big fat warning about rec&play online platforms for browser testing, and about testing strategies built only (or too much) on browser based tests. You shouldn't consider only whether they provide a wide range of OSes, versions and the most common browsers. They should also let you perform non-browser actions and assertions (interaction with queues, database interaction, HTTP POST/PUT/etc. calls, and so on). What I mean is that sometimes a browser alone is not sufficient for testing your system: it might be fine for a CMS, but if you are testing an IoT platform you don't have enough control and you will write completely useless or low value tests (e.g., pure UI checks instead of testing reactive side effects depending on external triggers, reports, device activity simulations causing some effect on the web platform under test, etc.).

In addition, be aware that some browser based online testing platforms don't use Selenium as their browser automation engine under the hood. For example, during a software selection I found an online platform using JavaScript injection to implement user interactions inside the browser, and this can be very dangerous. Consider a login page whose input elements take a while to become ready to accept user input, once some conditions are met. If for some reason a bug never unlocks the login form disabled behind a spinner icon, your users won't be able to log in to that platform. Using Selenium you'll get a failing result due to a timeout error (the test will wait for elements that will never be ready to interact with, and after a few seconds it will raise an exception), which is absolutely correct. On that platform the test was green, because under the hood the input element interaction was implemented using DOM actions, with the final result that all your users would be stuck: how can you trust such a platform?

OS settings (OS)

This option is useful for browser based tests too. Many Selenium grid vendors provide real browsers on real OS versions, and you can choose the desired combination.

Resolution settings (RESOLUTION)

As with the above options, many vendor solutions let you choose the desired screen resolution for automated browser based testing sessions.

Select tests by names expressions (KEYWORDS)

Pytest lets you select the tests you are going to launch, choosing a subset of tests that match a pattern language based on test and module names.

For example, I find it very useful to add the test management tool reference to test names; this way you will be able to launch exactly that single test:
Or, for example, all test names containing the word login but not c92411:
login and not c92411
Or if you organize your tests in different modules you can just specify the module name and you'll select all the tests that live under that module:
Under the hood the pytest command will be launched with -k "EXPRESSION", for example:
-k "c93466"
It can be used in combination with markers, a sort of test tag.
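For instance, a naming convention like the following makes -k selection precise (the ids echo the ones quoted above; the login function is a hypothetical stand-in for the system under test):

```python
# pytest -k "c92411"                -> runs only the first test
# pytest -k "login and not c92411"  -> runs only the second test

def is_valid_login(username, password):
    # hypothetical system under test: non-empty user, password >= 8 chars
    return bool(username) and len(password) >= 8

def test_c92411_login_with_valid_credentials():
    assert is_valid_login("alice", "s3cretpass")

def test_c93466_login_with_short_password():
    assert not is_valid_login("alice", "123")
```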

Select tests to be executed by tag expressions (MARKERS)

Markers can be used alone or in conjunction with keyword expressions. They are a sort of tag expression that lets you select just the minimum set of tests for your test run.

Under the hood the pytest launcher uses the command line syntax -m "EXPRESSION".

For example, here is a marker expression that selects all tests marked with the edit tag, excluding the ones marked with CANBusProfileEdit:
edit and not CANBusProfileEdit
Or execute only edit negative tests: 
edit and negative
Or all integration tests:
integration
It's up to you to create granular markers for features and whatever else you need to select your tests (e.g., functional, integration, fast, negative, ci, etc.).
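As a sketch, markers are just decorators on your tests. The marker names come from the expressions above; the saved-profile function and test bodies are illustrative placeholders, not the article's code.

```python
import pytest

def save_profile_name(name):
    # hypothetical system under test: only non-empty names are accepted
    return bool(name)

@pytest.mark.edit
@pytest.mark.negative
def test_edit_rejects_empty_profile_name():
    assert not save_profile_name("")

@pytest.mark.edit
@pytest.mark.CANBusProfileEdit
def test_edit_can_bus_profile():
    assert save_profile_name("profile-1")

# pytest -m "edit"                            -> both tests
# pytest -m "edit and negative"               -> only the first test
# pytest -m "edit and not CANBusProfileEdit"  -> only the first test
```

Note that newer pytest versions expect custom markers to be registered (e.g., in pytest.ini) to avoid warnings about unknown marks.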

Test management tool integration (TESTRAIL_ENABLE)

All my tests are decorated with the test case identifier provided by the test management tool; in my company we are using TestRail.

If this option is enabled the test results of executed tests will be reported in the test management tool.

This is implemented using the pytest-testrail plugin.

Enable debug mode (DEBUG)

The debug mode enables verbose logging.

In addition, for browser based tests, it opens Selenium grid sessions with debug capabilities enabled: for example verbose browser console logs, video recordings, screenshots for each step, etc. In my company we are using a local installation of Zalenium and BrowserStack Automate.

Block on first failure (BLOCK_FIRST_FAILURE)

This option is very useful for the following needs:
  • a new build was deployed and you want to stop on the very first failure for a subset of sanity/smoke tests
  • you are launching repeated, long running, parallel tests and you want to block on first failure
The first usage lets you gain confidence with a new build, stopping at the very first failure so you can analyze what happened.

The second usage is very helpful for:
  • random problems (playing with number of repeated executions, random order and parallelism you can increase the probability of reproducing a random problem in less time)
  • memory leaks
  • testing system robustness: you can stress your system by running some integration tests sequentially and then increasing the parallelism level until your local computer can no longer sustain the load. For example, launching 24+ parallel integration tests with pytest running on a virtual machine on a simple laptop is still fine. If you need something heavier you can use distributed pytest-xdist sessions or scale further with BlazeMeter
As you can imagine, you may combine this option with COUNT, PARALLEL_SESSIONS, RANDOM_ENABLE and DEBUG depending on your needs. You can test the robustness of your tests too.

Under the hood this is implemented using pytest's -x option.

Parallel test executions (PARALLEL_SESSIONS)

Under the hood this is implemented with pytest-xdist's command line option -n NUM, which lets you execute your tests with the desired parallelism level.

pytest-xdist is very powerful and provides more advanced options and network distributed execution. See its documentation for further options.

Switch from different selenium grid providers (SELENIUM_GRID_URL)

For browser based testing, by default your tests will be launched against a remote grid URL. If you don't touch this option the default grid will be used (a local Zalenium or any other provider), but in case of need you can easily switch provider without having to change anything in your testware.

If you want, you can save money by maintaining and using a local Zalenium as the default option; Zalenium can be configured as a Selenium grid router that dispatches to another provider any capabilities it is not able to satisfy itself. This way you will be able to save money and increase the parallelism level a little without having to change plan.

Repeat test execution for a given amount of times (COUNT)

Already discussed above; often used in conjunction with BLOCK_FIRST_FAILURE (the pytest core -x option).

If you are trying to diagnose an intermittent failure, it can be useful to run the same test or group of tests over and over again until you get a failure. You can use py.test's -x option in conjunction with pytest-repeat to force the test runner to stop at the first failure.

Based on pytest-repeat's --count=COUNT command line option.

Enable random test ordering execution (RANDOM_ENABLE)

This option enables random test execution order.

At the moment I'm using the pytest-randomly plugin, but there are 3 or 4 similar alternatives I still have to try out.

By randomly ordering the tests, the risk of surprising inter-test dependencies is reduced.

Specify a random seed (RANDOM_SEED)

If you get a failure executing a random test, it should be possible to reproduce it systematically by rerunning the tests in the same order with the same test data.

Always from the pytest-randomly readme:
By resetting the random seed to a repeatable number for each test, tests can create data based on random numbers and yet remain repeatable, for example factory boy’s fuzzy values. This is good for ensuring that tests specify the data they need and that the tested system is not affected by any data that is filled in randomly due to not being specified.
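The idea can be illustrated with plain Python: resetting the seed makes any data built on the random module repeatable, which is exactly what lets you replay a failing randomized run.

```python
import random

# the same seed always produces the same "random" test data,
# so a failure found with randomized data can be reproduced exactly
random.seed(1234)
first_run = [random.randint(0, 100) for _ in range(3)]

random.seed(1234)
second_run = [random.randint(0, 100) for _ in range(3)]

assert first_run == second_run  # identical sequences
```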

Play option (PLAY)

This option will be discussed in a dedicated blog post I am going to write.

Basically you can paste a JSON serialization of actions and assertions, and the pytest runner will execute your test procedure.

You just need a computer with a browser to run any test (API, integration, system, UI, etc.). You can paste the steps to reproduce a bug into a JIRA issue, and everyone will be able to paste them into the Jenkins build with parameters form.

See pytest-play for further information.

If you are going to attend the next PyCon in Florence, don't miss the following pytest-play talk presented by Serena Martinetti:

How to create a pytest project

If you are a little bit curious about how to install pytest or create a pytest runner with Jenkins you can have a look at the following scaffolding tool:
It provides a hello world example that lets you start with the test technique most suitable for you: plain selenium scripts, BDD or pytest-play JSON test procedures. If you want, you can also install a page objects library. So you can create a QA project in minutes.

Your QA project will be shipped with a Jenkinsfile that requires a tox-py36 docker executor providing a Python 3.6 environment with tox already installed; unfortunately tox-py36 is not yet public, so at the moment you should implement it on your own.
Once you provide a tox-py36 docker executor, the Jenkinsfile will create the build with parameters Jenkins form automatically on the very first Jenkins build of your project.


I hope you'll find some useful information in this article: nice to have features for test frameworks or platforms, a little bit of curiosity about the Python world, or a new pytest plugin you never heard about.

Feedback and contributions are always welcome.

Tweets about test automation and new articles happen here: