Shared posts

10 Jun 00:01

The Google H1 Fritz Chip.

by Stanislav

Edit: Step-by-step replication instructions for the skeptical experimenter.
This article is a review of what I have been able to discover regarding the Google H1, aka Cr50, aka the “G Chip”, found in all Chromebooks of recent manufacture, including the Asus C101PA, my current candidate for a full delousing attempt.

To my knowledge, there has been no detailed public discussion of this NSA-imposed atrocity anywhere on the Net, aside from The Logs. This article is intended as a reference point for the aficionado, the casual explorer of pseudo-“open” hardware, and the merely curious. The Chromebooks are probably the closest thing to be had on the current market to an inexpensive, portable, and “cleanable” computer with reasonable performance.

However, the Cr50 — a recent addition to the product line — is explicitly designed to get in the way of a full “liberation”. It is an item quite similar, in purpose and scope, to Intel’s “glued on with broken glass, For Your Own Good!” on-die “ME” boobytrap.

Yet, unlike Intel, Google — in its fetishistic pseudo-openness — appears to have published at least a portion of the source for the device’s firmware, making it a somewhat more promising target for attack and demolition than Intel’s equivalent CPU-integrated turd. But we will dig deeper into this, further below. First, let’s review the “big picture”:

The Cr50 device is a classic “Fritz chip” — i.e. a hardware “policeman”, built into a computing device (typically, though not always, a consumer product), so as to specifically and deliberately act against the purchaser’s interests, by subverting the Laws of Sane Computing in these three ways:

  1. Prevention of the full control of the machine by its physical owner, typically by inhibiting attempts to install modified firmware. This practice is also known as “Tivoization”, and is often justified to the gullible under the banner of “security”.
  2. Enablement of one or more types of “NOBUS” back door. (Official NSA terminology: “No One But Us”, a kind of NATO-Reich “Gott Mit Uns” motto.) In practice this means that the folks with the magic keys can trivially circumvent any and all protection mechanisms on the box, locally and often remotely, while the uninitiated — including the person who bought and paid for the hardware — typically remain unaware of the backdoor’s very existence. A Fritzed machine serves its true master — USG — first, and only secondarily serves the hapless purchaser.
  3. Last but not least: prevention of a clueful hardware owner’s attempts to “jailbreak” — to disable, remove, or circumvent the Fritz chip itself. Often there is an attempt to conceal the very existence of the mechanism. (Google is peculiar in that it is fond of deliberately, if subtly, taunting the purchaser of its pseudo-open devices with the existence of its Fritz chip.) Perpetrators will often deliberately litter the Net with disinformation regarding the nature and purpose of the Fritz chip, in an effort to discourage circumvention and spur on sales; the chumps will “buy their own cross” while there is still a semblance of choice on the market; afterwards, the choice evaporates, and only Fritz-laden products remain available. The latter process has already run its course in the x86 PC world; and is verging on completion in the low-power ARM portable market.

So, back to the Cr50: this device appears to be present in all of the currently-produced Chromebooks, and is — per the vendor’s published source — able to rewrite all firmware, under the control of an external debug snake or other, yet-undiscovered triggers; start and stop the CPU; master the I2C bus, on which, among other things, the sound card’s microphone is to be found; upgrade its own firmware; and do other interesting things that may or may not align with the machine owner’s wishes at a particular moment. Possible usage scenarios include, but are not limited to, enablement of “lol enforcement” surreptitious searches and buggings of “borrowed” machines (and this is merely one obvious scenario).


In re “glue with broken glass”, the machine owner cannot simply remove or cut the traces to the Cr50: it has been placed in control of power supply bringup; the valuable debug interface; and other essentials.

But it is the upgrade process in particular that interests me, as it is the locked gate to potentially neutering the boobytrap. Can the end user, then, rewrite the Cr50 firmware?

Let’s begin with the disinfo! A Google employee informed me that “nobody has the cr50 key”. Is this actually true?

How about No?

From the horse’s mouth:

static const uint32_t LOADERKEY_A[RSA_NUM_WORDS + 1] = {
0xea54076f, 0xe986c871, 0x8cffffb4, 0xd7c50bda, 0x30700ee0, 0xc023a878,
0x30e7fdf8, 0x5bb0c06f, 0x1d25d80f, 0x18e181f7, 0xfbf7a8b0, 0x331c16d4,
0xeb042379, 0x4cef13ec, 0x5b2072e7, 0xc807b01d, 0x443fb117, 0xd2e04e5b,
0xcb984393, 0x85d90d9d, 0x0332dcb8, 0xd42ccacf, 0x787e3947, 0x1975095c,
0x2d523b0b, 0xf815be95, 0x00db9a2c, 0x6c08442b, 0x57a022bb, 0x9d5c84ed,
0x46a6d275, 0x4392dcf8, 0xfa6812e3, 0xe0f3a3e6, 0xc8ff3f61, 0xd518dbac,
0xbba7376a, 0x767a219a, 0x9d153119, 0x980b16f8, 0x79eb5078, 0xb869924d,
0x2e392cc2, 0x76c04f32, 0xe35ea788, 0xcb67fa62, 0x30efec79, 0x36f04ae0,
0x2212a5fc, 0x51c41de8, 0x2b0b84db, 0x6803ca1c, 0x39a248fd, 0xa0c31ee2,
0xb1ca22b6, 0x16e54056, 0x086f6591, 0x3825208d, 0x079c157b, 0xe51c15a6,
0x0dd1c66f, 0x8267b8ae, 0xf06b4f85, 0xc68b27ab, 0x31bcd5fc, 0x34d563b7,
0xc4d2212e, 0x1e770199, 0xaf797061, 0x824d4853, 0x526e18cd, 0x4bb8a0dc,
0xeb9377fe, 0x04fda73c, 0x2933f8a6, 0xe16c0432, 0x40ea1bd5, 0x9efcd77e,
0x92be9e55, 0x003c1128, 0x48442cf9, 0x80b4fb31, 0xfe1e3df3, 0x1d28e14d,
0xe99c0f9d, 0x521d38c2, 0x0082c4f1, 0xcff25d56, 0x0d3e7186, 0xe72b98f0,
0xefaa5689, 0x74051ed5, 0x6b7e7fff, 0x822b1944, 0x77a94732, 0x8d0b9aaf,
/* ... remainder of modulus elided ... */
};

const uint32_t LOADERKEY_B[RSA_NUM_WORDS + 1] = {
0xeea8b39f, 0xdfa457a1, 0x8b81fdc3, 0xb0204c84, 0x297b9db2, 0xaa70318d,
0x8cd41a68, 0x4aa0f9bb, 0xf63f9d69, 0xf0fe64b0, 0x96e42e2d, 0x5e494b1d,
0x066cefd0, 0xde949c16, 0xc92499ed, 0x92229990, 0x48ac3b1a, 0x1dfc2388,
0xda71d258, 0x826ddedf, 0xd0220e70, 0x6140dedf, 0x92bcdec7, 0xcdf91c22,
0xaa110aed, 0xc371c2f9, 0xa3fedf2a, 0xfd2c6a07, 0xe71aabce, 0x7f426484,
0x0ac51128, 0x4bab4ca2, 0x0162d0b9, 0x49fef7e3, 0xeda8664e, 0x14b92b7a,
0x0397dbb7, 0x5b9eb94a, 0x069b5059, 0x3851f46b, 0x45bbcaba, 0x0b812652,
0x7cd8b10b, 0xcaeccc32, 0x0ffd08e3, 0xfe6f0306, 0x8c02d5f7, 0xafdc4595,
0xe0edda47, 0x0cc821db, 0x50beeae5, 0xb9868c18, 0xefd2de11, 0xdfecd15c,
0xa8937a70, 0x223d9d95, 0x1b70848b, 0x54fa9176, 0x8bf012ef, 0xd37c1446,
0xf9a7ebeb, 0xbf2dfa9a, 0xdc6b8ea0, 0xe5f8bc4d, 0x539222b5, 0x192521e4,
0xe7088628, 0x2646bb56, 0x6fcc5d70, 0x3f1cd8e9, 0xae9cec24, 0xf53b6559,
0x6f091891, 0x5342fa61, 0xbfee50e9, 0x211ad58a, 0xd1c5aa17, 0x252dfa56,
0x17131164, 0x4630a459, 0x2f681f51, 0x3fb9ab3c, 0x6c8e0a70, 0xa34a868b,
0xe960e702, 0xa470d241, 0x00647369, 0xa4c25391, 0xd1926cf9, 0x5fce5488,
0xd171cb2e, 0x8a7c982e, 0xc89cbe39, 0xc0e019d8, 0x82cd1ebe, 0x68918fce,
/* ... remainder of modulus elided ... */
};

Anyone with the private factors to either RSA modulus can reflash the Cr50 firmware trivially, via the debug cable. The vendor’s flash update utility accepts any candidate update that passes the board revision and version increment checks; the update is first written to a temporary buffer and RSA-signature-tested prior to being copied into the “read only” (i.e. active) partition of the flash. Got the key? Reflash to your heart’s delight. No key? No update. Just like in other “Tivos”, e.g. the Apple iPhone, but in this case with an extra helping of Open Sores artificial flavouring!
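For the skeptical reader, the gate just described can be modelled in a few lines. This is an illustrative sketch only: the function names are mine, and the trivial "pubkey" callable stands in for real RSA verification against LOADERKEY_A/B, which only the holder of the private factors can satisfy.

```python
# Toy model of the update gate described above: the candidate image is
# staged in a scratch buffer and signature-checked before it ever touches
# the active flash partition. All names here are illustrative, and the
# "pubkey" callable is a trivial stand-in for real RSA verification.

def verify_signature(image, signature, pubkey):
    # Stand-in for the RSA check against LOADERKEY_A / LOADERKEY_B: only
    # the holder of the private factors can mint a passing signature.
    return signature == pubkey(image)

def try_update(flash, image, signature, pubkey, board_rev, version):
    # Cheap header checks first: board revision and version increment.
    if board_rev != flash["board_rev"] or version <= flash["version"]:
        return "rejected: header checks"
    staging = bytes(image)          # temporary buffer; active copy untouched
    if not verify_signature(staging, signature, pubkey):
        return "rejected: bad signature"
    flash["active"] = staging       # only now does the image go live
    flash["version"] = version
    return "flashed"

# Demo with a toy "pubkey" closure in place of the RSA modulus:
pubkey = lambda img: b"SIG:" + img[:4]
flash = {"active": b"old", "version": 3, "board_rev": "B2-C"}
print(try_update(flash, b"evil firmware", b"garbage", pubkey, "B2-C", 4))
print(try_update(flash, b"good firmware", b"SIG:good", pubkey, "B2-C", 4))
```

No key, no update: nothing unsigned ever reaches the active partition, which is precisely what makes the published moduli the locked gate.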

But this is not even the only backdoor: there are at least two. The second one known to me thus far is the “RMA unlocker”. Anyone with access to a certain elliptic-curve key can reset the Cr50 into a manufacturing test mode, and do whatever he likes.

Google even seems to offer an accidentally-“public” API for requesting this type of reset. Let’s try it and see what happens:

$ python cr50_rma_open.py -g -i "BOB E25-A6A-A7I-E9Q-A4R"
Running cr50_rma_open version 2
SERIALNAME: 02034029-90EBD060
DEVID: 0x02034029 0x90ebd060
testing /dev/ttyUSB0
Reset flags: 0x00000800 (hard)
Reset count: 0
Chip: g cr50 B2-C
RO keyid: 0xaa66150f(prod)
RW keyid: 0xde88588d(prod)
DEV_ID: 0x02034029 0x90ebd060
Rollback: 2/2/2

found device /dev/ttyUSB0
DEVICE: /dev/ttyUSB0
RMA support added in: 0.3.3
Running Cr50 Version: 0.3.4
Cr50 setup ok
ccd: Restricted
wp: enabled
rma_auth output:

If the server fails to debug the challenge make sure the RLZ is

Sadly, the result of loading this URL was… a GMail login prompt. So I log in with a GMail account, and get:

Failed to start authentication.

Quod licet Iovi, non licet bovi! (What is permitted to Jupiter is not permitted to the ox.) Or what exactly did you expect?

And so, dear reader, if you know how to disable this landmine — or are merely interested in advancing the state of the art in vermin removal — join us on #trilema! (Ask one of the speaking folks, for voice.)

To be continued.

24 May 16:49

Cultural radar leads resistance to extinction of reason

by Melanie

Across our species there seems to be an innate ability to lock into a cultural radar based in reality which those who subscribe to socially destructive or sinister ideologies can’t detect. In the former Soviet Union, millions were able somehow to access and absorb ideas that were ruthlessly censored. Sometimes political prisoners smuggled out their writings from jail.

The intellectual dark web is our contemporary equivalent to those samizdat channels that stood up to the mind-bending process of repudiating not just certain ideas but the exercise of reason itself.

Dave Rubin says the intellectual dark web is the response to the crumbling of establishment media and politics. He describes it as an “ideas revolution”. This is not so much about bringing a particular set of ideas to the fore. It’s about saying that the freedom to think for oneself is crucial. In this era of subjective and coerced conformity, that’s the really revolutionary idea.

To read my whole Times column (£) please click here.


19 May 16:48

I haven't written in a long time, but today is a special day

by Debora Sakama Dutra
I haven't written in a long time, but today is a special day.
Almost fifteen years ago, on a clear summer night, the newspapers were announcing a great astronomical event, while my small family was going through a great trial in another country.
In search of work, my husband had found, by God's providence, a job in Switzerland. It was a unique opportunity, and also a time of much trial and learning in a post-September-11 world. We felt first-hand the pain of being strangers in a strange land, viewed with suspicion because my husband, in the eyes of many, looked Arab, while the idea grew that immigrants were stealing jobs from the Swiss.

We were nearly completing our first year of marriage when we arrived in a country that was closing itself to legal immigration from outside the European Union. We tried to learn to deal with laws, written and unwritten rules, language, currency, taxes, food, smells, climate... we plunged into the delicate attempt to navigate the mental map of another culture, walking on eggshells the whole time, in a reciprocal misunderstanding and being misunderstood, living through culture shock. In the midst of all this, God led us to a small, faithful Reformed Baptist church, which became our point of support, our oasis and safe shelter in the hard times that were to come. Months before our first child was born, my husband lost his job and, for reasons nobody quite understood, did not receive unemployment benefits. We were left adrift, as it were; he went to several job interviews, but no hiring happened, because Switzerland was preparing to enter the Schengen area and Europeans had priority. The unemployment insurance was stuck in bureaucratic limbo; we were waiting for an answer from the government, and God provided a pro bono lawyer. Our son was born, and the answer about the unemployment insurance arrived five months later. In the meantime we lived on the company's severance pay. My husband was helping some people financially, Switzerland is a very expensive country (the Swiss used to say they are poor inside their borders and automatically become rich outside them), and we still had to handle paying for the birth in a very different health system, so we were already living in extreme frugality; it was therefore no problem to keep living the same way. The brothers and sisters of the church wanted to help us financially, but we didn't want to be a burden to them.
My son had no baby shower (that didn't exist over there); he wore borrowed and donated clothes, a sister from the church lent me some maternity clothes, God taught us to be faithful in little, and we learned what it is to live in the loving care of brothers and sisters, to live with certainty in God amid life's uncertainties.

A few months before we returned to Brazil, in midsummer, Mars would be very close to Earth, something that happens every 15 years, but that year the planet would be closer than at any time in the previous 60,000 years. We decided to observe the phenomenon; we lived at the edge of town and went to the neighbouring woods. With our three-month-old baby in the carrier, we set out in search of something we could not see; some people had had the same idea, and we ran into small groups scattered along the paths. The night walk seemed a failure, like our Swiss adventure; we sat down on a bench and prayed. I remember us thanking God, who had allowed everything to happen, for the treasures that moth and rust do not destroy, received in that little church; entrusting our future to God and giving thanks for our little son. As we took the road out of the woods, we saw a great full moon and a great star, which at first we mistook for an airplane: it was Mars; everything else was silence. My heart grew quiet.
We went home, certain that above any difficulty or tribulation stands the sovereignty of God, and that He had allowed all things; despite the frustration we felt, He was caring for us. A few days later we received the news that my husband would only be able to get a job if the hiring company could prove there was no European worker able to fill the position; the crisis had already reached Europe, the telecom bubble had burst, and the doors closed.

We returned to Brazil and continued to meet many hard situations: struggles, unemployment, hopelessness, illness, depression, injustice, misunderstanding; and, by God's grace, times of hope, friends, strangers who helped us, family and brothers and sisters in the faith. Fifteen years have passed; all along, God has dealt with us, smoothed rough edges, confronted us with idols and sins jealously guarded at the bottom of the soul, and with fears, and He keeps treating old wounds, nearly healed, that stubbornly reopen. It hurt, it hurts, and every time He treats us it will hurt, because it involves sin, rebellion, denial, frustration, sadness, confrontation and suffering; but it is certain that there was and will be relief, healing, salvation, joy, peace, faith, love and hope, precious treasures graciously given, and the certainty of a burden emptied of the things that hold us back but full of stories and blessings, at times shared with brothers and sisters, husband, son, a burden that grows ever lighter on the way to eternal life.

Today our son turns 15; soon Mars will be at its closest again. I intend to show it to him and recount the blessings.
03 May 15:45

Thomas Munro: SERIALIZABLE in PostgreSQL 11... and beyond

Thanks to the tireless work of Google Summer of Code student Shubham Barai, with the support of reviewers, a mentor and a committer, PostgreSQL 11 will ship with predicate lock support for hash indexes, gin indexes and gist indexes.  These will make SERIALIZABLE transaction isolation much more efficient with those indexes, filling in some of the feature combination gaps and quirks that exist in my favourite RDBMS.

It seems like a good time to write a bit about SERIALIZABLE and how it interacts with other selected PostgreSQL features, including indexes.

A bit of background

If you want to read something a little less dry than the usual papers about transaction isolation, I recommend ACIDRain: Concurrency-Related Attacks on Database-Backed Web Applications which, among other things, discusses a transaction isolation-based attack that bankrupted a bitcoin exchange.  It also makes some interesting observations about some of PostgreSQL's rivals.  Even excluding malicious attacks, I've been working with databases long enough to have heard plenty of transaction isolation screw-up stories that I can't repeat here, including trading systems and credit limit snafus, unexpected arrest warrants and even... a double booked violin teacher.

True SERIALIZABLE using the SSI algorithm is one of my favourite PostgreSQL features and was developed by Dan Ports and Kevin Grittner for release 9.1.  From what I've seen and heard, SERIALIZABLE has a reputation among application developers as a complicated expert-level feature for solving obscure problems with terrible performance, but that isn't at all justified... at least on PostgreSQL.  As the ACIDRain paper conveys much better than I can, weaker isolation levels are in fact more complicated to use correctly with concurrency.  For non-trivial applications, error-prone ad-hoc serialisation schemes based on explicit locking are often required for correctness.  While the theory behind SSI may sound complicated, that's the database's problem!  The end user experience is the exact opposite: it's a switch you can turn on that lets you write applications that assume that each transaction runs in complete isolation, dramatically cutting down the number of scenarios you have to consider (or fail to consider).  In short, it seems that it's the weaker isolation levels that are for experts.

As an example, suppose you are writing a scheduling system for a busy school.  One transaction might consist of a series of queries to check if a teacher is available at a certain time, check if a room is available, check if any of the enrolled students has another class at the same time, check if the room's capacity would be exceeded by the currently enrolled students, and then finally schedule a class.  If you do all of this in a SERIALIZABLE transaction then you don't even have to think about concurrent modifications to any of those things.  If you use a weaker level, then you have to come up with an ad-hoc locking strategy to make sure that a concurrent transaction doesn't create a scheduling clash.

Most other databases use a pessimistic locking strategy for SERIALIZABLE, which amounts to literally serialising transactions whose read/write set conflicts.  In contrast, PostgreSQL uses a recently discovered optimistic strategy which allows more concurrency and avoids deadlocks, but in exchange for the increased concurrency it sometimes needs to abort transactions if it determines that they are incompatible with all serial orderings of the transactions.  When that happens, the application must handle a special error code by retrying the whole transaction again.  Many workloads perform better under SSI than under the traditional locking strategy, though some workloads (notably queue-like workloads where sessions compete to access a 'hot' row) may be unsuitable because they generate too many retries.  In other words, optimistic locking strategies pay off as long as the optimism is warranted.  Pessimistic strategies may still be better if every transaction truly does conflict with every other transaction.
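To make the retry obligation concrete, here is a minimal sketch of the loop such an application needs. The "database" below is simulated with a stub transaction that fails a fixed number of times; with a real driver (psycopg2, for example) you would catch its serialization-failure exception class instead, and the exception class and helper names here are mine.

```python
# Minimal retry loop for SERIALIZABLE clients: on SQLSTATE 40001
# ("serialization_failure") the transaction is aborted and simply rerun
# from scratch. The transaction here is a stub that fails a fixed number
# of times before succeeding.

class SerializationFailure(Exception):
    sqlstate = "40001"

def make_flaky_txn(failures_before_success):
    # Stub transaction: raises 40001 a fixed number of times, then commits.
    attempts = {"n": 0}
    def txn():
        attempts["n"] += 1
        if attempts["n"] <= failures_before_success:
            raise SerializationFailure("could not serialize access")
        return "committed on attempt %d" % attempts["n"]
    return txn

def run_serializable(txn, max_retries=5):
    for _ in range(max_retries):
        try:
            return txn()              # BEGIN ... COMMIT would go here
        except SerializationFailure:
            continue                  # whole transaction retried from scratch
    raise RuntimeError("gave up after %d attempts" % max_retries)

print(run_serializable(make_flaky_txn(2)))  # survives two 40001 aborts
```

This is the whole contract: the application treats 40001 as "run it again", and everything else about correctness is the database's problem.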

The type of locks used by SSI are known as "SIREAD" locks or "predicate" locks, and they are distinct from the regular PostgreSQL "heavyweight locks" in that you never wait for them and they can't deadlock.  The SSI algorithm permits spurious serialisation failures, which could be due to lack of memory for SIREAD locks leading to lock escalation (from row to page to relation level), lack of support in index types leading to lock escalation (see below), or the fundamental algorithm itself which is based on a fast conservative approximation of a circular graph detector.  We want to minimise those.  A newer algorithm called Precise SSI might be interesting for that last problem, but much lower hanging fruit is the index support.

Interaction with indexes

Unlike the regular locking that happens when you update rows, SERIALIZABLE needs to lock not only rows but also "predicates", representing gaps or hypothetical rows that would match some query.  If you run a query to check whether there is already a class booked in a given classroom at a given time and find none, and then a concurrent transaction creates a row that would have matched your query, we need a way to determine that these transactions have an overlapping read/write set.  If you don't use SERIALIZABLE, you'd probably need to serialise such pairs of transactions by making sure that there is an advisory lock or an explicit row lock on something else -- in the school example that might be the row representing the classroom (which is in some sense a kind of "advisory" lock by another name, since all transactions involved have to opt into this scheme).  Predicate locks handle the case automatically, without the user having to do that analysis, and make sure that every transaction in the system gets it right.  In PostgreSQL, performing this magic efficiently requires special support from indexes.  PostgreSQL 11 adds more of that.

When SSI was first introduced, only btrees had support for predicate locks.  Conceptually, a predicate lock represents a logical predicate such as "X = 42" against which any concurrent writes must be compared.  Indexes that support predicate locks approximate that predicate by creating SIREAD locks for index pages that would have to be modified by any conflicting insert.  In PostgreSQL 11 that behaviour now extends to gin, gist and hash.
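As a toy illustration of that page-level approximation (a greatly simplified model of my own, not PostgreSQL internals): readers take SIREAD locks on the index pages their predicate could touch, and a concurrent write conflicts when it lands on a locked page, which is also why coarse approximation yields false positives.

```python
# Toy model of page-granularity predicate locking: a reader SIREAD-locks
# the index pages that rows matching its predicate could occupy; a
# concurrent write conflicts if it falls on any locked page. Coarser
# pages mean more false-positive conflicts.

PAGE_SIZE = 10  # keys per index page, illustrative only

def page_of(key):
    return key // PAGE_SIZE

class SIReadLocks:
    def __init__(self):
        self.locks = {}  # page number -> set of reader transaction ids

    def lock_predicate(self, txn, low, high):
        # Lock every page that rows with low <= key <= high could live on.
        for page in range(page_of(low), page_of(high) + 1):
            self.locks.setdefault(page, set()).add(txn)

    def conflicting_readers(self, writer, key):
        # A write to `key` conflicts with readers holding that page's lock.
        return self.locks.get(page_of(key), set()) - {writer}

siread = SIReadLocks()
siread.lock_predicate("T1", 42, 42)          # T1 read "key = 42", found nothing
print(siread.conflicting_readers("T2", 45))  # same page: a false positive
print(siread.conflicting_readers("T2", 99))  # different page: no conflict
```

The write to key 45 never matched T1's predicate, but it shares a page with it, so it is reported anyway; that is the price of approximating "X = 42" with a page lock.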

If your query happens to use an index that doesn't support predicate locks, then PostgreSQL falls back to predicate locking the whole relation.  This means that the SSI algorithm will report a lot more false positive serialisation failures.  In other words, in earlier releases if you were using SERIALIZABLE you'd have a good reason to avoid gin, gist and hash indexes, and vice versa, because concurrent transactions would produce a ton of serialisation anomalies and thus retries in your application.  In PostgreSQL 11 you can use all of those feature together without undue retries!

One quite subtle change to the SERIALIZABLE/index interaction landed in PostgreSQL 9.6.   It made sure that unique constraint violations wouldn't hide serialisation failures.  This is important, because if serialisation failures are hidden by other errors then it prevents application programming frameworks from automatically retrying transactions for you on serialisation failure.  For example, if your Java application is using Spring Retry you might configure it to retry any incoming service request on ConcurrencyFailureException; for Ruby applications you might use transaction_retry; similar things exist for other programming environments that provide managed transactions.  That one line change to PostgreSQL was later determined to be a bug-fix and back-patched to all supported versions.  If future index types add support for unique constraints, they will also need to consider this case.

Here ends the part of this blog article that concerns solid contributions to PostgreSQL 11.  The next sections are about progressively more vaporous contributions aiming to fill in the gaps where SERIALIZABLE interacts poorly with other features.


Interaction with parallel query

The parallel query facilities in PostgreSQL 9.6, 10 and the upcoming 11 release are disabled by SERIALIZABLE.  That is, if you enable SERIALIZABLE, your queries won't be able to use more than one CPU core.  I worked on a patch to fix that problem.  Unfortunately I didn't quite manage to get it into the right shape in time to land in PostgreSQL 11, so the target is now PostgreSQL 12.  It's good that parallel query was released when it was and not held back by lack of SERIALIZABLE support, but we need to make sure that we plug gaps like these: you shouldn't have to choose between SERIALIZABLE and parallel query.


Interaction with replication

PostgreSQL allows read-only queries to be run on streaming replica servers.  It doesn't allow SERIALIZABLE to be used in those sessions though, because even read-only transactions can create serialisation anomalies.  A solution to this problem was described by Kevin Grittner.  I have written some early prototype code to test the idea (or my interpretation of it), but I ran into a few problems that are going to require some more study.

Stepping back a bit, the general idea is to extend what SERIALIZABLE READ ONLY DEFERRABLE does on a single-node database server.  Before I explain that, I'll need to explain the concept of a "safe transaction".  One of the optimisations that Kevin and Dan made in their SSI implementation is to identify points in time when READ ONLY transactions become safe, meaning that there is no way that they can either suffer a serialisation failure or cause anyone else to suffer one.  When that point is reached, PostgreSQL effectively silently drops the current transaction from SERIALIZABLE to REPEATABLE READ, or in other words from SSI (serialisable snapshot isolation) to SI (snapshot isolation) because it has proven that the result will be the same.  That allows it to forget all about SIREAD locks and the transaction dependency graph, so that it can go faster.  SERIALIZABLE READ ONLY DEFERRABLE is a way to say that you would like to begin a READ ONLY transaction and then wait until it is safe before continuing.  In other words, it effectively runs in REPEATABLE READ isolation, but waits until a moment when that'll be indistinguishable from SERIALIZABLE.  It might return immediately if no writable SERIALIZABLE transactions are running, but otherwise it'll make you wait until all concurrent writable SERIALIZABLE transactions have ended.  As far as I know, PostgreSQL's safe read only transaction concept is an original contribution not described in the earlier papers.
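As a rough illustration of that safe-snapshot rule (an illustrative model of my own, not the real implementation): a read-only snapshot records the writable SERIALIZABLE transactions concurrent with it, and becomes safe once they have all ended; DEFERRABLE simply waits for that point before starting.

```python
# Sketch of the "safe snapshot" rule described above: a READ ONLY
# SERIALIZABLE transaction becomes safe -- and can quietly be downgraded
# to plain snapshot isolation -- once every writable SERIALIZABLE
# transaction running when its snapshot was taken has ended. Writers that
# begin later cannot affect it. Illustrative model only.

class SafetyTracker:
    def __init__(self):
        self.active_writers = set()

    def begin_writer(self, txn):
        self.active_writers.add(txn)

    def end_writer(self, txn):
        self.active_writers.discard(txn)

    def take_readonly_snapshot(self):
        # Remember which concurrent writers the snapshot must outwait.
        return set(self.active_writers)

    def snapshot_is_safe(self, blockers):
        # Safe once all writers concurrent with the snapshot have ended.
        return not (blockers & self.active_writers)

t = SafetyTracker()
t.begin_writer("W1")
blockers = t.take_readonly_snapshot()  # snapshot overlaps W1
print(t.snapshot_is_safe(blockers))    # False: W1 is still running
t.begin_writer("W2")                   # later writers don't matter
t.end_writer("W1")
print(t.snapshot_is_safe(blockers))    # True: all blockers have ended
```

The standby-server proposal is essentially about shipping enough information in the WAL for a replica to evaluate this same "blockers have drained" test without seeing the primary's transactions directly.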

The leading idea for how to make SERIALIZABLE work on standby servers is to cause it to silently behave like SERIALIZABLE READ ONLY DEFERRABLE.  That's complicated though, because the standby server doesn't know anything about transactions running on the primary server.  The proposed solution is to put a small amount of extra information into the WAL that would allow standby servers to know that read only transactions (or technically snapshots) begun at certain points must be safe, or are of unknown status and must wait for a later WAL record that contains the determination.

I really hope that we can get that to work, because as with the other features listed above, it's a shame to have to choose between load balancing and SERIALIZABLE.


Interaction with SKIP LOCKED

SKIP LOCKED is the first patch that I wrote for PostgreSQL, released in 9.5.  I wrote it to scratch an itch: we used another RDBMS in my job of the time, and SKIP LOCKED was one of the features that came up as missing from PostgreSQL and potentially blocking our migration.

It's designed to support distributing explicit row locks to multiple sessions when you don't care which rows each session gets, but you want to maximise concurrency.  The main use case is consuming jobs from a job queue, but other uses cases include reservation systems (booking free seats, rooms etc) and rolled-up lazily maintained aggregation tables (finding 'dirty' rows that need to be recomputed etc).

This is what Jim Gray called a kind of "exotic isolation" in Transaction Processing: Concepts and Techniques (under the name "read-past").  As far as I can see, it's philosophically opposed to SERIALIZABLE, because it implies that you are using explicit row locks in the first place; that shouldn't be necessary under SERIALIZABLE, or we have failed.  Philosophy aside, there is a more practical problem: the rows that you skip are still predicate-locked, and so create conflicts among all the competing transactions.  Each transaction locks different rows, but only one of the concurrent transactions ever manages to complete, and you waste a lot of energy retrying.

This forces a choice between SERIALIZABLE and SKIP LOCKED, not because the features exclude each other but because the resulting performance is terrible.

The idea I have to deal with this is to do a kind of implicit SIREAD lock skipping under certain conditions.  First, let's look at a typical job queue processing query using SKIP LOCKED:

    SELECT id, foo, bar
      FROM job_queue
     WHERE state = 'NEW'
     LIMIT 1
       FOR UPDATE SKIP LOCKED;

The idea is that under SERIALIZABLE you should be able to remove the FOR UPDATE SKIP LOCKED clause and rely on SSI's normal protections.  Since you didn't specify an ORDER BY clause but did specify a LIMIT N clause, you have told the database that you don't care which N rows you get back, as long as state = 'NEW'.   This means we can change the scan order.  Seeing the LIMIT and the isolation level, the executor could skip (but not forget) any tuples that are already SIREAD-locked, and then only go back to the ones it skipped if it doesn't manage to find enough non-SIREAD-locked tuples to satisfy the query.  Instead of an explicit SKIP LOCKED mode, it's a kind of implicit REORDER LOCKED (meaning SIREAD locks) that minimises conflicts.

If you add an ORDER BY clause it wouldn't work, because you thereby remove the leeway granted by nondeterminism in the ordering.  But without it, this approach should fix a well known worst case workload for SERIALIZABLE.  Just an idea; no patch yet.
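A sketch of what that scan order might look like (purely hypothetical; this is the idea, not PostgreSQL code): satisfy the LIMIT from tuples nobody has SIREAD-locked first, and only fall back to the skipped tuples when the quota cannot otherwise be met.

```python
# Sketch of the implicit "REORDER LOCKED" idea: with LIMIT n and no
# ORDER BY, prefer tuples nobody has SIREAD-locked, but remember the
# skipped (locked) ones and fall back to them if too few unlocked tuples
# match. Hypothetical illustration, not PostgreSQL internals.

def reorder_locked_scan(rows, predicate, locked, limit):
    picked, deferred = [], []
    for row in rows:
        if len(picked) == limit:
            break
        if not predicate(row):
            continue
        if row in locked:
            deferred.append(row)   # skip, but do not forget
        else:
            picked.append(row)
    # Only if we fell short do we go back to the locked tuples.
    picked.extend(deferred[: limit - len(picked)])
    return picked

jobs = [1, 2, 3, 4]
is_new = lambda job: True
print(reorder_locked_scan(jobs, is_new, locked={1, 2}, limit=1))        # [3]
print(reorder_locked_scan(jobs, is_new, locked={1, 2, 3, 4}, limit=1))  # [1]
```

Two queue consumers running this against the same table would naturally pick different rows, which is exactly the conflict-minimising behaviour SKIP LOCKED provides explicitly today.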
09 Apr 16:15

Congressional Testimony

James Cameron's Terminator 3 was the REALLY prophetic one. That's why Skynet sent a robot back to the 1990s to prevent him from ever making it, ultimately handing the franchise over to other directors.
30 Mar 16:11

Conversational Dynamics

"You should make it so people can search for and jump into hundreds of conversations at once if they want." "Ooh, good idea! I imagine only the most well-informed people with the most critical information to share will use that feature."
17 Mar 13:18

Brian Fehrle: Migrating from MySQL to PostgreSQL - What You Should Know

Whether migrating a database or project from MySQL to PostgreSQL, or choosing PostgreSQL for a new project with only MySQL knowledge, there are a few things to know about PostgreSQL and the differences between the two database systems.

PostgreSQL is a fully open source database system released under its own license, the PostgreSQL License, which is described as “a liberal Open Source license, similar to the BSD or MIT licenses.” This has allowed the PostgreSQL Global Development Group (commonly referred to as PGDG), which develops and maintains the open source project, to improve it with help from people around the world, turning it into one of the most stable and feature-rich database solutions available. Today, PostgreSQL competes with the top proprietary and open source database systems on features, performance, and popularity.

PostgreSQL is a highly standards-compliant relational database system that’s scalable, customizable, and has a thriving community of people improving it every day.

What PostgreSQL Needs

In a previous blog post, we discussed setting up and optimizing PostgreSQL for a new project; it is a good introduction to PostgreSQL configuration and behavior.

If migrating an application from MySQL to PostgreSQL, the best place to start is to host it on hardware or a hosting platform similar to that of the source MySQL database.

On Premise

If hosting the database on premise, bare metal hosts (rather than Virtual Machines) are generally the best option for hosting PostgreSQL. Virtual Machines add some helpful features, but they cost some of the host’s power and performance, while bare metal gives the PostgreSQL software full access to the hardware, with fewer layers between it and the software. On-premise hosts also need an administrator to maintain the databases, whether a full-time employee or a contractor, whichever makes more sense for the application’s needs.

In The Cloud

Cloud hosting has come a long way in the past few years, and countless companies across the world host their databases in cloud based servers. Since cloud hosts are highly configurable, the right size and power of host can be selected for the specific needs of the database, with a cost that matches.

Depending on the hosting option used, new hosts can be provisioned quickly, memory / cpu / disk can be tweaked quickly, and even additional backup methods can be available. When choosing a cloud host, look for whether a host is dedicated or shared, dedicated being better for extremely high load databases. Another key is to make sure the IOPS available for the cloud host is good enough for the database activity needs. Even with a large memory pool for PostgreSQL, there will always be disk operations to write data to disk, or fetch data when not in memory.

Cloud Services

Since PostgreSQL is increasing in popularity, it is available on many cloud database hosting services like Heroku, Amazon AWS, and others, and is quickly catching up to the popularity of MySQL. These services allow a third party to host and manage a PostgreSQL database easily, allowing focus to remain on the application.

Concepts / term comparisons

There are a few comparisons to cover when migrating from MySQL to PostgreSQL, common configuration parameters, terms, or concepts that operate similarly but have their differences.

Database Terms

Various database terms can have different meanings within different implementations of the technology. Between MySQL and PostgreSQL, there are a few basic terms that are understood slightly differently, so a translation is sometimes needed.


In MySQL, a ‘cluster’ usually refers to multiple MySQL database hosts connected together to appear as a single database or set of databases to clients.

In PostgreSQL, when referencing a ‘cluster’, it is a single running instance of the database software and all its sub-processes, which then contains one or more databases.


In MySQL, queries can access tables from different databases at the same time (provided the user has permission to access each database).

SELECT *
FROM customer_database.customer_table t1
JOIN orders_database.order_table t2 ON t1.customer_id = t2.customer_id
WHERE t1.name = 'Bob';

However in PostgreSQL this cannot happen unless using Foreign Data Wrappers (a topic for another time). Instead, a PostgreSQL database has the option for multiple ‘schemas’ which operate similarly to databases in MySQL. Schemas contain the tables, indexes, etc, and can be accessed simultaneously by the same connection to the database that houses them.

SELECT *
FROM customer_schema.customer_table t1
JOIN orders_schema.order_table t2 ON t1.customer_id = t2.customer_id
WHERE t1.name = 'Bob';
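To mirror a multi-database MySQL layout in a single PostgreSQL database, each former MySQL database can become a schema. A minimal sketch (the schema, table, and column names here are hypothetical, matching the example above):

```sql
-- One schema per former MySQL database:
CREATE SCHEMA customer_schema;
CREATE SCHEMA orders_schema;

-- Tables then live inside the schemas:
CREATE TABLE customer_schema.customer_table (
    customer_id integer PRIMARY KEY,
    name        text
);

-- search_path controls which schemas are searched for unqualified names,
-- so code that relied on USE dbname in MySQL can often just set it:
SET search_path TO customer_schema, orders_schema, public;
SELECT * FROM customer_table;  -- resolves to customer_schema.customer_table
```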

Interfacing with PostgreSQL

In the MySQL command line client (mysql), interfacing with the database uses keywords like ‘DESCRIBE table’ or ‘SHOW TABLES’. The PostgreSQL command line client (psql) uses its own set of ‘backslash commands’. For example, instead of ‘SHOW TABLES’, PostgreSQL’s command is ‘\dt’, and instead of ‘SHOW DATABASES;’, the command is ‘\l’.

A full list of commands for ‘psql’ can be found by the backslash command ‘\?’ within psql.
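For quick reference, here is a rough mapping of common mysql client commands to their psql equivalents (both clients offer far more than shown here):

```text
MySQL (mysql client)          PostgreSQL (psql)
--------------------          -----------------
SHOW DATABASES;               \l
USE dbname;                   \c dbname
SHOW TABLES;                  \dt
DESCRIBE tablename;           \d tablename
SHOW INDEX FROM tablename;    \d tablename  (indexes listed at the bottom)
help                          \?
quit                          \q
```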

Language Support

Like MySQL, PostgreSQL has libraries and plugins for all major languages, as well as ODBC drivers along the lines of those for MySQL and Oracle. Finding a good, stable library for any language needed is an easy task.

Stored Procedures

Unlike MySQL, PostgreSQL has a wide range of supported Procedural Languages to choose from. In the base install of PostgreSQL, the supported languages are PL/pgSQL (SQL Procedural Language), PL/Tcl (Tcl Procedural Language), PL/Perl (Perl Procedural Language), and PL/Python (Python Procedural Language). Third party developers may have more languages not officially supported by the main PostgreSQL group.


  • Memory

    MySQL tunes this with key_buffer_size when using MyISAM, and with innodb_buffer_pool_size when using InnoDB.

    PostgreSQL uses shared_buffers for the main block of memory given to the database for caching data, generally set to around 1/4 of system memory unless certain scenarios require that to change. Queries that use memory for sorting use the work_mem value, which should be increased cautiously.
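As an illustration, the memory settings discussed above live in postgresql.conf. The values below are a hypothetical starting point for a dedicated host with 16 GB of RAM; the right numbers always depend on the workload:

```
# postgresql.conf (sketch, values are illustrative only)
shared_buffers = 4GB          # roughly 1/4 of system memory, as noted above
work_mem = 16MB               # per sort/hash operation, so raise cautiously
effective_cache_size = 12GB   # planner hint: total memory usable for caching
maintenance_work_mem = 512MB  # used by VACUUM, CREATE INDEX, etc.
```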

Tools for migration

Migrating to PostgreSQL can take some work, but there are tools the community has developed to help with the process. Generally they convert / migrate the data from MySQL to PostgreSQL and recreate the tables / indexes. Stored procedures and functions are a different story, and usually require manual rewriting, either in part or from the ground up.

Some example tools available are pgloader and FromMySqlToPostgreSql. Pgloader is a tool written in Common Lisp that imports data from MySQL into PostgreSQL using the COPY command, and loads data, indexes, foreign keys, and comments with data conversion to represent the data correctly in PostgreSQL as intended. FromMySqlToPostgreSql is a similar tool written in PHP, and can convert MySQL data types to PostgreSQL as well as foreign keys and indexes. Both tools are free, however many other tools (free and paid) exist and are newly developed as new versions of each database software are released.
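As a sketch of how such a migration can be driven, pgloader accepts either connection strings on the command line or a small "load file"; the connection details below are placeholders, and the exact clauses supported should be checked against the pgloader documentation:

```
LOAD DATABASE
     FROM mysql://appuser:secret@mysql-host/app_db
     INTO postgresql://appuser:secret@pg-host/app_db

 WITH include drop, create tables, create indexes,
      reset sequences, foreign keys;
```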

Converting should always be followed by an in-depth evaluation to make sure the data was converted correctly and functionality works as expected. Testing beforehand is always encouraged, for timings and data validation.

Replication Options

If coming from MySQL where replication has been used, or if replication is needed for any reason, PostgreSQL has several options available, each with its own pros and cons, depending on what is needed from replication.

  • Built In:

    By default, PostgreSQL has its own built in replication mode for Point In Time Recovery (PITR). This can be set up using either file-based log shipping, where Write Ahead Log files are shipped to a standby server where they are read and replayed, or Streaming Replication, where a read only standby server fetches transaction logs over a database connection to replay them.

    Either of these built-in options can be set up as a ‘warm standby’ or a ‘hot standby.’ A ‘warm standby’ doesn’t allow connections, but is ready to become a master at any time to replace a master having issues. A ‘hot standby’ allows read-only connections to connect and issue queries, in addition to being ready to become a read/write master at any time if needed.

  • Slony:

    One of the oldest replication tools for PostgreSQL is Slony, a trigger-based replication method that allows a high level of customization. Slony allows the setup of a Master node and any number of Replica nodes, the ability to switch the Master to any node desired, and lets the administrator choose which tables to replicate (if not all of them). It has been used not just for replicating data for failover / load balancing, but also for shipping specific data to other services, and even for minimal-downtime upgrades, since replication can go across different versions of PostgreSQL.

    Slony does have one main requirement: every table to be replicated must have either a PRIMARY KEY or a UNIQUE index without nullable columns.

  • Bucardo:

    When it comes to multi-master options, Bucardo is one of the few for PostgreSQL. Like Slony, it’s a third-party software package that sits on top of PostgreSQL. Bucardo calls itself “an asynchronous PostgreSQL replication system, allowing for both multi-master and multi-slave operations.” The main benefit is multi-master replication, which works fairly well; however, it lacks conflict resolution, so applications should be aware of possible issues and handle them accordingly.

    There are many other replication tools as well, and finding the one that works best for an application depends on the specific needs.
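As a concrete illustration of the built-in streaming replication described earlier, the configuration is roughly as follows; host names and credentials are placeholders, and the details vary by version (up to version 11 the standby settings go in recovery.conf with standby_mode = 'on', while newer releases use postgresql.conf plus an empty standby.signal file):

```
# On the primary (postgresql.conf):
wal_level = replica            # write enough WAL detail for a standby
max_wal_senders = 5            # allow standby connections
# ...plus a 'replication' entry for the standby in pg_hba.conf.

# On the standby, after restoring a base backup of the primary:
hot_standby = on               # allow read-only queries: a 'hot' standby
primary_conninfo = 'host=primary-host user=replicator password=secret'
```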



Community

PostgreSQL has a thriving community willing to help with any issues / info that may be needed.

  • IRC

    An active IRC chatroom named #postgresql is available on freenode, where administrators and developers worldwide chat about PostgreSQL and related projects / issues. There are even smaller rooms for specifics like Slony, Bucardo, and more.

  • Mailing lists

    There are a handful of PostgreSQL mailing lists for ‘general’, ‘admin’, ‘performance’, and even ‘novice’ (a great place to start if new to PostgreSQL in general). The mailing lists are subscribed to by many around the world, and provide a very useful wealth of resources to answer any question that may need answering.

    A full list of PostgreSQL mailing lists can be found at

  • User Groups

    User groups are a great place to get involved and active in the community, and many large cities worldwide have a PostgreSQL User Group (PUG) available to join and attend, and if not, consider starting one. These groups are great for networking, learning new technologies, and even just asking questions in person to people from any level of experience.

  • Documentation

    Most importantly, PostgreSQL is documented very well. Any information about configuration parameters, SQL functions, or usage can easily be learned through the official documentation provided on PostgreSQL’s website. If anything is at all unclear, the community will help through the previously outlined options.

16 Mar 22:43

Debian 9.4 released

by ris
The Debian Project has released the fourth update to Debian 9 "stretch". As usual, this update mainly adds corrections for security issues, along with a few adjustments for serious problems. "Those who frequently install updates from won't have to update many packages, and most such updates are included in the point release."
16 Mar 22:42

Firefox 59 released

by ris
Mozilla has released Firefox 59, the next iteration of Firefox Quantum. From the release notes: "On Firefox for desktop, we’ve improved page load times, added tools to annotate and crop your Firefox Screenshots, and made it easier to arrange your Top Sites on the Firefox Home page. On Firefox for Android, we’ve added support for sites that stream video using the HLS protocol."
16 Mar 22:41

Simon Riggs: PostgreSQL – The most loved RDBMS

The 2018 StackOverflow survey has just been published, with good news for PostgreSQL.

StackOverflow got more than 100,000 responses from people in a comprehensive 30 minute survey.

PostgreSQL is the third most commonly used database, with 33% of respondents, slightly behind MySQL and SQLServer, yet well ahead of other options. Early in January, the DBEngines results showed PostgreSQL in 4th place behind Oracle, yet here we see that actually Oracle heads up the Most Dreaded list along with DB2, leaving PostgreSQL to power through to 3rd place.

PostgreSQL at 62% is the second most loved database, so close behind Redis (on 64%) that they’re almost even. But then Redis is only used by 18.5% of people, and it’s very much a different beast anyway – yes, it’s a datastore, but not a full-featured database like PostgreSQL and others.

Notice that neither MySQL nor SQLServer are well loved, yet enough people use them that we can be pretty certain of that as a collective opinion.

Later we learn that SQLServer has a strong correlation with C# and that MySQL has a strong correlation with PHP/HTML/CSS/WordPress, so they are both the main database choice for those software stacks. What’s interesting there is that PostgreSQL doesn’t have any correlation towards Java, Python, Ruby etc. Or if I might interpret that differently, it is equally popular amongst developers from all languages who aren’t already using LAMP or MS stacks.

SQL is the 4th most pervasive language in use, behind Javascript, HTML and CSS. At 58.5% it is way ahead of 5th place Java at 45%.

Later we learn that 57.5% of people love SQL, which is pretty much everyone that uses it.

We’ll do some more analysis when the anonymized data is available, just to double check these analyses.

11 Mar 00:59

Dimitri Fontaine: Database Normalization and Primary Keys

In our previous article we saw three classic Database Modelization Anti-Patterns. The article also contains a reference to a Primary Key section of my book Mastering PostgreSQL in Application Development, so it’s only fair that I would now publish said Primary Key section!

So in this article, we dive into Primary Keys as a cornerstone of database normalization. It’s so important to get Primary Keys right that you would think everybody knows how to do it, and yet, most of the primary key constraints I’ve seen used in database design are actually not primary keys at all.

11 Mar 00:59

Background Apps

My plane banner company gets business by flying around with a banner showing a <div> tag, waiting for a web developer to get frustrated enough to order a matching </div>.
02 Mar 14:02

DB-Engines Ranking of database management systems, March 2018


This is the March 2018 issue of the monthly DB-Engines Ranking of database management systems.

You can find the complete and most up-to-date ranking at

Rank  DBMS                  Score     Changes
 1.   Oracle                1289.61   -13.67
 2.   MySQL                 1228.87   -23.60
 3.   Microsoft SQL Server  1104.79   -17.25
 4.   PostgreSQL             399.35   +10.97
 5.   MongoDB                340.52    +4.10
 6.   DB2                    186.66    -3.31
 7.   Microsoft Access       131.95    +1.88
 8.   Redis                  131.22    +4.21
 9.   Elasticsearch          128.54    +3.23
10.   Cassandra              123.49    +0.71
Copyright © March 2018
22 Feb 01:54

Self-Driving Issues

If most people turn into murderers all of a sudden, we'll need to push out a firmware update or something.
06 Jan 20:12

Meltdown and Spectre

New zero-day vulnerability: In addition to rowhammer, it turns out lots of servers are vulnerable to regular hammers, too.
02 Jan 22:05

Where to draw the line on true free speech

by Melanie

Offensiveness or wrong-headedness hurt no one. The claim that they do is designed to shut down legitimate debate.

The proper antidote to speech that offends is other speech. Opinion anchored in reason can be countered by other opinion. Lies can be exposed by factual evidence. Truth emerges from debate and disagreement.

The only sort of speech that deserves to be banned, on campus or elsewhere, is that which peddles true prejudice. This means speech that attacks people on the basis of an irrational hatred which by definition is immune to reasoned argument. Defending colonialism is an opinion. Saying black people are inferior is bigotry.

To read my whole Times column (£), please click here.

The post Where to draw the line on true free speech appeared first on

01 Jan 23:25

Jan Karremans: Why I picked Postgres over Oracle, part I

As with many stories, if you have something to tell, it quickly takes up a lot of space. Therefore this will be a series of blog posts on Postgres and a bit of Oracle. It will be a short series, though…

Let’s begin


I started with databases quite early in my career. RMS by Datapoint… was it really a database? Well, at least sort of. It held data in central storage, but it was a typical serial “database”. Interestingly enough, some of this stuff is still maintained today (talk about longevity!)
After switching to a more novel system, we adopted DEC (Digital Equipment Corporation) VAX, VMS and Micro VAX systems! Arguably still the best operating system around… In any case, it brought us the ability to run the only valid alternative for a database around: Oracle. With a shining Oracle version 6.2, soon to be replaced by version 7.3.4. Okay, truth be told, at that time I wasn’t really that deep into databases, so much of the significance was added later. My primary focus was on getting the job done, serving the business in making people better. Still, working with SQL and analyzing data soon became one of my hobbies.
From administering databases I moved on to a broad range of things, but I always looped back to, or stayed connected with, software and software development using databases.
Really, is there any other way? I mean, building software without using some kind of database?
At a good point in time, we were developing software using the super-trendy client-server concept. It served us well at the time and fitted the dogma of those days. No problems whatsoever. We were running our application on “fairly big boxes” for our customers (e.g. single- or double-core HP D 3000 servers), licensed through 1 or 2 Oracle Database Standard Edition One licenses, and the client software was free anyway…
Some rain has to fall
The first disconnect I experienced with licensed software was that time we needed to deploy Oracle Reports Server.
After porting our application successfully to some kind of pre-APEX framework, we needed to continue our printing facilities as before. The conclusion was to use Oracle Reports Server, which we could call to fulfill the exact same functionality as the original client-server printing agent (rwrbe60.exe, I’ll never forget) did. There was just no way we could do this without buying licenses for (I thought it was) Oracle BI Publisher, something each of our clients had to do. This made printing more expensive than the entire database setup, nearly the biggest part of the entire TCO of our product, which makes no sense at all.

More recently

This disconnect was the first one. Moving forward, I noticed and felt more and more of a disconnect between Oracle and, what I like to call, core technology. Call me what you will, but I feel that if you want to bring a database to the market and want to stay on top of your game, your focus needs to be seriously on that database.

Instead we saw ever more focus on “non-core” technology: Oracle Fusion, Oracle Applications (okay, Oracle Apps had always been there), and as time progressed, the dilution became ever greater. I grew more and more in the belief that Oracle didn’t want to be that Database Company anymore (which proved to be true in the end), but it was tough for me to believe. Here I was, having spent most of my active career focused on this technology, and now it was derailing (or so it felt to me).

We saw those final things, with the elimination of Oracle Standard Edition One, which basically forced an entire contingent of their customers either out (too expensive) or up (invest in Oracle Standard Edition Two, and deal with more cost for less functionality). What appeared to be a good thing ended up leaving a bad taste in the mouth.
And, of course… the Oracle Cloud, I am not even going to discuss that in this blog-post, sorry.

The switch to Postgres

For me the switch came in two stages. First, there was this situation where I was looking for something to do… I had completed my challenge and, through a good friend, ran into the kind people of EnterpriseDB. A company I had only little knowledge of, doing stuff for PostgreSQL (or Postgres if you like; please, no Postgré or the like, and find more about the project here), a database I had not much more knowledge of. But their challenge was very interesting: grow and show Postgres and the good things it brings to the market.

Before I knew it, I was studying Postgres and all the things that it brings. Which was easy enough in the end, as the internal workings and structures of Postgres and Oracle do not differ much. I decided to do a presentation on the differences between Postgres and Oracle in Riga. I was kindly accepted by the committee, even when I told them my original submission had changed!
A very good experience, even today, but with an unexpected consequence: the second part of the switch was Oracle’s decision to cut me from the Oracle ACE program.

It does free me up, somehow, to help database users across Europe re-evaluate their Oracle buy-in and lock-in, and to look at smarter and (much) more cost-effective ways to handle their database workloads. This finalized “the switch”, so to speak.
Meanwhile more and more people are realizing that there actually are valid alternatives to the Oracle database. After the adoption of the Oracle database as the only serious solution back in the early 1990’s, the world has changed, also for serious database applications!

End of Part I

A link to the follow-up blog post will be placed here shortly.

The post Why I picked Postgres over Oracle, part I appeared first on Johnnyq72 and was originally written by Johnnyq72.

25 Dec 14:19

Release of Sword Project version 1.8.0

by Refdoc

I would like to take this chance to announce the immediate availability of SWORD release 1.8.0. I know this release has been a long time in coming, but the long wait comes with lots of benefits for users, developers, and maintainers. The benefits to users and developers are mentioned elsewhere, throughout the code and other places. The main benefit to maintainers is that there are now automated tests in place and the release process is automated. This means that future releases on the 1.8 branch can be easily executed whenever needed. Have a Merry Christmas, everyone! And keep your eyes open for a 1.8.1 in the not too distant future to fix up build issues in the binding code. Otherwise, you can get the code you're looking for below:

MD5: 095dbd723738c2a232c85685a11827a8 sword-1.8.0.tar.gz
SHA512: c45f3135255322a77e955297997db2529f31b397c42cc4b9474dc6ec8d329b2233b292078979de5fbf33cad4a1a607aabb66f86501072a729d68e9fc840c8c8e sword-1.8.0.tar.gz
URL: sword-1.8.0.tar.gz

--Greg
25 Dec 14:18

The UN theatre of hatred

by Melanie

Many people are understandably baffled by the recent UN vote condemning President Trump’s recognition of Jerusalem as Israel’s capital. Since such a vote has zero practical effect, they ask, what was the point of it?

Well indeed. As the American ambassador to the UN Nikki Haley said in her barnstorming response, America will still be moving its embassy to Jerusalem regardless of the UN’s opinion.

The resolution didn’t need to have any practical import. It was merely part of the UN’s theatre of hatred, the malevolent campaign it has waged for decades against Israel and Israel alone, as a result of the preponderance of tyrannies, dictatorships, kleptocracies and genocidal antisemitic regimes that make up what’s called the UN’s “non-aligned block” and which are united in their desire that Israel should be wiped off the map.

So egregious is this hypocrisy in singling out Israel, the sole democracy and upholder of human rights in the region while ignoring the brutal and murderous record of those tyrannies, dictatorships, kleptocracies and genocidal antisemitic regimes, that even a CNN correspondent has been moved to call this out. Jake Tapper tweeted that among the 128 countries that voted in favor of the UN resolution condemning the US decision to move the Israeli embassy to Jerusalem were “some countries with some rather questionable records of their own”.

You don’t say. The shocking thing is that so many democratic nations voted alongside these tyrannies: nations such as Germany, Belgium, Ireland, Italy, Luxembourg, the Netherlands, most disappointingly India and, most sickening (to me, anyway), the UK.

Britain, the historic cradle of liberty and democracy and which once fought to defend freedom, has now made common cause with China, Iran, Libya, North Korea and Russia in their joint aim of denying the right of the Jewish people to declare, in accordance with law and history, the capital city of their own country, a right the UK and these other states would deny to no other people or state. What a disgrace.

What on earth did the UN think it was doing? What does Britain’s Prime Minister Theresa May think she’s doing? Does nobody in the British government have a clue about upholding international law or sovereignty? For the real point about this UN vote was that, on this occasion, the principal target wasn’t actually Israel. It was America, and its sovereign right to govern itself. The UN was telling the United States it was not entitled to conduct its own foreign policy in the way it thinks fit.

As Brook Goldstein of the Lawfare Project has observed, this contravenes the UN’s own charter:

“Article 2(7) of the UN Charter is crystal clear: ‘Nothing contained in the present Charter shall authorize the United Nations to intervene in matters which are essentially within the domestic jurisdiction of any state.’ Today’s General Assembly resolution is therefore extralegal and transparently political.

“The UN was built on the principle of respect for the sovereignty of member states (known legally as complementarity), with full awareness that independent nations of the world must make policy decisions in the best interests of their domestic constituencies. The moment the institution begins to attack that very sovereignty is the moment the UN loses all credibility, authority and international deference.”

That’s why the most significant part of Nikki Haley’s response was where she said this:

“The United States will remember this day in which it was singled out for attack in the General Assembly for the very act of exercising our right as a sovereign nation. We will remember it when we are called upon to once again make the world’s largest contribution to the United Nations. And we will remember it when so many countries come calling on us, as they so often do, to pay even more and to use our influence for their benefit.”

For decades, the UN’s malicious double standard in repeatedly singling out Israel for condemnation has constituted the negation of its foundational ideals of global justice and peace. The UN has become instead the world’s principal engine of institutionalised Jew-hatred. Now it has crossed another line altogether. The Jerusalem vote could just be the point at which a US President finally decides that America’s tolerance towards the malign global incubus that the UN has become is now at an end.

The post The UN theatre of hatred appeared first on

14 Dec 00:10

Seven Years

12 Dec 19:40

Brazil's Gripen NG Programme: What's Next?

by Saab AB

In an interview with the DefesaNet, Mikael Franzén, Director and head of Business Unit Gripen Brazil, Saab, throws light on the Gripen E/F (called NG in Brazil) project so far and plans for the future.

 “The most important thing that took place this year was the first flight of the 39-8 aircraft, and it was very successful,” he says.

Other than that, the next big achievement was the development of the Wide Area Display (WAD) for Gripen. WAD will have the basic display software by AEL and tactical software by Saab.

About the next main goals, he says that Saab is currently building a test aircraft which is aimed to fly in 2019. "In the meantime, we will perform tests on various subsystems for the fighter jets – it’s going to be a very intense test period. We are also going to work on furthering the development of the tactical system for the aircraft, as well as the two-seat version, which is still in its early stages," he says. 

Post the first test flight, Saab has been in a phase of experimenting with speed and altitudes, various external loads etc.

Development work for the new cockpit is also set to start soon.

Read the full interview here.

Published: 12/12/2017 11:36 AM
12 Dec 19:29

Radio Lockdown: Current Status of Your Device Freedom


For more than two years the Free Software Foundation Europe has worked on the issue of Radio Lockdown, introduced by a European directive which may hinder users from loading software onto their radio devices like mobile phones, laptops and routers. We have informed the public and talked to decision makers to fix critical points of the directive. There is still much to do to protect freedom and IT security in our radio devices. Read about the latest proceedings and the next steps.

In 2014, the European Parliament passed the Radio Equipment Directive which, among other regulations, makes vendors of radio hardware responsible for preventing users from installing software which may alter the devices' radio parameters to break applicable radio regulations. While we share the desire to keep radio frequencies clean, the directive's approach will have negative implications for users' rights and Free Software, fair competition, innovation and the environment – mostly without equal benefits for security.

[R]adio equipment [shall support] certain features in order to ensure that software can only be loaded into the radio equipment where the compliance of the combination of the radio equipment and software has been demonstrated. – Article 3(3)(i) of the Radio Equipment Directive 2014/53/EU

This concern is shared by more than 50 organisations and businesses which signed our Joint Statement against Radio Lockdown, a result of our ongoing exchange and cooperation with the Free Software community in Europe and beyond.

The Radio Equipment Directive was put in effect in June 2017, but the classes of devices affected by the controversial Article 3(3)(i), which causes the Radio Lockdown, have not yet been defined. This means the directive doesn't concern any existing hardware yet. The definition of what hardware devices are covered will be decided on by the European Commission through a delegated act and is expected to be finished at the earliest by the end of 2018.

The Commission shall be empowered to adopt delegated acts in accordance with Article 44 specifying which categories or classes of radio equipment are concerned by each of the requirements [...] – Article 3(3), paragraph 2 of 2014/53/EU

However, that list is already being prepared in the Expert Group on Reconfigurable Radio Systems, a body of member state authorities, organisations, and individuals whose task is to assist the European Commission with drafting the delegated acts to activate Article 3(3)(i). The FSFE applied to become a member of this committee but was rejected. The concern that the members of the Expert Group do not sufficiently represent civil society and the broad range of software users has also been raised during a recent meeting in the European Parliament.

Nevertheless, we are working together with organisations and companies to protect user freedoms on radio devices and keep in touch with members of the expert group. For example, we have shared our expertise for case studies and impact assessments drafted by the group members. We are also looking forward to a public consultation phase to officially present our arguments and improvement suggestions and allow other entities to share their opinion.

All our activities aim to protect Free Software and user rights on current and future radio devices. This is more important than ever since only a few members of the expert group seem to understand the importance of loading software on radio devices for IT security, for example critical updates on hardware which is not or only sporadically maintained by the original vendor. We will continue our efforts to make decision makers understand that Free Software (a.k.a. Open Source Software) is crucial for network security, science, education, and technical innovation. Therefore, broad exceptions in the class definition are necessary.

Conducting such lengthy policy activities requires a lot of resources for non-profit organisations like the FSFE. Please consider helping us by joining as an individual supporter today or a corporate donor to enable our work.

Support FSFE, join the Fellowship
Make a one time donation

08 Dec 17:29

Bad Code

"Oh my God, why did you scotch-tape a bunch of hammers together?" "It's ok! Nothing depends on this wall being destroyed efficiently."
07 Dec 20:49

Who's Following Trump's Lead on Jerusalem?

According to reports in the Israeli press, several other countries will follow President Trump's lead and move their Israeli embassies to Jerusalem. Who is doing this and why speaks volumes about the moral condition of the world.
06 Dec 17:58

Dutch government publishes large project as Free Software

The Dutch Ministry of the Interior and Kingdom Relations released the source code and documentation of Basisregistratie Personen (BRP), a 100M€ IT system that registers information about inhabitants within the Netherlands. This comes as a great success for Public Code, and the FSFE applauds the Dutch government's shift to Free Software.

Operation BRP is an IT project by the Dutch government that has been in the works since 2004. It has cost Dutch taxpayers upwards of 100 million Euros and has endured three failed attempts at revival, without anything to show for it. From the outside, with very little information to go on, it was unclear what exactly was costing taxpayers so much money. After the plug was pulled on the project in July of this year, the former interior minister agreed, under pressure from Parliament, to publish the source code in order to offer transparency about the failed project. Secretary of state Knops has now gone beyond that promise and released the source code as Free Software (a.k.a. Open Source Software) to the public.

In 2013, when the first warning signs appeared, the former interior minister initially wanted to address concerns about the project by providing limited parts of the source code to a limited number of people under certain restrictive conditions. The ministry has since made a complete about-face, releasing a snapshot of the (allegedly) complete source code and documentation under the terms of the GNU Affero General Public License, with the development history to follow soon.

In a letter to Dutch municipalities earlier in November, secretary of state Knops said that he is convinced of the need for an even playing field for all parties, and that he intends to "let the publication happen under open source terms". He went on to say: "What has been realised in operation BRP has namely been financed with public funds. Software that is built on top of this source code should in turn be available to the public again."

These statements are an echo of the Free Software Foundation Europe's Public Money, Public Code campaign, in which we implore public administrations to release software funded by the public as Free Software available to the citizenry that paid for it.

The echoes of 'Public Money, Public Code' do not stop there. In a letter to the Dutch parliament Wednesday 29 November, the secretary of state writes about the AGPL: "The license terms assure that changes to the source code are also made publicly available. In this way, reuse is further supported. The AGPL offers the best guarantee for this, and besides the GPL (General Public License), sees a lot of use and support in the open source community.

"Publication will happen free of charge so that, in the public interest, an even playing field is created for everyone who wants to reuse this code."

This is big news from the Netherlands and an unprecedented act of transparency by the Dutch government. Judging by a report to the Ministry of the Interior on publishing government software as Free Software (Open Source Software), it seems this will happen more often. The report describes Free Software as making the government more transparent, lowering costs, increasing innovation, forming the foundation of a digital participation society, and increasing the quality of code.

"We applaud the Dutch government for releasing the source code for BRP. We have been asking for this way of working since 2001, and it is good to see that the government is finally taking steps towards Free Software. In the future, we hope that source code will be released at an earlier stage of development, which we believe in this case would have brought issues to light sooner", says Maurice Verheesen, coordinator of FSFE Netherlands.

If you like our campaign "Public Money, Public Code", please become a supporter today to enable our work!


01 Dec 17:36

Pavel Golub: Code Quality Comparison of Firebird, MySQL, and PostgreSQL

I read a very interesting post today, “Code Quality Comparison of Firebird, MySQL, and PostgreSQL”, about static analysis of three open-source RDBMSes. And I wonder: should we use static code analyzers, e.g. PVS-Studio, on an ongoing basis?


So, the code-quality rankings are as follows:

    • 1st place: Firebird and PostgreSQL.
    • 2nd place: MySQL.

Please remember that any review or comparison, including this one, is subjective. Different analysis approaches may produce different results (though this caveat applies mostly to Firebird and PostgreSQL, not to MySQL).
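To make the discussion concrete, here is a minimal sketch of the kind of defect static analyzers such as PVS-Studio routinely flag in large C codebases. This is a hypothetical snippet, not code from any of the three databases; the `point` struct and function names are invented for illustration. The bug class shown (an identical subexpression left behind by copy-paste) compiles cleanly and often "works" in testing, which is exactly why tool-assisted review pays off on an ongoing basis.

```c
#include <assert.h>

/* Hypothetical example of a copy-paste defect of the kind that
 * static analyzers flag (e.g. PVS-Studio's V501 diagnostic:
 * identical sub-expressions around an operator). */
typedef struct { int x, y; } point;

/* Buggy version: 'a.y == b.x' was almost certainly meant to be
 * 'a.y == b.y'. The compiler accepts it without complaint. */
static int points_equal_buggy(point a, point b) {
    return a.x == b.x && a.y == b.x;   /* <- analyzer warns here */
}

/* Corrected version. */
static int points_equal(point a, point b) {
    return a.x == b.x && a.y == b.y;
}
```

For a pair like {1, 1} and {1, 5}, the buggy version wrongly reports equality because both comparisons collapse onto the same field, while the corrected version does not; a human reviewer skims past this easily, which is the argument for running such analyzers continuously rather than as a one-off audit.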


27 Nov 12:14

The Moon and the Great Wall

And arguably sunspots, on rare occasions. But even if they count, it takes ideal conditions and you might hurt your eyes.
21 Nov 19:30

Pull the udder one

by Melanie

The surprise publishing bestseller of the autumn is the slim volume The Secret Life of Cows. The author, Rosamund Young, chronicles the life of Stephanie, Ivor, Olivia, Alice, Jake and the rest of the herd on her Worcestershire organic farm. She regards every bovine as an individual with a distinctive character and a full range of emotions and experiences. Their lives, she writes, are as full and varied as our own.

The secret life of cows may be richer than we realise. The secret life of humans, however, is more brutish than we care to admit. Ascribing human characteristics to animals won’t mean treating them more kindly. It means treating human beings rather worse.

To read my whole Times column (£), please click here.


09 Nov 18:11

32 European ministers call for more Free Software in governmental infrastructure

On 6 October, 32 European Ministers in charge of eGovernment policy signed the Tallinn Declaration on eGovernment that calls for more collaboration, interoperable solutions, and sharing of good practices throughout public administrations and across borders. Amongst other things, the EU ministers recognised the need to make more use of Free Software solutions and Open Standards when (re)building governmental digital systems with EU funds.

The Tallinn Declaration, led by the Estonian EU presidency, was adopted on 6 October 2017. It is a ministerial declaration that marks a new political commitment at the European Union (EU) and European Free Trade Area (EFTA) level to priorities that ensure user-centric, cross-border digital public services for both citizens and businesses. While it has no legislative power, the declaration commits its signatories to the digital transformation of public administrations through a set of commonly agreed principles and actions.

The FSFE has previously submitted its input for the aforementioned declaration during the public consultation round, asking for greater inclusion of Free Software in delivering truly inclusive, trustworthy and interoperable digital services to all citizens and businesses across the EU.

The adopted Tallinn Declaration proves to be a forward-looking document that acknowledges the importance of Free Software in ensuring the principle of 'interoperability by default', and expresses the will of all signatory countries to:

"make more use of open source solutions and/or open standards when (re)building ICT systems and solutions (among else, to avoid vendor lock-ins)[...]"

Additionally, the signatories call upon the European Commission to:

"consider strengthening the requirements for use of open source solutions and standards when (re)building of ICT systems and solutions takes place with EU funding, including by an appropriate open licence policy – by 2020."

The last point is especially noteworthy, as it explicitly calls on the European Commission to use Free Software and Open Standards when building its ICT infrastructure with EU funds. This is in line with our "Public Money, Public Code" campaign, which demands that all publicly financed software developed for the public sector be made publicly available under Free Software licences.

What's next?

The Tallinn Declaration sets several deadlines for its implementation over the next few years, including an annual presentation on the progress of implementation in the respective EU and EFTA countries through the eGovernment Action Plan Steering Board. The signatories also called upon the Austrian Presidency of the Council of the EU to evaluate the implementation of the Tallinn Declaration in autumn 2018.

"The Declaration expresses the political will of the EU and EFTA countries to digitise their governments in the most user-friendly and efficient way. The fact that it explicitly recognises the role of Free Software and Open Standards for a trustworthy, transparent and open eGovernment on a high level, along with a demand for strengthened reuse of ICT solutions based on Free Software in the EU public sector, is a valuable step forward to establishing a "Public Money, Public Code" reality across Europe", says Polina Malaja, the FSFE's policy analyst.


07 Nov 12:43

Netanyahu’s master class on British TV

by Melanie

Whatever you think about Israel’s PM Benjamin Netanyahu, this is a master class in how to present the case, not just for Israel but for rational western policy on the manifold and gathering threats to the world within the Middle East – of which the overwhelming threat by far is posed by Iran. Watch how he calmly copes with the usual boiler-plate prejudices about the Palestinians and then makes the points that so badly need to be made to a British audience – such as pointing out that Israel has helped save many British lives. Not a fact that the British often hear.
