"Now imagine if you owned a computing device that you could easily fix yourself and inexpensively upgrade as needed. So, instead of having to shell out for a completely new computer, you could simply spend around US$50 to upgrade — which, by the way, you could easily do in SECONDS, by pushing a button on the side of your device and just popping in a new computer card. Doesn’t that sound like the way it should be?"
The project is being developed by Luke Kenneth Casson Leighton of Rhombus-Tech and is sponsored by Christopher Waid of ThinkPenguin, a company that sells multiple RYF-certified hardware products. It is exciting to see passionate free software advocates in our community working with OEMs to produce a computer hardware product capable of achieving RYF certification. We hope that this is the first of many computing systems they are able to design and build that respect your freedom.
The Libre Tea Computer Card is built with an Allwinner A20 dual core processor configured to use the main CPU for graphics; it has 2 GB of RAM and 8 GB of NAND Flash; and it will come pre-installed with Parabola GNU/Linux-libre, an FSF-endorsed fully-free operating system.
We encourage you to back the Libre Tea Computer Card. We'll have to do another evaluation once it is actually produced to be sure it meets our certification standards, but we have high hopes. Their funding deadline is August 26th, so don't delay!
An Uber technical blog of July 2016 described the perception of “many Postgres limitations”. Regrettably, a number of important technical points are either not correct or not wholly correct, because they overlook many optimizations in PostgreSQL that were added specifically to address the cases discussed. In most cases, those limitations were actually true in the distant past of 5-10 years ago, which leaves us with the impression of comparing MySQL as it is now with PostgreSQL as it was a decade ago. This is no doubt because the post was actually written some time ago and only recently published.
This document looks in detail at those points to ensure we have detailed information available for a wider audience, so nobody is confused by PostgreSQL’s capabilities.
Having said that, I very much welcome the raising of those points and also wish to show that the PostgreSQL project and 2ndQuadrant are responsive to feedback. To do this, detailed follow-ups are noted for immediate action.
These points were noted in the blog:
* Poor replica MVCC support
* Inefficient architecture for writes
* Inefficient data replication
* Difficulty upgrading to newer releases
Poor replica MVCC support
“If a streaming replica has an open transaction, updates to the database are blocked if they affect rows held open by the transaction. In this situation, Postgres pauses the WAL application thread until the transaction has ended.”
This is true, though it misses the point that a parameter exists to control that behaviour, so that when
hot_standby_feedback = on
the described behaviour does not occur in normal circumstances. This has been supported since PostgreSQL 9.1 (2011). If you’re not using it, please consider doing so.
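As a minimal sketch of how to enable it on a standby (assuming PostgreSQL 9.4 or later, where ALTER SYSTEM is available; on older releases you would edit postgresql.conf and reload):

    -- run on the standby server
    ALTER SYSTEM SET hot_standby_feedback = on;  -- persist the setting
    SELECT pg_reload_conf();                     -- apply it without a restart
    SHOW hot_standby_feedback;                   -- should now read 'on'

The trade-off is that long-running standby queries can then delay row cleanup on the primary instead of being cancelled.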
Later, this comment leads to the conclusion “Postgres replicas … can’t implement MVCC” which is wholly incorrect and a major misunderstanding. PostgreSQL replicas certainly allow access to data with full MVCC semantics.
Inefficient architecture for writes
“If old transactions need to reference a row for the purposes of MVCC MySQL copies the old row into a special area called the rollback segment.”
“This design also makes vacuuming and compaction more efficient. All of the rows that are eligible to be vacuumed are available directly in the rollback segment. By comparison, the Postgres autovacuum process has to do full table scans to identify deleted rows.”
Moving old rows to a rollback segment adds time to the write path for UPDATEs, but that point isn’t mentioned. PostgreSQL’s architecture is more efficient for writes in relation to MVCC because it doesn’t need to do as many push-ups.
Later, if the workload requires access to old rows from the rollback segment, that is also more expensive. Such access is not always needed, yet it is very common for longer-running queries to need older data. However, if all transactions are of roughly the same short duration, access to the rollback segment is seldom needed, which just happens to make benchmark results look good while real-world applications suffer.
By contrast, PostgreSQL has multiple optimizations that improve vacuuming and compaction. First, an optimization called HOT improves vacuuming in heavily updated parts of a table (since 2007), while the visibility map ensures that VACUUM can avoid full table scans (since 2008).
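As an aside (an illustrative query, not something from either post), you can see how much of a table VACUUM is able to skip by comparing relpages with relallvisible, the number of pages marked all-visible in the visibility map:

    -- fraction of each table's pages that VACUUM can skip entirely
    SELECT relname,
           relpages,
           relallvisible,
           round(100.0 * relallvisible / greatest(relpages, 1), 1) AS pct_all_visible
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY relpages DESC
    LIMIT 10;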
Whether rollback segments help or hinder an application depends on the specific use case, and it’s much more complex than it first appears.
Next, we discuss indexes…
“With Postgres, the primary index and secondary indexes all point directly to the on-disk tuple offsets.”
This point is correct; PostgreSQL indexes currently use a direct pointer between the index entry and the heap tuple version. InnoDB secondary indexes are “indirect indexes” in that they do not refer to the heap tuple version directly, they contain the value of the Primary Key (PK) of the tuple.
Comparing direct and indirect indexes, we see:
* direct indexes have links that go index → heap
* indirect indexes have links that go index → PK index → heap
Indirect indexes store the PK values of the rows they index, so if the PK is wide or contains multiple columns the index will use significantly more disk space than a direct index, making it less efficient for both reads and writes (as stated in the MySQL docs). Indirect indexes also have an index search time at least 2 times worse than direct indexes, which slows down both reads (SELECTs) and searched writes (UPDATEs and DELETEs).
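A back-of-the-envelope illustration (the PK size here is an assumption, not a figure from either post): a PostgreSQL index entry references the row with a 6-byte heap pointer, so with a hypothetical 32-byte composite PK each indirect secondary-index entry carries roughly 32 − 6 = 26 extra bytes before per-tuple headers and alignment, multiplied across every secondary index and every row.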
Search performance that is at least 100% slower (two index traversals instead of one) is understated as just a “slight disadvantage” [of MySQL].
“When a tuple location changes, all indexes must be updated.”
This is misleading, since it ignores the important Heap Only Tuple (HOT) optimization that was introduced in PostgreSQL 8.3 in 2007. The HOT optimization means that in the common case, a new row version does not require any new index entries, a point which effectively nullifies the various conclusions that are drawn from it regarding both inefficiency of writes and inefficiency of the replication protocol.
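One way to see how often HOT applies in practice (an illustrative query, not one used by either article) is to compare total updates with HOT updates in the statistics views:

    -- per-table share of updates that were HOT, i.e. wrote no new index entries
    SELECT relname,
           n_tup_upd,
           n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / greatest(n_tup_upd, 1), 1) AS pct_hot
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd DESC
    LIMIT 10;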
“However, these indexes still must be updated with the creation of a new row tuple in the database for the row record. For tables with a large number of secondary indexes, these superfluous steps can cause enormous inefficiencies.”
Because it ignores the HOT optimization, this description presents what is actually the uncommon case as if it were the common one. It is currently true that, for direct indexes, if any one of the indexed columns changes then new index pointers are required for all indexes. It seems possible for PostgreSQL to optimize this further; I’ve come up with various designs and will be looking to implement the best of them fairly soon.
Although they have a higher read overhead, indirect indexes have the useful property that if a table has multiple secondary indexes, then an update of one secondary index does not affect the other secondary indexes as long as their column values remain unchanged. This makes indirect indexes useful only for the case where an application needs indexes that are read infrequently, yet the table has a high update rate that does not touch the indexed columns.
Thus, it is possible to construct cases in which PostgreSQL consistently beats InnoDB, or vice versa. In the “common case” PostgreSQL beats InnoDB on reads and is roughly equal on writes for btree access. What we should note is that PostgreSQL has the widest selection of index types of any database system and this is an area of strength, not weakness.
The current architecture of PostgreSQL is that all index types are “direct”, whereas in InnoDB primary indexes are “direct” and secondary indexes are “indirect”. There is no inherent architectural limitation that prevents PostgreSQL from also offering indirect indexes, though it is true that they have not been added yet.
We’ve done a short feasibility study and it appears straightforward to implement indirect indexes for PostgreSQL, as an option at create index time. We will pursue this if the HOT optimizations discussed above aren’t as useful or possible, giving us a second approach for further optimization. Additional index optimizations have also been suggested.
Inefficient data replication
“However, the verbosity of the Postgres replication protocol can still cause an overwhelming amount of data for a database that uses a lot of indexes.”
Again, these comments discuss MySQL replication, which can be characterized as logical replication. PostgreSQL provides both physical and logical replication. All of the benefits discussed for MySQL replication are shared by PostgreSQL’s logical replication. Physical replication also has benefits in many cases, which is why PostgreSQL provides both logical and physical replication as options.
PostgreSQL’s physical replication protocol itself is not verbose; this comment is really the same point as the “inefficient writes” discussion: if PostgreSQL optimizes away index updates, then they do not generate any entries in the transaction log (WAL), so there is no inefficiency. The comment also never says what is meant by “overwhelming”. What this discussion doesn’t consider is the performance of replication apply. Physical replication is faster than logical replication because including the index pointers in the replication stream allows us to insert them directly into the index, rather than needing to search the index for the right point for insertion. Including the index pointers actually increases, not decreases, performance, even though the replication bandwidth requirement is higher.
PostgreSQL Logical Replication is available via 2ndQuadrant’s pglogical and will be available in PostgreSQL 10.0 in core.
MySQL “Statement-based replication is usually the most compact but can require replicas to apply expensive statements to update small amounts of data. On the other hand, row-based replication, akin to the Postgres WAL replication, is more verbose but results in more predictable and efficient updates on the replicas.”
Yes, statement-based replication is more efficient in terms of bandwidth, but it is less efficient in terms of the performance of applying changes on the receiving servers. Most importantly, it leads to various problems, and in various cases replication may not work as expected, drawing developers into diagnosing operational problems. PostgreSQL probably won’t adopt statement-based replication.
Difficulty upgrading to newer releases
“the basic design of the on-disk representation in 9.2 hasn’t changed significantly since at least the Postgres 8.3 release (now nearly 10 years old).”
This is described as if it were a bad thing, but actually it’s a good thing and is what allows major version upgrades to occur quickly without unloading and reloading data.
“We started out with Postgres 9.1 and successfully completed the upgrade process to move to Postgres 9.2. However, the process took so many hours that we couldn’t afford to do the process again. By the time Postgres 9.3 came out, Uber’s growth increased our dataset substantially, so the upgrade would have been even lengthier.”
The pg_upgrade -k option provides an easy and effective upgrade mechanism. pg_upgrade does still require some downtime, which is why 2ndQuadrant has been actively working on logical replication for some years, focusing on zero-downtime upgrade.
Although the logical replication upgrade is only currently available from 9.4 to 9.5, 9.4 to 9.6 and 9.5 to 9.6, there is more good news coming. 2ndQuadrant is working on highly efficient upgrades from earlier major releases, starting with 9.1 → 9.5/9.6. When PostgreSQL 9.1 is desupported later in 2016 this will allow people using 9.1 to upgrade to the latest versions. This is available as a private service, so if you need zero-downtime upgrade from 9.1 upwards please get in touch.
In 2017, upgrades from 9.2 and 9.3 will also be supported, allowing everybody to upgrade efficiently with zero-downtime prior to the de-supporting of those versions.
A few days ago Uber published the article “Why Uber Engineering Switched from Postgres to MySQL”. I didn’t read the article right away because my inner nerd told me to do some home improvements instead. While doing so my mailbox was filling up with questions like “Is PostgreSQL really that lousy?”. Knowing that PostgreSQL is not generally lousy, these messages made me wonder what the heck is written in this article. This post is an attempt to make sense out of Uber’s article.
In my opinion, Uber’s article basically says that they found MySQL to be a better fit for their environment than PostgreSQL. However, the article does a lousy job of conveying this message. Instead of writing “PostgreSQL has some limitations for update-heavy use-cases,” the article just says “Inefficient architecture for writes,” for example. If you don’t have an update-heavy use-case, don’t worry about the problems described in Uber’s article.
In this post I’ll explain why I think Uber’s article must not be taken as general advice about the choice of databases, why MySQL might still be a good fit for Uber, and why success might cause more problems than just scaling the data store.
The first problem Uber’s article describes in great, yet incomplete detail is that PostgreSQL always needs to update all indexes on a table when updating rows in the table. MySQL with InnoDB, on the other hand, needs to update only those indexes that contain updated columns. The PostgreSQL approach causes more disk IOs for updates that change non-indexed columns (“Write Amplification” in the article). If this is such a big problem to Uber, these updates might be a big part of their overall workload.
However, a little more speculation is possible based upon something that is not written in Uber’s article: the article doesn’t mention PostgreSQL’s Heap-Only Tuples (HOT). From the PostgreSQL source, HOT is useful for the special case “where a tuple is repeatedly updated in ways that do not change its indexed columns.” In that case, PostgreSQL is able to do the update without touching any index if the new row version can be stored in the same page as the previous version. The latter condition can be tuned using the fillfactor setting. Assuming Uber’s engineers are aware of this, it means HOT is no solution to their problem, because the updates they run at high frequency affect at least one indexed column.
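As a rough sketch of that tuning knob (the table name here is hypothetical): lowering fillfactor leaves free space on each heap page, so a new row version is more likely to fit on the same page and qualify for HOT:

    -- leave ~30% free space on each page of a hypothetical 'trips' table
    ALTER TABLE trips SET (fillfactor = 70);
    -- only newly written pages honour the setting; rewriting the table (e.g. VACUUM FULL) applies it everywhere
    VACUUM FULL trips;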
The assumption that their high-frequency updates touch at least one indexed column is also backed by the following sentence in the article: “if we have a table with a dozen indexes defined on it, an update to a field that is only covered by a single index must be propagated into all 12 indexes to reflect the ctid for the new row”. It explicitly says “only covered by a single index”, which is the edge case (just one index); otherwise PostgreSQL’s HOT would solve the problem.
[Side note: I’m genuinely curious whether the number of indexes they have could be reduced; index redesign is my kind of challenge. However, it is perfectly possible that those indexes are used sparingly, yet are important when they are used.]
It seems that they are running many updates that change at least one indexed column, but still relatively few indexed columns compared to the “dozen” indexes the table has. If this is a predominant use-case, the article’s argument to use MySQL over PostgreSQL makes sense.
There is one more statement about their use-case that caught my attention: the article explains that MySQL/InnoDB uses clustered indexes and also admits that “This design means that InnoDB is at a slight disadvantage to Postgres when doing a secondary key lookup, since two indexes must be searched with InnoDB compared to just one for Postgres.” I’ve previously written about this problem (“the clustered index penalty”) in context of SQL Server.
What caught my attention is that they describe the clustered index penalty as a “slight disadvantage”. In my opinion, it is a pretty big disadvantage if you run many queries that use secondary indexes. If it is only a slight disadvantage to them, it might suggest that those indexes are used rather seldom. That would mean they are mostly searching by primary key (where there is no clustered index penalty to pay). Note that I wrote “searching” rather than “selecting”. The reason is that the clustered index penalty affects any statement that has a where clause, not just SELECT. That also implies that the high-frequency updates are mostly based on the primary key.
Finally, there is another omission that tells me something about their queries: they don’t mention PostgreSQL’s limited ability to do index-only scans. Especially in an update-heavy database, the PostgreSQL implementation of index-only scans is pretty much useless. I’d even say this is the single issue that affects most of my clients. I already blogged about this in 2011. In 2012, PostgreSQL 9.2 got limited support for index-only scans (it works only for mostly static data). In 2014 I even raised one aspect of my concern at PgCon. However, Uber doesn’t complain about that. Select speed is not their problem. I guess query speed is generally solved by running the selects on the replicas (see below) and possibly by mostly searching on the primary key.
By now, their use-case seems to be a better fit for a key/value store. And guess what: InnoDB is a pretty solid and popular key/value store. There are even packages that bundle InnoDB with some (very limited) SQL front-ends: MySQL and MariaDB are the most popular ones, I think. Excuse the sarcasm. But seriously: if you basically need a key/value store and occasionally want to run a simple SQL query, MySQL (or MariaDB) is a reasonable choice. I guess it is at least a better choice than any random NoSQL key/value store that just started offering an even more limited SQL-ish query language. Uber, on the other hand, just builds their own thing (“Schemaless”) on top of InnoDB and MySQL.
One last note about how the article describes indexing: it uses the word “rebalancing” in context of B-tree indexes. It even links to a Wikipedia article on “Rebalancing after deletion.” Unfortunately, the Wikipedia article doesn’t generally apply to database indexes because the algorithm described on Wikipedia maintains the requirement that each node has to be at least half-full. To improve concurrency, PostgreSQL uses the Lehman, Yao variation of B-trees, which lifts this requirement and thus allows sparse indexes. As a side note, PostgreSQL still removes empty pages from the index (see slide 15 of “Indexing Internals”). However, this is really just a side issue.
What really worries me is this sentence: “An essential aspect of B-trees are that they must be periodically rebalanced, …” Here I’d like to clarify that this is not a periodic process, not something that runs, say, every day. The index balance is maintained with every single index change (even worse, hmm?). But the article continues “…and these rebalancing operations can completely change the structure of the tree as sub-trees are moved to new on-disk locations.” If you now think that this “rebalancing” involves a lot of data moving, you have misunderstood it.
The important operation in a B-tree is the node split. As you might guess, a node split takes place when a node cannot hold a new entry that belongs in that node. To give you a ballpark figure, this might happen once for about every 100 inserts. The node split allocates a new node, moves half of the entries to the new node and connects the new node to the previous, next and parent nodes. This is where Lehman and Yao save a lot of locking. In some cases, the new node cannot be added to the parent node straight away because the parent node doesn’t have enough space for the new child entry. In this case, the parent node is split and everything repeats.
In the worst case, the splitting bubbles up to the root node, which is then split as well and a new root node is put above it. Only in this case does a B-tree ever become deeper. Note that a root node split effectively shifts the whole tree down and therefore keeps the balance. However, this doesn’t involve a lot of data moving. In the worst case, it might touch three nodes on each level plus the new root node. To be explicit: most real-world indexes have no more than 5 levels. To be even more explicit: the worst case, a root node split, might happen about five times for a billion inserts. In all other cases the split does not need to propagate all the way up the tree. After all, index maintenance is not “periodic”, not even very frequent, and it never completely changes the structure of the tree, at least not physically on disk.
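To make that ballpark concrete (the fanout figure is an assumption for illustration, not from the article): with roughly f entries per node, a tree of height h holds about f^h entries, so f = 100 and h = 5 already gives 100^5 = 10^10 entries, and the height, and hence the number of root splits over the index’s lifetime, grows only like log_f(N).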
That brings me to the next major concern the article raises about PostgreSQL: physical replication. The reason the article even touches the index “rebalancing” topic is that Uber once hit a PostgreSQL replication bug that caused data corruption on the downstream servers (the bug “only affected certain releases of Postgres 9.2 and has been fixed for a long time now”).
Because PostgreSQL 9.2 only offers physical replication in core, a replication bug “can cause large parts of the tree to become completely invalid.” To elaborate: if a node split is replicated incorrectly so that it doesn’t point to the right child nodes anymore, this sub-tree is invalid. This is absolutely true—like any other “if there is a bug, bad things happen” statement. You don’t need to change a lot of data to break a tree structure: a single bad pointer is enough.
The Uber article mentions other issues with physical replication: huge replication traffic—partly due to the write amplification caused by updates—and the downtime required to update to new PostgreSQL versions. While the first one makes sense to me, I really cannot comment on the second one (but there were some statements on the PostgreSQL-hackers mailing list).
Finally, the article also claims that “Postgres does not have true replica MVCC support.” Luckily the article links to the PostgreSQL documentation where this problem (and remediations) are explained. The problem is basically that the master doesn’t know what the replicas are doing and might thus delete data that is still required on a replica to complete a query.
According to the PostgreSQL documentation, there are two ways to cope with this issue: (1) delaying the application of the replication stream for a configurable timeout so the read transaction gets a chance to complete. If a query doesn’t finish in time, kill the query and continue applying the replication stream. (2) configure the replicas to send feedback to the master about the queries they are running so that the master does not vacuum row versions still needed by any slave. Uber’s article rules the first option out and doesn’t mention the second one at all. Instead the article blames the Uber developers.
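For concreteness (a sketch; the parameter values are placeholders, not recommendations), the two remediations correspond roughly to these settings on the standby:

    -- option 1: give standby queries more time before cancelling them
    ALTER SYSTEM SET max_standby_streaming_delay = '5min';
    -- option 2: let the standby tell the primary which row versions its queries still need
    ALTER SYSTEM SET hot_standby_feedback = on;
    SELECT pg_reload_conf();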
To quote it in all its glory: “For instance, say a developer has some code that has to email a receipt to a user. Depending on how it’s written, the code may implicitly have a database transaction that’s held open until after the email finishes sending. While it’s always bad form to let your code hold open database transactions while performing unrelated blocking I/O, the reality is that most engineers are not database experts and may not always understand this problem, especially when using an ORM that obscures low-level details like open transactions.”
Unfortunately, I understand and even agree with this argument. Instead of “most engineers are not database experts” I’d even say that most developers have very little understanding of databases, because every developer that touches SQL needs to know about transactions, not just database experts.
Giving SQL training to developers is my main business. I do it at companies of all sizes. If there is one thing I can say for sure, it is that the knowledge of SQL is ridiculously low. In the context of the “open transaction” problem just mentioned, I can confirm that hardly any developer even knows that read-only transactions are a real thing. Most developers just know that transactions can be used to back out writes. I’ve encountered this misunderstanding often enough that I’ve prepared slides to explain it, and I just uploaded these slides for the curious reader.
This leads me to the last problem I’d like to write about: the more people a company hires, the closer their qualification will be to the average. To exaggerate, if you hire the whole planet, you’ll have the exact average. Hiring more people really just increases the sample size.
The two ways to beat the odds are: (1) only hire the best. The difficult part with this approach is having to wait when no above-average candidates are available. (2) Hire the average and train them on the job. This needs a pretty long warm-up period for the new staff and might also tie up existing staff for the training. The problem with both approaches is that they take time. If you don’t have time, because your business is rapidly growing, you have to take the average, who don’t know a lot about databases (empirical data from 2014). In other words: for a rapidly growing company, technology is easier to change than people.
The success factor also affects the technology stack as requirements change over time. At an early stage, start-ups need out-of-the-box technology that is immediately available and flexible enough to be used for their business. SQL is a good choice here because it is actually flexible (you can query your data in any way) and it is easy to find people knowing SQL at least a little bit. Great, let’s get started! And for many—probably most—companies, the story ends here. Even if they become moderately successful and their business grows, they might still stay well within the limits of SQL databases forever. Not so for Uber.
A few lucky start-ups eventually outgrow SQL. By the time that happens, they have access to way more (virtually unlimited?) resources and then…something wonderful happens: They realize that they can solve many problems if they replace their general purpose database by a system they develop just for their very own use-case. This is the moment a new NoSQL database is born. At Uber, they call it Schemaless.
By now, I believe Uber did not replace PostgreSQL by MySQL as their article suggests. It seems that they actually replaced PostgreSQL by their tailor-made solution, which happens to be backed by MySQL/InnoDB (at the moment).
It seems that the article just explains why MySQL/InnoDB is a better backend for Schemaless than PostgreSQL. For those of you using Schemaless, take their advice! Unfortunately, the article doesn’t make this very clear because it doesn’t mention how their requirements changed with the introduction of Schemaless compared to 2013, when they migrated from MySQL to PostgreSQL.
Sadly, the only thing that sticks in the reader’s mind is that PostgreSQL is lousy.
If you like my way of explaining things, you’ll love my book.
Uber’s recent (2016) article Why Uber Engineering Switched from Postgres to MySQL
Uber’s 2013 article Migrating Uber from MySQL to PostgreSQL (same author)
The Free Software Foundation Europe protects users, companies and institutions from technological abuse by promoting the use of Free Software. Now there is a project that protects the code used in Free Software itself and promises to preserve it for the future: Inria presents the Software Heritage initiative.
The importance of software in the modern world cannot be overstated. Software is at the crux of all contemporary technological development and has become essential for all areas of scientific research. Software plays a pivotal role in our daily lives, our industries and our society. Software has become the reflection of our technological, scientific and cultural progress.
However, software is prone to disappear, either because it stops being profitable, or projects get cancelled, or the code is deemed obsolete and gets erased, or is left to fade on storage that physically degrades over time.
The Software Heritage initiative is created and funded by Inria. It collects programs, applications and snippets of code distributed under free licenses from a wide variety of active and defunct sources, its aim being to protect code from sinking into oblivion. The distributed and redundant back-end hardens the system against potentially disastrous losses of data and guarantees its availability for users.
Users can check if a certain file exists within the system and propose new sources the Software Heritage engine can explore in search of more code to store. Soon users will also be able to find out where the code originated from using the Provenance information feature, browse the stored code, run full-text searches on all files, and download the content.
Software Heritage aims to store all Free Software, in other words, software that can be used, studied, adapted and shared freely with others; this is because the Software Heritage initiative relies on being able to share the software it stores. The Software Heritage website is designed to be a useful tool for professionals, scientists, educators and end-users. Users must be allowed to re-use the code in other products, cutting development time and costs; engineers should be able to discover how others solved certain problems, or compare the efficiency of different solutions to the same problem. And, of course, researchers must have explicit permission to study the evolution of code over time. This is only possible if the code is distributed under a Free and Open Source license.

Matthias Kirschner, President of the Free Software Foundation Europe, says: "Software is the most important cultural technology of today's society; it frames what we can and what we cannot do. Software shapes our communication and culture, our economy, education and research, as well as politics. It is important to preserve our collective knowledge about how software has influenced humankind. Collecting source code makes Software Heritage a valuable resource to understand how our society worked at any given time, and to build upon knowledge from humankind."
The Software Heritage initiative ensures today's code will be around for everybody in the future.

About Inria
Inria, the French National Institute for computer science and applied mathematics, promotes "scientific excellence for technology transfer and society". Graduates from the world's top universities, Inria's 2,700 employees rise to the challenges of digital sciences. With this open, agile model, Inria is able to explore original approaches with its partners in industry and academia and provide an efficient response to the multidisciplinary and application challenges of the digital transformation. Inria transfers expertise and research results to companies (startups, SMEs and major groups) in fields as diverse as healthcare, transport, energy, communications, security and privacy protection, smart cities and the factory of the future.
Copy and Paste
Applies to note/chord attributes
Ctrl-C, Ctrl-V work for these
Copied marking is highlighted
Selection changes color when copied
Improved Acoustic Feedback
Trill makes a short trill sound on entry
Copy attributes sounds
Improved Visual Feedback
Status bar notices are animated
Characters are highlighted in Lyric Verses
Directives are made more legible when cursor is on them
For un-metered music
Music can still display in “bars”
Curved Tuplet Brackets
Cadenza on/off uses Cadenza Time, sets smaller note size, editable text
Notes without stems
Multi-line text annotation
Bold, Italic etc now apply to selection
A guard prevents accidental syntax collision
Command Center search now reliable
Standalone Multi-line text with backslash editing
Pasting into measures that precede a time signature change
The recently announced Craft Camera is a modular device that will also be available with an MFT mount. Here are two videos showing a bit more about how this works.
On 19 April, the European Commission published a communication on "ICT Standardisation Priorities for the Digital Single Market" (hereinafter 'the Communication'). The Digital Single Market (DSM) strategy intends to digitise industries through several legislative and political initiatives, and the Communication is the part of it covering standardisation. In general, the Free Software Foundation Europe (FSFE) welcomes the Communication's plausible approach to integrating Free Software and Open Standards into standardisation, but expresses its concerns about the lack of understanding of the prerequisites necessary to pursue that direction.

Acknowledging the importance of Free Software
The Communication starts with acknowledging the importance of Open Standards for interoperability, innovation and access to media, cultural and educational content, and promotes "community building, attracting new sectors, promoting open standards and platforms where needed, strengthening the link between research and standardisation". The latter is closely linked to the "cloud", where the Communication states that the "proprietary solutions, purely national approaches and standards that limit interoperability can severely hamper the potential of the Digital Single Market", and highlights that "common open standards will help users access new innovative services".
As a result, the Commission concludes that by the end of 2016 it intends to make more use of Free Software elements by better integrating Free Software communities into standard setting processes in the standards developing organisations.
In the Internet of Things (IoT) domain, the Communication acknowledges the EU need for "an open platform approach that supports multiple application domains ... to create competitive IoT ecosystems". In this regard, the Commission states that "this requires open standards that support the entire value chain, integrating multiple technologies ... based on streamlined international cooperation that build on an IPR ["intellectual property rights"] framework enabling easy and fair access to standard essential patents (SEPs)".
FSFE welcomes this direction taken in the Communication, as well as Commissioner Günther Oettinger's position, highlighted in his keynote at Net Futures 2016, that "easy reuse of standard and open components accelerates digitisation of any business or any industry sector." Furthermore, according to Commissioner Oettinger, Free Software standards "enable transparency and build trust."

EC putting good efforts at risk
However, the attempts of the Commission to promote Open Standards and a more balanced approach towards "intellectual property rights" policies in standardisation may be seriously hampered by the Commission's stance towards FRAND licensing. In particular, the Commission sets the goal to "clarify core elements of an equitable, effective and enforceable licensing methodology around FRAND principles", which is seen as striking the right balance in standardisation and ensuring "fair and non-discriminatory" access to standards. Yet it is a well-known fact that FRAND licensing terms, which in theory stand for "fair, reasonable, and non-discriminatory" terms, are in practice incompatible with most Free Software.
In conclusion, whilst the Communication sets a positive direction towards the promotion of Open Standards and the inclusion of Free Software communities in standardisation, this direction may be seriously limited if the Commission fails to acknowledge the incompatibility of FRAND licensing terms with Free Software licences. This in turn can in practice make a proper Free Software implementation of a standard impossible. As a result, the Commission's attempt to achieve a truly "digital single market" based on interoperability, openness and innovation will not succeed, as a significant part of the innovative potential found in Free Software will in practice be excluded from standardisation.
In line with our recommendations on the DSM initiative, which were well received by the Commission, FSFE believes that in order to achieve the adequate integration of Free Software communities, and an overall sound approach to the appropriate use of Open Standards, the Commission needs to avoid the harmful consequences of FRAND licensing for Free Software, and instead pursue the promotion of standards that are open, minimalistic and implementable with Free Software. Such standards would give substance to the Commission's promise to encourage Free Software communities to participate in standardisation.
There is a $2,000 price drop on the AG-AF100 superkit sold by BHphoto (Click here).
No Creative Writing Workshop has the power to magically transform a student into a writer. No Creative Writing Workshop can give a student, in a few weeks, a unique capacity for self-expression. Put another way, no course can make a student suddenly join some enlightened category of human beings.

I always make a point of stressing these impossibilities, especially when I talk about the work I do in my own Creative Writing Workshop, and when I open enrolment for new classes, which happened recently during the webinar "12 Conselhos para Escritores Principiantes" (12 Tips for Beginning Writers).

But if the miracles mentioned in the first paragraph do not happen, why should someone who wants to be a writer take a Creative Writing Workshop? What is it for? Below, I present 5 reasons:

We are always sharing stories, even if we do it, most of the time, unconsciously. We are storytellers; we are always composing narratives and passing them on to our relatives and friends.

In my Creative Writing Workshop I take the student beyond this observation, and help them, once they become conscious of this ability, understand how, in literature, those narratives can be charged with tension, humour, irony and drama.

This work of teaching how a narrative can be made more complex is not just a matter of techniques for stimulating the imagination; it also involves reflecting on the human condition, questioning oneself and observing reality with fresh eyes.

The ability to tell stories must be turned into a conscious practice.

Turning the ability to tell stories into a conscious practice therefore demands a deepening of self-awareness, but it also demands greater precision in the use of language, as well as the study of the elements that make up a good story.

By getting to know each of these elements, in contact with fundamental works of literature, and analysing how important writers worked, the student awakens to the need for more rigorous language, which is also a way of clarifying one's thinking.

This, incidentally, reminds us of the meaning of the word "aptitude": not just an innate disposition, but a skill that, in literature, improves as we study and exercise our command of language.

Observing reality with fresh eyes and expanding their self-awareness also leads the student to a deeper understanding of other people, of their fellow human beings. Without it, it is impossible to build convincing narrators and characters.

The writer needs to know who narrates the story they want to tell and who lives it: what are the values, the prejudices, the contradictions, the feelings of the narrator and of the characters?

I therefore try to lead the student to a new form of empathy, through which they can experience and analyse events from different perspectives, as if carrying different "selves" within.

When we study the elements above not through theories but through the reading of fundamental texts, the student understands how different literary styles express states of mind or personal characteristics that may or may not be similar.

Class after class, the student is challenged by these great authors: challenged, as they get to know each one, to create their own style, their own voice.

It is a relearning of reading, a re-education of attention: an indispensable immersion for noticing, in the text and in reality, the details that almost always escape us.

Finally, it is essential to know that writing demands discipline and a methodical way of working. As with everything in life, if we do not learn to persevere, we do not develop. One must be aware that writing is not easy, and that aptitude or talent is useless without determination.

These 5 reasons sum up the work I do in my Creative Writing Workshop. But you can also read the testimonials of some of my students.
I WAS ONCE WHAT YOU NOW ARE, AND WHAT I AM YOU TOO SHALL BE
For the Venezuelan Ricardo Hausmann, this is no time to sit on the fence: the country needs a credible plan (and that is unlikely to be possible while Nicolás Maduro remains in power).

The present self reassesses the prediction made by the past self.

In the last 30 days I approached 15 acquaintances, advertising job openings and/or asking for referrals. Of the 15 I approached, 5 I already knew had left the country, and another 5 I discovered are leaving or planning to leave.

And it has barely been a year.

Now that the dollar has, for practical purposes, passed R$3 and a recession is looming, working abroad has become very attractive again.

They were already coming here to recruit talent; now that a salary on the order of US$100,000 a year can leave a nice nest egg in reais, I'd like to see which (good) people will want to stay in Brazil.

Acquiring (real) tech talent: what was already hard is going to get worse.
Can you use a magnifying glass and moonlight to light a fire?
At first, this sounds like a pretty easy question.
A magnifying glass concentrates light on a small spot. As many mischievous kids can tell you, a magnifying glass as small as a square inch in size can collect enough light to start a fire. A little Googling will tell you that the Sun is 400,000 times brighter than the Moon, so all we need is a 400,000-square-inch magnifying glass. Right?
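Spelling out that naive calculation (the one-square-inch lens is the example above; the rest is just arithmetic): 400,000 × 1 in² ≈ 2,800 ft² of collecting area, which for a round lens means a diameter of 2·sqrt(A/π) ≈ 710 inches, or roughly 18 meters. Big, but not absurd-sounding.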
Wrong. Here's the real answer: You can't start a fire with moonlight (pretty sure this is a Bon Jovi song) no matter how big your magnifying glass is. The reason is kind of subtle. It involves a lot of arguments that sound wrong but aren't, and generally takes you down a rabbit hole of optics.
First, here's a general rule of thumb: You can't use lenses and mirrors to make something hotter than the surface of the light source itself. In other words, you can't use sunlight to make something hotter than the surface of the Sun.
There are lots of ways to show why this is true using optics, but a simpler—if perhaps less satisfying—argument comes from thermodynamics:
Lenses and mirrors work for free; they don't take any energy to operate. (More specifically, everything they do is fully reversible, which means you can add them in without increasing the entropy of the system.) If you could use lenses and mirrors to make heat flow from the Sun to a spot on the ground that's hotter than the Sun, you'd be making heat flow from a colder place to a hotter place without expending energy. The second law of thermodynamics says you can't do that. If you could, you could make a perpetual motion machine.
The Sun is about 5,000°C, so our rule says you can't focus sunlight with lenses and mirrors to get something any hotter than 5,000°C. The Moon's sunlit surface is a little over 100°C, so you can't focus moonlight to make something hotter than about 100°C. That's too cold to set most things on fire.
"But wait," you might say. "The Moon's light isn't like the Sun's! The Sun is a blackbody—its light output is related to its high temperature. The Moon shines with reflected sunlight, which has a "temperature" of thousands of degrees—that argument doesn't work!"
It turns out it does work, for reasons we'll get to later. But first, hang on—is that rule even correct for the Sun? Sure, the thermodynamics argument seems hard to argue with (because it's correct), but to someone with a physics background who's used to thinking of energy flow, it may seem hard to swallow. Why can't you concentrate lots of sunlight onto a point to make it hot? Lenses can concentrate light down to a tiny point, right? Why can't you just concentrate more and more of the Sun's energy down onto the same point? With over 10^26 watts available, you should be able to get a point as hot as you want, right?
Except lenses don't concentrate light down onto a point—not unless the light source is also a point. They concentrate light down onto an area: a tiny image of the Sun (or a big one!). This difference turns out to be important. To see why, let's look at an example:
This lens directs all the light from point A to point C. If the lens were to concentrate light from the Sun down to a point, it would need to direct all the light from point B to point C, too:
But now we have a problem. What happens if light goes back from point C toward the lens? Optical systems are reversible, so the light should be able to go back to where it came from—but how does the lens know whether the light came from B or from A?
In general, there's no way to "overlay" light beams on each other, because the whole system has to be reversible. This keeps you from squeezing more light in from a given direction, which puts a limit on how much light you can direct from a source to a target.
Maybe you can't overlay light rays, but can't you, you know, sort of smoosh them closer together, so you can fit more of them side-by-side? Then you could gather lots of smooshed beams and aim them at a target from slightly different angles.
Nope, you can't do this. (We already know this, of course, since earlier we said that it would let you violate the second law of thermodynamics.)
It turns out that any optical system follows a law called conservation of étendue. This law says that if you have light coming into a system from a bunch of different angles and over a large "input" area, then the input area times the input angle (note to nitpickers: in 3D systems, this is technically the solid angle, the 2D equivalent of the regular angle, but whatever) equals the output area times the output angle. If your light is concentrated to a smaller output area, then it must be "spread out" over a larger output angle.
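Written out (for a system in a single medium; the symbols are just the standard ones, not from the article): A_in · Ω_in = A_out · Ω_out, so shrinking the illuminated area by some factor necessarily widens the spread of angles by the same factor.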
In other words, you can't smoosh light beams together without also making them less parallel, which means you can't aim them at a faraway spot.
There's another way to think about this property of lenses: they only make light sources take up more of the sky; they can't make the light from any single spot brighter. (A popular demonstration of this: try holding up a magnifying glass to a wall. The magnifying glass collects light from many parts of the wall and sends it to your eye, but it doesn't make the wall look brighter.) That's because it can be shown (this is left as an exercise for the reader) that making the light from a given direction brighter would violate the rules of étendue. (My résumé says étendue is my forté.) In other words, all a lens system can do is make every line of sight end on the surface of a light source, which is equivalent to making the light source surround the target.
If you're "surrounded" by the Sun's surface material, then you're effectively floating within the Sun, and will quickly reach the temperature of your surroundings.(Very hot)
If you're surrounded by the bright surface of the Moon, what temperature will you reach? Well, rocks on the Moon's surface are nearly surrounded by the surface of the Moon, and they reach the temperature of the surface of the Moon (since they are the surface of the Moon.) So a lens system focusing moonlight can't really make something hotter than a well-placed rock sitting on the Moon's surface.
Which gives us one last way to prove that you can't start a fire with moonlight: Buzz Aldrin is still alive.