The idea of a human colony on Titan, a moon of Saturn, might sound crazy. Its temperature hovers at nearly 300° below zero Fahrenheit, and its skies rain methane and ethane that flow into hydrocarbon seas. Nevertheless, Titan could be the only place in the solar system where it makes sense to build a permanent, self-sufficient human settlement.
We reached this conclusion after looking at the planets in a new way: ecologically. We considered the habitat that human beings need and searched for those conditions in our celestial neighborhood.
Our colonization scenario, based on science, technology, politics and culture, presents a thought experiment for anyone who wants to think about the species’ distant future.
We expect human nature to stay the same. Human beings of the future will have the same drives and needs we have now. Practically speaking, their home must have abundant energy, livable temperatures and protection from the rigors of space, including cosmic radiation, which new research suggests is unavoidably dangerous for biological beings like us.
Up to now, most researchers have looked at the Moon or Mars as the next step for human habitation. These destinations have the dual advantages of proximity and of not being clearly unrealistic as choices for where we should go. That second characteristic is lacking at the other bodies near us in the inner solar system, Mercury and Venus.
Mercury is too close to the sun, with temperature extremes and other physical conditions that seem hardly survivable. Venus’s atmosphere is poisonous, crushingly heavy and furnace-hot, due to a runaway greenhouse effect. It might be possible to live suspended by balloons high in Venus’s atmosphere, but we can’t see how such a habitation would ever be self-sustaining.
But although the Moon and Mars look like comparatively reasonable destinations, they also have a deal-breaking problem. Neither is protected by a magnetosphere or atmosphere. Galactic cosmic rays (GCRs), the energetic particles from distant supernovae, bombard the surfaces of the Moon and Mars, and people can’t live long-term under the assault of GCRs.
The cancer-causing potential of this powerful radiation has long been known, although it remains poorly quantified. But research in the last two years has added a potentially more serious hazard: brain damage. GCRs include particles such as iron nuclei traveling at close to the speed of light that destroy brain tissue.
Exposing mice to this radiation at levels similar to those found in space caused brain damage and loss of cognitive abilities, according to a study published last year by Vipan K. Parihar and colleagues in Science Advances. That research suggests we aren’t ready to send astronauts to Mars for a visit, much less to live there.
On Earth, we are shielded from GCRs by water in the atmosphere. But it takes two meters of water to block half of the GCRs present in unprotected space. Practically, a Moon or Mars settlement would have to be built underground to be safe from this radiation.
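To see why "two meters of water blocks half the GCRs" forces settlements underground, assume each additional two meters halves the flux again (simple exponential attenuation, an assumption used here for illustration rather than a figure from the article):

```python
import math

HALF_VALUE_METERS = 2.0  # depth of water that blocks half the incident GCRs


def water_depth_needed(fraction_remaining):
    """Depth of water (meters) that attenuates GCR flux to the given
    fraction, assuming simple halving (exponential) attenuation."""
    return HALF_VALUE_METERS * math.log2(1.0 / fraction_remaining)


# Blocking 90 percent of the radiation takes more than three halving layers:
print(round(water_depth_needed(0.10), 2))  # 6.64 meters of water equivalent
```

Under that assumption, serious protection quickly demands meters of overhead mass, which on an airless body means regolith and excavation.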
Underground shelter is hard to build and not flexible or easy to expand. Settlers would need enormous excavations for room to supply all their needs for food, manufacturing and daily life. We ask why they would go to that trouble. We can live underground on Earth. What’s the advantage to doing so on Mars?
Beyond Mars, the next potential home is among the moons of Jupiter and Saturn. There are dozens of choices among them, but the winner is obvious. Titan is the most Earthlike body other than our original home.
Titan is the only other body in the solar system with liquid on the surface, with its lakes of methane and ethane that look startlingly like water bodies on Earth. It rains methane on Titan, occasionally filling swamps. Dunes of solid hydrocarbons look remarkably like Earth’s sand dunes.
For protection from radiation, Titan has a nitrogen atmosphere 50 percent thicker than Earth’s. Saturn’s magnetosphere also provides shelter. On the surface, vast quantities of hydrocarbons in solid and liquid form lie ready to be used for energy. Although the atmosphere lacks oxygen, water ice just below the surface could be used to provide oxygen for breathing and to combust hydrocarbons as fuel.
It’s cold on Titan, at -180°C (-291°F), but thanks to its thick atmosphere, residents wouldn’t need pressure suits—just warm clothing and respirators. Housing could be made of plastic produced from the unlimited resources harvested on the surface, and could consist of domes inflated by warm oxygen and nitrogen. The ease of construction would allow huge indoor spaces.
Titanians (as we call them) wouldn’t have to spend all their time inside. The recreational opportunities on Titan are unique. For example, you could fly. The weak gravity—similar to the Moon’s—combined with the thick atmosphere would allow individuals to aviate with wings on their backs. If the wings fall off, no worry: landing would be easy, since terminal velocity on Titan is a tenth of that on Earth.
How will we get there? Currently, we can’t. Unfortunately, we probably can’t get to Mars safely, either, without faster propulsion to limit the time in space and associated GCR dosage before astronauts are unduly harmed. We will need faster propulsion to Mars or Titan. For Titan, much faster, as the trip currently takes seven years.
There is no quick way to move off the Earth. We will have to solve our problems here. But if our species continues to invest in the pure science of space exploration and the stretch technology needed to preserve human health in space, people will eventually live on Titan.
Charles Wohlforth and Amanda Hendrix are the authors of Beyond Earth: Our Path to a New Home in the Planets
Adam Victor Brandizzi
Post written by Jason Black, an investor at RRE Ventures focused on deals in B2B SaaS, data, machine learning, and developer tools.
Let’s punch through the noise around machine learning. (Credit: Shutterstock)
The tech ecosystem is well acquainted with buzzwords. From “Web 2.0” to “cloud computing” to “mobile first” to “on-demand,” it seems as though each passing year heralds the advent and popularization of new catchphrases to which fledgling companies attach themselves. But while the trends these phrases represent are real, and category-defining companies will inevitably give weight to newly coined buzzwords, so too will derivative startups seek to take advantage of concepts that remain ill-defined by experts and little-understood by everyone else.
In a June post, CB Insights encapsulated the frenzy (and absurdity) of the moment:
“It’s clear that 9 of 10 investors have very little idea what AI is so if you’re a founder raising money, you should sprinkle some AI into your pitch deck. Use of ‘artificial intelligence,’ ‘AI,’ ‘chatbot,’ or ‘bot’ are winners right now and might get you a little valuation bump or get the process to move quicker.
If you want to drive home that you’re all about that AI, use terms like machine learning, neural networks, image recognition, deep learning, and NLP. Then sit back and watch the funding roll in.”
Pitch decks and headlines today are lousy with references to “artificial intelligence” and “machine learning.” But what do those terms really mean? And how can you separate empty claims from real value creation when evaluating businesses and the technologies which underpin them? Having at least a passing knowledge of what you’re talking about is a good first step, so let’s start with the basics.
The terms “artificial intelligence” and “machine learning” are frequently used interchangeably, but doing so introduces imprecision and ambiguity. Artificial intelligence, a term coined in 1956 at a Dartmouth College workshop, refers to a line of research that seeks to recreate the characteristics of human intelligence.
At the time, “General AI” was thought to be within reach. People believed that specific advancements (like teaching a computer to master checkers or chess) would allow us to learn how machines learn, and ultimately program computers to learn like we do. If we could use machines to mimic the rudimentary way that babies learn about the world, the reasoning went, soon we would have a fully functioning “grown up” artificial intelligence that could master new tasks at a similar or faster rate.
In hindsight, this was a bit too optimistic.
While the end goal of AI was — and still is — the creation of a sentient machine consciousness, we haven’t yet achieved general artificial intelligence. Moreover, barring a major breakthrough in methodology, we don’t have a reasonable timeline for doing so. As a result, research (especially the types of research relevant to the VC and startup world) now focuses on a sub-field of AI known as machine learning, aimed at solving individual tasks that can increase productivity and benefit businesses today.
In contrast with AI’s stated goal of recreating human intelligence, machine learning tools seek to create predictive models around specific tasks. Simply put, machine learning is all about utility. Nothing too flashy, just supercharged statistics.
While there are plenty of good definitions for machine learning floating around, my favorite is Tom M. Mitchell’s 1997 definition:
“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”
Rather formal, but this definition is buzzword-free and gets straight to the elegance and simplicity of machine learning. Simply put, a machine is said to learn if its performance at a set of tasks improves as it’s given more data.
Need an example? How about one from your Statistics 101 course: simple linear regression. The goal (or Task) is to draw a “line of best fit” given some initial set of observed data. Through a process that minimizes the average squared vertical distance between the regression line and the observed data points (its Performance measure), linear regression improves its predictive “line of best fit” with each additional data point (Experience).
Red dots represent the scatter plot of all data. The blue line minimizes the average distance to those points (distances represented here by grey lines). Credit: Jason Black
Boom. Machine learning.
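The toy case above is easy to make concrete. This sketch uses the closed-form least-squares solution (rather than an iterative fit) on invented data points:

```python
def fit_line(xs, ys):
    """Least-squares 'line of best fit': minimizes the squared vertical
    distances between the line and the observed points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # slope and intercept

# Task T: predict y from x.  Experience E: the observed (x, y) pairs.
# Performance P: how close the line sits to the data.  On exact data
# drawn from y = 2x + 1, the model recovers the underlying line:
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # 2.0 1.0
```

Feed it more (noisy) observations and the fitted slope and intercept track the true relationship more closely: performance at T, measured by P, improves with E.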
Given that relatively low bar, nearly any tech company can claim to be “leveraging machine learning.” So where do we go from here? To further demystify the topic, it’s also useful to understand how machine learning algorithms are developed. With linear regression, the algorithm in question simply draws a line which gets as close to as many individual data points as possible. But how about a real world example?
While the math behind more sophisticated machine learning models quickly becomes incredibly complex, the underlying concepts are often very intuitive.
Developing A Machine Learning Model
Say you wanted to predict what new songs a particular Spotify user would enjoy. Follow your intuition.
You’d probably start with his or her existing library and expect that other users who have a large number of songs in common would be likely to enjoy the songs in each other’s libraries that they don’t yet share (a process called collaborative filtering). You might also analyze the acoustic elements in the user’s library to look for common traits such as an upbeat tempo or use of electric guitar (Spotify uses neural networks to do this, for example). Finally, you might assign an appropriate weight to the tracks a user has listened to repeatedly, starred, or marked with a thumbs up/down.
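The first of those intuitions, collaborative filtering, can be sketched in a few lines. This is a toy with invented song ids, not Spotify’s actual system:

```python
def jaccard(a, b):
    """Library overlap: shared songs divided by total distinct songs."""
    return len(a & b) / len(a | b)


def recommend(target, other_libraries, k=3):
    """Score each song the target user lacks by how similar the libraries
    containing it are to the target's (a toy collaborative filter)."""
    scores = {}
    for library in other_libraries:
        sim = jaccard(target, library)
        if sim == 0:
            continue  # no shared taste, no signal
        for song in library - target:
            scores[song] = scores.get(song, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]


me = {"song_a", "song_b", "song_c"}
others = [{"song_a", "song_b", "song_d"},  # similar user: song_d scores high
          {"song_x", "song_y"}]            # no overlap: contributes nothing
print(recommend(me, others))  # ['song_d']
```

Every new like, skip, or play changes the libraries, so the scores (and the recommendations) update as Experience accumulates.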
Check out this visualization of the filters learned in the first convolutional layer of Spotify’s deep learning algorithm. The time axis is horizontal, the frequency axis is vertical (frequency increases from top to bottom). Credit: Jason Black / Spotify
All that’s left is to translate these intuitions into a mathematical representation that ingests the requisite data sources and outputs a ranked list of songs to present to the user. As the user listens, likes, and dislikes new music, these new data points (or Experience in our earlier terminology) can be fed back into the same models to update, and thus improve, that prediction list.
If you want to learn more about more complex machine learning algorithms, there are ample resources across the web that do a great job of explaining neural networks, deep learning, Bayesian networks, hidden Markov models, and many more modeling systems. But for our purposes, technical implementation is less relevant than understanding how startups create value by harnessing that technology. So let’s keep moving.
Where’s The Value?
Now that we have covered what machine learning is, what should savvy investors and skeptical readers be on the lookout for? In my experience, the initial litmus test is to walk through the three fundamental building blocks of a machine learning model (task T, performance measure P, and experience E) and look for new or interesting approaches. These novelties form the basis of differentiated products and successful startups.
Experience | Unique or Proprietary Data Sets
Without data, you can’t train a machine learning model. Full stop.
With a publicly available training set, you can train a machine learning model to do specified tasks, which is great, but then you are relying on tuning and tweaking the performance of your algorithm to outperform others. If everyone is building machine learning models with the same sets of training data, competitive advantages (at least at the outset) are all but non-existent.
By contrast, a unique and proprietary data set confers an unfair advantage. Only Facebook has access to its Social Graph. Only Uber has access to the pickup/dropoff points of every rider in its network. These are data sets that only one company can use to train its machine learning models. The value of that is obvious. It’s basic scarcity of a private resource. And it can create an enormous moat.
Take *Digital Genius as an example. The company offers customer service automation tools and counts numerous Fortune 500 companies as clients. These relationships give Digital Genius exclusive access to millions of historical customer service chat logs, which represent millions of appropriate responses to a wide swath of customer queries. Using this data, Digital Genius trains its natural language processing (NLP) algorithms before beginning to interact with new, live customers.
In order to attain the same level of performance, a competitor would have to amass a similar number of chat logs from scratch. Practically speaking, this would require performing millions of live customer interactions, many of which would likely be frustrating and useless for the customers themselves. While the algorithm would eventually learn and improve, the model’s day one performance would be lackluster at best, and the company itself would be unlikely to gain traction in the market. Thus, having the proprietary data sets from their largest clients gives Digital Genius a real, differentiated value proposition in the chat automation space.
Of course, another way to go about gaining access to a unique data set is to capture one that has never existed. The coming wave of IoT and the proliferation of sensors promise to unlock troves of new data sets that have never before been analyzed. Companies which get proprietary access to new data sets, or those which create proprietary data sets themselves, can thus outperform the competition.
*OTTO Motors (a division of Clearpath Robotics) has captured one of the most robust data sets of indoor industrial environments on the planet from its network of autonomous materials transport robots (pictured below). Every time an OTTO robot makes its way around the factory floor, information about its environment — moving forklifts, walking workers, path obstructions — can be sent back to a centralized database. If the company then develops a more robust model to navigate around forklifts, for example, the OTTO Motors team can backtest and debug its improvements against real-world, historical environment data without needing to field-test the robots in physical environments.
An OTTO 1500 robot autonomously navigates around a warehouse. (Credit: Otto)
This same data race is even more competitive on the road. The Google Self-Driving Car, Tesla Autopilot, and Uber self-driving teams all tout (or forecast) the number of autonomous miles driven because each additional mile captures valuable data about changing environments that engineers can test against as they improve their autonomous navigation algorithms. But relative to the total number of miles driven per year (an estimated 3.15 trillion miles in 2015 in the US alone), only a de minimis fraction is being captured by the three projects mentioned above, leaving greenfield opportunity for startups like Cruise Automation, nuTonomy, and Zoox.
The final, and most experimental approach to leveraging unique data sets is to programmatically generate data which is then used to train machine learning algorithms. This technique is best suited for creating data sets that are difficult or impossible to collect.
Here’s an example. In order to create a machine learning algorithm to predict the direction a person is looking in a real world environment, you first have to train on sample data that has gaze direction correctly labeled. Given the literal billions of images that we have of people looking, with their eyes open, in different directions in every conceivable environment, you’d think this would be a trivial task. The data set—it would seem—already exists.
The problem is that the data isn’t labeled, and determining, let alone manually labeling, a person’s exact gaze direction from a photograph is far too hard for a human to do with any degree of accuracy in a reasonable length of time. Despite possessing a vast repository of images, we can’t create good enough approximations of gaze direction for a machine to train on. We don’t have a complete, labeled data set.
Programmatically generated eyes used to train machine learning algorithms to determine gaze direction. Credit: Jason Black
In order to tackle this problem, a set of researchers at the University of Cambridge programmatically generated renderings of an artificial eye and coupled each image with its corresponding gaze direction. By generating over 10,000 images under a variety of lighting conditions, the researchers produced enough labeled data to train a machine learning algorithm (in this case, a neural network) to predict gaze direction in photos of people the machine had not previously encountered. By programmatically generating a labeled data set, they sidestepped the problems inherent in our existing repository of real-world data.
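The same trick works in miniature. In this hypothetical sketch (not the Cambridge pipeline), a "rendered eye" is reduced to two numbers that depend on gaze angle, and a nearest-neighbour lookup over the generated, perfectly labeled samples stands in for the neural network:

```python
import math


def render_eye(gaze_deg):
    """Toy stand-in for a rendered eye image: the 'pixel' features are
    just the pupil's offset, a known function of the gaze angle."""
    rad = math.radians(gaze_deg)
    return (math.cos(rad), math.sin(rad))


# Programmatically generate a labeled training set: because we rendered
# each sample ourselves, every label is exact and free.
train = [(render_eye(deg), deg) for deg in range(0, 360, 5)]


def predict_gaze(features):
    """Nearest-neighbour 'model': the closest synthetic sample wins."""
    dist = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda sample: dist(sample[0], features))[1]


print(predict_gaze(render_eye(47)))  # nearest generated label: 45
```

The generator knows the ground truth by construction, which is precisely what the unlabeled billions of real photos lack.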
While the means of finding, collecting, or generating data on which to train machine learning models are varied, evaluating the sources of data a company has access to (especially those which competitors can’t access) is a great starting point when evaluating a startup or its technology. But there’s more to machine learning than just Experience.
Task | Differentiated Approaches
Just as access to a unique data set is inherently valuable, developing a new approach to a machine learning Task (T), or starting work on a new or neglected Task, provides an alternative path to creating value.
DeepMind, a company Google acquired for over $500 million in 2014, developed a model generation approach that enabled them to pull ahead of the pack in a branch of machine learning known as deep learning (hence the name). While their acquisition went largely unnoticed by the mainstream press, it was difficult to miss the headlines as their machine learning algorithm dubbed “AlphaGo” squared off against the world champion of Go in early 2016.
The rules of the game of Go are relatively simple, yet the number of possible board positions outnumbers the atoms in the universe. Traditional machine learning techniques by themselves simply could not produce an effective strategy given the number of possible outcomes. However, DeepMind’s differentiated approach to these existing techniques enabled the team not only to best the reigning world champion, Lee Sedol, but to do so in such a way that spectators described the machine’s performance as “genius” and “beautiful.”
However, sophisticated performance on one Task does not translate to other domains. Use the same code from the AlphaGo project to respond to customer service enquiries or navigate a factory floor, and the performance would likely be abysmal. Practically, the approximately 1:1 ratio between Task and machine learning model means that for the short and medium term there are innumerable Tasks for which no machine learning model has yet been trained.
For this reason, identifying neglected Tasks can be quite lucrative and easier than one might expect. One might assume, for example, that since a significant amount of time, effort, and money has been spent on improving photo analysis, video analysis has enjoyed the same performance gains. Not so. While some of the models from static image analysis have carried over, the complexity associated with moving images and audio has discouraged development, especially as plenty of low-hanging fruit in the photo identification space still remains.
*Dextro’s Stream API annotating live Periscope videos in real time. (Credit: Dextro)
This created a great opportunity for *Dextro and Clarifai to quickly pull ahead in applying machine learning to understand the content of videos. These advancements in video analysis now enable video distributors to make videos searchable based not just on the metadata manually submitted by the users who upload them, but also on the content contained within the video: an automatically generated transcript, an auto-classified categorization label, and even individual objects or concepts that appear throughout the video.
Performance | Step Function Improvement
The final major value driver for startups seeking to harness machine learning technology is meaningfully outperforming the competition at a known Task.
One great example is Prosper, which makes loans to individuals and SMBs. Its Task is the same as that of any other lender on the market — to accurately evaluate the risk of lending money to a particular individual or business. Given that Prosper and its peers in both the alternative and traditional lending worlds live or die by their ability to predict creditworthiness, Performance (P) is absolutely critical to the success of the business. So how do relatively young alternative lenders outperform even the largest financial institutions?
Instead of taking in tens of data points about a particular borrower, Prosper draws on an order of magnitude more. In addition to using a larger and differentiated data set, the company has been rigorously scouring research papers and doing its own internal development in order to apply bleeding-edge machine learning algorithms to its data. Together, the Performance characteristics of the resulting machine learning models represent a unique and differentiated ability to issue profitable loans to a whole group of consumers who have historically been turned away by legacy institutions.
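The payoff of richer Experience is easy to demonstrate with a deliberately artificial toy (the five binary risk signals and the majority-rule default behavior below are invented for illustration, not anyone’s actual underwriting model):

```python
from itertools import product

# Toy borrowers: five binary risk signals each.  In this invented world,
# a borrower actually defaults when a majority of signals are bad.
borrowers = list(product([0, 1], repeat=5))
defaults = [sum(b) >= 3 for b in borrowers]


def accuracy(model):
    """Performance P: fraction of borrowers classified correctly."""
    hits = sum(model(b) == d for b, d in zip(borrowers, defaults))
    return hits / len(borrowers)


thin_model = lambda b: b[0] == 1    # legacy lender: one data point
rich_model = lambda b: sum(b) >= 3  # more signals, better Performance

print(accuracy(thin_model), accuracy(rich_model))  # 0.6875 1.0
```

The model with access to all five signals recovers the true rule; the one-signal model misclassifies the borrowers whose first signal is unrepresentative, which is exactly the group a richer data set lets a lender serve profitably.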
Being able to judge the performance of a startup’s machine learning models against that of the competition is another great way to identify the most innovative companies and separate out the mere peddlers of hype and buzz.
Back To Business
To be clear, there’s much more to machine learning than hyped-up pitch decks and empty promises. The trick is separating the wheat from the chaff. Armed with clear definitions and a working knowledge of the simple concepts underlying the buzzwords and headlines, go forth and pick through presentations with confidence!
But remember this caveat.
Yes, machine learning — when harnessed appropriately — is both real and powerful. But the ultimate success or failure of any business hinges much more on the market opportunity, productization, and the team’s ability to sell than it does on specific implementations of machine learning algorithms. Just as compelling tech is a necessary but insufficient condition to create a successful tech company, great tech in the absence of a viable business is unlikely to become anything more than a science project.
Note: Jason Black is an investor at RRE Ventures. The few RRE portfolio companies mentioned in this post are clearly denoted with an *asterisk. You can follow Jason on Twitter here.
Adam Victor Brandizzi
Fighting epilepsy by limiting carbohydrates. That’s quite surprising! (via https://twitter.com/luhugerth/status/802110965588357120)
The ketogenic diet is a special high-fat, low-carbohydrate diet that helps to control seizures in some people with epilepsy. It is prescribed by a physician and carefully monitored by a dietitian. It is stricter than the modified Atkins diet, requiring careful measurements of calories, fluids, and proteins.
Several studies have shown that the ketogenic diet does reduce or prevent seizures in many children whose seizures could not be controlled by medications.
If seizures have been well controlled for some time, usually 2 years, the doctor might suggest going off the diet.
Beyond what’s available on the internet, there are several books about the ketogenic diet.
Adam Victor Brandizzi
Not very useful right now, but certainly something very interesting!
There's no question that humans are driving long-term changes in the amount of carbon in the atmosphere. But the human influence is taking place against a backdrop of natural carbon fluxes that are staggering in scale. Each year, for example, the amount of CO2 in the atmosphere cycles up and down by over a percent purely due to seasonal differences in plant growth.
The effectiveness of biological activity provides the hope that we could leverage it to help us pull some of our carbon back out of the atmosphere at an accelerated pace. But the incredible scale of biology hides a bit of an ugly secret: the individual enzymes and pathways that are used to incorporate CO2 into living organisms aren't that efficient. These pathways are also linked to a complex biochemistry inside the cell that doesn't always suit our purposes.
Fed up with waiting for life to evolve a solution to our industrialization problem, a German-Swiss team of researchers has decided to roll its own. In an astonishing bit of work, they've taken enzymes from nine different organisms in all three domains of life and used them to build and optimize a synthetic cycle that can use carbon dioxide with an efficiency 20 times that of the system used by plants.
The problem with carbon dioxide is that it's a very stable molecule. It requires a fair bit of energy to break it down, but unless we can figure out how to break it down efficiently, we can't use atmospheric CO2 for any of the many things we use carbon for, like the polymers in our plastics or the graphite in our electrodes. While various ideas have been floated for incorporating atmospheric CO2 into usable chemicals, none of them has managed to scale economically yet.
Living organisms, however, do this trick all the time. More than 90 percent of the carbon removed from the atmosphere ends up being made into sugars by photosynthetic organisms, and there are at least five other minor pathways through which organisms build complex molecules starting from CO2. All of these processes have issues when it comes to how we might want to use them. Many of them are relatively inefficient; others will only work in environmental conditions that are inconvenient; all of them are plugged into a complex cellular biochemistry that often results in lots of side products or a final product that's not easy to turn into something useful.
All of those annoying features are what you might expect from evolution, which is tuning the carbon reactions for the environments and needs of specific organisms. So, the team behind the new work decided to do what evolution hasn't: bring together enzymes from organisms that would never come in contact with each other and build a pathway that's designed for efficient use of CO2.
To do so, they started by focusing on the limiting enzyme in most known pathways: the one that breaks down CO2 in the first place. The team searched the databases for all enzymes belonging to this class and identified ones that had the properties they were looking for. They settled on a group of enzymes called enoyl-CoA carboxylases/reductases, or ECRs.
ECRs were only discovered fairly recently, and they typically aren't even the main route for obtaining carbon in the organisms that have them. But for the purposes here, ECRs have a lot of good properties: they're highly efficient, don't undergo side-reactions with oxygen, and don't require any unusual chemicals to make the reaction work.
But the reaction that ECRs catalyze is only the first step, and it would require a constant feed of chemicals to react the CO2 with. Most organisms obtain carbon dioxide as part of a cycle. They get it to react with a larger chemical, then break off a smaller carbon-containing molecule, and then use a few further reactions to re-form the original chemical. (You can see an example of this in a Calvin Cycle diagram.) So, the team decided to build an entire cycle that incorporates the ECR enzyme.
Rather than adapt an existing cycle, the researchers started from scratch, building hypothetical pathways that use biologically plausible molecules and then evaluating them for energy efficiency. Only once a cycle was identified did they search databases to find out whether any enzymes existed that could catalyze the reaction. They ended up with a 13-step cycle that incorporated CO2 at two different steps and ended by combining the two resulting carbons with acetic acid to form a four-carbon molecule called malic acid. A number of chemical co-factors and energy in the form of ATP would need to be added along the way, but on paper, it all worked out.
And that's when the real work began.
In total, 12 of those 13 steps required a distinct enzyme to work, so the authors had to obtain the genes for all of these, make proteins, and then purify them. Once they had that, the team showed that adding the enzymes for each step ended up producing the products expected. Once all the enzymes were added, the expected end product (malic acid) was produced.
This setup let the researchers identify inefficiencies in the cycle. For example, things tended to bog down at step 10, leading to the accumulation of the chemical produced by step nine. So they looked at the enzyme involved and determined the reaction would be more efficient if it used oxygen instead of the chemical it typically required. The team examined the structure of the enzyme and redesigned it to use oxygen. It worked.
They kept tweaking the pathway. The overall design was replaced with one that used a somewhat different reaction pathway. Some of the enzymes ended up spitting out a bunch of side products that were unusable dead-ends; those were engineered to stop this. In other cases, new enzymes were added to do what the researchers call "proofreading"—when a dead-end side product was made, they converted it back to a useful one.
By the time the team was done, the system used 17 different enzymes from nine different organisms, including bacteria, archaea, plants, and humans. The final system was truly impressive, using carbon dioxide with an efficiency 20 times that of the system used in photosynthesis.
Take a moment to appreciate the scale of this accomplishment. In four billion years of evolution, life has only managed to evolve six known pathways that start with carbon dioxide and build more complex molecules. In just a few years, a bunch of grad students in Zurich added a seventh.
There are some pretty obvious limitations to this system as it now stands. A variety of biochemical co-factors need to be added to the reaction to get it to work, and the output, malic acid, is currently only used as a food additive. But malic acid undergoes a variety of reactions within cells, and there's no reason to think that some of these couldn't direct it into a useful industrial chemical. Nor is there any reason to believe we couldn't find other uses for malic acid if there were suddenly a surplus of it.
The other thing is that the entire pathway could now be put inside cells, either normal bacteria like E. coli or the synthetic cells with a minimal genome that researchers are working on. If that's done, the need to supply all the chemical co-factors should go away, since the cells should be producing them anyway. More importantly, if the cell is made to depend on this pathway as its only source of carbon, evolution would have the chance to optimize it even further.
The paper also comes at an interesting time. International climate negotiations are taking place as nations start to grapple with the fact that the Paris Agreement isn't sufficient to keep the planet under the goal of 2 degrees Celsius warming. The US has submitted its plans for the mid-century, which include extensive use of carbon capture and storage to make its energy system carbon neutral. Even then, however, it's likely that we'll need to pull carbon directly out of the atmosphere before this century is out to limit warming.
Something like this, which could make atmospheric carbon into an industrial feedstock, might be essential to enabling that future. The same goes for a separate paper in the same issue of Science that describes re-engineering trees to get them to photosynthesize more efficiently under variable light conditions. We're probably going to need some sort of technology like this, so it's nice to see the fundamental science that could enable it getting done.
Updated to clarify the need for external energy supply.
Adam Victor Brandizzi
The son of Sex Pistols manager Malcolm McLaren and designer Vivienne Westwood set fire to a collection of valuable punk memorabilia on Saturday, in protest against plans to celebrate the movement's 40th anniversary.
Joe Corre burned the items, estimated to be worth 5 million pounds (6.2 million dollars, 5.9 million euros), on a boat on the River Thames in London.
"Punk was never, never meant to be nostalgic, and you can't learn how to be one at a Museum of London workshop," he told onlookers.
"Punk has become another marketing tool to sell you something you don't need. The illusion of an alternative choice. Conformity in another uniform."
The memorabilia went up in smoke, along with fireworks and effigies of political leaders.
Corre had previously said he was dismayed by London's plans to celebrate 40 years of the punk subculture.
The program, which includes events, concerts, and exhibitions, is backed by the Mayor of London, the British Library, and the British Film Institute, among others.
Corre said he wanted to highlight "the hypocrisy at the heart of this 40-year heist of 'Anarchy in the UK'," the Sex Pistols' iconic single, released on November 26, 1976.
"The establishment has decided it's time to celebrate it. It's trying to privatize it, package it, castrate it," Corre said, as quoted by The Times. "It's time to set fire to it all."
A fire service boat helped extinguish the flames.
The team over at Nervous System recently designed this fun Infinite Galaxy Puzzle that tiles continuously in any direction. Pieces from the top can be removed and added to the bottom, and likewise from side to side. So regardless of where you start, the puzzle can continue in a seemingly infinite series of patterns. Each puzzle is printed with satellite imagery obtained from NASA and includes a few themed pieces like an astronaut, shuttle, and satellite. Apparently the puzzles were wildly popular and are now available as a pre-order for 2017. (via My Modern Met)
Wednesday Book Reviews!
A Cartoon History of the Universe (book 2) (Gonick) I decided I’m gonna plow through these. This one was as good as the first, but with the same (in my opinion) tendency to sometimes rely entirely on myth for parts of the story. To Gonick’s credit, he tends to point out when he does this, but to me it makes the stories less enjoyable, insofar as they’re presented as history. Still, quite good, and I feel like I’m learning a lot from his art style.
The Hidden Life of Trees (Wohlleben) I really enjoyed this book. Wohlleben works in forest management, and has written a wonderful book on all the weird ways in which trees adapt to their environments and communicate with each other (using chemical signals, electric signals, etc.). It contains a ton of strange info - for example, apparently some bug-infested trees will chemically signal parasitoids to come eat the bugs that are harming the tree. The author also claims that old trees are more disease resistant because they can communicate with each other about what pathogens have entered the area. Wohlleben occasionally gets a little sappy and mystical about forestry, but all of his serious claims are either backed by scientific evidence or have a disclaimer that they’re just something he suspects is true.
The Utopia of Rules (Graeber) Dammit, Graeber. Every time I wanted to hate this book, he had something really insightful to say. This is my second time reading a Graeber collection, and this one is very similar. There are big, interesting, sweeping thoughts about how humanity and society work. I kinda like this - it’s a sort of throwback to the way people sometimes wrote in the 19th century, trying to grandly analyze The Whole Thing. On the other hand, as with those writers, Graeber often makes statements that are simply wrong.
For instance, he has a whole theory on why superhero comics are the most popular. It comes from an anthropological perspective, which is interesting, but completely neglects the fact that (as any comics dork can tell you) non-Superhero comic genres basically got killed off in the mid-50s by the Comics Code Authority. It’s possible the theory could be salvaged, but it’d have to bear the weight of that weird turn in history. And yet… he’s got so much insight, you find yourself wanting his advice then wanting to scream at him. It’s like a conversation with a brilliant polymath who doesn’t quite have every little fact straight, but who nevertheless is absolutely delightful.
One particular bit really stuck with me: Graeber described the idea that in modern life, people have ideas but then don’t pursue them because they find something vaguely similar on Google. This is obvious, but Graeber’s theory is that this effect may hold back progress more than we think. I’ve certainly observed other cartoonists doing this, whereas my personal rule is to never check Google after I have an idea. It’s a waste of time, and it benefits no one. A bit later (see next week’s book reviews) I happened to read Tom Standage’s book on the telegraph, in which an important occurrence was that Samuel Morse had no clue other people had tried and failed to make a long distance telegraph. I can’t help wondering if our incredible connectivity today has more subtle negative consequences than we typically consider.
The Man Who Knew Infinity (Kanigel) A great biography of Ramanujan, with the one caveat (for the potential buyer) that, well… from the perspective of storytelling, Ramanujan’s life just wasn’t that exciting. Of course, as a mathematician (in ways I’m sure I don’t understand) he was one of the most incredible in history. But, perhaps for that reason, his life consists of a lot of sitting around, having abstruse discussions, and making poor dietary choices. It’s a very good biography, but it can’t help but feel a bit tedious here and there, when describing minor flaps between Ramanujan and his relatives, for instance. This sort of thing is made doubly tiresome by the fact that it seems we often don’t actually know the full nature of this or that disagreement, because Ramanujan is treated almost like a God by those who knew him. Still, quite good, and if you want to know about Ramanujan, this is probably the book!
Demerit: Kanigel repeats an incorrect etymology of the word “posh” in which it is purportedly a nautical acronym for “Port Out, Starboard Home.” This is known to be false.