This is the keynote talk I just gave at YAPC::NA 2014 in Orlando, FL.
YouTube video below, or read on for the full text instead.
Keynote talk for YAPC::NA 2034
No, that's not right ...
This should be titled Keynote talk for YAPC::NA 2014. What's up with the title?
Obviously I must have had some success with the experiment on applied algorithmic causality violation — that's time travel as applied to computing — that I was thinking about starting some time in the next twenty years, in my next career, as a card-carrying Mad Scientist.
Or maybe that was some other me, in some other parallel universe.
But this isn't the file I remember writing, it's some other talk about a conference that hasn't happened yet and probably won't happen now if I read you their keynote. It's probably not a good idea to read it to you — we wouldn't want to cause any temporal paradoxes, would we? So I'm not going to go there — at least, not yet. Before we take a look at the state of Perl in 2034, we need to know where we stand today, in 2014. So please allow me to start again:
The world in 2014
Back in the 1990s I used to argue with Perl for a living. These days I'm no longer a programmer by profession: instead, I tell lies for money. I'm a science fiction writer. As my friend and fellow-writer Ken MacLeod observes, the secret weapon in science fiction's armory is history. So I'd like to approach the subject of this keynote sideways, by way of a trip down memory lane, from the year 2014 — late in the English summer afternoon of the computer revolution, just before the sun set — all the way back to 1914.
To the extent that the computing and information technology revolution is a late 20th and early 21st century revolution, we can draw some lessons about where it may be going by observing the trajectory of one of the other major technological revolutions that came before it — the mass transportation revolution.
Like all technological revolutions, the development of computers followed a sigmoidal curve of increasing performance over time. Each new generation of technology contributed to the next by providing the tools and machines needed to bootstrap its successor.
The computer revolution started slowly enough, but the development of the transistor galvanized it, and the integrated circuit, and its offspring, the monolithic processor-on-a-chip, up-ended the entire board game. Over a fifty year period, from roughly 1970 to 2020, we grew so accustomed to Moore's Law — the law that the transistor count of a dense integrated circuit doubles roughly once every two years — that we unconsciously came to mistake it for a law of nature. But in reality, it was nothing of the kind: it merely represented our ability to iteratively improve a process governed by physics until it converged on a hard limit.
In the case of Moore's law, the primary governing constraint was electrical resistivity. As you shrink the length of a circuit, the resistance decreases: you can use lower voltages, or lower current flows, and run at a higher frequency. Physically smaller circuits can be made to switch faster. We build smaller integrated circuits by increasing the resolution of the lithographic process by which we print or etch surface features. But we are doomed to run into the limits of physics. First, we lose energy as heat if we try to switch too fast. Secondly, current leakage becomes a problem as our circuits become smaller. And thirdly, at the end of the day, we're making stuff out of atoms, not magic pixie dust: it's not obvious how to build circuits with tracks less than one atom wide.
Similarly, if we look back to an earlier century we can see that the speed and cost of mass transportation followed a similar sigmoid development curve between roughly 1830 and 1970.
And for me, one of the most interesting things about this sort of technological revolution is what happens after we hit the end of the curve ...
Addressing YAPC::NA in 2014 I feel a lot like a fat, self-satisfied locomotive boiler designer addressing a convention of railway design engineers in 1914. We've come a long way in a relatively short period of time. From the first steam locomotive — Richard Trevithick's 1804 Merthyr Tydfil Tramroad engine — to 1914, steam locomotives surged out of the mines and ironworks and onto permanent roads linking cities all over the world, crossing the American continent from east to west, reaching the dizzy speed of almost a hundred miles per hour and hauling hundreds of passengers or hundreds of tons of freight.
Speaking from 1914's perspective, it is apparent that if the current rate of improvement in the technology can be maintained, then steam locomotion has a bright future ahead of it! We can reasonably expect that, by 2014, with improvements in signaling and boiler technology our 200 mile per hour passenger trains will constitute the bedrock of human transport, and we, as boiler engineers, will be the princes of industry.
Pay no attention to those gasoline-burning automobiles! We can safely ignore them. They're inefficient and break down all the time; away from the race track they're no faster than a horse-drawn carriage — cobblestones and dirt trails hammer their suspensions, quite unlike our steel rails lying on carefully leveled sleepers — and the carnage that results when you entrust motorized transport to the hands of the general public is so frightful that it's bound to be banned.
As for the so-called aeroplane, it's a marginal case. To make it work at all requires an engine that can produce one horsepower per pound of weight — a preposterous power to weight ratio — and its ability to carry freight is marginal. We might eventually see an aeroplane that can fly for a hundred miles, at a hundred miles per hour, carrying up to a ton of mail or a dozen passengers: but it will never displace the mature, steadily improving technology of the steam locomotive from its position at the apex of mass transportation.
So, that's the view from 1914. What actually happened?
Well, as it happens, our locomotive boiler-maker was absolutely right: 200 mph steam-powered trains are the backbone of passenger transportation.
Admittedly the steam is heated in Électricité de France's nuclear reactors and the motive power conveyed to the trains by overhead electrical wires — the French aren't stupid: nothing makes a steam boiler explosion worse than adding fifty tons of reactor-grade uranium to the problem — but it's not too much of a stretch to say that the European and Chinese high speed rail networks are so efficient that they're taking passengers away from low cost airlines on routes of less than 500 miles.
But in places where we don't have a determined government building out the infrastructure to support shiny 200mph atomic-powered trains, or where we have to travel more than about 500 miles, airliners ate the railways' lunch. The steam engines of 1914 and their lineal descendants were nowhere near the theoretical limits of a Carnot heat-cycle engine, nor were they optimized for maximum efficiency in either power output or weight. Gas turbines offered a higher power density and lower weight and made long-haul air travel feasible. At the same time, the amount of infrastructure you need to build at ground level to support a given air route — namely two airports — is pretty much constant however far apart the airports are, whereas the cost of railroad tracks scales linearly with the distance. A 2000 mile railroad route costs at least ten times as much as a 200 mile railroad route, and takes ten times as long to traverse. Whereas a 2000 mile plane journey — given jet airliners traveling at 500 mph — costs no more to build and little more to operate than a 200 mile route. Furthermore, a big chunk of the duration of any airline flight is a fixed overhead, the latency imposed by pre-flight boarding and post-flight unloading. Assuming two hours at the start and one hour at the end of the journey, a 2000 mile flight may take seven hours, only twice the duration of a 200 mile flight. So air wipes the floor with rail once we cross a critical time threshold of about three hours.
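The arithmetic behind that three-hour threshold is easy to check. Here's a quick sketch, using the figures assumed above (a 500 mph cruise speed, and a fixed three hours of boarding and unloading overhead):

```python
def flight_hours(miles, cruise_mph=500, overhead_hours=3):
    """Door-to-door journey time: cruise time plus fixed boarding/unloading overhead."""
    return miles / cruise_mph + overhead_hours

flight_hours(200)                         # 3.4 hours door to door
flight_hours(2000)                        # 7.0 hours door to door
flight_hours(2000) / flight_hours(200)    # ~2.06: "only twice the duration"
```

The flying time scales linearly with distance, but the fixed overhead dominates short hops — which is exactly why rail wins below the three-hour threshold and air wins above it.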
As for automobiles, our railroad engineer of 1914 overlooked their key advantage: flexibility. It turns out that many people find personal transport to be more valuable than fast or efficient transport. So much so, that they were willing to pay for an unprecedented build-out of roads and a wholesale reconstruction of cities and communities around the premise of mass automobile ownership. At which point the cobblestones and dirt trails were replaced by concrete and tarmac, driver and vehicle licensing laws were enacted, and cars got a whole lot faster and safer.
Mind you, even as the steam locomotive fell into eclipse, it wasn't all plain sailing for the aircraft and automobiles. Today's airliners actually fly more slowly than the fastest passenger airliners of 1994. It turns out that physical limits apply: we are constrained by the energy density of our fuels and the ability of our airframes to deal with the thermal stress of high speed flight through air.
Concorde, the type specimen of the supersonic airliner, was a gorgeous, technologically sophisticated, white elephant that, in the end, couldn't compete economically with aircraft that flew at half the speed but consumed a fifth as much fuel per seat. Concorde, in service, crossed the Atlantic in three hours, with a hundred passengers, while burning a hundred tons of jet fuel. A Boeing 747 would take twice as long, but could fly twice as far with nearly five times as many passengers on the same fuel load.
Automobiles have more subtle limitations, imposed largely by our requirements for safety. They operate in close proximity to other people and vehicles, not to mention large animals: they have to be able to protect their precious cargo of passengers from the forces of impact if something goes wrong, while not imposing undue safety externalities on innocent by-standers. Furthermore, they have to be manually controlled by poorly-trained and absent-minded ordinary people. We have speed limits on our highways not because we can't build 200 mph cars — we can — but because we can't reliably train all our drivers to be as safe as Michael Schumacher at 200 mph.
Now, the fact that we don't have 200 mph automobiles in every garage, or Mach 4 SSTs at every airline terminal, or 200 mph nuclear-powered express trains on Amtrak, shouldn't blind us to the fact that the mass transportation industry is still making progress. But the progress it's making is much less visible than it used to be. It's incremental progress.
For example, the first-generation Boeing 747 jumbo jet, the 747-100, carried roughly 400 passengers and had a maximum range of just over 6000 miles. Today's 747-8 can fly 50% further on 30% more fuel, thanks to its more efficient engines, with 460 passengers in equivalent seating. Other airliners have become even more efficient. With Pratt & Whitney and Rolls now moving towards commercialization of geared turbofan engines, we can expect to see up to 30% greater efficiency in the jet engines of airliners in service in the next 30 years. But 30 years is also the span of time that once separated the Wright Flyer from the Douglas DC-3, or the Spitfire from the SR-71.
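Those percentages compound into a larger per-seat gain than they might suggest. A quick check of the figures quoted above (50% more range on 30% more fuel, with 460 seats against roughly 400):

```python
fuel = 1.30        # 747-8 fuel load relative to the 747-100
distance = 1.50    # relative range
seats = 460 / 400  # relative seat count, equivalent seating

# Fuel burned per seat-mile, 747-8 relative to 747-100.
per_seat_mile = fuel / (distance * seats)
round(per_seat_mile, 2)   # 0.75: roughly a quarter less fuel per seat-mile
```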
(Incidentally, I'm going to exclude from this discussion of incremental change the implications of the Tesla Model S for the automobile industry — an electric car that people actually aspire to drive — or Google's self-driving car project, or Volvo's equivalent. These are properly understood as developments emerging from the next technological revolution, the computing and information field, which is still undergoing revolutionary change and disrupting earlier industries.)
The point I'd like to emphasize is that, over time, a series of incremental improvements to a mature technological field — be it in engine efficiency, or safety design, or guidance technology — can add up to more than the sum of its parts. But it's nothing like as flashy or obvious as a doubling of performance every two years while a new technology is exploding towards the limits physics imposes on what is possible. Linear improvements look flat when they follow an exponential curve, even if they quietly revolutionize the industry they apply to.
And that, I think, is what the future of the computing industry looks like in 2014.
2014: the view forward
As of 2014, we're inching closer to the end of Moore's Law. It seems inevitable that within the next decade the biennial doubling of performance we've come to expect as our birthright will be a thing of the past.
We had a brief taste of the end of the exponential revolution in the early noughties, when the clock frequency wars that had taken us from 33MHz 80386s to 3GHz Pentium IVs in just one decade ended, killed by spiraling power consumption and RF interference. There will come a point some time between 2020 and 2030 when we will no longer be able to draw ever finer features on our atomically perfect semiconductor crystals because to do so we'd need to create features less than one atom wide. For a while progress will appear to continue. We will see stacked architectures with more and more layers plastered atop one another. And we'll see more and more parallelism. But the writing is on the wall: barring paradigm shifts such as the development of a mass-producible room temperature quantum computing architecture, we're going to hit the buffers within the next few years.
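For a sense of how steep that decade of frequency scaling was, we can compute the implied doubling period from the endpoints (33 MHz to 3 GHz over roughly ten years):

```python
import math

speedup = 3000 / 33                    # 33 MHz to 3 GHz: ~91x
doublings = math.log2(speedup)         # ~6.5 doublings
months_per_doubling = 120 / doublings  # ten years = 120 months
round(doublings, 1)                    # 6.5
round(months_per_doubling)             # 18
```

One doubling every eighteen months or so — squarely on the Moore's Law trendline, until power consumption and RF physics called a halt.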
An interesting side-effect of Moore's Law, seldom commented on, is that each successive generation of chip fab — necessary in order to produce circuit features at ever finer resolutions — also doubles in price. Once the manufacturers of the highly specialized equipment that goes into fab lines can no longer up-sell Intel and the other foundries on new hardware, there are going to be interesting repercussions. We may see a vast shake-out in the hardware side of the manufacturing business, as happened in aerospace, where between 1965 and 1975 roughly half the US aerospace engineering faculty found themselves thrown out of work. Or we may see a short-lived commodification of semiconductor manufacturing plant, as the suppliers desperately compete to stay in business and the cost of a new factory drops by an order of magnitude. Either way, once the manufacturing costs of the factories are amortized we can look forward to the commodification of the chips themselves. There seems to be no market-imposed lower floor to the price of computing machinery: that is, the cheaper we can make chips, the more uses we can find for them.
At the same time, improvements in the efficiency of microprocessors at any given lithographic scale may continue for some time. Power consumption can be cut. Incremental design improvements can be applied. A 64-bit ARM core from 2034, made using a 7-nm process, will undoubtedly out-perform a 7-nm 64-bit ARM core from 2020, both on energy efficiency and manufacturing cost — both factors in the all-important total cost of ownership per MIP.
But by 2034 the kind of progress we see in hardware will resemble the slow, incremental improvements in the transportation industry of today rather than the wildly surging sigmoid curve we experienced during the hey-day of the semiconductor revolution.
And we're going to be dealing with a world full of ubiquitous dirt-cheap low-powered microprocessors with on-die sensors and wireless networking, which remain in production for decades because there is no prospect of a faster, cheaper, better product coming along any time soon.
2034: The view backward
Okay, so I'm eventually going to give you a digest of what I found in the YAPC keynote that my time-travelling future self sent me from 2034.
But first, having taken a look at the world of 1914, I'd now like you to bear with me as I describe the experiences of an earlier me, visiting the world of today by time machine from 1994. Then we're going to borrow his time machine and visit the world of 2034 together.
The world of 2014 actually looks a lot like 1994. And this shouldn't surprise us. Change is gradual. Most of the buildings around us today were already here 20 years ago. Most of the people around us were already alive back then, too. The world of 2014 is a wrapper around the smaller, less complicated world of 1994, adding depth and texture and novelties. And so my earlier self, visiting from 1994, would have found lots of things about the future unsurprisingly familiar.
My 1994 self would have been utterly underwhelmed by the automobiles and airliners and architecture and fashion changes visible in 2014. After all, these are ephemera that follow constant — if unpredictable — trajectories. The appearance of URLs in adverts everywhere might have made 1994-me raise an eyebrow — the world wide web was new and shiny and kinda slow and clunky in 1994 — but it was at least a thing and I was aware of it, so predicting that it would spread like weeds would have been an easy target. Nor would the laptops everyone here is carrying have been particularly remarkable. They're slimmer, shinier, cheaper, and much more powerful than the laptop I owned in 1994, but they're not a fundamentally different type of object.
What would have weirded 1994-me out about the 2014-present would have been the way everyone walks around staring at these little glowing slabs of glass as if they're windows onto the sum total of human knowledge. Which, after all, they are. Yes, the idea of ubiquitous wireless networking and pocket computers with touchscreens that integrate cellular phone services with data is the kind of thing that trips off the tongue and any vaguely tech-savvy science fiction writer from 1994 could be expected to understand. But that such devices are in every hand, from eight years old to eighty, would be a bit of a reach. We tend to forget that in the early 1990s, the internet was an elite experience, a rare and recondite tool that most people had little use for. 1994 was still the age of CompuServe and AOL — remember AOL, that's kind of like a pre-internet version of Facebook? Computers were twenty years newer than they are today: older folks didn't know how to type, or use a mouse, and this was normal.
But the mere existence of smartphones would only be the start of it. The uses people made of their smartphones — that would be endlessly surprising. Cat macros. Online dating websites. Geocaching. Wikipedia. Twitter. 4chan.
If 1994-me had gotten onto 2014 Twitter, that would have been an eye-opener. The cultural shifts of the past two decades, facilitated by the internet, have been more subtle and far-reaching than 1994-me would have imagined. Briefly: the internet disintermediates people and things. Formerly isolated individuals with shared interests can form communities and find a voice. And once groups of people find a voice they will not be silenced easily. Half the shouting and social upheaval on the internet today comes from entrenched groups who are outraged to learn that their opinions and views are not universally agreed upon; the other half comes from those whose silence was previously mistaken for assent.
Once technologies get into the hands of ordinary people, nobody can even begin to guess where they're going to end up, or what kind of social changes they're going to catalyze. The internet has become a tool for revolutions, from Egypt to Yemen by way of Ukraine; it's also a tool for political repression.
(And I'm straying off-topic.)
Now, let's go and borrow that time machine and take a look at 2034.
2034 superficially looks a lot like 2014, only not. After all, most of 2034 is already here, for real, in 2014.
The one stunningly big difference is that today we're still living through exponential change: by 2034, the semiconductor revolution will have slowed down to the steady state of gradual incremental changes I described earlier. Change won't have stopped — but the armature of technological revolution will have moved elsewhere.
Now for a whistle-stop tour of 2034:
Of the people alive in 2014, about 75% of us will still be alive. (I feel safe in making this prediction because if I'm wildly wrong — if we've undergone a species extinction-level event — you won't be around to call me on my mistake. That's the great thing about futurology: when you get it really wrong, nobody cares.)
About two-thirds of the buildings standing in 2034 are already there in 2014. Except in low-lying areas where the well-known liberal bias of climatological science has taken its toll.
Automobiles look pretty much the same, although a lot more of them are electric or diesel-electric hybrids, and they exhibit a mysterious reluctance to run over pedestrians, shoot stop lights, or exceed the speed limit. In fact, the main force opposing the universal adoption of self-driving automobiles will probably be the Police unions: and it's only a matter of time before the insurance companies arm-wrestle the traffic cops into submission.
Airliners in 2034 look even more similar to those of 2014 than the automobiles. That's because airliners have a design life of 30 years; about a third of those flying in 2034 are already in service in 2014. And another third are new-build specimens of models already flying — Boeing 787s, Airbus A350s.
Not everything progresses linearly. Every decade brings a WTF moment or two to the history books: 9/11, Edward Snowden, the collapse of the USSR. And there are some obvious technology-driven radical changes. By 2034 Elon Musk has either declared bankruptcy or taken his fluffy white cat and retired to his billionaire's lair on Mars. China has a moon base. One of Apple, Ford, Disney, or Boeing has gone bust or fallen upon hard times, their niche usurped by someone utterly unpredictable. And I'm pretty sure that there will be some utterly bizarre, Rumsfeldian unknown-unknowns to disturb us all. A cure for old age, a global collapse of the financial institutions, a devastating epidemic of Martian hyper-scabies. But most of the changes, however radical, are not in fact very visible at first glance.
Most change is gradual, and it's only when we stack enough iterative changes atop one another that we get something that's immediately striking from a distance. The structures we inhabit in 2034 are going to look much the same: I think it's fairly safe to say that we will still live in buildings and wear clothes, even if the buildings are assembled by robots and the clothes emerge fully-formed from 3D printers that bond fibres suspended in a liquid matrix, and the particular fashions change. The ways we use buildings and clothes seem to be pretty much immutable across deep historical time.
So let me repeat that: buildings and clothing are examples of artifacts that may be manufactured using a variety of different techniques, some of which are not widespread today, but where the use-case is unlikely to change.
But then, there's a correspondingly different class of artifact that may be built or assembled using familiar techniques but put to utterly different uses.
Take the concrete paving slabs that sidewalks are made from, for example. Our concrete paving slab of 2034 is likely to be almost identical to the paving slab of 2014 — except for the trivial addition of a dirt-cheap microcontroller powered by an on-die photovoltaic cell, with a handful of MEMS sensors and a low power transceiver. Manufactured in bulk, the chip in the paving slab adds about a dollar to its price — it makes about as much of a difference to the logistics of building a pavement as adding a barcoded label does to the manufacture and distribution of t-shirts. But the effect of the change, of adding an embedded sensor and control processor to a paving stone, is revolutionary: suddenly the sidewalk is part of the internet of things.
What sort of things does our internet-ified paving slab do?
For one thing, it can monitor its ambient temperature and warn its neighbors to tell incoming vehicle traffic if there's a danger of ice, or if a pot-hole is developing. Maybe it can also monitor atmospheric pressure and humidity, providing the city with a micro-level weather map. Genome sequencing is rapidly becoming the domain of micro-electromechanical systems, MEMS, which, as semiconductor devices, are amenable to Moore's law: we could do ambient genome sequencing, looking for the tell-tale signs of pathogens in the environment. Does that puddle harbor mosquito larvae infected with malaria parasites?
With low-power transceivers our networked sidewalk slab can ping any RFID transponders that cross it, thereby providing a slew of rich metadata about its users. If you can read the unique product identifier labels in a random pedestrian's clothing you can build up a database that identifies citizens uniquely — unless they habitually borrow each other's underwear. You can probably tell from their gait pattern if they're unwell, or depressed, or about to impulsively step out into the road. In which case your internet-of-things enabled sidewalk can notify any automobiles in the vicinity to steer wide of the self-propelled traffic obstacle.
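None of this needs exotic software. As a fanciful sketch of the ice-warning behavior described above — every name and threshold here is invented for illustration, and the sensor and transceiver are simulated stand-ins, not any real device API:

```python
import random

FREEZING_C = 0.0  # warning threshold, chosen for illustration

def read_temperature_c():
    """Stand-in for an on-die MEMS temperature sensor."""
    return random.uniform(-5.0, 15.0)

def broadcast(message, readings):
    """Stand-in for the low-power transceiver: just log the packet."""
    print(f"{message}: {readings}")

def slab_tick(readings=None):
    """One pass of the paving slab's monitoring loop; returns True if it warned."""
    readings = readings if readings is not None else [read_temperature_c()]
    if min(readings) <= FREEZING_C:
        broadcast("ICE WARNING to neighboring slabs and traffic", readings)
        return True
    return False

slab_tick([-2.5])   # below freezing: warns its neighbors
slab_tick([12.0])   # nothing to report
```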
It's not just automobiles and paving slabs that have internet-connected processors in them in 2034, of course. Your domestic washing machine is going to have a much simpler user interface, for one thing: you shove clothing items inside it and it asks them how they want to be washed, then moans at you until you remove the crimson-dyed tee shirt from the batch of whites that will otherwise come out pink.
And meanwhile your cheap Indonesian toaster oven has a concealed processor embedded in its power cable that is being rented out by the hour to spammers or bitcoin miners or whatever the equivalent theft-of-service nuisance threat happens to be in 2034.
In fact, by 2034, thanks to the fallout left behind by the end of Moore's law and its corollary Koomey's law (that power consumption per MIP decreases by 50% every 18 months), we can reasonably assume that any object more durable than a bar of soap and with a retail value of over $5 probably has as much computing power as your laptop today — and if you can't think of a use for it, the advertising industry will be happy to do so for you (because we have, for better or worse, chosen advertising as the underlying business model for monetizing the internet: and the internet of things is, after all, an out-growth of the internet).
The world of 2034 is going to superficially, outwardly, resemble the world of 2014, subject to some obvious minor differences — more extreme weather, more expensive gas — but there are going to be some really creepy differences under the surface. In particular, with the build-out of the internet of things and the stabilization of standards once the semiconductor revolution has run its course, the world of 2034 is going to be dominated by metadata.
Today in 2014 we can reasonably expect to be tracked by CCTV whenever we show our faces in public, and for any photograph of us to be uploaded to Facebook and tagged by location, time, and identity using face recognition software. We know our phones are tracking us from picocell to picocell and, at the behest of the NSA, can be turned into bugging devices without our knowledge or consent (as long as we're locked out of our own baseband processors).
By 2034 the monitoring is going to be even more pervasive. The NETMIT group at MIT's Computer Science and Artificial Intelligence Lab are currently using WiFi signals to detect the breathing and heart rate of individuals in a room: wireless transmitters with steerable phased-array antennae that can beam bandwidth through a house are by definition excellent wall-penetrating radar devices, and just as the NSA has rooted many domestic routers to inspect our packets, so we can expect the next generation of spies to attempt to use our routers to examine our bodies.
The internet of things needs to be able to rapidly create dynamic routing tables so that objects can communicate with each other, and a corollary of that requirement is that everything knows where it is and who it belongs to and who has permission to use it. This has good consequences and bad consequences.
Shoplifting and theft are going to be difficult to get away with in a world where unsold goods know when they're being abducted and call for help. That's good. Collapsing and dying of a stroke in your own home may also become a rare event, if our environment is smart enough to monitor us for anomalous behavior indicative of a medical emergency.
On the other hand, do you really want your exact pattern of eye movements across the screen of your smartphone to be monitored and analyzed, the better to beam tailored advertisements into your peripheral field of vision while you check your email? Or every conversation you have in any public space within range of a microphone to be converted via speech-to-text, indexed, and analyzed by the NSA's server farms for the Bayesian spoor of conspiracy? Or for your implanted cardiac defibrillator to be rooted and held to ransom by a piece of malware that doesn't know it's running on a life-critical medical device?
Arguably, these are the paranoid worries of a poopy-head from 2014, not a savvy native of 2034 who's had two decades to get used to the emergence of these new phenomena. To an actual denizen of 2034, one who's been sitting in the steadily warming saucepan of water for two decades, the concerns will be different.
The worst thing about the internet of things is that it's built atop the seventy-year-old bones of ARPAnet. It's insecure by design, horribly flawed, and susceptible to subversion. Back in the early days, national security bureaucrats deliberately weakened the protocols for computer-to-computer communications so that they could monitor at will, never quite anticipating that it would become so fundamental to our entire civilization that by so doing, they were preparing the field for entire criminal industries and rendering what should have been secure infrastructure vulnerable to what is unironically termed cyber-attack. Vetoing endpoint encryption in TCP might have seemed like a good idea in the early 1980s, when only a few hundred thousand people — mostly industry professionals and scientists — were expected to use the internet, but it's a disaster when your automobile needs a reliable, secure stream of trusted environment data to tell it that it's safe to turn the next corner.
But.
We hit the buffers at the end of the railroad track of exponentially accelerating semiconductor tech. The industry downsized, and aged. There's no money to develop and roll out new standards, nor the willpower to do so: trying to secure the internet of things is like trying to switch the USA to driving on the left, or using the metric system. Pre-existing infrastructure has tremendous cultural inertia: to change it you first have to flatten it, and nobody much wants to destroy western civilization in order to clear the ground for rolling out IPv8.
So here's my takeaway list of bullet-points for 2034:
It's going to superficially resemble 2014.
However, every object in the real world is going to be providing a constant stream of metadata about its environment — and I mean every object.
The frameworks used for channeling this firehose of environment data are going to be insecure and ramshackle, with foundations built on decades-old design errors.
The commercial internet funding model of 1994 — advertising — is still influential, and its blind-spots underpin the attitude of the internet of things to our privacy and security.
How physical products are manufactured and distributed may be quite different from 2014. In particular, expect more 3D printing at end-points and less long-range shipment of centrally manufactured products. But in many cases, how we use the products may be the same.
The trend towards fewer people being employed in manufacturing, and greater automation of service jobs, will continue: our current societal model, whereby we work to earn money with which to buy the goods and services we need, may not be sustainable in the face of a continuing squeeze on employment. But since when has consistency or coherency or even humanity been a prerequisite of any human civilization in history? We'll muddle on, even when an objective observer might look at us and shake her head in despair.
And now, the state of Perl in 2034
(I'm reading from the keynote talk for YAPC::NA 2034 by Charles Stross, recovering Perl hacker, science fiction writer, and card-carrying Mad Scientist — Paratemporal Meddling Management Group, speciality: screwing up history).
Frankly I'm kind of astonished to be standing here, talking to you about a programming language that first escaped into the wild forty-seven years ago. And not just because my continued existence is a tribute to medical science: it's because the half-life of a programming language, back when people were still inventing new programming languages, was typically about ten years.
Programming languages come and go, and mostly they go.
Back in the dim and distant past, programming languages were rare. We rode out the 1950s on just FORTRAN, LISP, and the embryonic product of the CODASYL Conference on Data Systems Languages, COBOL. Then the 1960s saw a small pre-Cambrian explosion, bequeathing us ALGOL, GOTO considered harmful, BASIC (as supporting evidence for the prosecution), and a bunch of hopeful monsters like SNOBOL4, BCPL, and Pascal, some of which went on the rampage and did enormous damage to our ideas of what computers are good for.
Then, between about 1970 and 1990, compiler design wormed its way into the syllabus of undergraduate CS degree courses, and the number of languages mushroomed. Even though most sane CS students stick to re-implementing Lisp in Haskell and similar five-finger exercises, there are enough fools out there who suffer from the delusion that their ideas are not only new but useful to other people to keep the linguistic taxonomists in business.
Student projects seldom have the opportunity to do much harm — for a language to do real damage it needs a flag and an army — but if by some mischance a frustrated language designer later finds themselves in a managerial role at a company that ships code, they can inflict their personal demons on everyone unlucky enough to be caught within the blast radius of a proprietary platform and a supercritical mass of extremely bad ideas.
Much more rarely, a language designer actually has something useful to say — not just an urge to scratch a personal itch, but an urge to scratch an itch that lots of other programmers share. The degree of success with which their ideas are met often depends as much on the timing — when they go public — as on the content. Which brings me to the matter at hand ...
Even twenty years ago, in 2014, Perl was no longer a sexy paradigm-busting newcomer but a staid middle-aged citizen, living in a sprawling but somehow cluttered mansion full of fussily decorated modules of questionable utility. That people are still gathering to talk about new developments in Perl after 47 years is ... well, no crazier than the idea of drafting new standards for COBOL in the 21st century would have seemed if you'd put it to Grace Hopper in the early 1960s. Much less Object-Oriented COBOL. Or the 2018 standard for Functional COBOL with immutable objects.
So why is Perl still going in 2034, and why is there any prospect whatsoever of it still being a thing in 2134?
By rights, Perl in 2034 ought to have been a dead language. The law of averages is against it: the half-life of a programming language in the latter half of the 20th century was around a decade, and as a hold-over from 1987 it should be well past its sell-by date.
Perl, like other scripting languages of the late 20th century, was susceptible to a decade-long cycle of fashion trends. In the 1990s it was all about the web, and in particular the web 1.0 transactional model — now dying, if not dead, replaced by more sophisticated client/server or distributed processing frameworks. While Perl was always far more than just a scripting language for writing quick and dirty server-side CGI scripts, that's the context in which many programmers first encountered it. And indeed, many people approached Perl as if they thought it was a slightly less web-specific version of PHP.
But Perl isn't PHP — any more than it's Python or Ruby. Perl 5 is a powerful, expressive, general-purpose high-level programming language with a huge archive of modules for processing data and interfacing to databases. Perl 6 — if and when we get there — is almost a different beast, essentially a toolkit for creating application domain-specific sub-languages. And while Perl and its modules were once a bit of a beast to build (as anyone who ever had to build perl 5 from source on a workstation powered by a 33MHz 68030 will recall), by today's standards it's svelte and fast.
If what you're juggling is a city-wide street network with an average of one processor per paving slab, generating metadata at a rate of megabytes per minute per square metre of sidewalk, it pays to distill down your data as close to source as possible. And if those paving slabs are all running some descendant of Linux and talking to each other over IP, then some kind of data reduction and data mangling language is probably the ideal duct tape to hold the whole thing together.
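As a sketch of what that edge-side distillation might look like — the sensor fields, record shapes, and per-slab summary here are all hypothetical, invented for illustration — a one-pass Perl reducer that collapses a stream of raw readings into one compact record per paving slab before anything travels upstream:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical edge-side reducer: fold a stream of per-slab sensor
# readings into one summary record per slab, so only the distilled
# data crosses the network. Field names and units are assumptions.
sub distill {
    my @readings = @_;    # each: { slab => ID, temp => degC, load => kg }
    my %summary;
    for my $r (@readings) {
        my $s = $summary{ $r->{slab} }
            ||= { n => 0, temp_sum => 0, max_load => 0 };
        $s->{n}++;
        $s->{temp_sum} += $r->{temp};
        $s->{max_load} = $r->{load} if $r->{load} > $s->{max_load};
    }
    # one compact record per slab: mean temperature, peak load
    return map {
        {
            slab      => $_,
            mean_temp => $summary{$_}{temp_sum} / $summary{$_}{n},
            max_load  => $summary{$_}{max_load},
        }
    } sort keys %summary;
}

my @out = distill(
    { slab => 'A1', temp => 10, load => 70 },
    { slab => 'A1', temp => 12, load => 90 },
    { slab => 'B2', temp => 11, load => 60 },
);
printf "%s mean=%.1f peak=%d\n", @$_{qw(slab mean_temp max_load)} for @out;
```

Three raw readings become two summary records; scale the same idea up and megabytes per minute of slab chatter shrink to a trickle of aggregates, which is exactly the kind of duct-tape job a data-mangling language is for.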
But Perl also has a secret weapon in the language longevity wars. And that secret weapon is: you.
Back when I went to my first YAPC in London in the late 1990s, I had no idea that I'd return to one in Orlando in 2014 and see several familiar faces in the audience. And I'm pretty sure that in 2034 my future hypothetical self will recognize some of those faces again in the audience at YAPC::NA 2034.
Perl has a culture — curated since the early days via the perl5-porters mailing list, the comp.lang.perl Usenet group, and elsewhere. I don't know whether it was intentional or not, but for better or worse Perl tends to attract die-hard loyalists, and it has a culture not only of language use but of contribution to the pool of extensions and modules known as CPAN.
And Perl was invented just late enough in the semiconductor revolution that it stands a chance of still being in use by a die-hard core of loyalists when the progression dictated by Moore's law comes to an end, and everything slows down.
If a technology is invented and discarded during a technological revolution, before the revolution matures and reaches the limits dictated by physical law, then it will probably remain forgotten, or a niche player at best. In aerospace, perhaps the classic examples are the biplane and the rigid airship or Zeppelin. They worked, but they were inefficient compared to alternative designs, and so they are unlikely to be resurrected. But if a technology was still in use when the revolution ended and the dust settled, then it will probably remain in use for a very long time. The variable-pitch propeller, the turbofan, and the aileron: it's hard to see any of them vanishing from the skies any time soon.
Perl is, in 2014, a mature language — but it's not a dead language. The community of Perl loyalists is aging and greying, but we're still here and still relevant. And the revolution is due to end some time in the next ten years. If Perl is still relevant in 2024, then it will certainly still be relevant in 2034. The world of operating systems (research into which, as Rob Pike lamented, stagnated after 1990) and the world of programming languages are intimately dependent on the pace of change of the underlying hardware; once the hardware freezes, or switches to incremental change over a period of decades, the drive to develop new tools and languages will evaporate.
Just keep going, folks. Focus on modules. Focus on unit testing. Focus on big data, on data mining and transformation, on large-scale distributed low-energy processing. Focus on staying alive. Perl is 27 this year, in 2014. If Perl is still in use in 2024, then the odds are good that it will make it to 2034, and then to 2114.
Let's hope we get that cure for old age: people are going to need you to still be around for a long time to come!
Thank you and good night.