No need to rely on the Tesla Powerwall
Amazon’s push-button technology is about augmenting your appliances and your life
Amazon Dash is expanding its offerings in ways that have had us thinking. This seemingly unnecessary technology has a lot of potential as it evolves. The best part is that, if Amazon develops it correctly, Dash will only be limited by your own creativity.
Environmental activists Robin Wood and the agency Grabarz & Partner called on Illusion to create visuals illustrating the idea that destroying nature amounts to destroying life. The incredible result, achieved through a play on double exposure, perfectly underscores the urgency of the situation and the message it carries. Impressive.
Unless you're Izzy Swan, a vehicle is something you buy. It's ready to go, a turnkey affair, which is probably where that phrase came from. But this interesting concept by a pair of industrial designers proposes an alternate system, whereby a corporation would sell the motive power and local communities would build the vehicle bodies themselves.
It was designed with a specific region in mind, the impoverished and flooding-prone Navotas region in the Philippines, where folks are reportedly living in conditions like this:
Navotas is located in the central region of the Philippines. It's a typical slum where 25 million people live in floating bamboo houses, because Navotas is a flood-prone region.
When the rainy season begins, the whole village is covered with waste from the landfills. The whole village suffers from epidemics due to poor sanitation.
Lee and Jeong's idea is that Michelin would manufacture a pedaling rig, similar to what you'd see on a recumbent bike, and provide it along with axles, a crankshaft and four paddle-wheel assemblies. The target user would then lash widely-available local bamboo into a structure that the Michelin components would integrate with.
Part of their concept entails kitting the raft out with a series of Lifestraw filters, the idea being that the craft would purify the water as it went along. Obviously this doesn't seem realistic from a volume perspective, particularly after looking at those photos above, but this is a concept, after all.
Lee and Jeong took third place in the competition, whose theme was "Mobility for All." You can see the rest of the entries here.
Cycling has long been hailed as a kind of urban panacea: an all-purpose solution for congestion and for the first-and-last mile, getting people to and from a transportation hub. Now Beijing's People's Architecture Office has designed a bike that is, by a slight stretch of the imagination, a destination in itself. Billed as "architecture for events and architecture as event," the People's Canopy is a kind of pedal-powered pop-up pavilion: the expandable two-story canopy is set on unicyclic "feet," such that it functions as its own transportation system, a simple and elegant form of mobile placemaking.
The People's Canopy was commissioned by In Certain Places — an ongoing program of urban interventions and events — for the Lancashire Encounter, an arts festival in the city of Preston, UK in September of last year. As ICP curators Charles Quick and Elaine Speight and PAO principal James Shen explain in the expository video, it is a connective space in several ways, not only as a civic gesture of activating an urban space but a kind of cultural exchange, transposing the vernacular canopies of Southern China to a rainy locale on the other side of the world.
Shen also shared some additional insights about the event, production and future of the People's Canopy:
On one hand, the cycling aspect is a matter of practicality, as it does make it very convenient to move the canopies across large distances for use at various sites. On the other hand, it makes a spectacle out of this communal activity. For eight people to cycle together and navigate the canopies through the city requires real teamwork. At 10m long and 4.5m high, the double-decker-bus size of the vehicles and the parade-like aspect of the cycling make moving the canopies around an urban event. And when parked and expanded, the canopies take on a different role and become backdrops for events.
Cyclists on bicycles flanked the canopies as they were being cycled and there were people cycling and skateboarding under the canopies when they were expanded. The project certainly promotes cycling in the city, but it is also a celebration of public occupation of space: The canopies transform roads for pedestrian use, allowing cities to experiment with multifunctional uses of space. Simply providing covering does a lot to promote public activity, especially in a rainy city like Preston.
A dozen canopies were designed and manufactured in China then shipped to Preston in two shipping containers where they were then assembled. So far they have been moved between at least four locations in the city, including the University of Central Lancashire and the center of Preston. The canopies were physically cycled between each of these locations by volunteers.
The People's Canopies were put in storage in Preston after last year's event and will continue to be used in Preston for years to come. Since the festival, another set of People's Canopies was installed in Hong Kong for the Urbanism/Architecture Biennale. They are there until March and are currently being used to host a variety of events. Those who have taken a seat in the People's Canopy include filmmaker Christopher Doyle and Mr. Mobile Architecture Peter Cook, who immediately incorporated the project into his keynote talk at the biennale.
We've also received a request from an organization in Kolkata, India, that wants to use the People's Canopies for a food festival there. I imagine we'll be getting more interest from people as we start to promote the project. And these are mass-produced, prefabricated, modular structures. I would consider the People's Canopy to be product design on an architectural scale. But with canopies weighing up to one ton each, the need for street closures, and structural and wind-load certification, it's not such a simple thing for People's Canopies to roll into a city near you. Because they are architectural in size, they come with a range of health and safety concerns.
In addition to buildings like Palazzo Italia, air-clearing concrete could pave sidewalks, highways, or other places with heavy pollution.
A new construction material could make the concrete jungle function a bit more like a natural one. Palazzo Italia, which debuted at the 2015 World’s Fair in Milan, is the first building made of concrete that’s designed to clear the air.
The facade, a mixture of cement and titanium dioxide, captures nitrogen-oxide pollution and converts it into a harmless salt that easily rinses off the walls when it rains.
Palazzo Italia also consumes 40 percent less energy than a conventional building of its size, and emits zero air pollution.
“We wanted the building to be an osmotic organism,” says lead architect Michele Molè—like a tree that breathes in carbon dioxide and exhales oxygen.

140: Power, in kilowatts, generated by the building’s photovoltaic glass roof, enough to power nearly 11,000 CFL lightbulbs
This article was originally published in the January/February 2016 issue of Popular Science.
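That roof stat survives a quick back-of-envelope check: 140 kW spread across nearly 11,000 bulbs works out to roughly 13 watts per bulb, which is a common CFL rating. A minimal sketch of the arithmetic (the 140 kW and 11,000-bulb figures come from the article; the per-bulb wattage is our inference):

```python
# Sanity check of the stat: a 140 kW photovoltaic glass roof
# powering "nearly 11,000 CFL lightbulbs".
roof_output_w = 140 * 1000          # 140 kilowatts, expressed in watts
num_bulbs = 11_000
watts_per_bulb = roof_output_w / num_bulbs
print(f"{watts_per_bulb:.1f} W per bulb")  # about 12.7 W, a typical CFL rating
```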
In our Weekly Maker's Roundup, we post videos from a cross-section of folks whom we hope will appeal to the Core77 audience. What they all have in common is that they create physical objects and record the act, but beyond that they have wildly different styles, skillsets and personalities. In fact we'd be hard-pressed to place makers of any sort into highly specific categories.
That hasn't stopped Autodesk from trying, however. In an effort to "[investigate] the personas that form the maker and designer community, to see if they fall into specific categories, and how these differ in each culture," the company worked with a Professor of Psychology from City University London to pigeonhole makers into five personas:
They then came up with a questionnaire designed to reveal which of these categories any given maker falls into, and surveyed over 4,000 people in six countries:
Curious to see which category you fall into? The online questionnaire is here.
Years ago, PumpTire developed a self-inflating bicycle tire that used the compression of the tire on the ground to suck air inside. Unfortunately, the entire tire had to be replaced once it wore down. Now the developers have used the same principles to create a self-inflating bike tube – and it’s compatible with standard bike tires.
Iceland already runs on 100 percent renewable energy. Most of it comes from geothermal sources, but researchers have been working on ways to harness the incredibly powerful winds in the region as well. Traditional wind turbines would spin out of control in the high winds common to the small country, but one bright inventor realized that an entirely different type of wind turbine could withstand the winds. In fact, IceWind’s CW1000 wind turbine may be even better than its skinny counterparts.
Facing a month-long stretch of pollution levels above a healthy threshold, the Italian cities of Milan and Rome have decided to implement a three-day ban on private car use. The ban will take place this week from Monday through Wednesday, during which time car use will not be permitted from 10:00 am until 4:00 pm. Vehicle pollution in the country has become more potent during a recent dry spell and accounts for an increase in acute cardiovascular disease, according to the AP. The few days of nearly carless activity will hopefully provide some needed respite for residents.
Gardeners and farmers who live in colder climates are well aware of the limitations posed by a short growing season. But these challenges often yield outstanding innovative practices, such as those used by Growing Power in Milwaukee, Wisconsin. Growing Power harnesses natural cycles to power a farm that produces over one million pounds of food every year. Because of its ultra efficient greenhouse system, Growing Power is able to continue its harvest even through the frigid Great Lakes winters.
Dating all the way back to Neolithic times, the mortise and tenon is the oldest wood joint known to mankind. While the specific provenance of the joint is unknown, I'm willing to bet the inventor wasn't a virgin.
In the thousands of years since, craftspeople have developed an almost absurd variety of joints, some of which you learned in the ID shop at school, some of which you've never heard of, and that one that you can always see in your head but have forgotten the name of. To help you remember for the next time you're building something out of wood, or to give you some alternatives for any current designs you're working on, here are some visual guides:
Here are some books that those of you researching or making joints may want to peruse:
If you've got any more joint resources you'd like to see included, please let us know in the comments and we'll expand this section.
The Floyd Bed is a truly adaptable bed frame designed for city living, and perfect for that foam mattress living on the floor: modular, sturdy and made to move with you (not to mention it's also beautiful!). It's engineered to be the last bed frame you'll have to buy.
From ex-frog-designer Michael McDaniel, here's a design for a disaster relief shelter that takes a very different approach from Ikea's Better Shelter. One of the first questions for those seeking to deploy such shelters is how to efficiently transport them. Ikea answered this by developing a flatpack design. McDaniel, on the other hand, has exploited draft angle to create a design that can be nested like coffee cups.
Like Ikea's Better Shelter, McDaniel's solution, called the Exo, requires four people to assemble. Beyond that, the differences are sharp: The former generates electricity; the latter requires it. The Better Shelter needs to be assembled from components and anchored to the ground, whereas the Exo is dropped into place on top of a base that has been filled with sand for stability. The Better Shelter sleeps four at floor level, whereas the Exo sleeps four via a pair of built-in bunk beds.
Let's take a closer look at the Exo, which is produced by McDaniel's company, Reaction:
Another key difference between the two designs is the ease-of-assembly factor, which can impact the ultimate application. The Exo's speedy set-up time makes it ideal for renting out to festivalgoers, campsites or folks looking to set up an Airbnb unit on their property.
Perhaps the largest difference, however, is cost. Turning back to disaster relief, what type of unit an organization will deploy boils down to two things, assuming availability: How much does it cost to buy, and how much does it cost to ship? The answer makes it apparent that the Better Shelter and the Exo will never be competing for the same "market": The Better Shelter rings in at about US $1,150, whereas the Exo runs a whopping $6,000 to $12,000 per unit. (Fortune says $6,000; Wired says $12,000. At press time we hadn't received clarification from Reaction's press rep.)
The dramatically higher cost means the Exo is probably destined for American use only. Indeed, Reaction's manufacturing base in Texas is an ideal location to serve America's disaster-prone Gulf Coast and Tornado Alley.
You may be wondering: Geography aside, wouldn't it still be cheaper for FEMA to order Better Shelters, presumably produced in Europe or Scandinavia, and have them shipped over? Maybe, but it's unlikely that will happen. According to an AP article from 2007, the famously bungling FEMA paid $19,000 per single family trailer following Hurricanes Katrina and Rita. And CNS News reports that the math got even worse by 2013, when FEMA was paying first $24,000, then $48,000 per trailer:
…The smaller park models (33 feet long and 12 feet wide) cost taxpayers about $24,000 each, half the amount FEMA is spending on 64-foot-long, 14-foot-wide manufactured housing units the agency is now using exclusively to house disaster victims.
And of course, one of those FEMA trailers takes up an entire truck for shipping. So economically speaking, the Exo is not competing with tents or Ikea's Better Shelter; it's competing with those FEMA trailers. Assuming there's pressure to buy American, the Exo seems a far better deal than the trailers and is far more efficient to ship.
If you're wondering where the higher cost of the Exo comes from, we assume the bulk of the cost comes from the proprietary materials developed to build it. You'll learn about those and hear McDaniel's development story in the next entry.
Name: Nicholas Felton
Occupation: Information designer
Current projects: I’m wrapping up my tenth and final Annual Report, doing a couple client projects, and finishing up a book that should be released in the spring.
Mission: My primary mission is to help people understand the data that they generate, and to express themselves or tell stories using that data.
When did you decide that you wanted to be a designer? I realized pretty early on that I wanted to work in design. Probably late middle school. I started getting internships in high school, working at a video production house and starting to do graphic and motion design. That led me to studying graphic design and starting my career in New York. But it wasn’t until around 2005 that I discovered the narrative potential of working with data, and how fulfilling I found it to work with.
Education: I went to the Rhode Island School of Design, where I got my undergraduate degree in graphic design. I’ve been working ever since.
First design job: My first design job out of school was at an advertising agency in New York. I was brought on at first to be an art director, and I was able to parlay that into a small, dedicated design group within the agency.
What was your big break? Working on the Annual Reports was certainly my big break, after several attempts of doing personal and professional projects that I thought would make a name for myself. It’s rooted in the same desire that drives journaling or a travel log, but is based on a single year. That was the one that truly started to carve out a substantial niche.
Describe your workspace: I have a shared office space in Brooklyn near the Navy Yard. We have a big white box with nice light and then a bunch of huge white tables with the requisite monitors and laptops everywhere. Eames chairs, nice shelves. Lots of books.
What is your most important tool? Certainly the laptop. Working with computers has made up for my inability to draw very well. A notebook is also very important for working out ideas.
What is the best part of your job? The most rewarding things I’ve done have been making products that have impacted other people’s lives. Taking some of the stuff I’ve learned in my own projects and translating that into experiences or applications that other people can use for their own goals. Working on the design of the Facebook timeline was probably the most impactful thing that I’ve done and maybe will do. On a smaller scale, there’s Reporter, which is an iOS app I released last year. It hasn’t had the kind of impact that timeline has, but on a personal level, meeting people who have said it’s improved their lives significantly—that’s always a great experience.
What is the worst part of your job? Working for myself. Not having help and trying to balance all the things that I want to do with generating an income and trying not to say no to great projects. The work-life balance gets kind of difficult sometimes.
What time do you get up and go to bed? Usually I get up at 7:30 or 8:00 a.m. and go to bed around midnight.
How do you procrastinate? The Internet probably, but also just the ways that you trick yourself into thinking you’re getting stuff done. Like cleaning up your desk and reorganizing; those tasks that feel productive but are basically just designed to delay the inevitable things you don’t want to do.
What is your favorite productivity tip or trick? Right now I’m trying to sprint on a thing that has to be done today, so I quit my e-mail. Just turned it off. If I need to check it, I’ll check it on the phone, but I’m leaving it off on my computer to make sure that that little red dot is not going to distract me. I can’t ignore the notification in Mail if it’s there.
What is the best-designed object in your home? One that I sometimes marvel at is the Global knife set. They’re these Japanese knives that have a monolithic construction where the handle turns into the blade. Some of them were gifts but I started the collection myself, and I really take the time to appreciate them every once in a while.
Who is your design hero? I don’t know if he’s a hero but I have a lot of respect for Brian Eno. He’s a person who I think has done a good job with the work-life balance. I casually knew his music, but then I saw him speak once and was kind of transfixed. He also published a diary that was a year in his life. It was a very intimate way of learning some of the details about all the people he was collaborating with, and how he was putting on art exhibits at the same time he was working on music, traveling the globe, seeing his family and interacting with them. I’ve thought about that quite a bit since I read it. He seems to always be pushing himself, and he’s someone I certainly look up to.
What is the most important quality in a designer? Stubbornness. Pursuing one idea relentlessly. It has served me well.
What is the most widespread misunderstanding about design or designers? I think one of the things that’s not widely known is how much design factors into the systems we use continuously (or take for granted). Like all the design that goes into the typography and the systems that make the highway system work. I don’t know if people really contemplate the depth to which design is a functional part of our lives, and not just decorative.
What is exciting you in design right now? I’ve moved more and more into using code to create my design. So I tend to be pretty inspired by the people who are at the forefront of the field. I’m always trying to expand my abilities. Practice, practice, practice—to be able to make the kind of things that I want to create.
We think of trailer park homes as being lamentably small--unless you upgrade to a double-wide. But what if you could upgrade to a quadruple-wide? Or heck, a quadruple-wide and triple-high? Invention and patent maven Semenov Dahir Kurmanbievich has worked out CG animations (if not the practical engineering) showing how a fold-out trailer-borne house would unfurl, and just how big it could be:
It would be neat if every trailer park secretly operated like this, with everyone folding their houses up tight for appearance's sake when they hear visitors are coming.
Counter Culture Labs
Is this where generic insulin might be discovered?
The 370 million people worldwide with diabetes rely on injections of insulin to regulate the amount of sugar in their blood, since their bodies can’t make the hormone themselves. Since there are no generic versions available in the United States, insulin is very expensive—that cost was likely a large proportion of the $176 billion in medical expenditures incurred by diabetes patients in 2012 alone. Now a team of biohackers with Counter Culture Labs, a community lab in Oakland, California, wants to pave the way towards generic insulin, and they’ve started a crowdfunding page for their project.
The biohackers’ goal is fairly straightforward: To make and refine synthetic insulin from E. coli bacteria and document their process. The result, they hope, will be that a generic pharmaceutical company will use that protocol to make insulin that’s affordable for diabetes patients all over the world. There are several types of insulin that have been approved by the Food and Drug Administration—some are long-lasting, others are fast-acting—but they are all protected by patents, so there are no generic versions. “It takes legitimate scientific research to create a biosimilar generic [drug], and generic companies don’t want to do scientific research,” says Maureen Muldavin, a program manager at Counter Culture Labs and a biohacker involved in the Open Insulin project.
Biosimilar drugs are not easy to concoct, Muldavin says, and insulin in particular is a challenge because of its structure. “It’s not as simple as putting the genetic code [in the bacteria] and out pops insulin,” she says. With the money raised in their crowdfunding campaign, the researchers plan to spend the next year figuring out how to make insulin from E. coli and purifying it. Since the resulting insulin probably won’t be pure enough to inject into human patients, they will check that their insulin is viable through other tests with antibodies. With the open-sourced insulin protocol, pharmaceutical companies could seek FDA approval to start producing generic insulin.
Kevin Riggs, an instructor in medicine at Johns Hopkins University who co-authored an article about generic insulin earlier this year, doesn’t think that Open Insulin will be enough to bring a generic insulin drug to the market. “I don't think the major hurdle is that the companies don't know how to make insulin, because that part is reasonably straightforward,” he says. “The real hurdles are getting the drug approved by the FDA (and since insulin is a biologic drug, it requires a lot more original data than an application for a small-molecule generic would), and then upfront manufacturing costs (because making a biologic drug is different, so it requires different equipment).” He suspects that it will take “an altruistic entity with a lot of start-up money” to make generic insulin commercially available.
But the biohackers behind Open Insulin also have their own agenda—they want to show the world that biohackers can make real scientific contributions, even if they’re not affiliated with an established world-class laboratory. “We want to show that a bunch of people with varying levels of scientific training can come together with a minimal budget and community space and do legitimate scientific research,” Muldavin says.
Even if their project is successful, it doesn’t mean that you can brew your own insulin at home quite yet—Muldavin estimates that’s about 20 years away. In that amount of time, she anticipates that researchers will be able to sell kits so that patients can brew all sorts of their own drugs at home. “We’re truly in a golden age of biology,” Muldavin says. “Every year it gets so much easier and cheaper to do genetic engineering.”
To meet their goal, the biohackers need to raise just over $3,000 in 16 days.
A century before Apollo astronauts walked on the Moon, science fiction told the public of a future with outposts on the Moon and orbiting high above the Earth. In the decades that followed, these earliest space station concepts evolved into orbital platforms that could launch manned missions to the Moon and Mars. After its inception, NASA picked up where visionaries left off, dabbling in space stations of varying layouts and capabilities before building the International Space Station that orbits the Earth today.
Austrian Herman Noordung's 1929 concept for a toroidal space station that would generate artificial gravity by spinning.
Earliest Concepts in Science Fiction
Between October 1869 and February of 1870, readers of the Atlantic Monthly were introduced to the idea of living off the Earth through Edward Everett Hale’s science fiction story, “The Brick Moon.” Hale’s story tells of a 200-foot-diameter brick sphere designed to orbit the Earth as a navigational aid for ships. But the brick moon is accidentally put into orbit with people on board. Hale followed this story with a sequel, “Life in the Brick Moon,” wherein the characters find ways to survive in their new home. Not only do they live in the artificial satellite, they communicate with the Earth, turning the brick moon into a communications satellite.
The term “space station” was coined more than fifty years later by Romanian rocket pioneer Hermann Oberth. Writing in 1923, he envisioned a platform orbiting the Earth that would serve as a starting point for missions to the Moon and Mars. His idea was shared by Austrian Herman Noordung who, in 1928, published a blueprint for an orbital outpost composed of multiple modules, each with its own unique function.
Both Oberth and Noordung imagined their space stations would be launched by massive rockets, and an ocean away American engineer Robert Goddard was taking major strides towards developing these rockets. Goddard was among the first to experiment successfully with liquid propulsion, harnessing a reaction far more powerful than the black powder rockets that were prevalent at the time. And he wasn’t the only one. In Germany, an amateur rocket group called the Verein für Raumschiffahrt (Society for Space Travel) was also having moderate success with liquid-propelled rockets. In 1932, one member of the VfR, Wernher von Braun, was handpicked by the German Army to continue his work, developing liquid rockets for the military.
Von Braun's Space Station
Von Braun presents his toroidal space station on Walt Disney's Tomorrowland TV show.
Military funding from the Reich before and during the Second World War brought rocketry to maturity, and by the war’s end von Braun’s team had a functioning rocket on their hands with the V-2. The technology was picked up by both American and Soviet scientists, but the bulk of leftover hardware and key team members migrated to the United States in 1945, including von Braun who added "spaceflight popularizer” to his resume before long.
Through collaborations with Collier’s Magazine and Walt Disney on the Tomorrowland TV series, von Braun brought his vision of a wheel-shaped space station to the public. This orbital platform would be an Earth observation post, a microgravity laboratory, as well as an observatory. It would also be a starting point for missions to the Moon and Mars, serving as a cornerstone in man’s exploration of the inner solar system.
NASA's Inflatable Wheel
NASA explored inflatable toroidal space stations in the early 1960s. Most of these concepts were studied at NASA's Langley Research Center.
Space Stations for the New Space Agency
These futuristic space station concepts started inching towards reality in 1958. NASA was created to manage all spaceflight programs with the overarching goal of getting an astronaut into space before the Soviet Union launched a cosmonaut. Anticipating putting the first man in space, NASA was already considering a space station as its second program in 1959, something that would help the agency learn to live and work in space before serving as a jumping-off point for manned missions to the Moon. But Yuri Gagarin beat an American astronaut into orbit, derailing NASA’s long-range plan. President John Kennedy considered all possible next steps for America in space and determined that a mission to the Moon would be a better show of technological dominance than a space station.
NASA was laser-focused on the Moon landing goal throughout the 1960s, but a space station was never truly off the drawing board. Spurred on by Apollo’s high funding level and the growing national interest in spaceflight, an orbital outpost was again seriously considered in 1964 as NASA’s main post-Apollo program. Four years later, it was officially on the books.
This 1969 concept was for a 100-man space station called Space Base. It was intended as a laboratory and living space that could double as a home port for nuclear-powered space tugs that could ferry astronauts to the Moon.
In 1969, NASA proposed a 100-man space station called Space Base. The idea was to build a platform that would serve as a laboratory for scientific and industry-sponsored microgravity experiments as well as a home port for nuclear-powered space tugs to ferry astronauts to and from an outpost on the Moon. With orbital assembly scheduled for completion by 1975, it didn’t take NASA long to realize that the cost of using expendable rockets (like the Saturn family that launched Apollo to the Moon) to both build and supply Space Base would exceed the construction cost of the station itself. The only way the project wouldn’t be a loss for NASA would be to build and supply it with a reusable vehicle, a sort of shuttle to ferry supplies and astronauts to Space Base over multiple missions. This became the Space Transportation System, more colloquially known as the space shuttle.
But a space shuttle was in the future. For the moment, NASA had excess Apollo hardware from three cancelled lunar missions and opted to turn it into a short-term space station program called Skylab. The station itself was made from a repurposed S-IVB upper stage and launched on the last Saturn V in May of 1973. In the year that followed, Skylab hosted three crews, ultimately proving that humans can not only live and work in space but can be instrumental in large-scale orbital construction efforts as well. The first Skylab crew performed emergency spacewalks to successfully free a jammed solar array, effectively saving the station.
Skylab was America's first space station. Launched in 1973, it hosted three crews before it was abandoned in orbit and left to fall to Earth. It reentered the atmosphere in 1979.
Skylab was never intended to be a long-term space station; it wasn’t designed to be resupplied, refueled, or boosted into a higher orbit. After the last crew left in February of 1974, the station was abandoned and its orbit was left to decay. NASA briefly considered using a space shuttle orbiter to boost it into a higher orbit and revisit the station, but increased solar activity had expanded the Earth’s atmosphere, the fringes of which were dragging on the station more than expected. NASA simply couldn't have a shuttle ready in time. Skylab reentered the Earth’s atmosphere over western Australia in July of 1979.
The Beginnings of a Real Space Station
As the Skylab program unfolded, NASA forged ahead with the reusable space shuttle that would facilitate building and maintaining a long-lasting space station. But the project as initially conceived by the agency proved too ambitious for its shrinking post-Apollo budget. The economic, political, social, and cultural landscape in the early 1970s wasn’t conducive to another program on the same scale as Apollo. The agency was forced to choose between an orbital outpost and the vehicle needed to build it. The latter won, and the space station was put on hold while NASA developed its space shuttle.
With the station’s future uncertain, NASA began exploring the potential of working with international partners to defray its cost. In 1973, the United States and Europe formally entered into a partnership that would see the European Space Agency supply mini laboratory modules called Spacelabs to NASA. These small units were designed to launch in the shuttle’s massive payload bay and serve as an experiment facility for up to three weeks in orbit, a proof of concept for a later station. The first Spacelab reached orbit in 1983 as part of the payload of STS-9.
The Dual Keel Space Station
NASA's dual keel space station layout.
From Dream to Reality
In his State of the Union Address on January 25, 1984, President Ronald Reagan called for NASA to collaborate with international partners to build a space station within a decade. It was the political backing the space station program desperately needed. With congressional support and a formal presidential mandate in place, the agency established the Space Station Program Office in April and issued requests for proposals to industry leaders that fall. Two years later, Japan and Europe signed on to contribute modules and Canada agreed to supply a manipulator arm. The station emerged from these early design stages with a dual keel arrangement, its central truss holding the main living and working quarters as well as the solar arrays.
But setbacks seemed to trump every step forward. One problem was the steadily rising cost. NASA’s original projection of $8 billion for three separate facilities — the main living space and two automated laboratory platforms — proved far too low. The Challenger disaster also took its toll on the space station. The loss of seven astronauts raised safety concerns that ended with the decision to give astronauts on board the space station an escape system. This forced design changes that in turn increased the station’s weight.
This 1991 concept art shows space station Freedom in orbit around the Earth. Freedom was the first true space station concept NASA tried to get off the ground. It was designed to be the launch point for missions to the Moon and Mars in addition to an orbital living and work space.
The solution was to replace the dual keel arrangement with a single truss design and to make the laboratory modules smaller overall. This new design was finalized in 1987. In 1988, Reagan gave the station a name: Freedom.
Bush Puts NASA on a Path to Mars
In July of 1989, just six months after taking office, President George Bush attempted to have his own “Kennedy moment.” In a speech commemorating the 20th anniversary of the Apollo 11 mission, he called for the United States to return men to the Moon and send astronauts on to Mars within three decades. And he endorsed space station Freedom as the cornerstone of this long-range effort. The plan was, roughly, a tripartite one. In the 1990s, construction of space station Freedom was to be NASA’s main endeavour. Lunar missions were slated to resume in the early 21st century with the establishment of a lunar base for long-term exploration. These two steps would lay the groundwork for a manned mission to Mars as early as the late 2010s. The ultimate goal was a permanent outpost on the red planet.
Bush’s call for a Mars landing yielded a 90-day study to solidify the program’s prospective timeline, goals, and cost. The idea was for Freedom to evolve alongside the effort to land men on Mars. It would serve as an orbital test bed where NASA would iron out the technologies needed to support long-duration deep space missions, and with crews staying on board for up to six months at a time it would be a way for the agency to learn about human survival in space in the relative safety of low Earth orbit.
This concept art from 1989 shows the ferry spacecraft used to take astronauts between a space station and the Moon, and possibly Mars.
The subsequent phase of lunar missions would rely heavily on Freedom. The crew, vehicles, and supplies would launch to the space station where they would load into a transfer vehicle. That transfer vehicle would then travel to low lunar orbit where it would meet an excursion vehicle that would either be waiting in a parking orbit or would launch from the surface to meet the arriving spacecraft. The excursion module would then take the crew to the lunar surface while the transfer vehicle would return to Freedom for maintenance and resupply. The crew could stay on the Moon for as long as a year, serviced by that same transfer vehicle ferrying between Freedom and the Moon. A similar arrangement was planned for missions to Mars.
From Freedom to ISS
As the 1990s dawned, Freedom was getting increasingly heavy and complicated with new requirements like the provision for spacewalks to support in-orbit assembly. Freedom’s cost rose to $38.3 billion, a figure that now included the shuttle launches but was still a far cry from the original $8 billion estimate. In 1993, President Bill Clinton called for the station to be redesigned once again in an attempt to lower the cost and bring in more international partners. Three redesigned stations were put forth, and the proposal called Alpha was chosen by the White House.
This 1995 concept shows space station Alpha, an early version of what eventually became the International Space Station.
Alpha used 75 percent of the hardware from Freedom, and before long Russia offered pieces of its unflown Mir 2 space station to lower the overall cost. This new station developed into one that promised to be far more capable than Freedom. In the course of the redefinition process, Alpha took on the moniker “International Space Station.” NASA’s Johnson Space Center became the lead center for the program, and Boeing signed on as prime contractor.
The ISS program kicked off with the Shuttle-Mir program, the first cooperation between the United States and Russia since 1975’s Apollo-Soyuz Test Project. In February of 1994, Sergei Krikalev became the first Russian cosmonaut to fly on a shuttle, as part of the STS-60 crew. A year later, the orbiter Discovery rendezvoused with Mir during the STS-63 mission. In March of 1995, U.S. astronaut Dr. Norman Thagard launched with two cosmonauts aboard Soyuz-TM 21 for a three-month stay on Mir. At the end of the mission, the orbiter Atlantis docked with Mir to collect the crew and bring them home. In November of 1995, Atlantis launched on STS-74 and delivered a Russian-built Docking Module to Mir, marking the first time a module was added to a working space station in orbit. These Shuttle-Mir missions gave NASA astronauts their first exposure to long-duration spaceflight since Skylab and taught both nations valuable lessons in working together and in building a multi-module station in space.
The ISS began taking shape in earnest in 1998. On November 20, the Zarya Control Module launched on a Russian Proton rocket. It was the first piece of the station, the battery power and fuel storage unit onto which later modules would be added. The Unity node followed in December, and in May of 1999 the shuttle orbiter Discovery delivered logistics and supplies to the station. Four assembly missions in May, July, September, and October of 2000 saw the addition of the Zvezda Service Module as well as the installation of the Z1 truss, a third pressurized mating adapter, and a Ku-band antenna. These missions also delivered supplies and performed maintenance on the station. It was finally ready for a human crew.
On October 30, 2000, Expedition 1 launched on a Soyuz rocket and docked with the International Space Station. The crew of Yuri P. Gidzenko, William M. Shepherd, and Sergei K. Krikalev became the first to live and work on board the orbiting outpost. Another thirty-two assembly missions completed the ISS, bringing the dream of a space station to life nearly a century and a half after Hale’s story captured imaginations.
Sources: Report of the 90-Day Study on Human Exploration of the Moon and Mars; A History of US Space Stations; NASA's Space Station Evolution and Current Status; ISS assembly timeline; Hale's "The Brick Moon"; "Tomorrowland" TV episodes (part 1 is here, and you can find the rest on YouTube or buy it on Amazon).
Mecanum Wheel Robot
Screenshot by author, from YouTube
Wheels, as we know them, could stand a little reinvention. Parallel parking takes fiddling and time, and the whole world would be a better place if wheels could simply pull us sideways without any forward momentum. Fortunately, "mecanum wheels" exist. Here, watch a robot demonstrate them:
Here’s how botmaker MicroRobo describes their creation:
This Mecanum wheel mobile Arduino robotics car can be made to move in any direction and turn by varying the direction and speed of each wheel. Moving all four wheels in the same direction causes forward/backward movement, running left/right sides in opposite directions causes rotation, and running front and rear in opposite directions causes sideways movement. The platform rear wheels are mounted in a particular way, so that the suspension structure ensures that all four wheels can adhere to the ground, even when the ground is uneven.
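Those three motions all fall out of a single mixing formula. Here's a minimal sketch of the standard mecanum inverse kinematics — my own illustrative code under one common sign convention, not MicroRobo's actual firmware, and with arbitrary units (real firmware would scale by wheel radius and chassis geometry):

```python
def mecanum_wheel_speeds(vx, vy, wz):
    """Map a desired body motion to four mecanum wheel speeds.

    vx: forward velocity
    vy: rightward (strafe) velocity
    wz: clockwise rotation rate
    Returns (front_left, front_right, rear_left, rear_right).
    """
    front_left  = vx + vy + wz
    front_right = vx - vy - wz
    rear_left   = vx - vy + wz
    rear_right  = vx + vy - wz
    return front_left, front_right, rear_left, rear_right

# Pure forward motion: all four wheels spin the same way.
print(mecanum_wheel_speeds(1, 0, 0))   # (1, 1, 1, 1)

# Pure rotation: left and right sides spin in opposite directions.
print(mecanum_wheel_speeds(0, 0, 1))   # (1, -1, 1, -1)

# Pure strafe: the two diagonal wheel pairs oppose each other.
print(mecanum_wheel_speeds(0, 1, 0))   # (1, -1, -1, 1)
```

Note that in this convention a sideways strafe drives the two diagonals in opposite senses, which is how the angled rollers cancel the fore-aft components and leave only lateral thrust.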
Watch more of the robot's weird movements below:
ASSOCIATED PRESS/Francois Mori
Elon Musk speaks ahead of COP 21 Paris Climate Talks
The SpaceX and Tesla founder gave a talk ahead of the climate conference in 2015.
In the last few years, the world of artificial intelligence has mainly been dominated by large internet companies with huge computing infrastructures like Google and Facebook, or research universities like MIT or Stanford.
Now, there’s another player in town: OpenAI. The non-profit research firm is backed by heavy hitters like co-chairs Elon Musk (of SpaceX and Tesla fame) and Y Combinator’s Sam Altman, as well as investor Peter Thiel (who worked with Musk at PayPal). They claim to have garnered a billion dollars in private funding from backers like Thiel and Amazon Web Services.
“We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely,” OpenAI writes in its first blog post, published just a few moments ago.
The goal? Make the scope of A.I. less narrow. Right now, machines are good at identifying people or answering questions, but not both. The ultimate goal in A.I. research is to “generalize” intelligence—to have an algorithm that can do it all.
In pursuit of that, OpenAI's founding team hired Ilya Sutskever as research director. Sutskever is currently a research scientist at Google who has worked with some of the most well-known names in A.I. and machine learning, like Geoff Hinton and Andrew Ng (who work at Google and Baidu, respectively).
The organization is a non-profit and hopes to spend only a small fraction of its billion-dollar seed in the next few years. It hopes to “freely collaborate” with other institutions, which makes sense, as nearly everyone on its research team comes from a prestigious institution like Google, Stanford, or New York University.
Musk's involvement in particular is noteworthy, given the SpaceX founder has previously expressed fears that artificial intelligence could be more dangerous than nuclear weapons. OpenAI would appear to be in part an effort to power-check the development of A.I. going forward.
As an industrial designer with a proficiency in injection molding, Pete Ceglinski's job was to "make plastic products," he explains. "And after a while I realized that we didn't need the stuff that I was making, so I stopped."
Ceglinski instead turned his attention to designing an awesome product conceived of by fellow Aussie surfer Andrew Turton, a boat builder and sailor. Both men had witnessed marinas covered in floating garbage, and resolved to do something about it. What they've come up with is the clever Seabin:
I love that Ceglinski and Turton's long-term goal is for the product to make itself obsolete, but in the best of ways. It takes a lot of conviction to quit a bill-paying job for the sake of the greater good, and we're happy to report that the Seabin has just reached its $230,000 crowdfunding goal.
They can still use more help, of course. "The possibilities of what we can do if we get some smarter brains involved are endless!" Ceglinski writes. "This is just the beginning!"
One thing I wondered was whether the device ends up trapping fish. The diagram below explains:
"We have never caught a fish or marine animal in 4 years of testing Seabins," writes Ceglinski. "We also have a meeting with a marine Biologist on the 22nd dec [presumably an update is pending] to start a study into the microscopic marine life."
If you'd like to contribute to the project, click here. (At press time there were three days left to pledge.)
At first glance, this design for a stacking chair called the RJR might not seem like much:
It's simple, clean, consists entirely of 90-degree cuts and looks like anyone could make one. But that's actually the point. Italy-based industrial designer Mario Alessiani designed it for Slow/d, an Italian outfit that bills itself as "the first distributed design factory." Slow/d aims to be, in essence, a production company with no warehouses, no inventory and no fabricating facility of its own; instead, individual craftspeople and artisans scattered throughout Italy are its production arm.
Under Slow/d's scheme, designers submit their designs to the Slow/d site for approval. Consumers peruse the chosen designs, and when they purchase one, an artisan local to the consumer that's been pre-approved by Slow/d is then tasked with building and delivering the piece. "In this way," explains Alessiani's entry to the VModern Furniture Design Competition, "everyone works and we have less transportation and pollution."
The aim of the designer was to make a wooden chair that can be [built] by the most number of carpenters in order to make the net of artisans capable of doing it as big as possible. The idea was to create a design that could be done with base carpentry tools but with something more that makes the chair recognizable and functional.
Thus far Slow/d claims to have some 1,300 designers and artisans signed up, but I could only find 20 products currently for sale on their site. Some examples of the furniture currently being sold are Nicola Dalla Casta's Woodrope, a flatpack stool with structural stability being provided by rope in tension:
FareDesign's similarly flatpack Join coatrack:
Mess+Simoni's Cullatonda cradle:
All of the designs feature straightforward construction similar to Alessiani's. While design snobs might sniff at what they perceive to be "idiot-proof" construction designed to attract producers of varying talents, I think the idea of distributed manufacturing has merit, and the long-term environmental benefits, if such a thing were to work, are undeniable.
Less clear are some of the details of the precise payouts offered. First off, the site states that designers score a 10% royalty on each piece sold—if that percentage sounds low to you, it's still far higher than what you'd get from an established furniture brand—and the initial fabricator who helps them prototype that design gets a 5% royalty. Those numbers seem fine to me, and the royalty is a particularly good way for a fabricator to continuously earn a little coin after a one-time job.
Where it gets murky, at least for me trying to puzzle through the badly-translated English description, is that once a particular design's "manufacturing license" is sold to the fabricator who will ultimately build the exact version going to a consumer, the designer gets 65%; is that a one-time fee, and who determines the price of the license? Furthermore, that last-mile fabricator is said to receive only 5%. The website is not clear on whether the fabricators are also paid for the actual materials and labor, but I imagine they'd have to be; otherwise the payout for building a €280.60 (US $305) RJR chair amounts to only €14 (US $15.25) per unit for the last-mile fabricator, which hardly seems worthwhile for what is likely several hours of labor.
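For what it's worth, the per-unit arithmetic above is easy to check. This quick sketch uses the percentages quoted from Slow/d's site and assumes, as I did, that they apply to the listed sale price of the RJR chair:

```python
# Rough per-unit split for a EUR 280.60 RJR chair, assuming the
# quoted percentages apply directly to the sale price.
price = 280.60  # EUR, Slow/d's listed price for the chair

designer_royalty   = round(price * 0.10, 2)  # 10% to the designer
prototyper_royalty = round(price * 0.05, 2)  # 5% to the prototyping fabricator
fabricator_share   = round(price * 0.05, 2)  # 5% to the last-mile fabricator

print(designer_royalty)    # 28.06
print(prototyper_royalty)  # 14.03
print(fabricator_share)    # 14.03
```

That ~€14 fabricator share is where the math stops making sense unless materials and labor are compensated separately, which is exactly the ambiguity in the site's description.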
In any case, here is Slow/d's pitch, and I hope they can hire a proper translator in the future to make the financials a bit more clear:
A new wave of aircraft design is emerging—and NASA is helping.
It's probably been a long time since flying amazed you. Air travel tends to feel like an inconvenience these days—not a miracle of engineering. This is partially due to market forces and security theater. It's also the fact that aircraft design itself hasn't changed dramatically in decades.
A solar farm will pump seawater up the Andes mountains, so renewable power can be available day and night.
Stuck between the Andes Mountains and the Pacific Ocean, the Atacama Desert is one of the driest places on Earth. But the area's weird geography means that it will soon be home to a massive hydropower plant—the first step in a new system that could theoretically provide all of South America with 100% renewable energy.
M. Weingarten et al., Science, June 19 2015
Earthquakes East of the Rockies, 1973-2014
In a recent study, published in Science, geophysicists analyzed earthquakes east of the Rockies and found a strong link to injection sites. Those colored in red were near active wells. Those in gray were not.
Mark Crismon and I were sitting outside his Oklahoma house, looking at the day lilies that lined his pond, when our conversation was interrupted by a distant boom. “Did you feel that?” Crismon asked. “Just be quiet. Sit still.” He’s a lanky 76-year-old, retired from an electronics career, with gray hair combed straight back from his ruddy face. The booms continued, once or twice per minute; I felt them under my skin. “That’s a small earthquake,” he said, seconds before the sound recurred. “There it was again. We’ll go and look on the seismometer—I’ll show you what it looks like.”
We walked into his garage. It was July and approaching 100 degrees in the countryside north of Stillwater. The building was filled with freezers where Crismon and his wife store the food they grow, catch, shoot, and smoke. Deer and coyote tails covered corrugated-tin walls. On a desk in the corner, beside a hand-labeled bottle of peach brandy, sat a Dell laptop connected by a cable to a buried seismometer. Oklahoma State University scientists had given Crismon the seismometer in 2014, as part of a project to monitor the state’s current rash of earthquakes.
He took a drag from his cigarette, then turned his attention to three parallel bars scrolling across the screen: blue on the bottom, red in the middle, green on top. They were mostly straight but had become jagged for the several minutes we had felt the tremors. “How’d you like to put up with that day and night?” he asked.
Mark Crismon monitors earthquake activity caused by injection wells at his home in Oklahoma, as part of a seismic network set up by scientists.
Crismon sits at this desk, on and off, for 14 hours a day. He arrives at six in the morning and takes pictures of the spikes with a digital camera to document what he calls a growing menace (even though the data gets recorded regardless). There’s plenty to photograph: Oklahoma, which historically has had few earthquakes of magnitude 3.0 or higher, started rumbling regularly in 2009. The Oklahoma Geological Survey recorded 35 such quakes in 2012, 109 in 2013, and 584 in 2014. (The prior annual average was fewer than two.) By late October, the 2015 figure had already exceeded 700.
Scientists have figured out the reason: the oil-and-gas industry’s practice of injecting wastewater deep underground.
The granite basement that underlies the continent, a mile below Oklahoma’s wheat and alfalfa fields, is full of faults. Usually, natural stresses clamp the rocks and keep them from moving—like “a vise that’s slammed on the east and west side, and someone’s turning the screw,” says Todd Halihan, a hydrogeophysicist at Oklahoma State University. Inject fluid deep enough, he says, and it travels into the fractures in the granite, in effect lubricating the rock and causing faults to slip.
Halihan compares this to tabletop air hockey. “When it’s off, the puck doesn’t move particularly well,” he says. “Turn on the air, and it’s like you’re injecting. That puck moves real well.”
It’s not only in Oklahoma where we’re giving the proverbial puck more room to slide. Our species, unintentionally, keeps finding new ways to unleash earthquakes. We have rattled the ground by impounding reservoirs, excavating mines, testing nuclear weapons, tapping geothermal power, and pushing carbon dioxide underground to slow global warming.
Sometimes these quakes arrive as jolts like those Crismon and I felt outside his house. Sometimes they topple buildings and claim lives. Whether they hit the Midwest, California, Switzerland, India, or China, some of those who feel the shocks are asking: Can we control the tremors, or are damaging quakes an inevitable feature of the future?
We first recognized the problem of man-made earthquakes around the turn of the 20th century, as they began hitting the regions around South Africa’s gold mines and Europe’s coal mines. The release of gravitational energy, when the rock above the mines sagged, triggered them.
It took until the 1930s for Americans to notice man-made quakes beneath our own soil. When engineers created Lake Mead behind the Hoover Dam, the sudden addition of 12 billion tons of water apparently set off hundreds of small tremors along the Arizona-Nevada border.
“This was an ‘aha moment,’ an important benchmark in the science,” says Bill Ellsworth, an emeritus seismologist with the U.S. Geological Survey and geophysics professor at Stanford University. Since then, reservoirs have been linked to devastating quakes around the world: definitively to a magnitude-6.3 quake that killed 200 people in 1967 near India’s Koyna Dam, and more speculatively to the 2008 Sichuan quake, a magnitude-7.9 colossus that flattened schools and hospitals in China, and left more than 80,000 people dead or missing. The Sichuan quake was triggered less than 6 miles from the Zipingpu Dam reservoir, says natural-hazards researcher Christian Klose, who has linked water levels there to tremor frequency.
The most ominous precursor to Oklahoma came in the 1960s, when a series of earthquakes walloped the normally quiet Denver area. During two particularly lively days in 1962, the shocks broke windows, cracked plaster, and left electrical outlets hanging by wires. “Children cried with fright,” read a federal field report from Dupont, a town just north of the city.
Oil-and-gas operations produce wastewater made up largely of brine. Injection wells in the U.S. pump 2 billion gallons of it a day underground.
Scientists traced this seismic uptick to the Rocky Mountain Arsenal, an Army facility that manufactured chemical weapons and rocket fuel. Weeks before the trembling began, the arsenal had started injecting wastewater 2 miles down into the crystalline basement rock. After the injections stopped, in 1966, it took a year for the shaking to cease: A magnitude-5.3 quake knocked bricks from chimneys in 1967 and caused more than $1 million in damage.
A geologist named David Evans found an association between the amount of fluid injected at the arsenal and the number of earthquakes, and suggested cause and effect. (To demonstrate how lubricated rocks slip, Evans reportedly would perform the “Coors experiment”: He perforated the bottom of a beer can, and then showed how the seeping liquid eased its slide down an incline.) The Army disputed Evans’ hypothesis, but he was vindicated by USGS researchers, who triggered seismicity soon after by methodically injecting fluid into Colorado’s Rangely oil field.
Since then, scientists have grown more sophisticated about documenting changes in earthquake activity. About 200 miles south of Rangely, in Colorado’s Paradox Valley, the U.S. Bureau of Reclamation has been injecting briny groundwater into a deep limestone formation in order to keep it from contaminating a river. “Somebody had the foresight to say, ‘Let’s see what kind of seismicity’s out there before we start injecting,’” says bureau geophysicist Lisa Block. Six years of baseline data showed almost no natural activity. By contrast, the agency has recorded 6,200 quakes, most of them small, since underground disposal began in 1991.
By the time Oklahoma started ramping up its own wastewater injection—now more than a billion barrels a year—the notion that humans can induce earthquakes by putting fluid underground was already familiar. Still, Sooner State residents were caught off-guard when that geologic principle hit home.
A decade ago, this part of North America experienced just 14 tremors a year. In 2014, 650 quakes hit the area, most of them clustered around wastewater injection wells.
Data visualization by Pitch Interactive
Both maps show tremors stronger than magnitude 3.0. Color represents strength; dots mark centers. The map omits a few quakes in western New Mexico and Colorado, which lie in naturally earthquake-prone mountain regions. Source: U.S. Geological Survey National Earthquake Information Center
Todd Halihan was standing in the hallway of his Stillwater home one night in November 2011 when he noticed glasses starting to rattle. As the building shook, the hydrogeophysicist flashed on his sleeping 6-year-old. “Should I get my kid out of bed and run out of the house?” he recalls thinking. “Should I get him under a table?”
Halihan was feeling the effects of a magnitude-5.7 quake—Oklahoma’s largest, it turns out. Its epicenter was near Prague (rhymes with “vague”), almost 50 miles away, where it buckled a highway and destroyed 14 homes. In one living room, rock from a fireplace and chimney struck a woman as she watched TV. The earthquake also toppled a historic turret at St. Gregory’s University in nearby Shawnee. A team from the University of Oklahoma, Columbia University, and USGS determined the source: a fault rupture that began about 650 feet from active injection wells.
“That’s when a lot more people started paying attention,” says Austin Holland, Oklahoma’s state seismologist until this past summer.
The idea that the oil-and-gas industry could be producing these quakes was a touchy subject, both for companies and for the administration of Gov. Mary Fallin. One study shows the industry has created one-fourth of Oklahoma’s new jobs since 2010. Emails obtained by the EnergyWire news service paint a picture of a government that, in the words of Fallin chief of staff Denise Northrup, tried to “make this go away.” Shortly after the Prague earthquake, Fallin aides contacted Devon Energy, an oil-and-gas producer, and obtained talking points to use with constituents. Among them: “There is no current evidence that oil-and-gas operations had anything to do with the recent large earthquakes in Oklahoma.” When Fallin addressed a National Governors Association forum on shale-energy development in 2013, a reference to underground injection wells was deleted from her speech. “We had other issues we wanted to highlight,” says Alex Weintz, Fallin’s communications director until this past November.
Weintz says Fallin’s personal views were always more nuanced than Devon Energy’s talking points, even if those points were used by her staff. Her own reticence to blame disposal wells, he says, reflected the state of the research when she took office in 2011. “It was only the beginning of a spike in seismic activity,” he says. “Since then, the science has evolved.”
Even the Oklahoma Geological Survey, a university-affiliated state agency, was slow to acknowledge the disposal-well connection. In a 2013 statement, it noted that the Prague earthquake appeared to be the result of “natural causes.” Holland, who worked for the survey, says, “Oil-and-gas is a very important industry, and so . . . some of the public statements saw a lot of wordsmithing.”
As scientists dug into data, a consensus emerged that fluid injection was indeed behind the spike in earthquake activity. Even Fallin has come around. “We all know now there is a direct correlation between the increase of earthquakes that we’ve seen in Oklahoma and the disposal wells,” she said at an August meeting of her administration’s Coordinating Council on Seismic Activity.
In fact, new research shows earthquakes now pose a risk to the oil- and-gas industry itself. The largest crude-oil storage facility in the world sits in Cushing, Oklahoma, right above a fault recently activated by injection. Continued injection could produce a magnitude-5.7 earthquake, large enough to rupture oil tanks and pipelines.
The problem doesn’t stop at Oklahoma’s borders; man-made earthquakes have hit other midcontinent states too. On New Year’s Eve 2011, a magnitude-4.0 tremor in Youngstown, Ohio, shook buildings and led to the shutdown of a disposal well that was deemed the likely culprit. Waste injection has also been linked to quakes in Arkansas, Colorado, Kansas, New Mexico, and Texas.
If the same quakes had happened overseas, they might have caused far more damage. “The technologies that have been pioneered primarily in the U.S.—to unlock gas from tight shale and to produce oil from unconventional reservoirs—have the potential to be applied around the world,” says Ellsworth, the USGS seismologist. “Many countries will find it irresistible to produce their own resources. Unfortunately, in many of these countries, the building standards are not what they are in the United States, and the potential for severe damage and loss of life is really high.”
As we keep using the earth as a vault to stash our waste—and as a tappable resource—we’re creating a global-energy system that will likely increase the risk of small and potentially large earthquakes. Engineers will need to weigh every resource, looking at how much power it provides, how green it is, and what type of seismic risk it poses.
In some cases, the technologies we’ve engineered to ease our impact on the environment have proved likely to shake the ground. In a pilot project in Decatur, Illinois, carbon dioxide captured from an ethanol plant is being injected, in liquidlike form, almost 7,000 feet down into a sandstone formation. The goal is to slow climate change by keeping the greenhouse gas out of the atmosphere (a tactic also advanced by proponents of “clean coal”).
So far, the injections have caused only the smallest of tremors, too faint to be felt. But Stanford University geophysicist Mark Zoback and hydrogeologist Steven Gorelick have argued that for underground carbon storage to benefit the climate, it must happen at a “massive scale”—one that will likely trigger more seismicity, and therefore potentially defeat its own purpose by discharging the carbon into the atmosphere. “Even small to moderate earthquakes threaten the seal integrity of a CO2 repository,” they wrote in a 2012 journal article. For that reason, they concluded, carbon injection will be “an extremely expensive and risky strategy” to reduce greenhouse gases.
Ole Kaven—a USGS geophysicist involved in the Illinois project—says that if researchers can map faults, fractures, and fluid pathways using sophisticated instruments, they can reduce the hazard, though not eliminate it entirely. “If one factors in the cost of greenhouse-gas emissions, and what effect CO2 sequestration can have on reducing some of the long-term effects, this conversation changes,” he says. “Some of these risks might be tolerated.”
Hydrogeophysicist Todd Halihan says it might not take much pressure to increase seismicity—as little as 15 pounds per square inch. “Half of a car tire,” he says.
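Halihan’s half-a-car-tire figure makes sense through the standard Mohr–Coulomb framing (my gloss, not the article’s): a fault slips once shear stress exceeds the friction coefficient times the effective normal stress, and raising pore pressure lowers that effective stress. A minimal sketch, with illustrative, made-up stress values for a fault already near failure:

```python
# Mohr-Coulomb sketch: a small pore-pressure rise can trigger slip on a
# critically stressed fault. All stress values below are illustrative.

PSI_TO_MPA = 0.00689476

def slips(shear_mpa, normal_mpa, pore_mpa, friction=0.6):
    """Fault slips when shear stress exceeds friction * effective normal stress."""
    return shear_mpa > friction * (normal_mpa - pore_mpa)

normal = 50.0            # total normal stress on the fault, MPa
pore = 20.0              # ambient pore pressure, MPa
shear = 17.95            # shear stress, just under the failure threshold (18.0)
delta = 15 * PSI_TO_MPA  # Halihan's ~15 psi, about 0.1 MPa

print(slips(shear, normal, pore))          # stable before injection
print(slips(shear, normal, pore + delta))  # slips after the ~0.1 MPa rise
```

The point isn’t the particular numbers; it’s that a fault loaded to within a tenth of a megapascal of failure needs only that half-a-tire nudge.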
That’s a critical point: A technology might produce earthquakes, but what harm might come from not using it? Take geothermal production, a reliable and underused source of electricity that causes little environmental damage. “If we could tap all the heat in the earth, we wouldn’t need anything else,” says Ernest Majer, a geophysicist affiliated with both the Lawrence Berkeley National Laboratory and Sandia National Laboratories.
That was no comfort in Basel, Switzerland, where in December 2006 operators of the Deep Heat Mining project began injecting cold water into the naturally hot granite below the city. The following week, a magnitude-3.4 shock rattled windows and cracked plaster. Injection was halted. A government study projected a 15 percent chance of a man-made earthquake causing more than $500 million in damage if production resumed. In 2009 the project was scrapped entirely.
Geothermal production continues in rural areas—most notably at the Geysers, north of California’s Bay Area, where locals have routinely endured minor quake damage. “Sometimes it feels like a big truck just bumped into the house,” says Jeff Gospe, who sits on a seismic-monitoring advisory committee there. Neighbors have reported cracked windows and a retaining wall that crushed a van.
Majer believes that the hazard posed by geothermal fields is minor compared with their potential to produce clean energy. (He says the Geysers alone could power all of San Francisco.) “There’s no such thing as zero risk,” Majer says. “Driving to the grocery store is a risk. Everybody risks when they get out of bed in the morning.” Compare that, he says, with the cost of carbon emissions: “If you start looking at the health impacts, the climate impacts—all the nasty things coming out of the fossil-fuel economy—well, maybe we better do something else. Induced seismicity associated with putting carbon into the ground, associated with geothermal: Those are minor, minor things compared with all these other risks facing us.”
Assuming we’re not going to shut down energy production, scientists now face a more complex question: whether it’s possible to minimize the hazard.
After the experiments in Colorado’s Rangely oil field in the 1960s and ’70s, which showed that we could control induced seismicity by varying the pressure of injected fluid, scientists were bursting with hope. Not only might they reduce damage from man-made quakes, the thinking went, but maybe they could control natural ones. Rather than waiting for the next bridge-toppler to hit California, USGS scientists suggested drilling wells along the San Andreas Fault, injecting water, and releasing the accumulated stress in a series of small, harmless quakes.
The idea never got traction. Not only would it take thousands of mini quakes to offset a major one, but it is also too risky. More than a century has passed since San Francisco’s deadly 1906 quake, which means the city is sitting on highly stressed rock. Inject water underground, says Oklahoma’s Halihan, and “you might not release 100,000 small ones. You might release a big one.” Even if the experiment did work locally, he says, “you might set off the next segment of the fault. It’s 3D and it’s complicated: ‘Hey, we didn’t cause an earthquake in San Francisco.’ ‘Well, you just knocked down LA.’ ‘Sorry.’ ”
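The “thousands of mini quakes” arithmetic follows from how radiated seismic energy scales with magnitude: roughly as 10^(1.5M), so each one-unit magnitude step carries about 32 times the energy. A quick sketch (the specific magnitudes are illustrative, not from the article):

```python
# Radiated seismic energy scales roughly as 10**(1.5 * M), so replacing one
# large quake with many small ones takes a staggering number of them.

def quakes_needed(big_m, small_m):
    """Quakes of magnitude small_m needed to match one big_m quake's energy."""
    return 10 ** (1.5 * (big_m - small_m))

print(quakes_needed(5.0, 3.0))  # 1,000 magnitude-3 quakes per magnitude-5
print(quakes_needed(7.0, 3.0))  # a million per magnitude-7
```

Offsetting a merely moderate quake already takes a thousand small ones; for a San Andreas-scale event the count runs into the millions.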
Today, nobody’s talking about setting off microtremors in Oklahoma in order to avoid the next Prague earthquake. But scientists are talking about more-modest ways to manage seismicity.
“The toolbox is growing,” says Austin Holland, who now works for USGS in New Mexico. It includes avoiding known faults, scaling back the volume and rate of fluid injected into the rock, injecting at a shallower depth, improving monitoring, and preparing to abandon wells altogether if seismicity can’t be stopped. This past year, the Oklahoma Corporation Commission—which regulates the industry—ordered volume reductions for some wells, as well as “plug backs” to limit how deep some wastewater is injected. The state’s “traffic light” system, instituted in 2013, allows regulators to scale back or halt drilling in response to seismic activity.
A truck carrying wastewater dumps it into a storage facility in Oklahoma before it is re-injected into the ground.
Still, eight magnitude-3 and -4 quakes struck northern Oklahoma during a 24-hour period as this story went to press. There’s a lot we haven’t yet learned about what happens underground—and that knowledge gap keeps us from managing the earthquakes we do create.
“Probably the greatest unknowns are the properties and processes deep within the earth, things that are very difficult to measure directly,” Holland says. “How is pressure being communicated? Do faults act as seals or as conduits? What are the actual stress states deep within the earth? That’s where science has to spend a significant amount of effort and resources.”
Back in Stillwater, Todd Halihan understands both sides. He wants his students to find work in the energy industry when they graduate. But he also doesn’t want to have to dive for his son the next time an earthquake shakes his house.
“We’re going to make some decisions, and none of them are going to be super-simple or super-pleasant,” he says. Ideally, that means talking levelheadedly about both the value of oil-and-gas production and the threat of earthquakes—how to balance those competing concerns and how much uncertainty we’re willing to tolerate.
It’s not easy to talk, though, when the ground is rattling. Each side retreats into a corner. Some industry and political leaders refuse to acknowledge the emerging science. Some quake-zone residents, feeling ignored and outgunned, pull out the only weapon they have: their rhetoric. The conversations grow polarized rather than solution-oriented.
Technology, Halihan says, often carries harm. “The Titanic’s a nice example,” he says. “We were developing big ships, and we sank them. Developed airplanes; we crashed them.” Addressing unintended consequences doesn’t necessarily mean scrapping innovations. Nor does it mean pretending the consequences don’t exist.
With seismicity, as with addiction, the first step is admitting we have a problem that’s not fully within our control. “The government, as well as these companies, should be upfront,” says Leonardo Seeber, an earthquake geologist at Columbia University’s Lamont-Doherty Earth Observatory. “You want to drive your car. It takes gasoline. To produce that, you have to make wells. You have to pump in here and pump out there. And when you are doing that,” he says, “you are changing the stress in the subsurface. Sometimes there could be earthquakes that we can’t predict. There could be consequences. But we’re all in it together.”
This article was originally published in the January/February 2016 issue of Popular Science, under the title “Earthquake Nation”
Mars capsule concept
The nonprofit Inspiration Mars envisions sending humans to Mars in a modified SpaceX Dragon capsule fitted with an inflatable element.
NASA may dream of sending humans to Mars in the coming decades, but the fact remains that nobody's really sure how we'll survive the journey or set up camp on the red planet.
The Orion spacecraft that will carry astronauts toward Mars has a diameter about the length of a pickup truck. That's not a lot of space when you consider that the journey to Mars will take at least six months.
In order to not go totally bonkers, Mars-bound astronauts will need a larger place to live, complete with private quarters and exercise equipment. NASA envisions the Orion capsule linking up to a habitation module in space, but right now the agency has no idea what that module could look like. And who knows what the astronauts will live in once they get to Mars.
Now SpaceNews says that a report attached to the recent omnibus spending bill has allocated funds for NASA to figure it out. The bill orders NASA to spend at least $55 million to develop a habitation module for deep space exploration, and to have a prototype ready by 2018.
That would be great timing, since NASA wants to test out its new space habitat around the moon in the 2020s before sending it to Mars in the 2030s.
Deep Space Habitat
The not so creatively named Deep Space Habitat is one of many concepts that NASA is looking into.
However, whether NASA could have something ready by 2018 seems debatable. At this point, the agency pretty much has a blank slate as to what the habitat would look like and how it would function. Shielding astronauts from space radiation while keeping the habitat lightweight will be one of the major challenges.
Thus far Bigelow Aerospace's inflatable habitat stands out as a frontrunner--a test version of the habitat will soon be deployed on the International Space Station. SpaceNews reports that NASA has also awarded funds to Boeing, Lockheed Martin, Orbital ATK, and other companies to look into potential habitat designs.
It looks like NASA will have to step up its game, and fast. The report requires NASA to come back with a status update about how it has distributed funds within 180 days of the bill becoming law, which happened on December 18.
If you happen to be in Austin this week, be sure to check out the final days of Moderno: Design for Living in Brazil, Mexico, and Venezuela, 1940–1978 at the University of Texas’s Blanton Museum of Art. With a selection of 130 design objects, the exhibition celebrates a pioneering chapter in Latin American modernism while also highlighting some lesser-known designers from south of the border—including Clara Porset, a Cuban-born furniture and interior designer who called Mexico home.
Born to a wealthy Cuban family in 1895, Porset traveled widely in her youth, studying architecture and design in New York and Paris before visiting Germany to meet Walter Gropius and Hannes Meyer of the Bauhaus. Their focus on combining art, technology and craftsmanship in a larger dialogue with society resonated deeply with Porset, who returned home to Havana in 1932 to begin working as a professional designer. A few years later, she joined former Bauhaus instructors Josef and Anni Albers at Black Mountain College in North Carolina, where she spent a summer steeped in the Alberses’ modernist design teachings. The trio would become lifelong friends, and Porset’s designs would always evince a strong Bauhaus influence.
When the political situation in Cuba began to unravel, Porset left for Mexico, which became her adopted home in 1940. There she found inspiration in traditional folk arts and crafts, whose rustic qualities she translated into sleek furniture designs. This is best seen in her butaque chairs, handsome, low-slung loungers that could be long-lost tropical cousins of Alvar Aalto’s Armchair 406 and Hans Wegner’s CH25 Lounge Chair. Over the years, Porset experimented with different proportions and ergonomics to make the chairs more functional, as well as with various elastic materials and natural fibers like palm, jute and ixtle (a native Mexican plant fiber) for the chairs’ seats and backs. One of the few female designers working in Mexico at the time, Porset made furniture for the residences of Mexico’s elite and received commercial commissions to furnish hotels and public housing projects; she is also remembered for her furniture collaborations with the trailblazing Mexican architect Luis Barragán, at his own residence and numerous others. During this era Porset had a number of successful furniture collections put into production by the Mexican manufacturers IRGSA and DOMUS.
Beyond Mexico, Porset’s designs were featured in the Artek-Pascoe showroom in New York in 1946, and in the early 1950s they were championed by Esther McCoy in the pages of Arts & Architecture and The Los Angeles Times Home Magazine. Porset also collaborated with her husband, the muralist Xavier Guerrero, on a design proposal for MoMA’s Organic Design in Home Furnishings competition in 1940. Although Porset was the only female designer from Latin America to participate, Guerrero received sole credit for their competition entry, an oversight that MoMA later remedied. The couple also entered MoMA’s 1950 competition for low-cost furniture design, with a tubular steel chair strung with a plastic seat and back.
After the Cuban Revolution, in 1959, Porset was invited to return to Cuba to found a school of industrial design intended to help establish a new class of utilitarian designers on the now Communist island. Although her plan was never fully realized, she was commissioned by Che Guevara (then Minister of Industries) to design furniture for a number of local schools and institutions. A few years later, Porset returned to Mexico, where she helped launch a program in industrial design at the National Autonomous University of Mexico, teaching there until her death in 1981. Porset’s will established a scholarship fund (now a design award) to encourage a new generation of young female designers in Mexico.
One of the first things we learned about in History of Architecture 101 was Romanesque architecture, and thus Roman arches. Moving beyond the Greeks and their boring lintels, the Romans figured out that you could create massive, self-supporting archways that locked into place once the keystone in the top was inserted.
Taking advantage of the compressive strength of stone, this system of arch-building yielded magnificent structures. And unlike the pointy take on the arch later devised by those moody Goths, the Roman arch was a nice, rational semicircle.
So effective is the technique that it even works with old computer monitors; all you need is an intern labor force that understands they are required to come in on Saturday to realize your architectural whims.
The thing about Roman arches is that the arch doesn't lock into place until the keystone is added. That means that until you get to that last piece, the entire structure must temporarily be held in place by scaffolding (or interns).
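The geometry behind that keystone is easy to sketch. In a semicircular arch built from N equal voussoirs (wedge blocks), each block subtends 180/N degrees, and its outer face is wider than its inner face—exactly the taper that lets compression lock the ring once the final block closes it. A toy calculation, with made-up dimensions:

```python
import math

# Voussoir geometry for a semicircular (Roman) arch: each wedge subtends an
# equal angle, and its face widths follow from the inner and outer radii.

def voussoir(n_blocks, inner_r, outer_r):
    """Angle per block (degrees) and face widths at the intrados/extrados."""
    angle = 180.0 / n_blocks
    half = math.radians(angle / 2)
    return angle, 2 * inner_r * math.sin(half), 2 * outer_r * math.sin(half)

angle, inner_w, outer_w = voussoir(n_blocks=15, inner_r=3.0, outer_r=3.5)
# The outer face is wider than the inner one -- that taper is the "lock."
print(f"{angle:.1f} deg per block; faces {inner_w:.2f} m / {outer_w:.2f} m")
```

Until all fifteen wedges are in place, though, nothing resists the blocks sliding inward—hence the scaffolding, or the truck below.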
However, a Vancouver-based company called Lock-Block has devised a clever, mobile scaffold that happens to come in the shape of a truck. The company's mainstay is producing interlocking rectangular concrete blocks, akin to enormous Lego pieces. Their Arch-Lock system applies this concept to wedge-shaped pieces. Once their special Zipper Truck is driven to the site, the Arch-Lock blocks can be stacked using the truck itself as the scaffold. Have a look at this:
Pretty crazy, no? And it appears, judging by the way the driver occasionally scoots the truck forwards and backwards, that the array of beams is actually tapered towards the rear; this irregularity would allow the driver to jiggle the pieces into place, until the protrusions properly nest into the cavities and gravity takes over.
The benefits of the Zipper Truck system are manifold. For one thing, the structure goes up relatively quickly; the company claims their system "reduces construction time by as much as 90% [versus] a comparable conventional reinforced post and beam concrete structure. And with no embedded steel to limit the service life, the structure will last indefinitely."
The design is also seismic-resistant. And though the structure will last--if Roman creations are any indication--for thousands of years, the process is completely reversible should you decide to relocate it. In contrast to other concrete structures, which must be taken down with dynamite and a wrecking ball, the Arch-Lock blocks can simply be plucked out of place and reused elsewhere. That's a wonderfully sustainable benefit of using gravity for joinery, and a lesson you can easily teach your interns over a weekend.