Shared posts

16 Jun 09:39

Educational Romanticism & Economic Development | Pseudoerasmus

Adam Victor Brandizzi

Now a post criticizing the exaggerated expectations placed on education (which, as everyone knows, is "the solution")

An elaboration on Ricardo Hausmann’s article “The Education Myth” arguing that education is an overrated tool of economic development. This post also responds to a criticism of Hausmann’s views which appeared at the Spanish group blog Politikon; and also discusses whether developing countries really can raise scores on achievement tests.

[Edit: This blogpost has now been translated into Spanish as “El romanticismo educativo y el desarrollo económico“.]

The Harvard economist Ricardo Hausmann recently published a column in Project Syndicate called “The Education Myth“, arguing that education has been an overrated tool of economic development. His target is what he calls the “education, education, education” crowd, the sort you can find at Davos and other places where much bullshit is intoned with great piety. But I think his argument contains a valuable insight about the ability of developing countries to actually improve educational outcomes.

Hausmann’s primary observation is this: those countries which tripled their average years of schooling from 2.8 years to 8.3 years between 1960 and 2010 only managed to increase their GDP per worker by 167%. He cites Pritchett 2001, but I think a more interesting summary is found in Pritchett’s chapter 11 in The Handbook of the Economics of Education (Vol. 1) :

[Figure: from Pritchett, ch. 11, Handbook of the Economics of Education, Vol. 1]

During those 35 years, the world’s standard deviation of GDP per worker increased, while the standard deviation of schooling per worker declined. The ratio of GDP per worker between the 90th-percentile and the 10th-percentile countries nearly doubled, even as the 90/10 ratio of schooling per worker narrowed dramatically. If education is so important for development, this is a puzzle indeed.

Of course simply comparing schooling and growth rates is a crude correlational argument which omits the many other determinants of growth. But Hausmann is simplifying things for a popular audience in order to convey a solid finding from the empirical literature in economic growth: the variable “years of schooling” by itself has a low explanatory power for growth rates in GDP per worker whether in cross-country regressions that include standard controls, or in growth accounting which attempts to directly measure the contribution of human capital to the growth rates of individual countries.

Pritchett 2001 cannot find significant social returns to schooling in most developing countries — not much detectable impact above and beyond the sum of private returns to schooling. This is even though, within countries, those with more schooling still tend to have higher incomes. The table below for Africa is from Barro-Lee:

[Table: growth accounting for Sub-Saharan Africa, from Barro & Lee]

The above summarises the results of growth accounting — counting the inputs that go into the production process (labour, capital, educational capital) and comparing them with output. The unexplained or residual term (TFP) is conventionally interpreted as the growth in the efficiency with which the inputs are used to generate output.
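To make the mechanics concrete, here is a minimal sketch of that decomposition, assuming a standard Cobb-Douglas production function; the factor shares and growth rates are made-up illustrative numbers, not Barro-Lee's estimates:

    # Toy growth-accounting decomposition (Solow residual), assuming output per worker
    # y = A * k^alpha * h^beta, so that in growth rates g_y = g_A + alpha*g_k + beta*g_h.
    # TFP growth (g_A) is measured as whatever output growth the inputs cannot explain.

    alpha, beta = 0.35, 0.25      # illustrative factor shares, not estimates
    g_y = 0.002                   # growth of output per worker (hypothetical)
    g_k = -0.005                  # growth of physical capital per worker (hypothetical)
    g_h = 0.030                   # growth of educational capital per worker (hypothetical)

    contrib_capital = alpha * g_k
    contrib_education = beta * g_h
    g_tfp = g_y - contrib_capital - contrib_education   # the unexplained residual

    print(f"capital contribution:   {contrib_capital:+.2%}")
    print(f"education contribution: {contrib_education:+.2%}")
    print(f"TFP (residual):         {g_tfp:+.2%}")

With numbers like these, the education term is positive even while the capital term and the residual drag growth down, which is the pattern the Sub-Saharan African panel above displays.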

What the above shows for Sub-Saharan Africa is that even as the contribution from educational investment was soaring, capital contribution and TFP were collapsing in the 1970s-90s. The associations here are not necessarily causal. But if they are causal, it could imply that the educated were doing socially useless things, such as taking bribes as functionaries in the customs bureau in exchange for import licences. If the associations are not causal, then it could imply that the supply of educated people was rising even as the demand for them was not growing fast enough. Or it could be a little of both.

Another possibility is that all these years of schooling were hollow, i.e., they involved no real learning or skill acquisition. Pritchett presents limited evidence that this cannot be completely true, e.g., the fertility of women with more schooling declined. Evidence from Barro-Lee (also see their VoxEU article) also makes that idea unlikely.

However, the labour and education economist Eric Hanushek is quite blunt about it: despite large global increases in rates of school enrolment and in average years of school attendance, the “best available evidence shows that many of the students appeared not to learn anything” [emphasis mine]. This view is based on the very low test scores of developing countries in international assessments:

[Figure: distribution of test scores by country. Source: Hanushek & Woessmann, The Knowledge Capital of Nations: Education and the Economics of Growth. The means don’t convey the full impression: see the distribution of scores, with 400 defined as “functionally illiterate”.]

So what’s the best way to interpret Hausmann’s article?

  1. Yes, “years of schooling” is a poor proxy for educational outcomes. But it captures very well the policy instrument that governments can actually control easily — building large boxes and herding children into them like cattle. That investment has obviously not caused a convergence in test scores between developed and developing countries.
  2. There’s no evidence that education, however measured, promotes the sort of growth rates that result in eventual convergence with the rich countries.
  3. Nonetheless, there’s evidence to suggest education has contributed to the positive but relatively low growth rates which have been insufficient for convergence. Economic growth research implicitly assumes that the rapid convergence of East Asia with the western countries is ‘normal’ and the slow growth of other non-western countries ‘abnormal’. But maybe the former is the anomaly.

§  §  §

At the Spanish group blog Politikon, Roger Senserrich largely agreed with Hausmann. But two other Politikonistas, Octavio Medina and Lucas Gortázar took exception to the Hausmann-Senserrich view. (They were apparently trolling one another for fun.)

Medina-Gortázar’s main objection is that “years of schooling” is a bad proxy for education, which is why the quality of education, rather than measures of school access, is now used to study the relationship between economic growth and education. “What a child learns in his first year of primary school is not the same in Kenya as what a child learns in Finland or Uruguay”. Then Medina and Gortázar present a plot very similar to the one below in order to argue that, yes, indeed, there is an important relationship between economic growth and the “quality of education”:

[Figure: test scores and growth. Source: the OECD publication “Universal Basic Skills” by Hanushek & Woessmann]

Unfortunately the Medina-Gortázar argument is also based on a bad proxy. Scores on PISA and TIMSS are not — I repeat, not — proxies for “educational quality”. They reflect student outcomes. Institutional input variables, like “teacher quality” or “school quality”, should not be conflated with student output variables. Hanushek and Woessmann consistently use test scores as a proxy for (their own words) cognitive skills. From the same OECD publication:

The focus on cognitive skills has a number of potential advantages. First, it captures variations in the knowledge and ability that schools strive to produce, and thus relates the putative outputs of schooling to subsequent economic success. Second, by emphasising total outcomes of education, it incorporates skills from any source – including families and innate ability as well as schools. Third, by allowing for differences in performance among students whose schooling differs in quality (but possibly not in quantity), it acknowledges – and invites investigation of – the effect of different policies on school quality. [pg 89]

So just how optimistic should we be about developing countries’ ability to improve test scores? We do know poor countries have raised them: Brazil’s PISA score, for example, has gone up by about one-half standard deviation in the past decade. The real question is whether the gap between developed and developing countries can be closed; Brazil is still more than a standard deviation below the OECD average, which puts the country’s mean below the “functionally literate” threshold.
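As a rough sanity check on those magnitudes: PISA is constructed so that the OECD mean is about 500 points and the standard deviation about 100, so the claims above can be translated into score points. The sketch below does only that arithmetic; the 1.2-SD gap is an illustrative stand-in for "more than a standard deviation", not an official figure.

    # Back-of-the-envelope conversion between PISA points and standard deviations.
    # PISA is scaled so that the OECD mean is roughly 500 and the SD roughly 100;
    # the SD figures below restate the claims in the text, they are not official scores.

    OECD_MEAN, SD = 500, 100
    CUTOFF = 400                  # the "functionally illiterate" threshold quoted earlier

    gain_in_sd = 0.5              # "about one-half standard deviation in the past decade"
    gap_in_sd = 1.2               # illustrative stand-in for "more than a standard deviation"

    gain_points = gain_in_sd * SD              # roughly 50 points
    implied_mean = OECD_MEAN - gap_in_sd * SD  # roughly 380 points

    print(f"implied score gain over the decade: ~{gain_points:.0f} points")
    print(f"implied national mean: ~{implied_mean:.0f} points "
          f"(below the {CUTOFF}-point threshold: {implied_mean < CUTOFF})")

A half-SD gain is therefore about 50 points, while closing a gap of more than one SD means finding another 100-plus points, which is the scale of the problem discussed below.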

There’s a chain of causes that needs to be addressed. We must know what achievement tests measure; whether ‘optimal’ schools can raise test scores to growth-accelerating levels; and what the prospects are for moving bad schools in developing countries closer to ‘optimal’ ones.

§  §  §

Although tests like PISA or TIMSS measure learning, they also implicitly and indirectly measure the ability to learn. That is why these tests, along with the American SATs, are strongly correlated with IQ. (See Rindermann 2007 for the high correlations in country means; and Frey & Detterman 2004 for the individual-level correlation for the American SATs. Just FYI, see the survey Bouchard 2004 (ungated version); I’ve extracted Table 1, which contains a summary of results.)

I don’t want to get into the question of the malleability of cognitive ability, so I will simply assume that in developing countries (1) it is more malleable because of the much greater environmental variation and (2) the relationship between cognitive ability and achievement test scores is imperfect enough that in principle there is room for improvement in scores.

But those researchers who believe achievement test scores can be raised (in developed countries) usually stress the importance of very early childhood intervention. James Heckman, Nobel laureate and celebrated critic of The Bell Curve, presents the most informed case for optimism. Yet even he argues that the window for intervention to improve academic achievement (and other life outcomes) lies substantially before conventional formal schooling. From Heckman:

Gaps in the capabilities that play important roles in determining diverse adult outcomes open up early across socioeconomic groups. The gaps originate before formal schooling begins and persist through childhood and into adulthood. Remediating the problems created by the gaps is not as cost effective as preventing them at the outset.

For example, schooling after the second grade plays only a minor role in creating or reducing gaps. Conventional measures of educational inputs — class size and teacher salaries — that receive so much attention in policy debates have small effects on creating or eliminating disparities. This is surprising when one thinks of the great inequality in schooling quality across the United States and especially among disadvantaged communities.

My colleagues and I have looked at this. We controlled for the effects of early family environments using conventional statistical models. The gaps substantially narrowed. This is consistent with evidence in the Coleman Report (which was published in 1966) that showed family characteristics, not those of schools, explain much of the variability in student test scores across schools.

Heckman argues that improving family environments and implementing very early preschool programmes for infants (!!!) are the most cost-effective ways to raise outcomes. These are things which developed countries, even with their strong institutions, can barely do, if at all. Imagine that for developing countries, many of which barely manage universal school enrolment.

§  §  §

But I’m willing to believe developing countries have more room for improving test scores through better schools. That’s because having ‘schools’ on paper does not necessarily imply that even rudimentary education is taking place. Especially in some of the poorest countries, the problems with school quality often include teachers who don’t show up regularly, missing textbooks, or sometimes even missing ceilings!

How to improve school quality in the first place, though? Although I’m quite sceptical that middle-income countries would benefit from more spending per student, surely Afghanistan or Burundi might.

Glewwe et al. (2013) reviews studies published between 1990 and 2010 on the impact of a variety of educational inputs on measures of student learning (such as test scores) in developing countries:

[Table: Glewwe et al. 2013, summary of estimated effects of school and teacher characteristics on learning]

[The difference between the second and third columns (36) refers to studies using OLS on cross-sectional data only, which, according to the authors, did not adequately deal with omitted variables, endogeneity, self-selection, etc.]

The inconclusive effect of even basic infrastructural inputs like textbooks is surprising. On the other hand, maybe there’s some hope from lowering the pupil-teacher ratio, and the impact of teacher absenteeism hasn’t been studied very much. But as the authors put it, “perhaps the most useful conclusion to draw for policy is that there is little empirical support for a wide variety of school and teacher characteristics that some observers may view as priorities for school spending”.

Of course there’s been an explosion of randomised controlled trials and experimental studies from developing countries in the last 10 years, so there will be much more evidence to come. But when I read about the collapse of a financial incentive experiment to reduce absenteeism by nurses (ungated) at Indian hospitals, I’m not terribly optimistic about the vast institutional changes that appear necessary to improve the quality of schools.

What is Hanushek’s advice to developing countries? It’s mostly the fads that now animate educational circles in the United States and some other developed countries:

  • get smarter teachers (and test them)
  • track students
  • use school leaving exams
  • decentralise educational decision-making

Hanushek actually argues “school choice”, by itself, is a bad idea in developing countries and could worsen student outcomes. So he prefers those reforms in combination, and his optimism is reflected in projections like this based on cross-country correlations with samples of 50 or whatever:

[Figure: Hanushek & Woessmann, projected gains from raising test scores]

Well, Nepal and El Salvador, problem solved!

§  §  §

Although the returns to improving school quality (with enrolment held constant) would be higher than those to increasing enrolment & attainment (with quality held constant), the latter seems the more realistic option for most developing countries. After all, even the very rich and manageably sized Qatar can’t manage 400! And there’s still a lot of room left for raising the percentage enrolled and years of schooling:

[Figure: Hanushek & Woessmann pie charts on enrolment and schooling]

We’ve come full circle. Things started with Hausmann’s observation that increasing access to schools per se hasn’t done very much. But despite the low productivity of schools and the high likelihood of diminishing returns to more inputs, maybe eliminating the educational ‘slack’ is still the low-hanging fruit for most developing countries.

Postscript: I posted a quick remark about causal identification regarding the relationship between test scores and economic growth in the first comment of the comments section.

27 Jun 00:51

Dear governments and aid agencies: Please stop hurting poor people with your skills training programs - Chris Blattman

Adam Victor Brandizzi

Hmmm... Tell me more...

Here is an incredible number: from 2002 to 2012 the World Bank and its client governments invested $9 billion across 93 skills training programs for the poor and unemployed. In lay terms, that is a hundred freaking million dollars per program.

Unfortunately, these skills probably did very little to create jobs or reduce poverty.

Virtually every program evaluation tells us the same thing: training only sometimes has a positive impact. Almost never for men. And the programs are so expensive—often $1000 or $2000 per person—that it’s hard to find one that passes a simple cost-benefit test.

You might think to yourself: That’s not so bad. Nobody hurt the poor. Plus the trainers and the firms probably benefited. So it’s not a total loss.

If you think this, I urge you to transfer to an organization where you can no longer affect the world. I can think of a couple UN agencies with excellent benefits.

Because when you take billions of dollars a year (because the World Bank is hardly the only spender on skills programs) and you spend them on vocational bridges to nowhere, you have denied those dollars to programs that actually work: an anti-retroviral treatment, a deworming pill, a cow, a well, or a cash transfer. You have destroyed value in the world.

I know what some are thinking: skills programs just have to be more market-driven, or on-the-job, or linked to firms, or targeted to the right people.

Maybe. And these might pass a cost-benefit test if you can make them cost much less. But I want you to ask yourself: do you want to run programs that are hard to get right, or hard to get wrong?

Because if you want to create work for unemployed people, and reduce extreme poverty, there are in fact programs that are hard to get wrong.

It gets better. Currently, about two billion people live in countries that are deemed fragile or have high homicide rates. Jobs and incomes in these countries will probably mean less crime, and maybe even a decrease in other kinds of violence. Especially if they are targeted to the highest-risk men.

If you’re thinking to yourself “hey, I would like to read 20,000 more words on this, preferably in dry prose,” well do I have the paper for you. A new review paper with Laura Ralston: Generating employment in poor and fragile states: Evidence from labor market and entrepreneurship programs.

It is a draft for discussion, and comments and criticisms (in emails, blog comments, and prank calls) will be integrated over the coming months.

Fortunately the paper includes a 4-page executive summary. And, even better, an abstract!

The world’s poor—and programs to raise their incomes—are increasingly concentrated in fragile states. We review the evidence on what interventions work, and whether stimulating employment promotes social stability.

Skills training and microfinance have shown little impact on poverty or stability, especially relative to program cost. In contrast, injections of capital—cash, capital goods, or livestock—seem to stimulate self-employment and raise long term earning potential, often when partnered with low-cost complementary interventions. Such capital-centric programs, alongside cash-for-work, may be the most effective tools for putting people to work and boosting incomes in poor and fragile states.

We argue that policymakers should shift the balance of programs in this direction. If targeted to the highest risk men, we should expect such programs to reduce crime and other materially-motivated violence modestly. Policymakers, however, should not expect dramatic effects of employment on crime and violence, in part because some forms of violence do not respond to incomes or employment.

Finally, this review finds that more investigation is needed in several areas. First, are skills training and other interventions cost-effective complements to capital injections? Second, what non-employment strategies reduce crime and violence among the highest risk men, and are they complementary to employment programs?

Third, policymakers can reduce the high failure rate of employment programs by using small-scale pilots before launching large programs; investing in labor market panel data; and investing in multi-country studies to test and fine tune the most promising interventions.

29 Jun 20:11

Colors: A New Collection Film from The Mercadantes

by Christopher Jobson
Adam Victor Brandizzi

Silly & fun

In light of current events, here is the latest collection film by filmmaking duo The Mercadantes (previously here and here). Beautifully done, great ending.

14 Jun 19:12

Photographer Captures the Ruins of the Soviet Space Shuttle Program


Russian urban exploration photographer Ralph Mirebs recently paid a visit to the Baikonur Cosmodrome, where inside a giant abandoned hangar are decaying remnants of prototypes from the Soviet space shuttle program.

Gizmodo writes that the Buran program was in operation for nearly two decades from 1974 to 1993. One automated orbital flight resulted from the extensive program, but the project was shuttered when the Soviet Union collapsed.

Mirebs went into the massive 62-meter (~203 foot) tall hangar and captured a fascinating series of photos showing the detail and complexity of a space program that met an untimely end.


Of the two run-down Buran shuttles found in the hangar, one was almost ready for flight back in 1992 and the other was a full-sized mock-up that was used for testing things like mating and load. Unfortunately for both, and for the countless scientists involved in the program, things came to an abrupt halt just one year later, and the hangar has remained in this state for over two decades now.

You can find a larger set of these photos and a writeup (in Russian) over on Mirebs’ blog.


Image credits: Photographs by Ralph Mirebs and used with permission

03 Jul 14:16

Will Israel need to ally with Hamas in Gaza to fight ISIS?

by gustavochacra

ISIS, also known as the Islamic State group or Daesh, has been threatening to enter Gaza to fight Hamas, which it regards as an enemy. It will not be an easy task and, for now, it remains unlikely. But there is a possibility of growing instability in this Palestinian territory, which is still trying to rebuild from last year's war.

Hamas has lost popularity in Gaza, not only because of the war but also because of corruption, a weak and authoritarian administration, and its failure to achieve anything in the struggle for a Palestinian state, or at least for an end to the Israeli blockade. Fatah, which runs the Palestinian Authority in the West Bank, is no more popular in Gaza.

That does not mean ISIS is viewed favourably by Palestinians. The overwhelming majority of them consider the organization appalling. They see what is happening in Syria and Iraq and do not want the same in their own territory. Although the number of Salafists in Gaza is growing, they are still a small fraction of the population. Hamas, though religious, follows a strand of Sunni political Islam associated with the Muslim Brotherhood, not the Wahhabism of ISIS and Al Qaeda.

Palestinians reckon that organizations like ISIS and Al Qaeda only damage the Palestinian image. First, because the Palestinian cause is nationalist, not religious or Islamic: Christians and socialists were at the vanguard of the Palestinian independence movements of the 1950s and 60s. Second, because ISIS and Al Qaeda have never made fighting Israel a priority. On the contrary, to this day Israelis have not been the target of any attack by these organizations, whose focus is above all Iran, Iraq and the Assad regime in Syria, along with the West.

The bigger problem for Gaza would be the influence of jihadist groups inspired by or even affiliated with ISIS in the Sinai (Egypt). That territory, hit by further attacks in recent days, is turning into a no man's land. The Sisi regime in Egypt, Israel's main ally in the region, has had no success in fighting the jihadists, and its repression has radicalized the opposition.

Israel has merely watched. Although it will rhetorically try to link Hamas to ISIS, in practice it knows the Palestinian group will be important in containing the Islamic State. Behind the scenes, they will work alongside Hamas, with mediation from Egypt and Saudi Arabia. The new leadership in Riyadh has sought to ease the friction between Cairo and Hamas, indirectly also reducing tension between the Palestinian group and Israel, of which the Saudis are unofficial allies. On top of that, Iran regards ISIS as its greatest enemy (more so than Israel) and will also act to prevent the group from gaining a foothold in Gaza.

Let's watch what happens. If ISIS manages to establish some cells in Gaza, it may provoke Israel by launching rockets, trying to trigger an Israeli strike against Hamas. The greater the chaos in Gaza, the greater the chance that ISIS will grow.

Guga Chacra, international affairs commentator for Estadão and for the Globo News program Em Pauta in New York, holds a master's degree in International Relations from Columbia University. He was a correspondent for O Estado de S. Paulo in the Middle East and in New York, and previously worked as Folha's correspondent in Buenos Aires.



03 Jul 11:48

The true meaning of the Cinderella story.

by Zanfa

[Image: cinderella]

03 Jul 14:50

Mentor vs. Apprentice: Ridiculously Amazing Father Versus Daughter Beatboxing

by Christopher Jobson

In the course of raising a child there comes a series of strange moments when you discover your child is acquiring skills and perfecting abilities that surpass what you yourself are capable of. It’s a humbling and awesome thing to witness. Such is the case with this friendly battle between St. Louis-based beatboxer Nicole Paris and her dad. He’s definitely a talented beatboxer and taught his daughter well, but it becomes extremely clear she’s taken things to a ridiculously different level. The video is a follow-up to a battle the duo posted online last year. Amazing. I’ve already watched this three times this morning. (via Leonard Beaty, Ambrosia for Heads, thnx Jess!)

03 Jul 14:26

The invention that could revolutionize batteries—and maybe American manufacturing too

Adam Victor Brandizzi

Optimistic to the point of a PR stunt on vaporware BUT the technical explanations are fascinating.

The world has been clamoring for a super-battery.

Since about 2010, a critical mass of national leaders, policy professionals, scientists, entrepreneurs, thinkers and writers have all but demanded a transformation of the humble lithium-ion cell. Only batteries that can store a lot more energy for a lower price, they have said, will allow for affordable electric cars, cheaper and more widely available electricity, and a reduction in greenhouse gas emissions. In the process, a lot of gazillionaires will be created.

But they have been vexed. Not only has nobody created a super-battery; a large number of researchers have lost faith in their powers to do so—perhaps ever. Entrepreneurs such as Tesla’s Elon Musk continue to tinker with off-the-shelf batteries for luxury electric cars and home power-storage systems, but industry hands seem generally to doubt that their cost will drop enough to attract a mass market any time soon. Increasingly, they are concluding that the primacy of fossil fuels will continue for decades to come, and probably into the next century.

This is where Yet-Ming Chiang enters the picture. A wiry, Taiwanese-American materials-science professor at the Massachusetts Institute of Technology (MIT), Chiang is best known for founding A123, a lithium-ion battery company that had the biggest IPO of 2009. The company ended up filing for bankruptcy in 2012 and selling itself in pieces at firesale prices to Japanese and Chinese rivals. Yet Chiang himself emerged untainted.

In 2010, having rounded up $12.5 million from Boston venture capital firms and federal funds, Chiang launched another company. Again, it was in batteries. And today, after five years in “stealth mode,” he is going public. There may be a way to revolutionize batteries, he says, but right now it is not in the laboratory. Instead, it’s on the factory floor. Ingenious manufacturing, rather than an ingenious leap in battery chemistry, might usher in the new electric age.

When it starts commercial sales in about two years, Chiang says, his company will slash the cost of an entry-level battery plant 10-fold, as well as cut around 30% off the price of the batteries themselves. That’s thanks to a new manufacturing process along with a powerful new cell that adds energy while stripping away cost. Together, he says, they will allow lithium-ion batteries to begin to compete with fossil fuels.

But Chiang’s concept is also about something more than just cheaper, greener power. It’s a model for a new kind of innovation, one that focuses not on new scientific invention, but on new ways of manufacturing. For countries like the US that have lost industries to Asia, this opens the possibility of reinventing the techniques of manufacture. Those that take this path could own that intellectual property—and thus the next manufacturing future.

This is the story of how that came about.

24M batteries. (Kieran Kesner for Quartz)

Manufacturing, the new frontier of innovation

Traditionally, big innovations have happened at the lab bench. A discovery is made and patented, then is handed off to a commercial player who scales it up. With luck, it turns out a blockbuster product.

But, according to a report published in February by the Brookings Institution, researchers are increasingly skeptical of the delineation between innovation and production. Breakthrough-scale invention, they say, happens not only in the lab, but also in factories.

This is not a new idea. Until 1856, for instance, steel was an ultra-expensive niche product. It was far more robust than iron, but no one knew how to make it economically. Its use was confined to specialty hand tools and eating utensils for the rich. But then British inventor Henry Bessemer, stirred by French gripes about the fragility of cast-iron cannons, devised a process that reduced the cost of steel by more than 80%, to roughly the price of iron. Steel—along with oil—went on to propel the latter part of the Industrial Revolution, along with the gargantuan 20th century economic boom.

If Bessemer had made his breakthrough today, it would be called “advanced manufacturing”—a label that has been broadly applied to next-generation fabrication methods such as 3D printing, modular construction of skyscrapers, and robotics. There is some hype around this term: The Brookings report identifies 50 industries in the US alone as “advanced,” and historic factory hubs such as the English city of Sheffield are renaming themselves as variants of “advanced manufacturing cluster.”

Nonetheless, entrepreneurs who develop genuinely novel manufacturing processes can enjoy the advantage of a patent and a head start on the crowd. While others will inevitably copy them, it will be a race to catch up. To the degree that such authentic advanced manufacturing moves forward, and offers the US a chance to reinstate its prowess as a manufacturing hub, it is being led in part by a few clean energy companies like Yet-Ming Chiang’s.

Yet-Ming Chiang, 24M’s founder. (Kieran Kesner for Quartz)

The birth of an idea

At 57, Chiang has short-cropped, gray-flecked black hair, and almost always wears blue, long-sleeved check shirts. He speaks in a soft, even cadence, and is prone to finishing his sentences with a disarming, open-jawed grin.

But if unassuming, Chiang is also tremendously driven. His science-centered business sense has earned tens of millions of dollars for his investors. He and his family live on a farm on the affluent outskirts of Boston, where he raises bees and chickens, and hunts and fishes nearby.

Chiang was born in Taiwan, where his father, a locomotive engineer, managed to save enough money to make a start in the United States. When the boy was 6, he found himself in Brooklyn, living with his family in an apartment with what he regarded as astonishingly high ceilings. When it was time for college, Chiang was admitted to MIT, and never left. His wife, Jeri, a Japanese-American from Hawaii, also has an MIT degree, as do his older sister and her husband.

Like Stanford University now and the University of Copenhagen in the 1920s, MIT is a maw of discovery and celebrity scientists. Chiang calls it a “meritocracy”—a “praise-free zone” where “you are what you do and what you create. You should continue to try to prove yourself.” Chiang has used his own MIT perch to launch four venture-capital-funded startups, including his latest, a battery company called 24M.

Manufacturers are secretive, but analysts say a lithium-ion battery pack costs an average of roughly $500 per kilowatt-hour, a measure of the energy a battery can store. That’s four times the price needed to compete directly with gasoline. Only about 30% of that $500 is the cost of materials. The largest portion, 40%, goes to manufacturing.
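A quick back-of-the-envelope on those figures: the $500-per-kilowatt-hour average and the 30%/40% shares come from the paragraph above, and the implied gasoline-parity price is simply that quoted four-to-one ratio, not an independent estimate.

    # Rough cost arithmetic from the figures quoted above: a $500/kWh pack, of which
    # ~30% is materials and ~40% manufacturing, and a competitive threshold of roughly
    # one quarter of today's price.

    pack_cost = 500.0                                # $/kWh, analysts' average cited above
    materials_share, manufacturing_share = 0.30, 0.40

    materials = pack_cost * materials_share          # ~$150/kWh
    manufacturing = pack_cost * manufacturing_share  # ~$200/kWh
    everything_else = pack_cost - materials - manufacturing

    gasoline_parity = pack_cost / 4                  # ~$125/kWh, per the "four times" claim

    print(f"materials:       ${materials:.0f}/kWh")
    print(f"manufacturing:   ${manufacturing:.0f}/kWh")
    print(f"everything else: ${everything_else:.0f}/kWh")
    print(f"rough gasoline-parity price: ${gasoline_parity:.0f}/kWh")

On this arithmetic, manufacturing is the single biggest lever, which is exactly the one Chiang is pulling; his own target of under $100 per kilowatt-hour (below) sits a little under the parity figure.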

Battery factories themselves are typically cavernous buildings the size of aircraft hangars. They contain assembly-line machines dozens of yards in length, often stacked one atop the other. The cost for an entry-level plant is more than $100 million. In Midland, Michigan, XALT runs one of the most efficient and modern lithium-ion plants in the US. But, built with $300 million in federal and state grants and credits, it is also sprawling—just under a quarter of its 400,000-square-foot (37,000 sq m) facility is devoted to the equipment, a space the size of six soccer fields. Tesla is embarked on the mother of battery plant buildouts, a $5 billion lithium-ion factory in Nevada.

Such costs not only make batteries expensive. They also stifle innovation. Who, even with a promising new idea for a better battery chemistry, can build or borrow a $100 million plant to try it out?

Chiang’s goal is to bring production costs down below $100 a kilowatt-hour. That would allow startup plants to be built for much, much less, unleashing innovation. And it would also create a genuine contest with gasoline.

A cassette tape—the origin, incredibly, of how batteries are made. (AP Photo)

The battery’s ungainly legacy

The reason battery factories are so huge—and why Chiang’s business model seems to have substance—goes back to a chance event at the birth of lithium ion.

The rise of lithium-ion chemistry in the early 1990s owes a lot to the peak and slow decline of two big consumer technologies—magnetic audio tape and nickel cadmium batteries. These two collided in the Camcorder, Sony’s entry into the nascent market for lightweight video cameras.

Sony realized that, if video cameras were to take off, they needed both to shrink—to more or less fit snugly into a consumer’s hand—and to last longer on a single charge. The only way to accomplish that was to find a far more powerful, smaller battery.

The result was the first lithium-ion cell, which Sony commercialized in 1991. Two years later followed the TR1 8mm Camcorder, the first lithium-ion-operated video camera. Both were blockbuster commercial products for Sony, and ignited furious competition.

But Sony also had to quickly figure out how to manufacture this new kind of battery on a commercial scale. Providence stepped in: As it happened, increasingly popular compact discs were beginning to erode the market for cassette tapes, of which Sony was also a major manufacturer. The tapes were made on long manufacturing lines that coated a film with a magnetic slurry, dried it, cut it into long strips, and rolled it up. Looking around the company, Sony’s lithium-ion managers now noticed much of this equipment, and its technicians, standing idle.

It turned out that the very same equipment could also be used for making lithium-ion batteries. These too could be made by coating a slurry on to a film, then drying and cutting it. In this case the result isn’t magnetic tape, but battery electrodes.

This equipment, and those technicians, became the backbone of the world’s first lithium-ion battery manufacturing plant, and the model for how they have been made ever since. Today, factories operating on identical principles are turning out every commercial lithium-ion battery on the planet.

For Sony, the idle magnetic tape machines were a piece of good fortune. But Chiang regarded them as an ungainly legacy. The machines were big, and their process was slow and expensive. They were a large part of the reason batteries couldn’t compete with gasoline. It was time to correct that mistake and figure out a new way to make the battery. “We got sidetracked by a historical accident and a reluctance to switch to something that works (better),” Chiang said.

A 24M employee checking the conveyor belt production of batteries. (Kieran Kesner for Quartz)

Going with the flow

At first, Chiang thought the best solution was an arcane and eccentric technology known as a “flow battery.” His interest flummoxed many of the people he talked to.

A battery is superficially fairly simple. It essentially consists of two electrodes, which are the source of the electric charge, embedded in an electrolyte, through which the charge flows. In a conventional lithium-ion battery, the electrodes are solids, all stored in a single cell or pack.

A diagram of a redox flow battery. (Benboy00 via Wikimedia Commons/CC 3.0)

A flow battery, by contrast, consists of chemicals suspended in liquid. The liquids are held in two separate tanks, from which they are pumped through a cell. There they meet, separated by a membrane, and the reaction between the two liquids generates a current across the cell.

To increase the capacity of a battery, you need to either boost its energy density, or make it bigger. For lithium-ion batteries, increasing the energy density—by tweaking the battery chemistry or finding a new kind—is the holy grail scientists are starting to despair of ever finding. Making them bigger is easy; Tesla has done just that for its cars. But they get expensive fast, because they require more of the costly metals, like nickel and cobalt, that go into the electrodes of lithium-ion cells.

By contrast, making a flow battery bigger is just a matter of bolting on larger storage tanks with more liquid inside. But the device would quickly become far too big to fit inside a car, and the liquid chemicals in a flow battery have a much lower energy density than a lithium-ion battery.

But what if you could have the best of both worlds? That was the original thesis of Chiang’s new venture. If you could make a flow battery with lithium-ion chemistry—and its energy density—it would have smaller tanks than a regular flow battery. Above a certain size, the cost per kilowatt-hour would be below that of static batteries, and begin to compete with the economics of fossil fuels.
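Here is a toy version of that crossover logic, in the spirit of the cost model described later in the article. It assumes the usual split of a flow battery's cost into a power-related stack-and-pump component and an energy-related tank component; the specific dollar figures are invented purely for illustration and are not 24M's.

    # Toy crossover model: a static lithium-ion pack (flat $/kWh) versus a hypothetical
    # lithium-ion flow battery whose cost splits into a power-related part (stack, pumps)
    # and an energy-related part (tanks of slurry). All numbers are invented.

    STATIC_COST_PER_KWH = 500.0   # $/kWh, flat, from the pack figure quoted earlier

    FLOW_POWER_COST = 1500.0      # $ per kW of stack/pump capacity (hypothetical)
    FLOW_ENERGY_COST = 150.0      # $ per kWh of tank plus slurry (hypothetical)

    def flow_cost_per_kwh(energy_kwh: float, power_kw: float) -> float:
        """Average $/kWh of a flow system with the given energy and power ratings."""
        return (FLOW_POWER_COST * power_kw + FLOW_ENERGY_COST * energy_kwh) / energy_kwh

    # Hold power fixed and grow the tanks: the fixed stack cost is spread ever thinner,
    # so $/kWh falls toward FLOW_ENERGY_COST, but only at very large storage sizes.
    power_kw = 1000.0
    for energy_kwh in (1_000, 4_000, 10_000, 100_000):
        cost = flow_cost_per_kwh(energy_kwh, power_kw)
        verdict = "flow wins" if cost < STATIC_COST_PER_KWH else "static wins"
        print(f"{energy_kwh:>7,} kWh: ${cost:6.0f}/kWh ({verdict})")

The point is not the specific numbers but the shape of the curve: the crossover only arrives at storage sizes far beyond a car or a house, which is essentially what the company's own cost model would later show.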

At MIT, Chiang assigned a Romanian undergraduate named Mihai Duduta to study the problem. A month later, Duduta had a working prototype. The rapid result was a surprise, and also evidence that Chiang was on to something. It was sufficient to attract $10 million in funding from Boston venture capital firms, and another $2.5 million from the Department of Energy. With that, Chiang opened 24M for business. Duduta was employee No. 1.

The company was operating in stealth mode, so little was released publicly. But in a 2011 paper in the journal Advanced Energy Materials, Duduta explained an order-of-magnitude increase in energy density through a “semi-solid” approach to flow—a lithium-ion battery that worked through “percolating networks of nanoscale conductors.” Now, as far as the world was concerned, Chiang’s latest startup was a quixotic hunt for a world-beating flow battery.

But that would soon change.

24M employees working on a machine that contributes to their battery production. (Kieran Kesner for Quartz)

An economic quandary

The success or failure of Chiang’s idea was in part a function of size. How big would the tanks of lithium-ion flow batteries need to be in order for their cost per kilowatt-hour to drop below that of static batteries?

By late 2010, this problem weighed on Craig Carter, Chiang’s long-time collaborator at MIT. When the original 24M employees gathered for weekly meetings to parse their data, no one seemed to know what size tanks, test cells and other equipment to buy and make. The cost model they were using did not make it clear enough when the economic crossover from static batteries would occur.

That wasn’t the only problem 24M was facing. Nobody had ever made a lithium-ion flow battery. Chiang’s engineers were having trouble figuring out how to pump the electrolyte liquid through the system. The denser they made the slurry, to increase its energy density, the thicker and more sluggish it became. Potential customers, after being briefed by senior executives, offered little encouragement. Conventional static batteries already worked fine; why did anyone need a new kind of lithium-ion battery that also required a pump?

Meanwhile, a side experiment within 24M was starting to attract the attention of Chiang’s junior researchers. For comparison purposes, Chiang had instructed them to create static lithium-ion cells alongside the flow project. “We can learn from them,” he said. The results were interesting: the team had used the same liquid slurries as they had in the flow battery to make hundreds of static cells, and they put them through thousands of charge-discharge cycles. Their capacity remained stable. Unlike the flow experiment, they worked superbly.

After work, some of the junior staff including Duduta would troop downstairs for milkshakes at an eatery called Friendly Toast. There, they discussed the results from the static cells. These younger researchers were less invested than Chiang and the senior staff in the idea of flow, recalled one of them, Tristan Doherty, a former race engineer for Dale Coyne’s Indy 500 racing team. Gradually they became convinced that the new manufacturing process they were developing should be devoted to making static, not flow, batteries. But how to get that message across to their elders?

Chiang in the control room where employees suit up before going to work. (Kieran Kesner for Quartz)

The moment when it all fell apart

It was in this environment that Carter was trying to figure out at what point flow batteries would become economical. Chiang did not seem to think it was a problem. “You may be wasting your time,” he told Carter. Carter persisted, and finally decided to put aside the cost model they were using and build his own. He enlisted one of the young staff—Jeff Disko, a Wyoming native who favors cowboy boots and self-carved silver belt buckles. “Let’s build it from scratch,” Carter told the younger man.

What he didn’t do was tell Chiang what he and Disko were up to. “He might have seen it as a distraction from going forward since we already had a working tool,” Carter said.

Disko worked around the clock for two weeks on the data while Carter created software that could visually display almost any battery variable—energy density, speed of charge, cost of parts, and so on. When they were done, they had a tool that finally revealed the crossover point at which Chiang’s battery would prove economical.

To say it would require enormous tanks would be an understatement. To be competitive with fossil fuels, a lithium-ion flow battery would have to be large enough to back up a stationary facility the size of a nuclear power plant serving tens of thousands of people. It was such a jaw-dropping result that neither Carter nor Disko believed it initially. They spent two weeks redoing the numbers and discussing the results. Disko began to vet it with the rest of the group. But there was no getting around it—the idea on which the company had been founded did not make financial sense.

In early 2011, they held what Disko called a “come-to-Jesus meeting.” He presented the visual tool. Until then, there had been the grumblings, but no brutally concrete juxtaposition of flow and static batteries. Now it seemed clear—unless you were aiming to back up the electricity system of a small city, it was better to build a static battery.

Chiang stared at the results. “So are you willing to bet the company?” he asked Carter.

“Yes,” Carter replied.

“Okay,” Chiang replied simply. He would think about it.

Two days later, an email went to all employees. Flow was out. The company would build a static battery.

It was a typical shift for a startup, in which initial notions rarely survive through the commercial stage of development. For his part, Disko felt “relieved. I think a lot of people did.” The manufacturing problems still needed to be solved. But now they would attack them differently. The cost model had proven its value. “There are benefits of changing direction—of turning on a dime,” Carter said. “Now we had something in which you could plant a flag and know it would stick.”

Batteries in their units at 24M. (Kieran Kesner for Quartz)

Starting from zero

Now the researchers could return, metaphorically, to the age of the Camcorder and pose the question: If Sony hadn’t had those magnetic tape machines lying around and had started from a blank slate, what would have been the most natural and best way to manufacture a lithium-ion battery?

Pumps intended to initiate the flow of electron juice started to disappear from the 24M lab. Then Duduta, the conceiver of the original 24M flow cell, waited a few weeks, before declaring, “I am going to make these [static] cells myself.”

There was no machine for this task, so Duduta stuck his arms into the black rubber gloves of an airless research box—known as a glove box—and began to hand-make cells. That meant mixing up the goop, or slurry, that comprises the two electrodes—the anode and cathode—and slapping them onto a thin film, separated by another plastic film.

A couple of the others joined him. Soon, six or seven researchers had their hands in the rubber gloves. They had created their own manual assembly line. They became good at it—they were producing automobile battery-size cells in just six minutes. Compared to the day-long process required in a conventional factory, that was lightning fast. But it was nothing to the speed with which Chiang would eventually want the process to go.

In the conventional process, the application of the slurry is relatively quick, but the drying stage can take 22 or more hours. You start out with wet slurry, coat it onto film—using glue-like substances to make it hold—press it flat to make the electrodes denser, and then dry it in an oven along the long, slow assembly line. Finally, electrolyte is injected into the battery cell, making it wet all over again.

Apart from this slow process, conventional batteries have a second problem: 35% of their interior space is filled with material that doesn’t contribute to generating electricity. That includes the binder that holds the slurry to the film; a separator that keeps the anode and cathode from shorting each other out; and a current collector that brings the charge to an electronic device.

Chiang wanted to reduce the manufacturing process to a single hour. And he wanted to shrink the space filler to almost nothing.

He started out by whacking out whole parts of the filler. His researchers developed a way to make the electrodes without the glue-like binder. Lithium-ion cells typically contain 14 separate material layers; Chiang simplified them, allowing him to reduce the layers to just five. He reduced the filler to 8% of the battery cell. Finally, he overturned the foundations of lithium-ion manufacturing by figuring out how to dispense entirely with the drying process; instead, he would inject the wet electrolyte into the cell from the start.
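A crude way to see why stripping out the filler matters: if the inactive share of a cell falls from 35% to 8% and everything else is held equal, the active fraction rises by roughly 40%. The sketch below does only that arithmetic and ignores the thicker-electrode change described next.

    # Back-of-the-envelope: how much cell-level energy density improves if the share of
    # inactive material ("filler") drops from 35% to 8%, all else held equal.
    # Illustrative only; it ignores the thicker electrodes described in the article.

    inactive_before = 0.35
    inactive_after = 0.08

    active_before = 1 - inactive_before   # 65% of the cell actually stores charge
    active_after = 1 - inactive_after     # 92%

    multiplier = active_after / active_before
    print(f"active fraction: {active_before:.0%} -> {active_after:.0%}")
    print(f"energy-density multiplier (same chemistry, same volume): ~{multiplier:.2f}x")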

These were defining improvements. But, while he was at it, Chiang made some tweaks to the science of the battery, too. Most significantly, he made the electrodes four times thicker—500 microns, or half a millimeter—which added a lot to the cells’ energy density.

Still, there was the matter of how to actually get the wet electrode slurry onto film in a uniform density, thickness, continuity and rectangular shape, and to do so fast and in a way that could be replicated over and over again.

Some three dozen ways were attempted to get the slurry right. The final method was “a total shot in the dark,” Doherty said, and involved a tube, a plunger, and some Teflon.

But the result was a manufacturing platform that currently spits out a battery cell in about two and a half minutes. The machine that does it isn’t the size of a factory floor, but of a large refrigerator (see image below). As for the cells, Chiang calls them “semi-solid,” a nod to their birth in research into flow batteries.

When I was visiting their lab recently, Chiang and 24M’s CEO, Throop Wilder, stood around the machine as it spit out a fast cell in a perfect rectangle. Wilder started doing jumping jacks. “That’s huge. That’s what investors want to see,” he said, shouting.

Chiang’s sudden pivot to static batteries doesn’t appear to have unnerved 24M’s investors. In 2013, Chiang raised another round of $25 million in cash, and last year PTT, the Thai oil company, invested $15 million. In all, 24M has raised $54.5 million. “They are able to introduce a novel battery that’s 50% [cheaper], like bringing the economics of Moore’s Law into an industry that doesn’t have that,” said Izhar Armony of Charles River Ventures, one of Chiang’s VC investors.

The refrigerator-sized machine at the heart of 24M’s manufacturing process. (Kieran Kesner for Quartz)

What it means for manufacturing

The push to improve the manufacturing, as opposed to the chemistry, of lithium-ion batteries has caught on with US government officials. The Department of Energy (DOE) is currently running a competition for three-year grants worth $6 million to $8 million for researchers promising better manufacturing techniques. If they can make such progress, and add it to any advance on the scientific side, “then you’ve double-dipped,” said David Howell, who runs the DOE battery research program. Howell said that is how he expects to make electric cars equivalent, in dollars per kilowatt-hour, with gasoline.

The advanced manufacturing movement has spread to solar as well. Frank Van Mierlo, founder of a Massachusetts solar panel company called 1366, said that his own industry’s standard manufacturing method “is reminiscent of how the Greeks made glass.” He said his company has devised a new way to make panels that chops out much of the inefficiency.

In a new report, McKinsey describes a broad new age of manufacturing that it calls Industry 4.0. The consulting firm says the changes under way are affecting most businesses. They are probably not “another industrial revolution,” it says, but together, there is “strong potential to change the way factories work.”

For decades, the US has watched its bedrock manufacturing industries wither away, as they’ve instead grown thick in Japan, in South Korea, in China, Taiwan and elsewhere in Asia. According to the Economic Policy Institute, the US lost about 5 million manufacturing jobs just from 1997 to 2014. This includes the production of lithium-ion batteries, which, though invented by Americans, were commercialized in Japan and later South Korea and China.

So Chiang’s innovation could be a poster child for a new strain of thinking in the US. This holds that, while such industries are not likely to return from Asia, the US can reinvent how they manufacture. The country wouldn’t take back nearly as many jobs as it has lost. But there could be large profits, as the country once again moves a step ahead in crucial areas of technology.

To be clear, this is not Chiang’s goal. He is a professed universalist, divorced from scientific realpolitik. But should he succeed, as he plans to, then in addition to helping to decode the perplexing problem of batteries, he might contribute to continuing America’s political and economic dominance.

24M employee working in the Cambridge lab. (Kieran Kesner for Quartz)

The road ahead

Chiang and Wilder are about to embark on a third round of investment, seeking $20 million to $30 million. They would spend the money to scale up to production of a new machine that makes a cell every two to ten seconds. This machine, to be available for sale in two years, would be for stationary electric batteries—used to power businesses, neighborhoods and utilities, rather than cars.

The machine would have a capacity of 79 megawatt-hours a year and produce any kind of lithium-ion battery for a cost of about $160 per kilowatt-hour. By 2020, Chiang says, that will be down to about $85, 30% below where conventional lithium-ion batteries—whose cost is also dropping—may be by then. But most importantly, the machine would be priced at about $11 million. Hence, the startup cost of getting into lithium-ion battery manufacturing would plummet. “It’s so far out of the paradigm, you just don’t believe it,” said Wilder.
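Taking the quoted figures at face value (an $11 million machine producing 79 MWh of cells a year), the capital cost per unit of output can be roughed out as below. The ten-year machine life is an assumption added here for illustration, not a company figure, and materials, labour, buildings and the number of machines a real plant would need are all ignored.

    # Rough arithmetic on the quoted figures: an $11M machine producing 79 MWh of cells
    # a year, versus a conventional entry-level plant at $100M or more.
    # The 10-year machine life is an assumed number, used only for illustration.

    machine_price = 11e6               # $ per machine (quoted)
    annual_output_kwh = 79_000         # 79 MWh/year, in kWh (quoted)
    conventional_plant = 100e6         # $, entry-level plant cost cited earlier

    capex_per_annual_kwh = machine_price / annual_output_kwh
    print(f"machine capex per kWh of annual output: ~${capex_per_annual_kwh:.0f}")

    assumed_life_years = 10
    capex_per_kwh_produced = machine_price / (annual_output_kwh * assumed_life_years)
    print(f"capex per kWh of cells over {assumed_life_years} years: ~${capex_per_kwh_produced:.0f}")

    print(f"entry ticket: ${machine_price/1e6:.0f}M per machine vs "
          f"${conventional_plant/1e6:.0f}M+ for a conventional plant")

On those assumptions the machine itself accounts for only a small slice of the quoted $160-per-kilowatt-hour cell cost; the headline change is the entry ticket, which is what the earlier tenfold-cheaper-plant claim seems to refer to.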

If 24M creates this machine, and if it can sell it into the market—an entirely different question—it will clearly shake up big industries, including stationary and electric car batteries, not to mention utilities. How quickly is anyone’s guess.

Chiang seems ambivalent as 24M begins to disclose what it’s been doing all these years. Until now, the entire industry has had a singular idea of how batteries are manufactured. Chiang’s own rivals were, until today, convinced that he was on a far-fetched crusade to figure out flow batteries.

But now, if they look hard at what he is really doing, and accept his approach, they may attempt to copy him. “If you haven’t seen the movie play out before, you don’t have the confidence it can be done,” he said. But staying a step ahead is also part of the startup game.

You may also like: The man who brought us the lithium-ion battery at the age of 57 has an idea for a new one at 92

03 Jul 17:04

New Ethereal Watercolor and Black Ink Cats That Fade into the Canvas by Endre Penovác

by Christopher Jobson


We continue to be awed by Serbian artist Endre Penovác's ability to somehow control the unforgiving nature of water on paper to produce ghostly paintings of felines. As the mixture of water and black ink bleeds in every direction it appears to perfectly mimic the cat’s fur. In his newest pieces Penovác introduces elements of color and negative space to add a slightly new dimension. You can see more of his recent work on Facebook and Saatchi Art.


03 Jul 16:42

(via 4gifs:video)




06 Mar 02:53

antikythera-astronomy: What’s the closest galaxy to us? Apart from...



antikythera-astronomy:

What’s the closest galaxy to us?

Apart from the Milky Way of course… the answer is surprising.

Most people would probably answer the Andromeda Galaxy, but this would be totally wrong.

A little over a decade ago my school co-conducted a survey with another school to map the night sky around us.

Among the many discoveries made in this survey was that something strange was going on about 25,000 light-years from Earth: the stars in that area were unusually dense.

In addition, the collection of stars was elliptical in shape.

The incredible part?

It’s inside the Milky Way.

The Canis Major Dwarf Galaxy, a small galaxy of a billion stars, is now thought to be the closest (non-Milky Way) galaxy to Earth, at a mere 25,000 light-years away.

It was likely an independent galaxy until our much larger one ate it. It has since been leaving a trail of stars as it orbits the center of the Milky Way.

This means that, like the Galápagos, you’d better visit soon if you want to see what it’s like. In a few billion years its stars may all have been stolen by the gravity of the Milky Way.

(Image credit: VncntM)

28 Dec 19:32

Photo



02 Jul 17:11

(photo by djsosumi) [video]




02 Jul 22:00

How to Articulate Your Vision (rerun)

by Scott Meyer

Reader Martin V pointed this out.

I love me some Mario Kart, so I probably will check the game out, although I don’t understand what zombies have to do with old people racing mobility scooters.

I’m not at all irritated. If I wanted to make the game myself I should have learned to code, or pitched it to a studio, or made any move of any kind to actually turn the idea into a game. Besides, even if I was unhappy, I pretty much say in the comic that I’m waiting for someone else to make the game, so I doubt a lawsuit would end well.

On a totally unrelated note, for the entire month of July all three of my Magic 2.0 novels are on sale over at Amazon US. The Kindle editions are $2.00!

As always, thanks for using my Amazon Affiliate links (US, UK, Canada).

02 Jul 15:10

Silk Road investigator pleads guilty to stealing bitcoins

by Daniel Cooper
Disgraced DEA agent Carl Force has pleaded guilty to charges of extortion, money laundering and obstruction of justice. The official committed the crimes while himself investigating the online black market Silk Road, as well as the activities of its ...
01 Jul 23:00

ultrafacts: Source. For more facts, follow Ultrafacts



ultrafacts:

Source

For more facts, follow Ultrafacts

03 Jul 03:27

Throwback Thursday




02 Jul 20:30

Photo



29 Jun 13:01

Amusement parks for the poor: Pakistani Disneylands

Originally posted by tipolog in "Amusement parks for the poor: Pakistani Disneylands"

Amusement parks for the poor: Pakistani Disneylands


A collection of images

Amusement parks for the poor: Pakistani Disneylands (image 1)



The overwhelming majority of Pakistan's population lives in deep poverty. On top of that, the country is filled with a very large number of refugees from Afghanistan, most of whom live in even worse conditions. Nevertheless, despite the low standard of living, people in the urban outskirts and slums there do not forget about family entertainment and recreation for their children. They build improvised playgrounds and rides, which serve the local kids as a rough substitute for the fancy amusement parks that, for all sorts of reasons, are completely out of their reach.



[Images 2 through 15: Amusement parks for the poor: Pakistani Disneylands]

Source: http://news.163.com/photoview/00AO0001/90850.html?from=ph_ss

See also other materials on this topic under the tags "Экстрим" (Extreme) and "Этнопсихология" (Ethnopsychology).

29 Jun 20:42

Some physicists believe we're living in a giant hologram — and it's not that far-fetched - Vox

Adam Victor Brandizzi

I'd been seeing this clickbait headline around and didn't pay it any mind; I figured the articles would be bad. This one at least explains what the so-called "hologram" is (which obviously isn't what you'd imagine).

(Image: TU Wien)

Some physicists actually believe that the universe we live in might be a hologram.

The idea isn't that the universe is some sort of fake simulation out of The Matrix, but rather that even though we appear to live in a three-dimensional universe, it might only have two dimensions. It's called the holographic principle.

The thinking goes like this: Some distant two-dimensional surface contains all the data needed to fully describe our world — and much like in a hologram, this data is projected to appear in three dimensions. Like the characters on a TV screen, we live on a flat surface that happens to look like it has depth.


It might sound absurd. But when physicists assume it's true in their calculations, all sorts of big physics problems, such as the nature of black holes and the reconciliation of gravity with quantum mechanics, become much simpler to solve. In short, the laws of physics seem to make more sense when written in two dimensions than in three.

"It's not considered some wild speculation among most theoretical physicists," says Leonard Susskind, the Stanford physicist who first formally defined the idea decades ago. "It's become a working, everyday tool to solve problems in physics."

But there's an important distinction to be made here. There's no direct evidence that our universe actually is a two-dimensional hologram. These calculations aren't the same as a mathematical proof. Rather, they're intriguing suggestions that our universe could be a hologram. And as of yet, not all physicists believe we have a good way of testing the idea experimentally.

Where did the idea that the universe might be a hologram come from?

The idea originally came out of a pair of paradoxes concerning black holes.

1) The black hole information loss problem

In 1974, Stephen Hawking famously discovered that black holes, contrary to what had long been thought, actually emit slight amounts of radiation over time. Eventually, as this energy bleeds away from the event horizon — the black hole's outer edge — the black hole should completely disappear.

An illustration of radiation escaping from a black hole. (Communicate Science)

However, this idea prompted what's known as the black hole information loss problem. It's long been thought that physical information can't be destroyed: All particles either retain their original form or, if they change, that change impacts other particles, so the first set of particles' original state could be inferred at the end.

As an analogy, think of a stack of documents that are fed through a shredder. Even though they're cut into tiny pieces, the information present on the pieces of paper still exists. It's been cut into tiny pieces, but it hasn't disappeared, and given enough time, the documents could be reassembled so that you'd know what was written on them originally. In essence, the same thing was thought to be true with particles.

But there was a problem: If a black hole disappears, then the information present in any object that may have been sucked into it seemingly disappears, too.

One solution, proposed by Susskind and Dutch physicist Gerard 't Hooft in the mid-'90s, was that when an object gets pulled into a black hole, it leaves behind some sort of 2D imprint encoded on the event horizon. Later, when radiation leaves the black hole, it picks up the imprint of this data. In this way, the information isn't really destroyed.

And their calculations showed that on just the 2D surface of a black hole, you could store enough information to completely describe any seemingly 3D objects inside it.

"The analogy that both of us independently were thinking about was that of a hologram — a two-dimensional piece of film which can encode all the information in a three-dimensional region of space," Susskind says.

2) The entropy problem: There was also the related problem of calculating the amount of entropy in a black hole — that is, the amount of disorder and randomness among its particles. In the '70s, Jacob Bekenstein had calculated that a black hole's entropy is capped, and that the cap is proportional to the 2D area of its event horizon.

"For ordinary matter systems, the entropy is proportional to the volume, not the area," says Juan Maldacena, an Argentinian physicist involved in studying the holographic principle. Eventually, he and others saw that this, too, pointed to the idea that what looked like a 3D object — a black hole — might be best understood using only two dimensions.
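The article doesn't write the bound out, but the formula Bekenstein and Hawking arrived at (standard textbook material, not something specific to this piece) is usually given as

S_{\mathrm{BH}} = \frac{k_B A}{4 \ell_P^2}, \qquad \ell_P^2 = \frac{G\hbar}{c^3}

where A is the area of the event horizon and \ell_P is the Planck length. The entropy grows with the horizon's area rather than with the enclosed volume, which is exactly the surface-not-volume scaling that the holographic principle turns into a general statement about gravity.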

How did this idea go from black holes to the entire universe?

None of this was proof that black holes were holograms. But early on, Susskind says, physicists recognized that looking at the entire universe as a two-dimensional object that only looks three-dimensional might help solve some deeper problems in theoretical physics. And the math works just as well whether you're talking about a black hole, a planet, or an entire universe.

In 1998, Maldacena demonstrated that a hypothetical universe could be a hologram. His particular hypothetical universe was in what's called anti-de Sitter space (which, to simplify things, has a curved shape over huge distances, as opposed to our universe, which is believed to be flat):

Anti-de Sitter space (left) curves in on itself. Our universe (right) is believed to be flat. (The Physics Mill)

What's more, by viewing this universe in two dimensions, he found a way to make the increasingly popular idea of string theory — a broad framework in which the basic building blocks of the universe are one-dimensional strings, rather than particles — jibe neatly with the well-established laws of particle physics.

And even more importantly, by doing so, he united two hugely important, disparate concepts in physics under one theoretical framework. "The holographic principle connected the theory of gravity to theories of particle physics," Maldacena says.

Combining these two fundamental ideas into a single coherent theory (often called quantum gravity) remains one of the holy grails of physics. So the holographic principle making it possible in this hypothetical universe was a big deal.

Of course, all of this is still quite different from saying that our actual universe — not this weird hypothetical one — is a hologram.

But could our universe actually be a hologram — or does the idea only apply to hypothetical ones?

That's still a matter of active debate. But there's been some recent theoretical work that suggests the holographic principle might work for our universe too — including a high-profile paper by Austrian and Indian physicists that came out this past May.

Like Maldacena, they also sought to use the principle to find a similarity between the disparate fields of quantum physics and gravitational theory. In our universe, these two theories typically don't align: They predict different results regarding the behavior of any given particle.

But in the new paper, the physicists calculated how these theories would predict the degree of entanglement — the bizarre quantum phenomenon in which the states of two tiny particles can become correlated so that a change to one particle can affect the other, even if they're far away. They found that by viewing one particular model of a flat universe as a hologram, they could indeed get the results of both theories to match up.

Still, even though this was a bit closer to our universe than the one Maldacena had worked with, it was just one particular type of flat space, and their calculations didn't take time into account — just the other three spatial dimensions. What's more, even if this did apply directly to our universe, it'd only show that it's possible it could be a hologram. It wouldn't be hard evidence.

How could we prove that the universe is a hologram?

Fermilab's Holometer, used in tests that some say could find evidence for the holographic principle. (Fermilab)

The best type of proof would start with some testable prediction made by holographic theory. Experimental physicists could then gather evidence to see if it matches the prediction. For instance, the theory of the Big Bang predicted that we might find some form of remnant energy emanating throughout the universe as a result of the violent expansion 13.8 billion years ago — and in the 1960s, astronomers found exactly that, in the form of the cosmic microwave background.

At the moment, there's no universally agreed-upon test that would provide firm evidence for the idea. Still, some physicists believe that the holographic principle predicts there's a limit to how much information spacetime can contain, because our seemingly 3D spacetime is encoded by limited amounts of 2D information. As Fermilab's Craig Hogan recently put it to Motherboard, "The basic effect is that reality has a limited amount of information, like a Netflix movie when Comcast is not giving you enough bandwidth. So things are a little blurry and jittery."

Hogan and others are using an instrument called the Holometer to look for this sort of blurriness. It relies on powerful lasers to see whether, at super-small, submicroscopic levels, there's a fundamental limit to the amount of information present in spacetime itself. If there is, they say, it could be evidence that we're living in a hologram.

Still, other physicists, including Susskind, reject the premise of this experiment and say it can't provide any evidence for the holographic principle.

Let's say we prove the universe is a hologram. What would that mean for my everyday life?

Everyday life in a holographic universe. (Shutterstock.com)


In one strict sense, it'd mean little. The same laws of physics you've been living with for your entire life would seem to remain exactly the same. Your house, dog, car, and body would keep appearing as three-dimensional objects, just like they always have.

But in a deeper sense, this discovery would revolutionize our existence on a profound level.

It doesn't matter much for your day-to-day life that the universe was formed 13.8 billion years ago in a sudden, violent expansion from a single point of matter. But the discovery of the Big Bang is instrumental for our current understanding of the history of the universe and our place within the cosmos.


Likewise, the bizarre principles of quantum mechanics — like entanglement, in which two distant particles somehow affect each other — don't really change your daily life either. You can't see atoms and don't notice them doing this. But these principles are another basic truth that tells us something utterly unexpected about the fundamental nature of the universe.

Proving the holographic principle would be much the same. Living our normal lives, we probably won't think much about the peculiar, counterintuitive fact that we live in a hologram. But the discovery would serve as an important step toward fully understanding the laws of physics — which dictate every action you've ever taken.


02 Jul 16:38

Pride

Adam Victor Brandizzi

Am I wrong or is there a pun between "pride" and "parade" here?

I'm sorry, but waiting around for hours just to receive 5 minutes of entertainment is not my idea of fun

*goes back to playing smartphone games*
Expanded from Cheer Up, Emo Kid by XPath Expander.
02 Jul 18:42

Don’t bother the coder

01 May 18:20

"[There’s a] frequently misunderstood construction that linguists refer to as the “habitual be.” When..."

Adam Victor Brandizzi

I had no idea about this meaning; I'd been getting it all wrong!

[There’s a] frequently misunderstood construction that linguists refer to as the “habitual be.” When speakers of standard American English hear the statement “He be reading,” they generally take it to mean “He is reading.” But that’s not what it means to a speaker of Black English, for whom “He is reading” refers to what the reader is doing at this moment. “He be reading” refers to what he does habitually, whether or not he’s doing it right now.

D'Jaris Coles, a doctoral student in the communication disorders department, and a member of the African-American English research team, gives the hypothetical example of Billy, a well-behaved kid who doesn’t usually get into fights. One day he encounters some special provocation and starts scuffling with a classmate in the school yard. “It would be correct to say that Billy fights,” Coles explains, “but he don’t be fighting.”

Janice Jackson, another team member who is also working on a Ph.D. in communication disorders, conducted an experiment using pictures of Sesame Street characters to test children’s comprehension of the “habitual be” construction. She showed the kids a picture in which Cookie Monster is sick in bed with no cookies while Elmo stands nearby eating cookies. When she asked, “Who be eating cookies?” white kids tended to point to Elmo while black kids chose Cookie Monster. “But,” Jackson relates, “when I asked, ‘Who is eating cookies?’ the black kids understood that it was Elmo and that it was not the same. That was an important piece of information.” Because those children had grown up with a language whose verb forms differentiate habitual action from currently occurring action (Gaelic also features such a distinction, in addition to a number of West African languages), they were able even at the age of five or six to distinguish between the two.



-

SYNERGY - African American English

The Sesame Street study is now a classic in “habitual be” research: here’s the article that it comes from (paywalled, but you can read the abstract and first few pages). 

(via scapetheserpentstongue)

02 Jul 14:46

Saturday Morning Breakfast Cereal - Conspiracy Theory

by admin@smbc-comics.com
02 Jul 16:22

Handmade Ceramic Animal Planters by Cumbuca Chic

by Christopher Jobson
Adam Victor Brandizzi

I saw the photos and thought how I would love to buy that. Then I see it is made by a Brazilian! Wonderful!
...
Then I see the price and get sad again.


If you’ve been on the hunt for the perfect ceramic capybara planter, look no further. Ceramicist Priscilla Ramos from São Paulo, Brazil, has a fantastic line of animal planters in the form of foxes, whales, anteaters, and yes, even the world’s largest rodent. She’s even working on a sloth! The handmade stoneware pieces are perfect for small succulents or cacti, and you can see more in her shop: Cumbuca Chic. (via NOTCOT)


01 Jul 20:55

Do you know what your problem is? (parts I and II)

by brunomaron

These two comics were published in NÉBULA, a badass project by the beloved Rafael Coutinho that brings together a first-rate team of comic artists. Check it out, it's worth it: NÉBULA

papelaria

estorvo


02 Jul 09:03

Dating is nice too

02 Jul 06:16

Venus, Jupiter, and Noctilucent Clouds

Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2015 July 1

Venus, Jupiter, and Noctilucent Clouds
Image Credit & Copyright: Petr Horálek

Explanation: Have you seen the passing planets yet? Today the planets Jupiter and Venus pass within half a degree of each other as seen from Earth. This conjunction, visible all over the world, is quite easy to see -- just look to the west shortly after sunset. The brightest objects visible above the horizon will be Venus and Jupiter, with Venus being the brighter of the two. Featured above, the closing planets were captured two nights ago in a sunset sky graced also by high-level noctilucent clouds. In the foreground, the astrophotographer's sister takes in the vista from a bank of the Sec Reservoir in the Czech Republic. She reported this as the first time she has seen noctilucent clouds. Jupiter and Venus will appear even closer together tonight and will continue to be visible in the same part of the sky until mid-August.

Tonight: See Venus & Jupiter together after sunset
Tomorrow's picture: open space

Authors & editors: Robert Nemiroff (MTU) & Jerry Bonnell (UMCP)
