Shared posts

04 May 12:39

inspredwood: danlacek: the-future-now: Watch: Can video games...

23 Apr 15:59

Raw & Rendered: Experimental 3D Artworks by Joey Camacho

by Christopher Jobson

raw-1

In early 2014, Vancouver-based graphic artist Joey Camacho set out to learn more about rendering images using Cinema 4D and Octane Render, with the goal of creating a new piece each day. His first attempts were pretty rudimentary, but it wasn't long before his exploration and experimentation began to pay off with increasingly subtle details inspired by biology, sound, and geometry. Just a few months into his 'Progress Before Perfection' project, he started getting requests for prints as his images were shared widely around Tumblr and elsewhere. You can see more of his work on Behance, and prints of many pieces are available through his website.

anim

raw-2

raw-3

raw-4

raw-5

raw-6

raw-7

raw-8

raw-9

raw-10

09 Mar 12:17

(via d0gbl0g:darksilenceinsuburbia:Tanja Brandt)

04 Feb 10:52

24 Hours Comic – The Gaeneviad

by boulet
01 Feb 17:02

Expansive Finnish Landscapes Photographed by Mikko Lagerstedt

by Christopher Jobson

mikko-1
The Whole Universe Surrenders, Emäsalo, 2015

mikko-2
Divided – 2014, Meri-Pori, Finland

mikko-3
Highway – 2014, Finland

mikko-4
Pathway – 2014, Tuusula, Finland

mikko-5
Frozen Echo – 2014, Porvoo, Finland

mikko-6

mikko-7

mikko-8
Lost at Night, 2014

Self-taught photographer Mikko Lagerstedt (previously) is drawn into the night where he often finds himself camped next to his tripod, waiting hours for an exposure of a frozen coastal scene or a dark and brooding forest. Many of his images are composites of two photos taken from the same location, a shorter exposure of the sky merged with a significantly longer exposure of the ground which is then manipulated in Lightroom. Lagerstedt is extremely open about his process, sharing tutorials and blog posts about how he works on his website. You can also follow him on Instagram.

22 Jan 17:40

Drought and urban overpopulation ended the Assyrian Empire

by Carlos Orsi
Giseli.ramos

Interesting...

I wrote the note below for the Telescópio column of the Jornal da Unicamp last November. For some reason I thought it was worth highlighting here, now, mainly for the attention of readers in São Paulo:

Overpopulation and drought led to the fall of the Assyrian Empire in the 7th century before the common era, argues an article published in the journal Climate Change. The so-called Neo-Assyrian Empire came to dominate practically the entire Middle East, from Egypt to the Persian Gulf, including lands that today belong to Israel, Palestine, Turkey, Syria and Iraq, at the beginning of the 7th century BCE; but decades later it was already disintegrating, fractured by civil wars.

The authors of the new article, based in the United States and Turkey, combined climate data from the period with the contents of a letter written by an astrologer to the king, reporting that "there were no harvests" in the year 657 BCE. Paleoclimate data corroborate the astrologer's report, and analyses of the region's climate patterns indicate that the drought of 657 was just one in a series that stretched over several years. On top of that, the population of cities such as the capital, Nineveh, would have further strained the economy.

"We're not saying that the Assyrians suddenly starved to death or were forced to flee the cities and wander the desert," said one of the article's authors, Adam Schneider of the University of California, San Diego, in a statement. "We're saying that drought and overpopulation hurt the economy and destabilized the political system to the point where the empire was no longer able to contain internal disorder and aggression from other peoples."
20 Jan 11:41

SCULLY FOREVER

20 Jan 10:19

child-of-thecosmos: "Exploration is in our nature. We began as...

child-of-thecosmos:

"Exploration is in our nature. We began as wanderers, and we are wanderers still. We have lingered long enough on the shores of the cosmic ocean. We are ready at last to set sail for the stars." - Carl Sagan

Wanderers is a vision of humanity’s expansion into the Solar System, based on scientific ideas and concepts of what our future in space might look like, if it ever happens. The locations depicted in the film are digital recreations of actual places in the Solar System, built from real photos and map data where available. Watch the breathtaking short film on Vimeo.

18 Jan 20:57

undergroundmonorail: cactiofficial: pyronoid-d: text-mode: Th...



undergroundmonorail:

cactiofficial:

pyronoid-d:

text-mode:

The Morris worm or Internet worm of November 2, 1988 was one of the first computer worms distributed via the Internet. It was written by a student at Cornell University, Robert Tappan Morris, and launched on November 2, 1988 from MIT.

It’s trapped on a floppy tho this is some dark shit it has been denied its purpose forever bound to this obsolete storage

am i glad it’s in there and we’re out here

people reading fantasy novels ask “why did the ancient ones seal the evil away for ten thousand years instead of just killing it” but then we go ahead and do this shit

13 Jan 14:50

These are the top-25 photos from Flickr in 2014

by Bhautik Joshi

From the hundreds of millions of photos uploaded to Flickr in 2014, these 25 bubbled to the top.

Though beauty is in the eye of the beholder, we’ve compiled this list based on a number of engagement and community factors. The photos were scored by looking at a combination of social and interactive elements, including how often the photo had been faved and viewed, among others.

Several community members appeared in the list multiple times; we picked their top-scoring image. We saw three of the Flickr 20under20 winners represented in the list. And it was perhaps little surprise that the European Space Agency's Rosetta Philae photo made the cut. We also included four honorable mentions because we loved them so much.

Congratulations to these amazing photographers!

***

Uploaded on 1/10/2014 by aleshurik

Nightly shower 130812 F4332

Uploaded on 2/17/2014 by PeteHuu

p e r s i s t | lofoten, norway

Uploaded on 4/13/2014 by elmofoto

Wherever you lay your head

Uploaded on 2/26/2014 by rosiehardy

John.

Uploaded on 4/24/2014 by LJ.

Lightbulb

Uploaded on 8/12/2014 by Alexandr Tikki

ixspreparation2

Uploaded on 5/19/2014 by yard2380

Night Reading

Uploaded on 1/21/2014 by laurawilliams

"Besides my dad, she was the only one in my family who was like this..."

Uploaded on 3/11/2014 by humansofny

loopy sky

Uploaded on 5/1/2014 by SoulRiser

Bear Lake - Pentax 67 + Portra 400

Uploaded on 8/1/2014 by http://www.trentonmichael.com

NAVCAM top 10 at 10 km – 10

Uploaded on 11/11/2014 by europeanspaceagency

Oil Pastels

Uploaded on 3/11/2014 by WideEyedIlluminations

Here, once again

Uploaded on 1/1/2014 by Deltalex.

Chinatown

Uploaded on 3/22/2014 by Masa

Such is the price of leaving

Uploaded on 4/28/2014 by Whitney Justesen

I will learn to love the skies I'm under.

Uploaded on 6/4/2014 by David Uzochukwu

on the neighbour's grounds

Uploaded on 3/20/2014 by Rosie Anne

The Dreamy Coast

Uploaded on 1/7/2014 by Rob Macklin

Bagel&Lox

Uploaded on 3/24/2014 by davideluciano

Little Sherlock

Uploaded on 1/19/2014 by Adrian Sommeling

Pyramid Barn

Uploaded on 1/14/2014 by stevoarnold

HIPA, a non-profit photography show for the east of England in 2015, we are currently trying to raise the profile of the event to attract sponsorship, so if you feel like visiting the site and 'liking' the page it would help hugely, many thanks

Uploaded on 4/24/2014 by rastaschas

Fim de tarde

Uploaded on 6/7/2014 by Johnson Barros

320/365

Uploaded on 8/8/2014 by alexcurrie

Red Anemone

Uploaded on 3/31/2014 by j man.

The Backyard Falcon

Uploaded on 1/14/2014 by Avanaut

"And when it all comes crashing down, who will you be?" - Miles Away

Uploaded on 6/14/2014 by The Change Is Me.

***

Uploaded on 2/27/2014 by oprisco


15 Dec 11:11

The nerd girl who saved Apollo 11

by Carlos Cardoso

ops

When the Eagle, the lunar landing module, was less than three minutes from its historic touchdown on the Moon, something went wrong. Very wrong.

The navigation computer triggered an error alarm. Orders of magnitude less powerful than the CPU in your microwave oven, it had no room for anything that was not strictly necessary, and one module was eating 20% of the CPU in a situation where the system would already be running at 85% of capacity.

Steve Bales, Guidance Officer, and Jack Garman, computing specialist at mission control, quickly ordered a reset of the alarm, figuring it might be a random glitch. Armstrong and Aldrin did so, but soon afterwards another alarm appeared.

The MIT group that programmed the landing routines had decided to use the Ascent and Rendezvous Radar to track the Command/Service Module. The Abort Radar System already did this, but it would be an extra safeguard in case something went wrong. They sent out the patches and the procedures, but since it was such a last-minute change, they gave up on it. They then sent another update so that the Command Module radar would not be activated, but forgot to send the corresponding change to the procedures. The switch that should have been set to MANUAL was left on AUTO.

As a result, during the descent the software kept trying to read data from the radar and compute the module's position. Since the data made no sense (the radar was not on), it repeated the calculation over and over, consuming more and more processing time, overloading the registers and triggering the alarms.

On an ordinary computer this would have been a catastrophic failure. The other tasks would lose priority, since the radar routine would never return control to the CPU. Unable to manage attitude, propulsion, fuel consumption and other factors, the Eagle would be left uncontrollable. The computer, stuck in a loop, would be destroyed along with the astronauts in the inevitable crash. But that is not what happened.

Unlike simpler operating systems, such as Windows 3.11, the Apollo computer was built on robust concepts. Don Eyles, a 22-year-old kid, had written something revolutionary out of the bowels of MIT: software with fixed-priority preemptive scheduling.

In DOS, Windows 3.11 and the like, multitasking was cooperative. A program ran and, from time to time, by servicing a software interrupt, handed control back to the operating system so it could take care of other things. Badly behaved programs did not respect this. Neither did buggy ones.

Don Eyles' software, working with the hardware, implemented multitasking in which not only did each routine have a maximum allotted time, but high-priority routines actually had priority. So if the CPU is idle and your buggy radar routine eats 80% of it, tough luck; but if my landing-control routine needs machine cycles, it will get them, no matter how loudly the other one screams, and no matter that an astronaut left the radar switch on AUTO and triggered the routine. The radar routine gets bumped so that mine can have the CPU.

The computer was running flat out, but whatever mattered for the landing had priority and guaranteed processing time. Don Eyles' ultra-robust architecture saved the day. But the real credit belongs to this nerd girl here:
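The fixed-priority preemptive idea can be sketched as a toy model. This is purely illustrative: the job names, priorities and work units are made up and bear no relation to the AGC's actual executive.

```javascript
// Toy fixed-priority scheduler: each tick, the highest-priority job
// that still has work gets the CPU, regardless of how much work the
// lower-priority jobs are demanding.
function makeScheduler() {
  var jobs = [];
  return {
    // Lower number = higher priority, as in many RTOS conventions.
    spawn: function (name, priority, work) {
      jobs.push({ name: name, priority: priority, work: work });
    },
    run: function (ticks) {
      var log = [];
      for (var t = 0; t < ticks; t++) {
        var runnable = jobs.filter(function (j) { return j.work > 0; });
        if (runnable.length === 0) break;
        runnable.sort(function (a, b) { return a.priority - b.priority; });
        var job = runnable[0];   // preempt everything below it
        job.work--;
        log.push(job.name);
      }
      return log;
    }
  };
}

var s = makeScheduler();
s.spawn("radar", 5, 1000);  // runaway low-priority job with endless work
s.spawn("landing", 1, 3);   // critical high-priority job, 3 units of work
var log = s.run(10);
// The landing job runs to completion first; radar only gets the leftovers.
```

Under cooperative multitasking the runaway "radar" job would never yield; with fixed priorities it simply loses the CPU whenever the critical job needs it.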

Margaret_Hamilton

Her name is Margaret Hamilton. A mathematics graduate, class of 1958, she worked at MIT as a software developer. At the time, Computer Science and Software Engineering did not exist as separate disciplines; in fact, the term "software engineering" did not even exist. Those were pioneering times, and you learned by doing. And Margaret did it very well.

This young woman, who would look perfectly at home at a Star Trek convention, rose through the ranks at MIT. While Don Draper was patting women on the behind at Sterling Cooper, Margaret was in charge as director of the Software Engineering Division of MIT's Charles Draper Laboratory.

Hired by NASA to develop the Apollo software, the team put into practice a pile of concepts created by Margaret Hamilton. Some sites say she wrote the Apollo programs. No, children, she went far beyond that: she created the concepts and the methodology, the architecture and the modeling.

Margaret_in_action_1

Among the concepts she created or pioneered:

  • Asynchronous software
  • Priority scheduling
  • Human-in-the-loop
  • End-to-end testing
  • System Oriented Objects
  • Modeling languages
  • Distributed development
  • Error detection and recovery in RTOS
  • Testing and certification methodologies
  • Automated life cycle environments

Apart from the radar problem, not a single other bug occurred during the Apollo missions, thanks to the development requirements and testing methodologies Margaret created. Remember, this was well before UML and other frills.

Being Chief Designer of the flight software for the Apollo and Skylab programs would already be quite an honor, but in 2003 NASA finally loosened its purse strings and, in an unprecedented gesture, granted Margaret Hamilton the NASA Exceptional Space Act Award for scientific and technical contributions, along with US$ 37,200.00. It was the first and only time NASA gave anyone a cash award.

78596561_o

Margaret Hamilton has published more than 130 papers in computer science and coined the term "software engineering". She worked on 60 projects and 6 major programs at NASA.

Today, at 76, this formidable old lady is CEO of Hamilton Technologies, where she develops the methodologies of Universal Systems Language (which deals not with objects or models but with systems) and Development Before the Fact, whose principle is simple: "don't fix it; do it right the first time."

Margaret Hamilton's legacy is immense. Every time Microsoft runs a beta test with people all over the world, it is using the human-in-the-loop concept she created. Countless bugs are found when real people test the software, on top of the automated tests. Simple? Today, maybe; in 1965 it was not.

Everything you use today that has any complexity in software terms bears her fingerprints, but you will not see Margaret Hamilton in documentaries full of rockets and brave astronauts. She never appeared in the old footage; she was not even in Mission Control. Just as well: back then the building did not even have a women's restroom.

She is a woman who stood out in a field almost 100% dominated by men, at a time when having internal reproductive organs was a guarantee of not being taken seriously. If Apollo was one small step for a man, it was a giant leap for women in computing.

For if some find it easy to mock the women who burned bras, it is much harder to mock one who burned an ablative heat shield at 40,000 km/h on reentry into Earth's atmosphere, through the sheer expertise of her software.


The post A nerdinha que salvou a Apollo 11 appeared first on Meio Bit.








12 Dec 13:38

The story of Grace Hopper (aka Amazing Grace)

11 Dec 15:20

R.I.P. Bikes

Giseli.ramos

The funny thing is that in the occasional post-apocalyptic dreams I tend to have, I always have a bike at hand :P

Tagged: bikes, bicycles
10 Dec 11:19

Ain’t these two happy owlets adorable? 😍 #9gag

19 Nov 11:46

Photographs That Reveal the Intricate Innards of Old Mechanical Calculators

by Gannon Burgett

012-1

Photographer Kevin Twomey has a fascination with capturing complex objects in the most simple of compositions, and his series Low Tech is the epitome of this. The series features photos of old, mechanical calculators stripped bare, exposing the exquisitely complicated creations that they were from the inside out.

Similar to the advancement of cameras, calculators have evolved from purely mechanical machinery into digital gadgets now almost small enough to hide behind a silver dollar. Regardless of how large and cumbersome those old mechanical calculators were, though, there is a certain beauty in their varied designs that shows off seemingly infinite intricacies.

The calculators themselves weren’t the only intricate aspect of this series though. To ensure he captured every motor, key and spring, Twomey took multiple shots of each composition and focus-stacked them using Helicon Focus. The results, as you can see below, are fascinating from top to bottom.

008

010

008-1

006-1

011

014

007-1

002

006

003-1

005

005-1

007

004

004-1

001

002-1

001-1

009

003

012

To see more of Twomey’s work, pay his website a visit by clicking here, or give him a follow on Facebook.

(via WIRED)


Image credits: Photographs by Kevin Twomey and used with permission

17 Nov 19:49

Good morning! (Apply when applicable in your timezone😁) #9gag

08 Nov 01:39

Jupiter 'shepherds' the asteroid belt, preventing the asteroids from falling into the sun or accreting into a new planet.

27 Oct 11:05

ginnymydear: DUUUUUUDE

22 Oct 12:59

What the Flag Says

re:form · Andy Warner on Oct 13

5 min


    07 Oct 17:51

    booksgamesmovies: For your viewing pleasure: a squirrel trying...



    booksgamesmovies:

    For your viewing pleasure: a squirrel trying to bury an acorn in a dog.

    07 Oct 17:50

    Outside the Box

    by Grant
     

    You can order a poster print of this comic at my shop.
    02 Oct 20:57

    Where Science and Engineering Meet

    Giseli.ramos

    We had this text in the electronics lab of my undergrad program too.

    26 Sep 11:45

    the-foxandthe-bodhi: THE TIGER MADE MY HEART MELT

    12 Sep 11:55

    Photo

    09 Sep 11:02

    asperatus cloud x

    20 Aug 16:42

    Manul – the Cat that Time Forgot

    by RJ Evans
    Have you ever wanted to take a trip through time to see what animals looked like millions of years ago? When it comes to cats there is little or no need.  This beautiful specimen is a Manul, otherwise known as Pallas’s Cat.  About twelve million years ago it was one of the first two modern cats to evolve and it hasn’t changed since. The other species, Martelli’s Cat, is extinct, so what you are looking at here is a unique window into the past of modern cats.

    Although the Manul is only the size of a domestic cat, reaching about 26 inches in length, its appearance makes it seem somewhat larger.  It is stocky and has very long, thick fur, which gives it, perhaps to human eyes, an unintentional appearance of feline rotundity.  Yet although it appears stout and somewhat ungainly it has a natural elegance and poise – exactly what you would expect from the genus Felis, in other words.  Plus it can certainly look after itself in a fight!

    The main reason for its survival throughout the ages has been its isolation. In the wild it lives on the Asian steppes at substantial heights – up to 13,000 feet.  Found in India, Pakistan, western China and Mongolia as well as Afghanistan and Turkmenistan, it has even been discovered recently in the wilds of the Sayan region of Siberia. In these places it prefers rocky areas, semidesert and barren hillsides.  In other words, places where we are less likely to live – but even so, you will no doubt be able to hazard a guess which species is the Manul’s greatest enemy.

    Take a close look at the eyes of the Manul.  Do you see a difference between it and the domestic cat? That’s right, the pupils of the Manul are round, not slit-like.  Proportionally too, the legs are smaller than cats we know and they can’t run anywhere near as quickly.  As for the ears, well, when you actually can catch sight of them they are very low and much further apart than you would see in a domestic cat.

    It also has a much shorter face than other cats, which makes its face look flattened.  Some people, when they see their first Manul, mistakenly believe that it is a monkey because of its facial appearance and bulky-looking frame.  It is easier to see why from some angles.

    The Manul has not been studied a great deal in the wild, where it is classified as near threatened.  This is because it is distributed very patchily throughout its territory, not to mention the fact that it is still hunted despite protection orders made by the various governments in its range. Before it was legally protected, tens of thousands of Manuls were hunted and killed each year, mostly for their fur.

    It is thought that the cat hunts mostly at dawn and dusk where it will feed on small rodents and birds. Ambush and stalking are their favorite methods of conducting a hunt and although they tend to shelter in abandoned burrows in the day they have been seen basking in the sun. In other words, behaviorally they are much like the domesticated moggy that we know and love.

    The Manul is a solitary creature and individuals do not tend to meet purposefully when it is outside the breeding season and will avoid the company of others of its kind where possible. When it is threatened it raises and quivers the upper lip, Elvis like, revealing a large canine tooth.

    When breeding does happen the male has to get in quickly as oestrus usually only lasts just under two days. It usually births up to six kittens, very rarely a single one, and it is believed that the size of its litters reflect the high rate of mortality the infant cats can expect. Yet they are expected to be able to hunt at sixteen weeks and are very much on their own and independent by six months. Although their life expectancy in the wild is unknown in captivity they have lived to over eleven years.

    Don’t rush to your local pet store, however.  The Manul does not domesticate and even if it did they are incredibly hard to breed in captivity with many kittens dying.  This is thought to be because in the wild, due to its isolation, the cat’s immune system did not have a need to develop and so when they come in contact with us and other species, this under-developed immune system lets them down.

    Yet as a living, breathing glimpse into twelve million years of feline history these amazing animals are irreplaceable. Unique is a word which, in this day and age, is mightily overused. Yet these cats are quite simply just that – unique.

    04 Jul 14:36

    Visualizing Algorithms

    The power of the unaided mind is highly overrated… The real powers come from devising external aids that enhance cognitive abilities. Donald Norman

    Algorithms are a fascinating use case for visualization. To visualize an algorithm, we don’t merely fit data to a chart; there is no primary dataset. Instead there are logical rules that describe behavior. This may be why algorithm visualizations are so unusual, as designers experiment with novel forms to better communicate. This is reason enough to study them.

    But algorithms are also a reminder that visualization is more than a tool for finding patterns in data. Visualization leverages the human visual system to augment human intellect: we can use it to better understand these important abstract processes, and perhaps other things, too. This is an adaptation of my talk at Eyeo 2014. A video of the talk will be available soon. (Thanks, Eyeo folks!)

    Sampling

    Before I can explain the first algorithm, I first need to explain the problem it addresses.

    Van Gogh’s The Starry Night

    Light — electromagnetic radiation — the light emanating from this screen, traveling through the air, focused by your lens and projected onto the retina — is a continuous signal. To be perceived, we must reduce light to discrete impulses by measuring its intensity and frequency distribution at different points in space.

    This reduction process is called sampling, and it is essential to vision. You can think of it as a painter applying discrete strokes of color to form an image (particularly in Pointillism or Divisionism). Sampling is further a core concern of computer graphics; for example, to rasterize a 3D scene by raytracing, we must determine where to shoot rays. Even resizing an image requires sampling.

    Sampling is made difficult by competing goals. On the one hand, samples should be evenly distributed so there are no gaps. But we must also avoid repeating, regular patterns, which cause aliasing. This is why you shouldn’t wear a finely-striped shirt on camera: the stripes resonate with the grid of pixels in the camera’s sensor and cause Moiré patterns.

    Photo: retinalmicroscopy.com

    This micrograph is of the human retina’s periphery. The larger cone cells detect color, while the smaller rod cells improve low-light vision.

    The human retina has a beautiful solution to sampling in its placement of photoreceptor cells. The cells cover the retina densely and evenly (with the exception of the blind spot over the optic nerve), and yet the cells’ relative positions are irregular. This is called a Poisson-disc distribution because it maintains a minimum distance between cells, avoiding occlusion and thus wasted photoreceptors.

    Unfortunately, creating a Poisson-disc distribution is hard. (More on that in a bit.) So here’s a simple approximation known as Mitchell’s best-candidate algorithm.

    Best-candidate

    You can see from these dots that best-candidate sampling produces a pleasing random distribution. It’s not without flaws: there are too many samples in some areas (oversampling), and not enough in other areas (undersampling). But it’s reasonably good, and just as important, easy to implement.

    Here’s how it works:

    Best-candidate

    For each new sample, the best-candidate algorithm generates a fixed number of candidates, shown in gray. (Here, that number is 10.) Each candidate is chosen uniformly from the sampling area.

    The best candidate, shown in red, is the one that is farthest away from all previous samples, shown in black. The distance from each candidate to the closest sample is shown by the associated line and circle: notice that there are no other samples inside the gray or red circles. After all candidates are created and distances measured, the best candidate becomes the new sample, and the remaining candidates are discarded.

    Now here’s the code:

    function sample() {
      var bestCandidate, bestDistance = 0;
      for (var i = 0; i < numCandidates; ++i) {
        var c = [random() * width, random() * height],
            d = distance(findClosest(samples, c), c);
        if (d > bestDistance) {
          bestDistance = d;
          bestCandidate = c;
        }
      }
      return bestCandidate;
    }

    As I explained the algorithm above, I will let the code stand on its own. (And the purpose of this essay is to let you study code through visualization, besides.) But I will clarify a few details:

    The external numCandidates defines the number of candidates to create per sample. This parameter lets you trade off speed against quality: the lower the number of candidates, the faster it runs, while the higher the number of candidates, the better the sampling quality.

    The distance function is simple geometry:

    function distance(a, b) {
      var dx = a[0] - b[0],
          dy = a[1] - b[1];
      return Math.sqrt(dx * dx + dy * dy);
    }
    You can omit the sqrt here, if you want, since it’s a monotonic function and doesn’t change the determination of the best candidate.

    The findClosest function returns the closest sample to the current candidate. This can be done by brute force, iterating over every existing sample. Or you can accelerate the search, say by using a quadtree. Brute force is simple to implement but very slow (quadratic time, in O-notation). The accelerated approach is much faster, but more work to implement.
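    The essay leaves findClosest to the reader. A brute-force version might look like the sketch below (assuming samples is an array of [x, y] points; the distance helper is repeated so the sketch is self-contained):

```javascript
// Simple Euclidean distance between two [x, y] points.
function distance(a, b) {
  var dx = a[0] - b[0],
      dy = a[1] - b[1];
  return Math.sqrt(dx * dx + dy * dy);
}

// Brute-force nearest-sample search: O(n) per candidate, O(n²) overall.
// A quadtree would accelerate this, at the cost of more implementation work.
function findClosest(samples, candidate) {
  var closest = null, minDistance = Infinity;
  for (var i = 0; i < samples.length; i++) {
    var d = distance(samples[i], candidate);
    if (d < minDistance) {
      minDistance = d;
      closest = samples[i];
    }
  }
  return closest;
}
```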

    Speaking of trade-offs: when deciding whether to use an algorithm, we evaluate it not in a vacuum but against other approaches. And as a practical matter it is useful to weigh the complexity of implementation — how long it takes to implement, how difficult it is to maintain — against its performance and quality.

    The simplest alternative is uniform random sampling:

    function sample() {
      return [random() * width, random() * height];
    }

    It looks like this:

    Uniform random

    Uniform random is pretty terrible. There is both severe under- and oversampling: many samples are densely packed, even overlapping, while large areas contain no samples at all. (Uniform random sampling also represents the lower bound of quality for the best-candidate algorithm, as when the number of candidates per sample is set to one.)

    Dots patterns are one way of showing sample pattern quality, but not the only way. For example, we can attempt to simulate vision under different sampling strategies by coloring an image according to the color of the closest sample. This is, in effect, a Voronoi diagram of the samples where each cell is colored by the associated sample.

    What does The Starry Night look like through 6,667 uniform random samples?

    Uniform random

    Hold down the mouse to compare to the original.

    The lackluster quality of this approach is again apparent. The cells vary widely in size, as expected from the uneven sample distribution. Detail has been lost because densely-packed samples (small cells) are underutilized. Meanwhile, sparse samples (large cells) introduce noise by exaggerating rare colors, such as the pink star in the bottom-left.

    Now observe best-candidate sampling:

    Best-candidate

    Hold down the mouse to compare to the original.

    Much better! Cells are more consistently sized, though still randomly placed. Despite the quantity of samples (6,667) remaining constant, there is substantially more detail and less noise thanks to their even distribution. If you squint, you can almost make out the original brush strokes.

    We can use Voronoi diagrams to study sample distributions more directly by coloring each cell according to its area. Darker cells are larger, indicating sparse sampling; lighter cells are smaller, indicating dense sampling. The optimal pattern has nearly-uniform color while retaining irregular sample positions. (A histogram showing cell area distribution would also be nice, but the Voronoi has the advantage that it shows sample position simultaneously.)

    Here are the same 6,667 uniform random samples:

    Uniform random

    The black spots — large gaps between samples — would be localized deficiencies in vision due to undersampling. The same number of best-candidate samples exhibits much less variation in cell area, and thus more consistent coloration:

    Best-candidate

    Can we do better than best-candidate? Yes! Not only can we produce a better sample distribution with a different algorithm, but this algorithm is faster (linear time). It’s at least as easy to implement as best-candidate. And this algorithm even scales to arbitrary dimensions.

    This wonder is called Bridson’s algorithm for Poisson-disc sampling, and it looks like this:

    Poisson-disc

    This algorithm functions visibly differently than the other two: it builds incrementally from existing samples, rather than scattering new samples randomly throughout the sample area. This gives its progression a quasi-biological appearance, like cells dividing in a petri dish. Notice, too, that no samples are too close to each other; this is the minimum-distance constraint that defines a Poisson-disc distribution, enforced by the algorithm.

    Here’s how it works:

    Poisson-disc

    Red dots represent “active” samples. At each iteration, one is selected randomly from the set of all active samples. Then, some number of candidate samples (shown as hollow black dots) are randomly generated within an annulus surrounding the selected sample. The annulus extends from radius r to 2r, where r is the minimum-allowable distance between samples.

    Candidate samples within distance r from an existing sample are rejected; this “exclusion zone” is shown in gray, along with a black line connecting the rejected candidate to the nearby existing sample. A grid accelerates the distance check for each candidate. The grid size r/√2 ensures each cell can contain at most one sample, and only a fixed number of neighboring cells need to be checked.

    If a candidate is acceptable, it is added as a new sample, and a new active sample is randomly selected. If none of the candidates are acceptable, the selected active sample is marked as inactive (changing from red to black). When no samples remain active, the algorithm terminates.
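    The steps above can be sketched in JavaScript. This is a minimal sketch, not the essay’s implementation: the function and field names are my own, and k = 30 candidates per active sample follows Bridson’s suggested default.

```javascript
// Bridson's Poisson-disc sampling: generates points at least `radius` apart
// within a width × height rectangle. k is the number of candidates tried
// per active sample before that sample is retired.
function poissonDisc(width, height, radius, k) {
  k = k || 30;
  var radius2 = radius * radius,
      cellSize = radius / Math.SQRT2, // each grid cell holds at most one sample
      gridWidth = Math.ceil(width / cellSize),
      gridHeight = Math.ceil(height / cellSize),
      grid = new Array(gridWidth * gridHeight),
      active = [],
      samples = [];

  // Start with a single random sample.
  emit({x: Math.random() * width, y: Math.random() * height});

  while (active.length) {
    var i = Math.random() * active.length | 0, s = active[i], found = false;

    // Try k candidates in the annulus [radius, 2 × radius] around s.
    for (var j = 0; j < k; ++j) {
      var a = 2 * Math.PI * Math.random(),
          r = radius * (1 + Math.random()), // simple, not uniform by area
          c = {x: s.x + r * Math.cos(a), y: s.y + r * Math.sin(a)};
      if (c.x >= 0 && c.x < width && c.y >= 0 && c.y < height && far(c)) {
        emit(c);
        found = true;
        break;
      }
    }

    // No acceptable candidate: retire s from the active list (swap-and-pop).
    if (!found) active[i] = active[active.length - 1], active.pop();
  }

  return samples;

  // Check the 5×5 block of grid cells around the candidate; the grid size
  // guarantees any sample within distance `radius` lies in this block.
  function far(c) {
    var gx = c.x / cellSize | 0, gy = c.y / cellSize | 0;
    for (var y = Math.max(gy - 2, 0); y < Math.min(gy + 3, gridHeight); ++y) {
      for (var x = Math.max(gx - 2, 0); x < Math.min(gx + 3, gridWidth); ++x) {
        var n = grid[y * gridWidth + x];
        if (n) {
          var dx = n.x - c.x, dy = n.y - c.y;
          if (dx * dx + dy * dy < radius2) return false;
        }
      }
    }
    return true;
  }

  function emit(s) {
    grid[(s.y / cellSize | 0) * gridWidth + (s.x / cellSize | 0)] = s;
    active.push(s);
    samples.push(s);
  }
}
```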

    The area-as-color Voronoi diagram shows Poisson-disc sampling’s improvement over best-candidate, with no dark-blue or light-yellow cells:

    Poisson-disc

    The Starry Night under Poisson-disc sampling retains the greatest amount of detail and the least noise. It is reminiscent of a beautiful Roman mosaic:

    Poisson-disc

    Hold down the mouse to compare to the original.

    Now that you’ve seen a few examples, let’s briefly consider why to visualize algorithms.

    Entertaining - I find watching algorithms endlessly fascinating, even mesmerizing. Particularly so when randomness is involved. And while this may seem a weak justification, don’t underestimate the value of joy! Further, while these visualizations can be engaging even without understanding the underlying algorithm, grasping the importance of the algorithm can give a deeper appreciation.

    Teaching - Did you find the code or the animation more helpful? What about pseudocode — that euphemism for code that won’t compile? While formal description has its place in unambiguous documentation, visualization can make intuitive understanding more accessible.

    Debugging - Have you ever implemented an algorithm based on formal description? It can be hard! Being able to see what your code is doing can boost productivity. Visualization does not supplant the need for tests, but tests are useful primarily for detecting failure and not explaining it. Visualization can also discover unexpected behavior in your implementation, even when the output looks correct. (See Bret Victor’s Learnable Programming and Inventing on Principle for excellent related work.)

    Learning - Even if you just want to learn for yourself, visualization can be a great way to gain deep understanding. Teaching is one of the most effective ways of learning, and implementing a visualization is like teaching yourself. I find it easier to remember an algorithm intuitively, having seen it, than to memorize code where I am bound to forget small but essential details.

    #Shuffling

    Shuffling is the process of rearranging an array of elements randomly. For example, you might shuffle a deck of cards before dealing a poker game. A good shuffling algorithm is unbiased, where every ordering is equally likely.

    The Fisher–Yates shuffle is an optimal shuffling algorithm. Not only is it unbiased, but it runs in linear time, uses constant space, and is easy to implement.

    function shuffle(array) {
      var n = array.length, t, i;
      while (n) {
        i = Math.random() * n-- | 0; // 0 ≤ i < n
        t = array[n], array[n] = array[i], array[i] = t;
      }
      return array;
    }

    Above is the code, and below is a visual explanation:

    For a more detailed explanation of this algorithm, see my post on the Fisher–Yates shuffle.

    Each line represents a number. Small numbers lean left and large numbers lean right. (Note that you can shuffle an array of anything — not just numbers — but this visual encoding is useful for showing the order of elements. It is inspired by Robert Sedgwick’s sorting visualizations in Algorithms in C.)

    The algorithm splits the array into two parts: the right side of the array (in black) is the shuffled section, while the left side of the array (in gray) contains elements remaining to be shuffled. At each step it picks a random element from the left and moves it to the right, thereby expanding the shuffled section by one. The original order on the left does not need to be preserved, so to make room for the new element in the shuffled section, the algorithm can simply swap the element into place. Eventually all elements are shuffled, and the algorithm terminates.

    If Fisher–Yates is a good algorithm, what does a bad algorithm look like? Here’s one:

    // DON’T DO THIS!
    function shuffle(array) {
      return array.sort(function(a, b) {
        return Math.random() - .5; // ಠ_ಠ
      });
    }

    This approach uses sorting to shuffle by specifying a random comparator function. A comparator defines the order of elements. It takes arguments a and b — two elements from the array to compare — and returns a value less than zero if a is less than b, a value greater than zero if a is greater than b, or zero if a and b are equal. The comparator is invoked repeatedly during sorting. If you don’t specify a comparator to array.sort, elements are ordered lexicographically.

    Here the comparator returns a random number between -.5 and +.5. The assumption is that this defines a random order, so sorting will jumble the elements randomly and perform a good shuffle.

    Unfortunately, this assumption is flawed. A random pairwise order (for any two elements) does not establish a random order for a set of elements. A comparator must obey transitivity: if a > b and b > c, then a > c. But the random comparator returns a random value, violating transitivity and causing the behavior of array.sort to be undefined! You might get lucky, or you might not.

    How bad is it? We can try to answer this question by visualizing the output:

    Another reason this algorithm is bad is that sorting takes O(n lg n) time, making it significantly slower than Fisher–Yates which takes O(n). But speed is less damning than bias.

    This may look random, so you might be tempted to conclude that random comparator shuffle is adequate, and dismiss concerns of bias as pedantic. But looks can be misleading! There are many things that appear random to the human eye but are substantially non-random.

    This deception demonstrates that visualization is not a magic wand. Showing a single run of the algorithm does not effectively assess the quality of its randomness. We must instead carefully design a visualization that addresses the specific question at hand: what is the algorithm’s bias?

    To show bias, we must first define it. One definition is based on the probability that an array element at index i prior to shuffling will be at index j after shuffling. If the algorithm is unbiased, every element has equal probability of ending up at every index, and thus the probability for all i and j is the same: 1/n, where n is the number of elements.

    Computing these probabilities analytically is difficult, since it depends on knowing the exact sorting algorithm used. But computing them empirically is easy: we simply shuffle thousands of times and count the number of occurrences of element i at index j. An effective display for this matrix of probabilities is a matrix diagram:
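    Such a matrix of empirical probabilities can be computed with a short function. This is a sketch of the empirical approach just described (the function name and signature are my own):

```javascript
// Estimate the shuffle bias matrix empirically: matrix[j][i] is the observed
// probability that the element starting at index i ends at index j after
// shuffling. For an unbiased shuffle, every entry should be close to 1/n.
function biasMatrix(shuffle, n, trials) {
  var counts = [];
  for (var j = 0; j < n; ++j) counts.push(new Array(n).fill(0));
  for (var t = 0; t < trials; ++t) {
    var array = [];
    for (var i = 0; i < n; ++i) array.push(i);
    shuffle(array);
    for (var j = 0; j < n; ++j) counts[j][array[j]]++; // element at j started at array[j]
  }
  return counts.map(function(row) {
    return row.map(function(c) { return c / trials; });
  });
}
```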

    SHUFFLE BIAS
    column = index before shuffle
    row = index after shuffle
    green = positive bias
    red = negative bias

    The column (horizontal position) of the matrix represents the index of the element prior to shuffling, while the row (vertical position) represents the index of the element after shuffling. Color encodes probability: green cells indicate positive bias, where the element occurs more frequently than we would expect for an unbiased algorithm; likewise red cells indicate negative bias, where it occurs less frequently than expected.

    Random comparator shuffle in Chrome, shown above, is surprisingly mediocre. Parts of the array are only weakly biased. However, it exhibits a strong positive bias below the diagonal, which indicates a tendency to push elements from index i to i+1 or i+2. There is also strange behavior for the first, middle and last row, which might be a consequence of Chrome using median-of-three quicksort.

    The unbiased Fisher–Yates algorithm looks like this:

    No patterns are visible in this matrix, other than a small amount of noise due to empirical measurement. (That noise could be reduced if desired by taking additional measurements.)

    The behavior of random comparator shuffle is heavily dependent on your browser. Different browsers use different sorting algorithms, and different sorting algorithms behave very differently with (broken) random comparators. Here’s random comparator shuffle on Firefox:

    For an interactive version of these matrix diagrams to test alternative shuffling strategies, see Will It Shuffle?

    This is egregiously biased! The resulting array is often barely shuffled, as shown by the strong green diagonal in this matrix. This does not mean that Chrome’s sort is somehow “better” than Firefox’s; it simply means you should never use random comparator shuffle. Random comparators are fundamentally broken.

    #Sorting

    Sorting is the inverse of shuffling: it creates order from disorder, rather than vice versa. This makes sorting a harder problem with diverse solutions designed for different trade-offs and constraints.

    One of the most well-known sorting algorithms is quicksort.

    Quicksort

    Quicksort first partitions the array into two parts by picking a pivot. The left part contains all elements less than the pivot, while the right part contains all elements greater than the pivot. After the array is partitioned, quicksort recurses into the left and right parts. When a part contains only a single element, recursion stops.

    The partition operation makes a single pass over the active part of the array. Similar to how the Fisher–Yates shuffle incrementally builds the shuffled section by swapping elements, the partition operation builds the lesser (left) and greater (right) parts of the subarray incrementally. As each element is visited, if it is less than the pivot it is swapped into the lesser part; if it is greater than the pivot the partition operation moves on to the next element.

    Here’s the code:

    function quicksort(array, left, right) {
      if (left < right - 1) {
        var pivot = left + right >> 1;
        pivot = partition(array, left, right, pivot);
        quicksort(array, left, pivot);
        quicksort(array, pivot + 1, right);
      }
    }

    function partition(array, left, right, pivot) {
      var pivotValue = array[pivot];
      swap(array, pivot, --right);
      for (var i = left; i < right; ++i) {
        if (array[i] < pivotValue) {
          swap(array, i, left++);
        }
      }
      swap(array, left, right);
      return left;
    }

    function swap(array, i, j) {
      var t = array[i];
      array[i] = array[j];
      array[j] = t;
    }

    There are many variations of quicksort. The one shown above is one of the simplest — and slowest. This variation is useful for teaching, but in practice more elaborate implementations are used for better performance.

    A common improvement is “median-of-three” pivot selection, where the median of the first, middle and last elements is used as the pivot. This tends to choose a pivot closer to the true median, resulting in similarly-sized left and right parts and shallower recursion. Another optimization is switching from quicksort to insertion sort for small parts of the array, which can be faster due to the overhead of function calls. A particularly clever variation is Yaroslavskiy’s dual-pivot quicksort, which partitions the array into three parts rather than two. This is the default sorting algorithm in Java and Dart.

    The sort and shuffle animations above have the nice property that time is mapped to time: we can simply watch how the algorithm proceeds. But while intuitive, animation can be frustrating to watch, especially if we want to focus on an occasional weirdness in the algorithm’s behavior. Animations also rely heavily on our memory to observe patterns in behavior. While animations are improved by controls to pause and scrub time, static displays that show everything at once can be even more effective. The eye scans faster than the hand.

    A simple way of turning an animation into a static display is to pick key frames from the animation and display those sequentially, like a comic strip. If we then remove redundant information across key frames, we use space more efficiently. A denser display may require more study to understand, but is faster to scan since the eye travels less.

    Below, each row shows the state of the array prior to recursion. The first row is the initial state of the array, the second row is the array after the first partition operation, the third row is after the first partition’s left and right parts are again partitioned, etc. In effect, this is breadth-first quicksort, where the partition operation on both left and right proceeds in parallel.

    Quicksort

    As before, the pivots for each partition operation are highlighted in red. Notice that the pivots turn gray at the next level of recursion: after the partition operation completes, the associated pivot is in its final, sorted position. The total depth of the display — the maximum depth of recursion — gives a sense of how efficiently quicksort performed. It depends heavily on input and pivot choice.

    Another static display of quicksort, less dense but perhaps easier to read, represents each element as a colored thread and shows each sequential swap. (This form is inspired by Aldo Cortesi’s sorting visualizations.) Smaller values are lighter, and larger values are darker.

    At the start of each partition, the pivot is moved to the end (the right) of the active subarray.

    Partitioning then proceeds from left to right. At each step, a new element is added either to the set of lesser values (in which case a swap occurs) or to the set of greater values (in which case no swap occurs).

    When a swap occurs, the left-most value greater than the pivot is moved to the right, so as to make room on the left for the new lesser value. Thus, notice that in all swap operations, only values darker than the pivot move right, and only values lighter than the pivot move left.

    When the partition operation has visited all elements in the array, the pivot is placed in its final position between the two parts. Then, the algorithm recurses into the left part, followed by the right part (far below). This visualization doesn’t show the state of the stack, so it can appear to jump around arbitrarily due to the nature of recursion. Still, you can typically see when a partition operation finishes due to the characteristic movement of the pivot to the end of the active subarray.

    Quicksort

    You’ve now seen three different visual representations of the same algorithm: an animation, a dense static display, and a sparse static display. Each form has strengths and weaknesses. Animations are fun to watch, but static visualizations allow close inspection without being rushed. Sparse displays are likely easier to understand, but dense displays show the “macro” view of the algorithm’s behavior in addition to its details.

    Before we move on, let’s contrast quicksort with another well-known sorting algorithm: mergesort.

    function mergesort(array) {
      var n = array.length, a0 = array, a1 = new Array(n);
      for (var m = 1; m < n; m <<= 1) {
        for (var i = 0; i < n; i += m << 1) {
          merge(a0, a1, i, Math.min(i + m, n), Math.min(i + (m << 1), n));
        }
        var t = a0; a0 = a1; a1 = t;
      }
      if (array !== a0) for (var i = 0; i < n; ++i) array[i] = a0[i];
    }

    function merge(a0, a1, left, right, end) {
      for (var i0 = left, i1 = right, j = left; j < end; ++j) {
        a1[j] = i0 < right && (i1 >= end || a0[i0] <= a0[i1]) ? a0[i0++] : a0[i1++];
      }
    }

    Again, above is the code and below is an animation:

    Mergesort

    As you’ve likely surmised from either the code or the animation, mergesort takes a very different approach to sorting than quicksort. Unlike quicksort, which operates in-place by performing swaps, mergesort requires an extra copy of the array. This extra space is used to merge sorted subarrays, combining the elements from pairs of subarrays while preserving order. Since mergesort performs copies instead of swaps, we must modify the animation accordingly (or risk misleading readers).

    Mergesort works from the bottom-up. Initially, it merges subarrays of size one, since these are trivially sorted. Each adjacent subarray — at first, just a pair of elements — is merged into a sorted subarray of size two using the extra array. Then, each adjacent sorted subarray of size two is merged into a sorted subarray of size four. After each pass over the whole array, mergesort doubles the size of the sorted subarrays: eight, sixteen, and so on. Eventually, this doubling merges the entire array and the algorithm terminates.

    Because mergesort performs repeated passes over the array rather than recursing like quicksort, and because each pass doubles the size of sorted subarrays regardless of input, it is easier to design a static display. We simply show the state of the array after each pass.

    Mergesort

    Let’s again take a moment to consider what we’ve seen. The goal here is to study the behavior of an algorithm rather than a specific dataset. Yet there is still data, necessarily — the data is derived from the execution of the algorithm. And this means we can use the type of derived data to classify algorithm visualizations.

    Level 0 / black box - The simplest class just shows the output. This does not explain the algorithm’s operation, but it can still verify correctness. And by treating the algorithm as a black box, you can more easily compare outputs of different algorithms. Black box visualizations can also be combined with deeper analysis of output, such as the shuffle bias matrix diagram shown above.

    Level 1 / gray box - Many algorithms (though not all) build up output incrementally. By visualizing the intermediate output as it develops, we start to see how the algorithm works. This explains more without introducing new abstraction, since the intermediate and final output share the same structure. Yet this type of visualization can raise more questions than it answers, since it offers no explanation as to why the algorithm does what it does.

    Level 2 / white box - To answer “why” questions, white box visualizations expose the internal state of the algorithm in addition to its intermediate output. This type has the greatest potential to explain, but also the highest burden on the reader, as the meaning and purpose of internal state must be clearly described. There is a risk that the additional complexity will overwhelm the reader; layering information may make the graphic more accessible. Lastly, since internal state is highly-dependent on the specific algorithm, this type of visualization is often unsuitable for comparing algorithms.

    There’s also the practical matter of implementing algorithm visualizations. Typically you can’t just run code as-is; you must instrument it to capture state for visualization. (View source on this page for examples.) You may even need to interleave execution with visualization, which is particularly challenging for recursive algorithms that capture state on the stack. Language parsers such as Esprima may facilitate algorithm visualization through code instrumentation, cleanly separating execution code from visualization code.
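    As a minimal illustration of such instrumentation (the snapshot format here is my own invention, not the essay’s), the Fisher–Yates shuffle can be modified to record a frame after every swap; a renderer can then replay the frames at any pace:

```javascript
// Instrumented Fisher–Yates: instead of just shuffling, record a snapshot of
// the array and relevant internal state (i, n) at each step. The returned
// frames can be replayed by whatever rendering code you like.
function shuffleRecorded(array) {
  var frames = [{array: array.slice(), i: -1, n: array.length}],
      n = array.length, t, i;
  while (n) {
    i = Math.random() * n-- | 0;
    t = array[n], array[n] = array[i], array[i] = t;
    frames.push({array: array.slice(), i: i, n: n}); // capture after each swap
  }
  return frames;
}
```

    The copy per frame is wasteful for large inputs, but keeps the visualization code completely decoupled from the algorithm.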

    #Maze Generation

    The last problem we’ll look at is maze generation. All algorithms in this section generate a spanning tree of a two-dimensional rectangular grid. This means there are no loops and there is a unique path from the root in the bottom-left corner to every other cell in the maze.

    I apologize for the esoteric subject — I don’t know enough to say why these algorithms are useful beyond simple games, and possibly something about electrical networks. But even so, they are fascinating from a visualization perspective because they solve the same, highly-constrained problem in wildly-different ways.

    And they’re just fun to watch.

    Random traversal

    The random traversal algorithm initializes the first cell of the maze in the bottom-left corner. The algorithm then tracks all possible ways by which the maze could be extended (shown in red). At each step, one of these possible extensions is picked randomly, and the maze is extended as long as this does not reconnect it with another part of the maze.

    Like Bridson’s Poisson-disc sampling algorithm, random traversal maintains a frontier and randomly selects from that frontier to expand. Both algorithms thus appear to grow organically, like a fungus.
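    The core loop of random traversal can be sketched as follows (a sketch; the function and field names are mine):

```javascript
// Random traversal maze generation on a width × height grid. cells[y][x] is
// true once a cell is part of the maze; the frontier holds candidate edges
// {x, y, nx, ny} from a maze cell to a neighboring cell.
function randomTraversal(width, height) {
  var cells = [], frontier = [], passages = [];
  for (var y = 0; y < height; ++y) cells.push(new Array(width).fill(false));

  function visit(x, y) {
    cells[y][x] = true;
    if (x > 0) frontier.push({x: x, y: y, nx: x - 1, ny: y});
    if (x < width - 1) frontier.push({x: x, y: y, nx: x + 1, ny: y});
    if (y > 0) frontier.push({x: x, y: y, nx: x, ny: y - 1});
    if (y < height - 1) frontier.push({x: x, y: y, nx: x, ny: y + 1});
  }

  visit(0, 0); // start in a corner

  while (frontier.length) {
    // Pick a random frontier edge (swap-and-pop removes it in constant time).
    var i = Math.random() * frontier.length | 0, e = frontier[i];
    frontier[i] = frontier[frontier.length - 1], frontier.pop();
    // Extend the maze only if this does not reconnect it with itself.
    if (!cells[e.ny][e.nx]) {
      passages.push(e);
      visit(e.nx, e.ny);
    }
  }

  return passages; // a spanning tree: width × height - 1 passages
}
```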

    Randomized depth-first traversal follows a very different pattern:

    Randomized depth-first traversal

    Rather than picking a new random passage each time, this algorithm always extends the deepest passage — the one with the longest path back to the root — in a random direction. Thus, randomized depth-first traversal only branches when the current path dead-ends into an earlier part of the maze. To continue, it backtracks until it can start a new branch. This snake-like exploration leads to mazes with significantly fewer branches and much longer, winding passages.
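    A sketch of this algorithm using an explicit stack (names are mine; a recursive formulation is equally common):

```javascript
// Randomized depth-first maze generation: always extend the deepest cell
// (the top of the stack) in a random direction; backtrack at dead ends.
function randomizedDepthFirst(width, height) {
  var cells = [], stack = [{x: 0, y: 0}], passages = [];
  for (var y = 0; y < height; ++y) cells.push(new Array(width).fill(false));
  cells[0][0] = true;
  while (stack.length) {
    var c = stack[stack.length - 1], neighbors = [];
    // Collect unvisited neighbors of the deepest cell.
    if (c.x > 0 && !cells[c.y][c.x - 1]) neighbors.push({x: c.x - 1, y: c.y});
    if (c.x < width - 1 && !cells[c.y][c.x + 1]) neighbors.push({x: c.x + 1, y: c.y});
    if (c.y > 0 && !cells[c.y - 1][c.x]) neighbors.push({x: c.x, y: c.y - 1});
    if (c.y < height - 1 && !cells[c.y + 1][c.x]) neighbors.push({x: c.x, y: c.y + 1});
    if (neighbors.length) {
      var n = neighbors[Math.random() * neighbors.length | 0];
      cells[n.y][n.x] = true;
      passages.push({from: c, to: n});
      stack.push(n); // go deeper
    } else {
      stack.pop(); // dead end: backtrack
    }
  }
  return passages;
}
```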

    Prim’s algorithm constructs a minimum spanning tree, a spanning tree of a graph with weighted edges with the lowest total weight. This algorithm can be used to construct a random spanning tree by initializing edge weights randomly:

    Randomized Prim’s

    At each step, Prim’s algorithm extends the maze using the lowest-weighted edge (potential direction) connected to the existing maze. If this edge would form a loop, it is discarded and the next-lowest-weighted edge is considered.

    Prim’s algorithm is commonly implemented using a heap, which is an efficient data structure for prioritizing elements. When a new cell is added to the maze, connected edges (shown in red) are added to the heap. Despite edges being added in arbitrary order, the heap allows the lowest-weighted edge to be quickly removed.
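    Prim’s selection rule can be sketched like so. For brevity this sketch substitutes a linear scan for the heap (names are mine); a real implementation would use a heap to remove the minimum in logarithmic time:

```javascript
// Randomized Prim's: each edge receives a random weight when it joins the
// frontier; at each step the lowest-weighted frontier edge is removed, and
// edges that would form a loop are discarded.
function randomizedPrim(width, height) {
  var cells = [], frontier = [], passages = [];
  for (var y = 0; y < height; ++y) cells.push(new Array(width).fill(false));

  function visit(x, y) {
    cells[y][x] = true;
    if (x > 0) frontier.push({nx: x - 1, ny: y, weight: Math.random()});
    if (x < width - 1) frontier.push({nx: x + 1, ny: y, weight: Math.random()});
    if (y > 0) frontier.push({nx: x, ny: y - 1, weight: Math.random()});
    if (y < height - 1) frontier.push({nx: x, ny: y + 1, weight: Math.random()});
  }

  visit(0, 0);

  while (frontier.length) {
    // Remove the lowest-weighted edge (a heap would make this O(log n)).
    var min = 0;
    for (var i = 1; i < frontier.length; ++i)
      if (frontier[i].weight < frontier[min].weight) min = i;
    var e = frontier[min];
    frontier[min] = frontier[frontier.length - 1], frontier.pop();
    if (!cells[e.ny][e.nx]) { // discard edges that would form a loop
      passages.push(e);
      visit(e.nx, e.ny);
    }
  }

  return passages;
}
```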

    Lastly, a most unusual specimen:

    Wilson’s

    Wilson’s algorithm uses loop-erased random walks to generate a uniform spanning tree — an unbiased sample of all possible spanning trees. The other maze generation algorithms we have seen lack this beautiful mathematical property.

    The algorithm initializes the maze with an arbitrary starting cell. Then, a new cell is added to the maze, initiating a random walk (shown in red). The random walk continues until it reconnects with the existing maze (shown in white). However, if the random walk intersects itself, the resulting loop is erased before the random walk continues.

    Initially, the algorithm can be frustratingly slow to watch, as early random walks are unlikely to reconnect with the small existing maze. As the maze grows, random walks become more likely to collide with the maze and the algorithm accelerates dramatically.
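    Despite its sophistication, Wilson’s algorithm is surprisingly compact to implement. In the sketch below (names mine), loop erasure falls out of simply overwriting each visited cell’s outgoing direction: when the walk revisits a cell, the old direction — and with it the loop — is forgotten.

```javascript
// Wilson's algorithm via loop-erased random walks on a width × height grid.
// Cells are numbered 0 … n-1 in row-major order. parent[c] records the walk's
// last exit from cell c; retracing these directions yields the loop-erased path.
function wilson(width, height) {
  var n = width * height,
      inMaze = new Array(n).fill(false),
      parent = new Array(n).fill(-1),
      passages = [];

  inMaze[0] = true; // arbitrary starting cell

  for (var start = 1; start < n; ++start) {
    if (inMaze[start]) continue;

    // Random walk from `start` until it hits the maze, recording directions.
    var c = start;
    while (!inMaze[c]) {
      var neighbors = [],
          x = c % width, y = c / width | 0;
      if (x > 0) neighbors.push(c - 1);
      if (x < width - 1) neighbors.push(c + 1);
      if (y > 0) neighbors.push(c - width);
      if (y < height - 1) neighbors.push(c + width);
      var next = neighbors[Math.random() * neighbors.length | 0];
      parent[c] = next; // overwriting an earlier direction erases the loop
      c = next;
    }

    // Retrace the loop-erased walk and add it to the maze.
    c = start;
    while (!inMaze[c]) {
      inMaze[c] = true;
      passages.push({from: c, to: parent[c]});
      c = parent[c];
    }
  }

  return passages; // a uniform spanning tree: n - 1 passages
}
```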

    These four maze generation algorithms work very differently. And yet, when the animations end, the resulting mazes are difficult to distinguish from each other. The animations are useful for showing how the algorithm works, but fail to reveal the resulting tree structure.

    A way to show structure, rather than process, is to flood the maze with color:

    Random traversal

    Color encodes tree depth — the length of the path back to the root in the bottom-left corner. The color scale cycles as you get deeper into the tree; this is occasionally misleading when a deep path circles back adjacent to a shallow one, but the higher contrast allows better differentiation of local structure. (This is not a conventional rainbow color scale, which is nominally considered harmful, but a cubehelix rainbow with improved perceptual properties.)

    We can further emphasize the structure of the maze by subtracting the walls, reducing visual noise. Below, each pixel represents a path through the maze. As above, paths are colored by depth and color floods deeper into the maze over time.

    Random traversal

    Concentric circles of color, like a tie-dye shirt, reveal that random traversal produces many branching paths. Yet the shape of each path is not particularly interesting, as it tends to go in a straight line back to the root. Because random traversal extends the maze by picking randomly from the frontier, paths are never given much freedom to meander — they end up colliding with the growing frontier and terminate due to the restriction on loops.

    Randomized depth-first traversal, on the other hand, is all about the meander:

    Randomized depth-first traversal

    This animation proceeds at fifty times the speed of the previous one. This speed-up is necessary because randomized depth-first traversal mazes are much, much deeper than random traversal mazes due to limited branching. You can see that typically there is only one, and rarely more than a few, active branches at any particular depth.

    Now Prim’s algorithm on a random graph:

    Randomized Prim’s

    This is more interesting! The simultaneously-expanding florets of color reveal substantial branching, and there is more complex global structure than random traversal.

    Wilson’s algorithm, despite operating very differently, seems to produce very similar results:

    Wilson’s

    Just because they look the same does not mean they are. Despite appearances, Prim’s algorithm on a randomly-weighted graph does not produce a uniform spanning tree (as far as I know — proving this is outside my area of expertise). Visualization can sometimes mislead due to human error. An earlier version of the Prim’s color flood had a bug where the color scale rotated twice as fast as intended; this suggested that Prim’s and Wilson’s algorithms produced very different trees, when in fact they appear much more similar than different.

    Since these mazes are spanning trees, we can also use specialized tree visualizations to show structure. To illustrate the duality between maze and tree, here the passages (shown in white) of a maze generated by Wilson’s algorithm are gradually transformed into a tidy tree layout. As with the other animations, it proceeds by depth, starting with the root and descending to the leaves:

    Wilson’s

    For comparison, again we see how randomized depth-first traversal produces trees with long passages and little branching.

    Randomized depth-first traversal

    Both trees have the same number of nodes (3,239) and are scaled to fit in the same area (960×500 pixels). This hides an important difference: at this size, randomized depth-first traversal typically produces a tree two-to-five times deeper than Wilson’s algorithm. The tree depths above are _ and _, respectively. In the larger 480,000-node mazes used for color flooding, randomized depth-first traversal produces a tree that is 10-20 times deeper!

    #Using Vision to Think

    This essay has focused on algorithms. Yet the techniques discussed here apply to a broader space of problems: mathematical formulas, dynamical systems, processes, etc. Basically, anywhere there is code that needs understanding.

    Shan Carter, Archie Tse and I recently built a new rent vs. buy calculator; powering the calculator is a couple hundred lines of code to compute the total cost of renting or buying a home. It’s a simplistic model, but more complicated than fits in your head. The calculator takes about twenty input parameters (such as purchase price and mortgage rate) and considers opportunity costs on investments, inflation, marginal tax rates, and a variety of other factors.

    The goal of the calculator is to help you decide whether you should buy or rent a home. If the total cost of buying is cheaper, you should buy. Otherwise, you should rent.

    Except, it’s not that simple.

    To output an accurate answer, the calculator needs accurate inputs. While some inputs are well-known (such as the length of your mortgage), others are difficult or impossible to predict. No one can say exactly how the stock market will perform, how much a specific home will appreciate or depreciate, or how the renting market will change over time.

    Is It Better to Buy or Rent?

    We can make educated guesses at each variable — for example, looking at Case–Shiller data. But if the calculator is a black box, then readers can’t see how sensitive their answer is to small changes.

    To fix this, we need to do more than output a single number. We need to show how the underlying system works. The new calculator therefore charts every variable and lets you quickly explore any variable’s effect by adjusting the associated slider.

    The slope of the chart shows the associated variable’s importance: the greater the slope, the more the decision depends on that variable. Since variables interact with each other, changing a variable may change the slope of other charts.

    This design lets you inspect many aspects of the system. For example, should you make a large down payment? Yes, if the down payment rate slopes down; or no, if the down payment rate slopes up, as with a higher investment return rate. This suggests that the optimal loan size depends on the difference between the opportunity cost on the down payment (money not invested) and the interest cost on the mortgage.

    So, why visualize algorithms? Why visualize anything? To leverage the human visual system to improve understanding. Or more simply, to use vision to think.

    #Related Work

    I mentioned Aldo Cortesi’s sorting visualizations earlier. (I also like Cortesi’s visualizations of malware entropy.) Others abound, including: sorting.at, sorting-algorithms.com, and Aaron Dufour’s Sorting Visualizer, which lets you plug in your own algorithm. YouTube user andrut’s audibilizations are interesting. Robert Sedgwick has published several new editions of Algorithms since I took his class, and his latest uses traditional bars rather than angled lines.

    Amit Patel explores “visual and interactive ways of explaining math and computer algorithms.” The articles on 2D visibility, polygonal map generation and pathfinding are particularly great. Nicky Case published another lovely explanation of 2D visibility and shadow effects. I am heavily-indebted to Jamis Buck for his curation of maze generation algorithms. Christopher Wellons’ GPU-based path finding implementation uses cellular automata — another fascinating subject. David Mimno gave a talk on visualization for models and algorithms at OpenVis 2014 that was an inspiration for this work. And like many, I have long been inspired by Bret Victor, especially Inventing on Principle and Up and Down the Ladder of Abstraction.

    Jason Davies has made numerous illustrations of mathematical concepts and algorithms. Some of my favorites are: Lloyd’s Relaxation, Coalescing Soap Bubbles, Biham-Middleton-Levine Traffic Model, Collatz Graph, Random Points on a Sphere, Bloom Filters, Animated Bézier Curves, Animated Trigonometry, Proof of Pythagoras’ Theorem, and Morley’s Trisector Theorem. Pierre Guilleminot’s Fourier series explanation is great, as are Lucas V. Barbosa’s Fourier transform time and frequency domains and an explanation of Simpson’s paradox by Lewis Lehe & Victor Powell; also see Powell’s animations of the central limit theorem and conditional probabilities. Steven Wittens makes mind-expanding visualizations of mathematical concepts in three dimensions, such as Julia fractals.

    In my own work, I’ve used visualization to explain topology inference (including a visual debugger), D3’s selections and the Fisher–Yates shuffle. There are more standalone visualizations on my bl.ocks. If you have suggestions for interesting visualizations, or any other feedback, please contact me on Twitter.

    Thank you for reading! June 26, 2014 Mike Bostock

    01 Jul 11:10

    The Fermi Paradox

    Everyone feels something when they’re in a really good starry place on a really good starry night and they look up and see this:

    Stars

    Some people stick with the traditional, feeling struck by the epic beauty or blown away by the insane scale of the universe. Personally, I go for the old “existential meltdown followed by acting weird for the next half hour.” But everyone feels something.

    Physicist Enrico Fermi felt something too—”Where is everybody?”

    ________________

    A really starry sky seems vast—but all we’re looking at is our very local neighborhood. On the very best nights, we can see up to about 2,500 stars (roughly one hundred-millionth of the stars in our galaxy), and almost all of them are less than 1,000 light years away from us (or 1% of the diameter of the Milky Way). So what we’re really looking at is this:

    Milky Way

    When confronted with the topic of stars and galaxies, a question that tantalizes most humans is, “Is there other intelligent life out there?” Let’s put some numbers to it (if you don’t like numbers, just read the bold)—

    As many stars as there are in our galaxy (100 – 400 billion), there are roughly an equal number of galaxies in the observable universe—so for every star in the colossal Milky Way, there’s a whole galaxy out there. All together, that comes out to the typically quoted range of between 10²² and 10²⁴ total stars, which means that for every grain of sand on Earth, there are 10,000 stars out there.

    The science world isn’t in total agreement about what percentage of those stars are “sun-like” (similar in size, temperature, and luminosity)—opinions typically range from 5% to 20%. Going with the most conservative side of that (5%), and the lower end for the number of total stars (10²²), gives us 500 quintillion, or 500 billion billion sun-like stars.

    There’s also a debate over what percentage of those sun-like stars might be orbited by an Earth-like planet (one with similar temperature conditions that could have liquid water and potentially support life similar to that on Earth). Some say it’s as high as 50%, but let’s go with the more conservative 22% that came out of a recent PNAS study. That suggests that there’s a potentially-habitable Earth-like planet orbiting at least 1% of the total stars in the universe—a total of 100 billion billion Earth-like planets.

    So there are 100 Earth-like planets for every grain of sand in the world. Think about that next time you’re on the beach.

    Moving forward, we have no choice but to get completely speculative. Let’s imagine that after billions of years in existence, 1% of Earth-like planets develop life (if that’s true, every grain of sand would represent one planet with life on it). And imagine that on 1% of those planets, the life advances to an intelligent level like it did here on Earth. That would mean there were 10 quadrillion, or 10 million billion intelligent civilizations in the observable universe.

    Moving back to just our galaxy, and doing the same math on the lowest estimate for stars in the Milky Way (100 billion), we’d estimate that there are 1 billion Earth-like planets and 100,000 intelligent civilizations in our galaxy.[1] The Drake Equation provides a formal method for this narrowing-down process we’re doing.
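    The chained-fraction estimate above is simple enough to sketch in a few lines of Python. Every input here is one of the article’s own assumed fractions (conservative guesses), not a measured value:

    ```python
    # Back-of-envelope version of the narrowing above. All fractions are
    # the article's assumptions, not measurements.
    total_stars = 1e22            # low end of the 10^22 - 10^24 range
    sunlike = 0.05                # conservative share of "sun-like" stars
    earthlike = 0.22              # PNAS estimate for Earth-like planets
    develops_life = 0.01          # speculative: 1% of those develop life
    becomes_intelligent = 0.01    # speculative: 1% of those get intelligent

    earthlike_planets = total_stars * sunlike * earthlike
    civilizations = earthlike_planets * develops_life * becomes_intelligent
    print(f"{earthlike_planets:.1e} Earth-like planets")     # ~1.1e+20
    print(f"{civilizations:.1e} intelligent civilizations")  # ~1.1e+16

    # Same math on the Milky Way's low estimate of 10^11 stars:
    mw_planets = 1e11 * sunlike * earthlike                              # ~1.1 billion
    mw_civilizations = mw_planets * develops_life * becomes_intelligent  # ~110,000
    ```

    The Drake Equation formalizes exactly this kind of chained-fraction estimate, with additional terms for star formation rate and civilization lifetime.
    
    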

    SETI (Search for Extraterrestrial Intelligence) is an organization dedicated to listening for signals from other intelligent life. If we’re right that there are 100,000 or more intelligent civilizations in our galaxy, and even a fraction of them are sending out radio waves or laser beams or other modes of attempting to contact others, shouldn’t SETI’s telescope arrays pick up all kinds of signals?

    But it hasn’t. Not one. Ever.

    Where is everybody?

    It gets stranger. Our sun is relatively young in the lifespan of the universe. There are far older stars with far older Earth-like planets, which should in theory mean civilizations far more advanced than our own. As an example, let’s compare our 4.54 billion-year-old Earth to a hypothetical 8 billion-year-old Planet X.

    Planet X

    If Planet X has a similar story to Earth, let’s look at where their civilization would be today (using the orange timespan as a reference to show how huge the green timespan is):

    Planet X vs Earth

    The technology and knowledge of a civilization only 1,000 years ahead of us could be as shocking to us as our world would be to a medieval person. A civilization 1 million years ahead of us might be as incomprehensible to us as human culture is to chimpanzees. And Planet X is 3.4 billion years ahead of us…

    There’s something called The Kardashev Scale, which helps us group intelligent civilizations into three broad categories by the amount of energy they use:

    A Type I Civilization has the ability to use all of the energy on their planet. We’re not quite a Type I Civilization, but we’re close (Carl Sagan created a formula for this scale which puts us at a Type 0.7 Civilization).

    A Type II Civilization can harness all of the energy of their host star. Our feeble Type I brains can hardly imagine how someone would do this, but we’ve tried our best, imagining things like a Dyson Sphere.

    Dyson Sphere

    A Type III Civilization blows the other two away, accessing power comparable to that of the entire Milky Way galaxy.
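    Sagan’s continuous version of the scale is a simple logarithmic interpolation over a civilization’s power usage. A quick sketch (the benchmark wattages are commonly quoted rough values, assumed here for illustration):

    ```python
    from math import log10

    def kardashev(power_watts):
        # Sagan's interpolation: K = (log10(P) - 6) / 10, with P in watts.
        # It maps 10^16 W -> Type 1, 10^26 W -> Type 2, 10^36 W -> Type 3.
        return (log10(power_watts) - 6) / 10

    print(kardashev(2e13))   # humanity's ~2e13 W gives roughly 0.73
    print(kardashev(1e16))   # Type I: a planet's full energy budget
    print(kardashev(1e26))   # Type II: a star's entire output
    print(kardashev(1e36))   # Type III: a whole galaxy's output
    ```

    Each step up the scale is ten billion times more power than the last, which is why the jump from Type II to Type III is so hard to picture.
    
    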

    If this level of advancement sounds hard to believe, remember Planet X above and their 3.4 billion years of further development. If a civilization on Planet X were similar to ours and were able to survive all the way to Type III level, the natural thought is that they’d probably have mastered inter-stellar travel by now, possibly even colonizing the entire galaxy.

    One hypothesis as to how galactic colonization could happen is by creating machinery that can travel to other planets, spend 500 years or so self-replicating using the raw materials on their new planet, and then send two replicas off to do the same thing. Even without traveling anywhere near the speed of light, this process would colonize the whole galaxy in 3.75 million years, a relative blink of an eye when talking in the scale of billions of years:
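    The replication side of that math is just exponential doubling; here is a sketch using the article’s stated assumptions (two replicas per probe, 500 years per stop, 10¹¹ stars):

    ```python
    from math import ceil, log

    stars_in_galaxy = 1e11        # low estimate for the Milky Way
    years_per_generation = 500    # time for a probe to self-replicate
    replicas_per_probe = 2        # each probe launches two copies

    # The probe population doubles each generation, so reaching every
    # star takes log-base-2 of the star count, rounded up:
    generations = ceil(log(stars_in_galaxy, replicas_per_probe))
    replication_years = generations * years_per_generation
    print(generations)         # 37 doublings
    print(replication_years)   # 18,500 years spent replicating
    ```

    Replication itself is cheap on cosmic timescales; the quoted 3.75 million years is dominated by the sub-light travel time between stars, not by the copying.
    
    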

    Colonize Galaxy

    Source: Scientific American: “Where Are They”

    Continuing to speculate, if 1% of intelligent life survives long enough to become a potentially galaxy-colonizing Type III Civilization, our calculations above suggest that there should be at least 1,000 Type III Civilizations in our galaxy alone—and given the power of such a civilization, their presence would likely be pretty noticeable. And yet, we see nothing, hear nothing, and we’re visited by no one.

    So where is everybody?

    _____________________

    Welcome to the Fermi Paradox.

    We have no answer to the Fermi Paradox—the best we can do is “possible explanations.” And if you ask ten different scientists what their hunch is about the correct one, you’ll get ten different answers. You know when you hear about humans of the past debating whether the Earth was round or if the sun revolved around the Earth or thinking that lightning happened because of Zeus, and they seem so primitive and in the dark? That’s about where we are with this topic.

    In taking a look at some of the most-discussed possible explanations for the Fermi Paradox, let’s divide them into two broad categories—those explanations which assume that there’s no sign of Type II and Type III Civilizations because there are none of them out there, and those which assume they’re out there and we’re not seeing or hearing anything for other reasons:

    Explanation Group 1: There are no signs of higher (Type II and III) civilizations because there are no higher civilizations in existence.

    Those who subscribe to Group 1 explanations point to something called the non-exclusivity problem, which rebuffs any theory that says, “There are higher civilizations, but none of them have made any kind of contact with us because they all _____.” Group 1 people look at the math, which says there should be so many thousands (or millions) of higher civilizations, that at least one of them would be an exception to the rule. Even if a theory held for 99.99% of higher civilizations, the other .01% would behave differently and we’d become aware of their existence.

    Therefore, say Group 1 explanations, it must be that there are no super-advanced civilizations. And since the math suggests that there are thousands of them just in our own galaxy, something else must be going on.

    This something else is called The Great Filter.

    The Great Filter theory says that at some point from pre-life to Type III intelligence, there’s a wall that all or nearly all attempts at life hit. There’s some stage in that long evolutionary process that is extremely unlikely or impossible for life to get beyond. That stage is The Great Filter.

    Great Filter

    If this theory is true, the big question is, Where in the timeline does the Great Filter occur?

    It turns out that when it comes to the fate of humankind, this question is very important. Depending on where The Great Filter occurs, we’re left with three possible realities: We’re rare, we’re first, or we’re fucked.

    1. We’re Rare (The Great Filter is Behind Us)

    One hope we have is that The Great Filter is behind us—we managed to surpass it, which would mean it’s extremely rare for life to make it to our level of intelligence. The diagram below shows only two species making it past, and we’re one of them.

    Great Filter - Behind Us

    This scenario would explain why there are no Type III Civilizations…but it would also mean that we could be one of the few exceptions now that we’ve made it this far. It would mean we have hope. On the surface, this sounds a bit like people 500 years ago suggesting that the Earth is the center of the universe—it implies that we’re special. However, something scientists call “observation selection effect” suggests that anyone who is pondering their own rarity is inherently part of an intelligent life “success story”—and whether they’re actually rare or quite common, the thoughts they ponder and conclusions they draw will be identical. This forces us to admit that being special is at least a possibility.

    And if we are special, when exactly did we become special—i.e. which step did we surpass that almost everyone else gets stuck on?

    One possibility: The Great Filter could be at the very beginning—it might be incredibly unusual for life to begin at all. This is a candidate because it took about a billion years of Earth’s existence to finally happen, and because we have tried extensively to replicate that event in labs and have never been able to do it. If this is indeed The Great Filter, it would mean that not only is there no intelligent life out there, there may be no other life at all.

    Another possibility: The Great Filter could be the jump from the simple prokaryote cell to the complex eukaryote cell. After prokaryotes came into being, they remained that way for almost two billion years before making the evolutionary jump to being complex and having a nucleus. If this is The Great Filter, it would mean the universe is teeming with simple prokaryote cells and almost nothing beyond that.

    There are a number of other possibilities—some even think the most recent leap we’ve made to our current intelligence is a Great Filter candidate. While the leap from semi-intelligent life (chimps) to intelligent life (humans) doesn’t at first seem like a miraculous step, Steven Pinker rejects the idea of an inevitable “climb upward” of evolution: “Since evolution does not strive for a goal but just happens, it uses the adaptation most useful for a given ecological niche, and the fact that, on Earth, this led to technological intelligence only once so far may suggest that this outcome of natural selection is rare and hence by no means a certain development of the evolution of a tree of life.”

    Most leaps do not qualify as Great Filter candidates. Any possible Great Filter must be a one-in-a-billion type of thing where one or more total freak occurrences need to happen to provide a crazy exception—for that reason, something like the jump from single-cell to multi-cellular life is ruled out, because it has occurred as many as 46 times, in isolated incidents, just on this planet alone. For the same reason, if we were to find a fossilized eukaryote cell on Mars, it would rule the above “simple-to-complex cell” leap out as a possible Great Filter (as well as anything before that point on the evolutionary chain)—because if it happened on both Earth and Mars, it’s almost definitely not a one-in-a-billion freak occurrence.

    If we are indeed rare, it could be because of a fluky biological event, but it also could be attributed to what is called the Rare Earth Hypothesis, which suggests that though there may be many Earth-like planets, the particular conditions on Earth—whether related to the specifics of this solar system, its relationship with the moon (a moon that large is unusual for such a small planet and contributes to our particular weather and ocean conditions), or something about the planet itself—are exceptionally friendly to life.

    2. We’re the First

    We're the First

    For Group 1 Thinkers, if the Great Filter is not behind us, the one hope we have is that conditions in the universe are just recently, for the first time since the Big Bang, reaching a place that would allow intelligent life to develop. In that case, we and many other species may be on our way to super-intelligence, and it simply hasn’t happened yet. We happen to be here at the right time to become one of the first super-intelligent civilizations.

    One example of a phenomenon that could make this realistic is the prevalence of gamma-ray bursts, insanely huge explosions that we’ve observed in distant galaxies. In the same way that it took the early Earth a few hundred million years before the asteroids and volcanoes died down and life became possible, it could be that the first chunk of the universe’s existence was full of cataclysmic events like gamma-ray bursts that would incinerate everything nearby from time to time and prevent any life from developing past a certain stage. Now, perhaps, we’re in the midst of an astrobiological phase transition and this is the first time any life has been able to evolve for this long, uninterrupted.

    3. We’re Fucked (The Great Filter is Ahead of Us)

    We're fucked

    If we’re neither rare nor early, Group 1 thinkers conclude that The Great Filter must be in our future. This would suggest that life regularly evolves to where we are, but that something prevents life from going much further and reaching high intelligence in almost all cases—and we’re unlikely to be an exception.

    One possible future Great Filter is a regularly-occurring cataclysmic natural event, like the above-mentioned gamma-ray bursts, except they’re unfortunately not done yet and it’s just a matter of time before all life on Earth is suddenly wiped out by one. Another candidate is the possible inevitability that nearly all intelligent civilizations end up destroying themselves once a certain level of technology is reached.

    This is why Oxford University philosopher Nick Bostrom says that “no news is good news.” The discovery of even simple life on Mars would be devastating, because it would cut out a number of potential Great Filters behind us. And if we were to find fossilized complex life on Mars, Bostrom says “it would be by far the worst news ever printed on a newspaper cover,” because it would mean The Great Filter is almost definitely ahead of us—ultimately dooming the species. Bostrom believes that when it comes to The Fermi Paradox, “the silence of the night sky is golden.”

    Explanation Group 2: Type II and III intelligent civilizations are out there—and there are logical reasons why we might not have heard from them.

    Group 2 explanations get rid of any notion that we’re rare or special or the first at anything—on the contrary, they believe in the Mediocrity Principle, whose starting point is that there is nothing unusual or rare about our galaxy, solar system, planet, or level of intelligence, until evidence proves otherwise. They’re also much less quick to assume that the lack of evidence of higher intelligence beings is evidence of their nonexistence—emphasizing the fact that our search for signals stretches only about 100 light years away from us (0.1% across the galaxy) and suggesting a number of possible explanations. Here are 10:

    Possibility 1) Super-intelligent life could very well have already visited Earth, but before we were here. In the scheme of things, sentient humans have only been around for about 50,000 years, a little blip of time. If contact happened before then, it might have made some ducks flip out and run into the water and that’s it. Further, recorded history only goes back 5,500 years—a group of ancient hunter-gatherer tribes may have experienced some crazy alien shit, but they had no good way to tell anyone in the future about it.

    Possibility 2) The galaxy has been colonized, but we just live in some desolate rural area of the galaxy. The Americas may have been colonized by Europeans long before anyone in a small Inuit tribe in far northern Canada realized it had happened. There could be an urbanization component to the interstellar dwellings of higher species, in which all the neighboring solar systems in a certain area are colonized and in communication, and it would be impractical and purposeless for anyone to deal with coming all the way out to the random part of the spiral where we live.

    Possibility 3) The entire concept of physical colonization is a hilariously backward concept to a more advanced species. Remember the picture of the Type II Civilization above with the sphere around their star? With all that energy, they might have created a perfect environment for themselves that satisfies their every need. They might have crazy-advanced ways of reducing their need for resources and zero interest in leaving their happy utopia to explore the cold, empty, undeveloped universe.

    An even more advanced civilization might view the entire physical world as a horribly primitive place, having long ago conquered their own biology and uploaded their brains to a virtual reality, eternal-life paradise. Living in the physical world of biology, mortality, wants, and needs might seem to them the way we view primitive ocean species living in the frigid, dark sea. FYI, thinking about another life form having bested mortality makes me incredibly jealous and upset.

    Possibility 4) There are scary predator civilizations out there, and most intelligent life knows better than to broadcast any outgoing signals and advertise their location. This is an unpleasant concept and would help explain the lack of any signals being received by the SETI telescopes. It also means that we might be the super naive newbies who are being unbelievably stupid and risky by ever broadcasting outward signals. There’s a debate going on currently about whether we should engage in METI (Messaging to Extraterrestrial Intelligence—the reverse of SETI) or not, and most people say we should not. Stephen Hawking warns, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.” Even Carl Sagan (a general believer that any civilization advanced enough for interstellar travel would be altruistic, not hostile) called the practice of METI “deeply unwise and immature,” and recommended that “the newest children in a strange and uncertain cosmos should listen quietly for a long time, patiently learning about the universe and comparing notes, before shouting into an unknown jungle that we do not understand.” Scary.[2] Thinking about this logically, I think we should disregard all the warnings and get the outgoing signals rolling. If we catch the attention of super-advanced beings, yes, they might decide to wipe out our whole existence, but that’s not that different than our current fate (to each die within a century). And maybe, instead, they’d invite us to upload our brains into their eternal virtual utopia, which would solve the death problem and also probably allow me to achieve my childhood dream of bouncing around on the clouds. Sounds like a good gamble to me.

    Possibility 5) There’s only one instance of higher-intelligent life—a “superpredator” civilization (like humans are here on Earth)—who is far more advanced than everyone else and keeps it that way by exterminating any intelligent civilization once they get past a certain level. This would suck. The way it might work is that it’s an inefficient use of resources to exterminate all emerging intelligences, maybe because most die out on their own. But past a certain point, the super beings make their move—because to them, an emerging intelligent species becomes like a virus as it starts to grow and spread. This theory suggests that whoever was the first in the galaxy to reach intelligence won, and now no one else has a chance. This would explain the lack of activity out there because it would keep the number of super-intelligent civilizations to just one.

    Possibility 6) There’s plenty of activity and noise out there, but our technology is too primitive and we’re listening for the wrong things. Like walking into a modern-day office building, turning on a walkie-talkie, and when you hear no activity (which of course you wouldn’t hear because everyone’s texting, not using walkie-talkies), determining that the building must be empty. Or maybe, as Carl Sagan has pointed out, it could be that our minds work exponentially faster or slower than another form of intelligence out there—e.g. it takes them 12 years to say “Hello,” and when we hear that communication, it just sounds like white noise to us.

    Possibility 7) We are receiving contact from other intelligent life, but the government is hiding it. This is an idiotic theory, but I had to mention it because it’s talked about so much.

    Possibility 8) Higher civilizations are aware of us and observing us (AKA the “Zoo Hypothesis”). In this scenario, super-intelligent civilizations exist in a tightly-regulated galaxy, and our Earth is treated like part of a vast and protected national park, with a strict “Look but don’t touch” rule for planets like ours. We wouldn’t notice them, because if a far smarter species wanted to observe us, it would know how to easily do so without us realizing it. Maybe there’s a rule similar to Star Trek’s “Prime Directive” which prohibits super-intelligent beings from making any open contact with lesser species like us or revealing themselves in any way, until the lesser species has reached a certain level of intelligence.

    Possibility 9) Higher civilizations are here, all around us. But we’re too primitive to perceive them. Michio Kaku sums it up like this:

    Let’s say we have an ant hill in the middle of the forest. And right next to the ant hill, they’re building a ten-lane super-highway. And the question is, “Would the ants be able to understand what a ten-lane super-highway is? Would the ants be able to understand the technology and the intentions of the beings building the highway next to them?”

    So it’s not that we can’t pick up the signals from Planet X using our technology, it’s that we can’t even comprehend what the beings from Planet X are or what they’re trying to do. It’s so beyond us that even if they really wanted to enlighten us, it would be like trying to teach ants about the internet.

    Along those lines, this may also be an answer to “Well if there are so many fancy Type III Civilizations, why haven’t they contacted us yet?” To answer that, let’s ask ourselves—when Pizarro made his way into Peru, did he stop for a while at an anthill to try to communicate? Was he magnanimous, trying to help the ants in the anthill? Did he become hostile and slow his original mission down in order to smash the anthill apart? Or was the anthill of complete and utter and eternal irrelevance to Pizarro? That might be our situation here.

    Possibility 10) We’re completely wrong about our reality. There are a lot of ways we could just be totally off with everything we think. The universe might appear one way and be something else entirely, like a hologram. Or maybe we’re the aliens and we were planted here as an experiment or as a form of fertilizer. There’s even a chance that we’re all part of a computer simulation by some researcher from another world, and other forms of life simply weren’t programmed into the simulation.

    ________________

    As we continue along with our possibly-futile search for extraterrestrial intelligence, I’m not really sure what I’m rooting for. Frankly, learning either that we’re officially alone in the universe or that we’re officially joined by others would be creepy, which is a theme with all of the surreal storylines listed above—whatever the truth actually is, it’s mindblowing.

    Beyond its shocking science fiction component, The Fermi Paradox also leaves me with a deep humbling. Not just the normal “Oh yeah, I’m microscopic and my existence lasts for three seconds” humbling that the universe always triggers. The Fermi Paradox brings out a sharper, more personal humbling, one that can only happen after spending hours of research hearing your species’ most renowned scientists present insane theories, change their minds again and again, and wildly contradict each other—reminding us that future generations will look at us the same way we see the ancient people who were sure that the stars were the underside of the dome of heaven, and they’ll think “Wow they really had no idea what was going on.”

    Compounding all of this is the blow to our species’ self-esteem that comes with all of this talk about Type II and III Civilizations. Here on Earth, we’re the king of our little castle, proud ruler of the huge group of imbeciles who share the planet with us. And in this bubble with no competition and no one to judge us, it’s rare that we’re ever confronted with the concept of being a dramatically inferior species to anyone. But after spending a lot of time with Type II and III Civilizations over the past week, our power and pride are seeming a bit David Brent-esque.

    That said, given that my normal outlook is that humanity is a lonely orphan on a tiny rock in the middle of a desolate universe, the humbling fact that we’re probably not as smart as we think we are, and the possibility that a lot of what we’re sure about might be wrong, sounds wonderful. It opens the door just a crack that maybe, just maybe, there might be more to the story than we realize.

    To humble you further:

    Putting Time In Perspective

    More from Wait But Why:

    Why Generation Y Yuppies Are Unhappy
    7 Ways to be Insufferable on Facebook
    Why Procrastinators Procrastinate
    How to Name a Baby
    How to Pick Your Life Partner
    Your Life in Weeks
    10 Types of 30-Year-Old Single Guys
    The Great Perils of Social Interaction

    Sources:
    PNAS: Prevalence of Earth-size planets orbiting Sun-like stars
    SETI: The Drake Equation
    NASA: Workshop Report on the Future of Intelligence In The Cosmos
    Cornell University Library: The Fermi Paradox, Self-Replicating Probes, and the Interstellar Transportation Bandwidth
    NCBI: Astrobiological phase transition: towards resolution of Fermi’s paradox
    André Kukla: Extraterrestrials: A Philosophical Perspective
    Nick Bostrom: Where Are They?
    Science Direct: Galactic gradients, postbiological evolution and the apparent failure of SETI
    Nature: Simulations back up theory that Universe is a hologram
    Robin Hanson: The Great Filter – Are We Almost Past It?
    Freeman Dyson: Search for Artificial Stellar Sources of Infrared Radiation


    20 Jun 12:23

    spocksfatalboner: spocksfatalboner: Have you always wanted to explore a tiny pixelated version of...

    spocksfatalboner:

    spocksfatalboner:

    Have you always wanted to explore a tiny pixelated version of the Enterprise-D as a tiny pixelated Data? Well you’re in luck!

    look at this shit

    image

    lol what is even happening here

    image

    did u know there was a bathroom on the bridge?! this andorian did apparently

    image

    like wow say goodbye to my evening plans

    16 Jun 23:15

    Beautiful Iceland

    by Jason Kottke

    I've seen the waterfalls and the hot springs and the rocky desolation, but I didn't know that Iceland was also this:

    Iceland

    Iceland

    Iceland

    I mean, come on. Photos by Max Rive, Menno Schaefer, and Johnathan Esper. Many more here. (via mr)

    Tags: Iceland   photography