(credit: Walter Reed National Military Medical Center)
It's not an exaggeration to say that functional MRI has revolutionized the field of neuroscience. Neuroscientists use MRI machines to pick up changes in blood flow that occur when different areas of the brain become more or less active. This allows them to noninvasively figure out which areas of the brain get used when performing different tasks, from playing economic games to reading words.
But the approach and its users have had their share of critics, including some who worry about over-hyped claims about our ability to read minds. Others point out that improper analysis of fMRI data can produce misleading results, such as finding areas of brain activity in a dead salmon. While that was the result of poor statistical techniques, a new study in PNAS suggests that the problem runs significantly deeper, with some of the basic algorithms involved in fMRI analysis producing false positive "signals" with an alarming frequency.
The principle behind fMRI is pretty simple: neural activity takes energy, which then has to be replenished. This means increased blood flow to areas that have been recently active. That blood flow can be picked up using a high-resolution MRI machine, allowing researchers to identify structures in the brain that become active when certain tasks are performed.
The Franz desktop messaging app has received a big update, adding a host of popular web services to its roster.
This post, TweetDeck, HipChat and Gmail Added to Franz Messaging Client, was written by Joey-Elijah Sneddon and first appeared on OMG! Ubuntu!.
(credit: Axel Krieger)
Step aside, Ben Carson. The once lauded ability to perform delicate operations with gifted hands may soon be replaced with the consistent precision of an autonomous robot. And—bonus—robots don’t get sleepy.
In a world first, researchers report using an autonomous robot to perform surgical operations on soft tissue in living pigs, where the adroit droid stitched up broken bowels. The researchers published the robotic reveal in the journal Science Translational Medicine, and they noted the new machinery surpassed the consistency and precision of expert surgeons, laparoscopy, and robot-assisted (non-autonomous robotic) surgery.
The authors, led by Peter Kim at Children’s National Health System in Washington, DC, emphasized this feat is not intended to be a step toward completely replacing surgeons. Rather, they want the technology to provide new tools that help every operation go smoothly. “By having a tool like this and by making the procedures more intelligent, we can ensure better outcomes for patients,” Kim said.
A tiger doing some problem-solving. (credit: Greg Stricker/Sarah Benson-Amram)
Animal intelligence varies widely. Some have cognitive abilities that were once thought to be limited to humans, while others seem to act purely on instinct. It's not simply a matter of having large brains; birds don't have especially large ones, but they can master complicated problems or learn the solution from others in their social network.
So what can explain animal intelligence? One general trend that has been noted is that the size of the brain relative to the rest of the body seems to matter. Birds may not have big brains on an absolute scale, but their brains are relatively large compared to their body mass. Others have also noted that lots of the animals we consider smart seem to operate in social groups. These include birds, primates, elephants, and dolphins.
A new study looks at problem-solving across a wide range of carnivores and finds mixed support for these ideas. Belonging to a social group didn't seem to make a difference, but having a large brain to body ratio did. The surprising (or perhaps worrying) thing is that the brain to body ratio was high in some of the biggest carnivores tested: bears.
Can you use a magnifying glass and moonlight to light a fire?
At first, this sounds like a pretty easy question.
A magnifying glass concentrates light on a small spot. As many mischievous kids can tell you, a magnifying glass as small as a square inch in size can collect enough light to start a fire. A little Googling will tell you that the Sun is 400,000 times brighter than the Moon, so all we need is a 400,000-square-inch magnifying glass. Right?
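For what it's worth, here is the arithmetic the naive argument implies, sketched as a quick back-of-the-envelope script (the 400,000× figure is the rough ratio quoted above; the one-square-inch lens is the text's example):

```python
# Naive scaling argument (wrong, as the rest of the piece explains):
# if the Moon is 400,000x dimmer than the Sun, a lens 400,000x larger
# than a one-square-inch fire-starting lens should deliver the same flux.
SUN_TO_MOON_BRIGHTNESS = 400_000  # rough ratio quoted in the text
fire_lens_in2 = 1.0               # a one-square-inch lens works for sunlight

naive_moon_lens_in2 = fire_lens_in2 * SUN_TO_MOON_BRIGHTNESS
naive_moon_lens_m2 = naive_moon_lens_in2 * 0.0254 ** 2  # square inches -> m^2

print(f"{naive_moon_lens_m2:.0f} m^2")  # about 258 m^2, roughly a 16 m square
```

A 16-meter lens sounds ambitious but not absurd, which is exactly why the naive answer is so tempting.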
Wrong. Here's the real answer: You can't start a fire with moonlight (pretty sure this is a Bon Jovi song) no matter how big your magnifying glass is. The reason is kind of subtle. It involves a lot of arguments that sound wrong but aren't, and generally takes you down a rabbit hole of optics.
First, here's a general rule of thumb: You can't use lenses and mirrors to make something hotter than the surface of the light source itself. In other words, you can't use sunlight to make something hotter than the surface of the Sun.
There are lots of ways to show why this is true using optics, but a simpler—if perhaps less satisfying—argument comes from thermodynamics:
Lenses and mirrors work for free; they don't take any energy to operate. (More specifically, everything they do is fully reversible, which means you can add them in without increasing the entropy of the system.) If you could use lenses and mirrors to make heat flow from the Sun to a spot on the ground that's hotter than the Sun, you'd be making heat flow from a colder place to a hotter place without expending energy. The second law of thermodynamics says you can't do that. If you could, you could make a perpetual motion machine.
The Sun is about 5,000°C, so our rule says you can't focus sunlight with lenses and mirrors to get something any hotter than 5,000°C. The Moon's sunlit surface is a little over 100°C, so you can't focus moonlight to make something hotter than about 100°C. That's too cold to set most things on fire.
"But wait," you might say. "The Moon's light isn't like the Sun's! The Sun is a blackbody—its light output is related to its high temperature. The Moon shines with reflected sunlight, which has a 'temperature' of thousands of degrees—that argument doesn't work!"
It turns out it does work, for reasons we'll get to later. But first, hang on—is that rule even correct for the Sun? Sure, the thermodynamics argument seems hard to argue with (because it's correct), but to someone with a physics background who's used to thinking of energy flow, it may seem hard to swallow. Why can't you concentrate lots of sunlight onto a point to make it hot? Lenses can concentrate light down to a tiny point, right? Why can't you just concentrate more and more of the Sun's energy down onto the same point? With over 10²⁶ watts available, you should be able to get a point as hot as you want, right?
Except lenses don't concentrate light down onto a point—not unless the light source is also a point. They concentrate light down onto an area—a tiny image of the Sun (or a big one!). This difference turns out to be important. To see why, let's look at an example:
This lens directs all the light from point A to point C. If the lens were to concentrate light from the Sun down to a point, it would need to direct all the light from point B to point C, too:
But now we have a problem. What happens if light goes back from point C toward the lens? Optical systems are reversible, so the light should be able to go back to where it came from—but how does the lens know whether the light came from B or from A?
In general, there's no way to "overlay" light beams on each other, because the whole system has to be reversible. This keeps you from squeezing more light in from a given direction, which puts a limit on how much light you can direct from a source to a target.
Maybe you can't overlay light rays, but can't you, you know, sort of smoosh them closer together, so you can fit more of them side-by-side? Then you could gather lots of smooshed beams and aim them at a target from slightly different angles.
Nope, you can't do this. (We already know this, of course, since earlier we said that it would let you violate the second law of thermodynamics.)
It turns out that any optical system follows a law called conservation of étendue. This law says that if you have light coming into a system from a bunch of different angles and over a large "input" area, then the input area times the input angle (note to nitpickers: in 3D systems, this is technically the solid angle, the 2D equivalent of the regular angle, but whatever) equals the output area times the output angle. If your light is concentrated to a smaller output area, then it must be "spread out" over a larger output angle.
In other words, you can't smoosh light beams together without also making them less parallel, which means you can't aim them at a faraway spot.
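The bookkeeping above can be sketched numerically. This is a minimal illustration with assumed numbers (the ~6.8×10⁻⁵ steradian solid angle of the Sun as seen from Earth is my own input, not from the article): the output solid angle can never exceed the full sphere of 4π steradians, and that caps the concentration factor.

```python
import math

def max_concentration(input_area_m2, input_solid_angle_sr):
    """Conservation of etendue: A_in * Omega_in = A_out * Omega_out.
    The output solid angle is capped at 4*pi steradians (light arriving
    from every direction at once), which caps how small the output
    area can get, and therefore how concentrated the light can be."""
    etendue = input_area_m2 * input_solid_angle_sr
    min_output_area_m2 = etendue / (4 * math.pi)
    return input_area_m2 / min_output_area_m2

sun_solid_angle_sr = 6.8e-5  # the Sun as seen from Earth (assumed value)
print(f"{max_concentration(1.0, sun_solid_angle_sr):,.0f}x")
```

No matter how clever the lens system, the concentration factor tops out around 185,000× for sunlight, which is exactly what's needed to make the target's sky entirely Sun, and no more.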
There's another way to think about this property of lenses: They only make light sources take up more of the sky; they can't make the light from any single spot brighter. (A popular demonstration of this: Try holding up a magnifying glass to a wall. The magnifying glass collects light from many parts of the wall and sends them to your eye, but it doesn't make the wall look brighter.) It can be shown (this is left as an exercise for the reader) that making the light from a given direction brighter would violate the rules of étendue. (My résumé says étendue is my forté.) In other words, all a lens system can do is make every line of sight end on the surface of a light source, which is equivalent to making the light source surround the target.
If you're "surrounded" by the Sun's surface material, then you're effectively floating within the Sun, and will quickly reach the temperature of your surroundings (very hot).
If you're surrounded by the bright surface of the Moon, what temperature will you reach? Well, rocks on the Moon's surface are nearly surrounded by the surface of the Moon, and they reach the temperature of the surface of the Moon (since they are the surface of the Moon). So a lens system focusing moonlight can't really make something hotter than a well-placed rock sitting on the Moon's surface.
Which gives us one last way to prove that you can't start a fire with moonlight: Buzz Aldrin is still alive.
Hovertext: Anyone wanna teach an ethics class called And Why is *This* SMBC Wrong?
The expression of a Faroese starling who's listened to too much vocoder. (credit: flickr user: Arne List)
Humans are obviously pretty special when it comes to language. One of our cleverest tricks is the ability to process the sounds of spoken language at high speed—even more remarkable when you consider just how variable these sounds are. People have very different voices and very differently shaped throats and mouths, which all affect the sound waves that come out of them. And yet we have very little trouble communicating with speech.
There are many ways to try to figure out how this wizardry evolved, but one particularly useful source of information is birds. Their evolutionary relationship to humans goes pretty far back on the family tree, so anything unusual we have in common with them—like vocal learning—is unlikely to be because of our shared genetic history. Instead, it's more likely to result from similar evolutionary pressures causing both of us to hit on similar solutions.
This is why a paper in this week's PNAS is so fascinating: it found that songbirds process sounds in a way that is very similar to humans. Like us, they're able to process how all the complex frequencies bound up in a single sound relate to one another. It’s very close to how humans process vowels.
(credit: Getty Images)
Calories consumed minus calories burned—it’s the simple formula for weight loss or gain, but dieters often find that it doesn’t work. Cynthia Graber and Nicola Twilley of Gastropod investigate for Mosaic science, where this story first appeared. It's republished here under a Creative Commons license.
“For me, a calorie is a unit of measurement that’s a real pain in the rear.”
Bo Nash is 38. He lives in Arlington, Texas, where he’s a technology director for a textbook publisher. He has a wife and child. And he’s 5’10” and 245 lbs—which means he is classed as obese.
(credit: New Line Cinema)
Lots of economic theory is based on the idea that humans will naturally seek to maximize their profits, but is that really the case? The field of behavioral economics involves a variety of attempts to find out. Things like game theory are used to create simplified economic systems in which people's behavior can be tracked.
A number of results indicate that some people do in fact behave as selfish, profit-maximizing individuals. But many others behave more altruistically, forging cooperative relationships in order to obtain greater benefits.
Or so it appeared. A group of Oxford researchers has now published a study in which they looked a bit more carefully at the people who were taking these tests, discovering that they'd be just as altruistic toward a computer. And that's probably because most of them simply don't understand the rules of the game they're playing.
New Caledonian crow uses a tool to grab insects deep inside a piece of wood.
Though we've long known that crows use tools to get food (and occasionally to amuse themselves), scientists have lacked definitive video evidence of wild crows making tools. Which is why two intrepid researchers invented the crow tailcam to record the inventiveness of these birds in the wild.
UK researchers Jolyon Troscianko and Christian Rutz had observed crows making tools in the wild, as had some of their colleagues. But none of them ever caught this amazing feat of intelligence on video. A couple of years ago, Rutz co-authored a paper about how crows make hooked tools, carefully fashioning them out of branches, in order to get at hard-to-reach grubs inside a piece of wood. But he was quick to point out that those feats of tool-making were done in captivity—where animals often develop a penchant for tool-making that they wouldn't have in the wild. In a paper published last week in Biology Letters, however, Troscianko and Rutz describe how they finally caught wild crows making their hooked tools on video.
Not to put too fine a point on it, they put cameras on the crows' butts. More precisely, they used biodegradable rubber to attach tiny cameras to the birds' two strongest tail feathers, giving the researchers a below-the-belly view of the crow's activities. Because crows often lower their heads to foot level to eat and make tools, this was also an excellent vantage point to capture tool-making in action.
Two years ago Google and NASA went halfsies on a D-Wave quantum computer, mostly to find out whether there are actually any performance gains to be had when using quantum annealing instead of a conventional computer. Recently, Google and NASA received the latest D-Wave 2X quantum computer, which the company says has "over 1000 qubits."
At an event yesterday at the NASA Ames Research Center, where the D-Wave computer is kept, Google and NASA announced their latest findings—and for highly specialised workloads, quantum annealing does appear to offer a truly sensational performance boost. For an optimisation problem involving 945 binary variables, the D-Wave 2X was up to 100 million times (10⁸) faster than the same problem running on a single-core classical (conventional) computer.
Google and NASA also compared the D-Wave 2X's quantum annealing against Quantum Monte Carlo, an algorithm that emulates quantum tunnelling on a conventional computer. Again, a speed-up of up to 10⁸ was seen in some cases.
There have long been rumors, leaks, and statements about the NSA "breaking" crypto that is widely believed to be unbreakable, and over the years, there's been mounting evidence that in many cases, they can do just that. Now, Alex Halderman and Nadia Heninger, along with a dozen eminent cryptographers, have presented a paper at the ACM Conference on Computer and Communications Security (a paper that won the ACM's prize for best paper at the conference) that advances a plausible theory as to what's going on. In some ways, it's very simple -- but it's also very, very dangerous, for all of us.
What if the Earth were made entirely of protons, and the Moon were made entirely of electrons?
This is, by far, the most destructive What-If scenario to date.
You might imagine an electron Moon orbiting a proton Earth, sort of like a gigantic hydrogen atom. On one level, it makes a kind of sense; after all, electrons orbit protons, and moons orbit planets. In fact, a planetary model of the atom was briefly popular, although it turned out not to be very useful for understanding atoms. (The model was mostly obsolete by the 1920s, but lived on in an elaborate foam-and-pipe-cleaner diorama I made in 6th grade science class.)
If you put two electrons together, they try to fly apart. Electrons are negatively charged, and the force of repulsion from this charge is about 42 orders of magnitude stronger than the force of gravity pulling them together.
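That ratio is easy to check from the standard physical constants; here's a quick sketch (the separation distance cancels out, since both forces fall off as 1/r²):

```python
# Ratio of electrostatic repulsion to gravitational attraction
# between two electrons. Both forces scale as 1/r**2, so the
# distance between the electrons cancels out of the ratio.
k  = 8.9875e9     # Coulomb constant, N m^2 / C^2
G  = 6.6743e-11   # gravitational constant, N m^2 / kg^2
e  = 1.6022e-19   # elementary charge, C
me = 9.1094e-31   # electron mass, kg

ratio = (k * e**2) / (G * me**2)
print(f"{ratio:.2e}")  # about 4.2e42 -- roughly 42 orders of magnitude
```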
If you put 10⁵² electrons together—to build a Moon—they push each other apart really hard. In fact, they push each other apart so hard, each electron would be shoved away with an unbelievable amount of energy.
It turns out that, for the proton Earth and electron Moon in Noah's scenario, the planetary model is even more wrong than usual. The Moon wouldn't orbit the Earth because they'd barely have a chance to influence each other; the forces trying to blow each one apart would be far more powerful than any attractive force between the two. (I interpreted the question to mean that the Moon was replaced with a sphere of electrons the size and mass of the Moon, and ditto for the Earth. There are other interpretations, but practically speaking the end result is the same.)
If we ignore general relativity for a moment—we'll come back to it—we can calculate that the energy from these electrons all pushing on each other would be enough to accelerate all of them outward at near the speed of light. (But not past it; we're ignoring general relativity, but not special relativity.) Accelerating particles to those speeds isn't unusual; a desktop particle accelerator can accelerate electrons to a reasonable fraction of the speed of light. But the electrons in Noah's Moon would each be carrying much, much more energy than those in a normal accelerator—orders of magnitude more than the Planck energy, which is itself many orders of magnitude larger than the energies we can reach in our largest accelerators. In other words, Noah's question takes us pretty far outside normal physics, into the highly theoretical realm of things like quantum gravity and string theory.
So I contacted Dr. Cindy Keeler, a string theorist with the Niels Bohr Institute. I explained Noah's scenario, and she was kind enough to offer some thoughts.
Dr. Keeler agreed that we shouldn't rely on any calculations that involve putting that much energy in each electron, since it's so far beyond what we're able to test in our accelerators. "I don't trust anything with energy per particle over the Planck scale. The most energy we've really observed is in cosmic rays; more than LHC by circa 10⁶, I think, but still not close to the Planck energy. Being a string theorist, I'm tempted to say something stringy would happen—but the truth is we just don't know."
Luckily, that's not the end of the story. Remember how we're ignoring general relativity? Well, this is one of the very, very rare situations where bringing in general relativity makes a problem easier to solve.
There's a huge amount of potential energy in this scenario—the energy that we imagined would blast all these electrons apart. That energy warps space and time just like mass does. (If we let the energy blast the electrons apart at near the speed of light, we'd see that energy actually take the form of mass, as the electrons gained mass relativistically. That is, until something stringy happened.) The amount of energy in our electron Moon, it turns out, is about equal to the total mass and energy of the entire visible universe.
An entire universe's worth of mass-energy—concentrated into the space of our (relatively small) Moon—would warp space-time so strongly that it would overpower even the repulsion of those 10⁵² electrons.
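As a rough cross-check of that claim, we can use the textbook self-energy of a uniformly charged sphere, U = (3/5)kQ²/R. The inputs below are my own back-of-the-envelope numbers (10⁵² electrons from the scenario, the Moon's actual radius), and the result lands within an order of magnitude of the ~10⁷⁰ J mass-energy of the ordinary matter in the observable universe:

```python
k = 8.9875e9     # Coulomb constant, N m^2 / C^2
e = 1.6022e-19   # elementary charge, C
N = 1e52         # electrons in the electron Moon (from the scenario)
R = 1.737e6      # Moon's radius, m

Q = N * e                # total charge of the electron Moon, C
U = 0.6 * k * Q**2 / R   # self-energy of a uniformly charged sphere, J

print(f"U ~ {U:.1e} J")  # ~8e69 J, comparable to the ~1e70 J mass-energy
                         # of the observable universe's ordinary matter
```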
Dr. Keeler's diagnosis: "Yup, black hole." But this is no ordinary black hole; it's a black hole with a lot of electric charge. (The proton Earth, which would also be part of this black hole, would reduce the charge, but since an Earth-mass of protons has much less charge than a Moon-mass of electrons, it doesn't affect the result much.) And for that, you need a different set of equations—rather than the standard Schwarzschild equations, you need the Reissner–Nordström ones.
In a sense, the Reissner-Nordström equations compare the outward force of the charge to the inward pull of gravity. If the outward push from the charge is large enough, it's possible the event horizon surrounding the black hole can disappear completely. That would leave behind an infinitely dense object from which light can escape—a naked singularity.
Once you have a naked singularity, physics starts breaking down in very big ways. Quantum mechanics and general relativity give absurd answers, and they're not even the same absurd answers. Some people have argued that the laws of physics don't allow that kind of situation to arise. As Dr. Keeler put it, "Nobody likes a naked singularity."
In the case of an electron Moon, the energy from all those electrons pushing on each other is so large that the gravitational pull wins, and our singularity would form a normal black hole. At least, "normal" in some sense; it would be a black hole as massive as the observable universe. (A black hole with the mass of the observable universe would have a radius of 13.8 billion light-years, and the universe is 13.8 billion years old, which has led some people to say "the Universe is a black hole!" It's not.)
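One rough way to see why gravity wins: a Reissner–Nordström horizon exists roughly when GM² ≥ kQ². Using the same back-of-the-envelope numbers as above (my own assumed inputs, with the mass taken as the mass-equivalent of the electrostatic energy), the mass term dominates by many orders of magnitude:

```python
k = 8.9875e9    # Coulomb constant, N m^2 / C^2
G = 6.6743e-11  # gravitational constant, N m^2 / kg^2
c = 2.9979e8    # speed of light, m/s
e = 1.6022e-19  # elementary charge, C

Q = 1e52 * e                   # total charge of the electron Moon, C
U = 0.6 * k * Q**2 / 1.737e6   # electrostatic self-energy, J (Moon radius)
M = U / c**2                   # mass equivalent of that energy, kg

# A Reissner-Nordstrom horizon exists (roughly) when G*M**2 >= k*Q**2;
# this ratio measures how thoroughly the mass term dominates the charge term.
dominance = G * M**2 / (k * Q**2)
print(f"{dominance:.1e}")  # ~2e19: gravity wins, an ordinary horizon forms
```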
Would this black hole cause the universe to collapse? Hard to say. The answer depends on what the deal with dark energy is, and nobody knows what the deal with dark energy is.
But for now, at least, nearby galaxies would be safe. Since the gravitational influence of the black hole can only expand outward at the speed of light, much of the universe around us would remain blissfully unaware of our ridiculous electron experiment.
Today, The Lancet released the results of a large field trial of a vaccine against Ebola, and the results are more than promising. Within the limitations of the study, the vaccine appears to be 100 percent effective. The results were so good that the trial itself has been stopped, and the vaccine is now being used to control the spread of the disease.
The vaccine is made by the pharmaceutical giant Merck, which licensed it from the Public Health Agency of Canada. It was developed through what has become a fairly standard approach. A harmless virus (vesicular stomatitis virus, or VSV) was engineered so that it also carried the gene for Ebola's major surface protein, simply called glycoprotein. When people receive the vaccination, a harmless infection follows, which triggers an immune response. This response targets not only VSV but the Ebola protein as well. Ideally, once the infection is eliminated, the immune system is able to recognize both VSV and Ebola.
The trial, performed in southern Guinea, ran from April through July 20th of this year (the analysis, paper writing, and peer review must have proceeded at a staggering pace). It used what is called a "ring" design: once an infected individual was identified, a ring of potentially exposed individuals around them was identified. These individuals lived with the infected one, had contact with them after symptoms appeared, or came in contact with their clothes, bedding, or bodily fluids.