Shared posts

07 Dec 16:44

Trade dispute hits luxury stocks

by zautor

The US government is threatening France with steep punitive tariffs on handbags and champagne. The share prices of French luxury-goods groups such as Hermès and LVMH fell in response. For investors, this opens up attractive entry opportunities. It was not long ago that Donald Trump and Bernard Arnault, head of the French luxury-goods maker LVMH, were seen in cozy agreement. In October the two opened […]

The post "Trade dispute hits luxury stocks" first appeared on Capital.de.

02 Dec 17:16

2017 – the year in review

by Teilzeitinvestor

By mid-December you're actually already late with a year in review; RTL & Co. aired their big "That was 2017" shows at the start of the month. So here, too, is a look back at the last 12 months of Teilzeitinvestor, even though the year isn't quite over yet. If a spontaneous stock-market crash turns up in the last two weeks of December, I can always rework the article…

In general, 2017 was shaped by watching from the sidelines: interest rates at record lows, stocks at record highs. No asset class really managed to win me over this year, cryptocurrencies aside. An investment emergency ("Anlagenotstand"), as they call it. But first things first:

Stocks

At the current price level, stocks are simply too expensive, above all in America. Until a massive pullback arrives, I'm not investing any further. Unfortunately, I've been saying that since the start of the year. And in doing so I passed up the roughly 20 percent in price gains (plus dividends) that American stocks have added over the last 12 months (using the S&P 500 index as the example).

Interestingly, my only stock investment in 2017 then turned out, with unerring aim, to be a stock that fell against the trend and is meanwhile around 20 percent in the red. Fundamentally I still have confidence in CVS Health and am considering buying more. But viewed objectively, one has to say this year impressively confirmed that stock picking doesn't work. Or at least not for me.

Best-performing stock in the portfolio: Lufthansa.

A hefty gain of almost 240% since January. My total profit isn't quite as high, since I bought back in 2008 at a price of 10.50 euros. But still almost a tripling of my entry price; can't complain about that. Sometimes buy and hold really does work 😉 And by now the Lufthansa folks are even paying a decent dividend again.

Worst-performing stock in the portfolio: Solarworld

Solarworld was already worth next to nothing at the start of the year, but over the course of the year it collapsed by another slim seventy percent and degenerated into the proverbial penny stock. The only good thing about it: mentally I had written the stock off long ago, since the big loss in value had already happened in the years before. In real terms, the 2017 loss was therefore limited for me. But sometimes even buy and hold doesn't really help…

Index funds

Since stock picking demonstrably doesn't work for me, I'd do better to concentrate on broadly diversified index funds. But here, too, I spent the year staring in disbelief at the general price trend, waiting for a decent pullback as an entry point. Which, unfortunately, never came.

At least I was smart enough to set up a savings plan, which ran all year and provided a continuous base investment. The plan only buys the Stoxx 600, though, which has made my regional weighting even more lopsided than it already was. Home bias says hello. American stocks were and still are not to my liking at the current level; that will yet end badly. Either for the American economy, with a big crash. Or for me, because I didn't take part in the rally of the century.

Instant-access savings (Tagesgeld)

Interest rates on instant-access money moved further toward zero in 2017. DKB was still at a modest 0.4 percent at the start of the year, which was then halved again in March to 0.2 percent. And that only on the first 100,000 euros; above that there is no interest at all anymore. By now you almost have to be glad that no negative rates are being charged yet. Things look a little better in the halfway-reputable euro countries abroad: the Dutch Moneyou still pays me 0.45 percent (down from 0.55 percent at the start of the year), but that won't make anyone rich either. And I don't exactly want to park all my money at a Dutch bank.

Fixed-term deposits (Festgeld)

Longer-term deposits unfortunately don't look any better. At the beginning of the year you could still get 0.8 percent on a one-year term at the Dutch NIBC; by now it's only 0.6 percent. Providers with German deposit insurance pay about half that again. The only reason to lock money up for a longer period at the moment is to protect against negative rates, should they come after all. Though right now it looks more like rate hikes in 2018, so I've laddered my fixed-term deposits with maturities of no more than 24 months, with the average remaining maturity below 12 months.

Overall performance

My "net worth" has risen by around 11 percent over the last 12 months. That includes price gains, interest, and dividends, but also money newly saved this year. I haven't broken down exactly what contributed how much, but overall it's a very positive development. The equity share has risen from 28 to just under 33 percent, which is still clearly too low. But at the current record levels of the stock markets, I simply lack the courage for larger investments.

Bitcoins

My well-documented excursion into Bitcoin mining has since been shut down again. The initially quite interesting margins had meanwhile boiled down to cent amounts, so the whole effort simply stopped being worth it. Still, leaving my working time out of it, mining at least financed the investment in my new graphics card. And since I wanted to buy a new card anyway, I'll book that as a profit 🙂 The current price explosion in Bitcoin and Ethereum does make mining somewhat more interesting again. But for me it will never be more than a hobby project; the money has to be earned elsewhere.

Blog

The restraint in stock purchases has also left its mark here on the blog. I've chewed through the fundamental topics à la "the best ETF", and without new purchases there isn't much new to report (and, incidentally, at the moment simply no time to write anything either). That's supposed to change in 2018; you need good resolutions, after all.

To all readers, many thanks for diligently reading along and commenting this year, and a relaxed "remaining maturity" for 2017.

02 Dec 17:16

Current dividend yields of Vanguard index funds

by Teilzeitinvestor

I noted years ago that it's not at all easy to determine the dividend yield of an ETF. Which is a pity, because for us passive investors dividends are the new interest, and therefore an important criterion for investing.

The situation has improved a little since then: providers like Vanguard list not only their funds' actual distribution payments on their websites but also the percentage yield calculated from them. At Vanguard this is labeled, for example, "historical yield at closing price"; at iShares simply "distribution yield". At JustETF the dividend yield is still a paid premium extra. On ExtraETF.com, on the other hand, this figure is available for free.

What is still not entirely transparent (to me) is how this yield is calculated in each case. That's not as trivial as it sounds: the distributions aren't always regular, some are paid in dollars and have to be converted, and the reference period is unclear.

Now I'll help myself

So there's nothing for it: a spreadsheet is needed to recalculate the whole thing myself. The question I want to answer is:

"If I buy index fund X today at price Y, what percentage dividend yield can I expect based on past performance?"

The biggest limitation is already built into the question: "yield based on past performance". I have no claim whatsoever to the distributions staying that way in the future. Or, as investor prospectuses always put it so nicely: "Past performance is not a reliable indicator of future results."

Since there is no sensible alternative to looking backward, though, we grudgingly accept the historical distributions as the only way to arrive at a figure for comparison.

The bill, please

The calculation works as follows (a small Python sketch implementing these rules follows the list):

  • All distributions of the last 12 months, counted back from today, are added up.
  • The summed distributions, set against the ETF's current price, give the percentage yield.
  • For comparison, a calendar-year view is also calculated, i.e. all distributions from January 1 through December 31 of the past year.
  • What counts is the ex-dividend date, not the payment date, which is usually a few days later.
  • Distribution amounts are taken directly from the providers' websites.
  • Distributions listed in dollars are converted into euros at the historical rate. A distribution of one dollar in May 2018 therefore enters the calculation at the euro/dollar rate from that same month, because that is how the amount would have landed in my account had I owned the ETF at the time.
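
As a sketch of these rules in code, here is a minimal Python version; the distribution dates, amounts, and exchange rates are invented placeholders, not real fund data:

    from datetime import date

    # Minimal sketch of the yield rules above. All figures below are
    # invented placeholders, not real fund data.
    distributions = [  # (ex-dividend date, amount, currency)
        (date(2018, 12, 19), 0.41, "USD"),
        (date(2019, 3, 20), 0.35, "USD"),
        (date(2019, 6, 19), 0.42, "USD"),
        (date(2019, 9, 18), 0.38, "USD"),
    ]
    # Assumed EUR per USD in the month of each payment (rule: convert at
    # the historical rate, as the money would have arrived back then).
    usd_eur = {(2018, 12): 0.880, (2019, 3): 0.885,
               (2019, 6): 0.890, (2019, 9): 0.905}

    price_eur = 81.19          # current ETF price in euros
    today = date(2019, 11, 7)  # valuation date

    def to_eur(day, amount, currency):
        return amount if currency == "EUR" else amount * usd_eur[(day.year, day.month)]

    # Trailing 12 months, counted back from today, keyed on the ex-date.
    one_year_ago = today.replace(year=today.year - 1)
    ttm = sum(to_eur(d, a, c) for d, a, c in distributions if d > one_year_ago)
    print(f"12-month distributions: {ttm:.2f} EUR -> yield {ttm / price_eur:.2%}")

    # Calendar-year view (January 1 to December 31 of the past year).
    cal = sum(to_eur(d, a, c) for d, a, c in distributions if d.year == today.year - 1)
    print(f"Calendar-year distributions: {cal:.2f} EUR -> yield {cal / price_eur:.2%}")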

I've limited myself to the common Vanguard ETFs, since they offer the most attractive terms in almost every category. I've also included the popular iShares Stoxx Europe 600, simply because it makes up the largest part of my own ETF portfolio.

Ladies and gentlemen, here are the results of the teilzeitinvestorian jury:

Fund                           Ticker    Price 7 Nov 19  Dist. 12 mo  Yield 12 mo  Dist. Jan-Dec  Yield Jan-Dec
Vanguard FTSE All-World        fra:vgwl  81.19 €         1.59 €       1.96%        1.48 €         1.82%
Vanguard FTSE Dev. Europe      fra:vgeu  31.95 €         1.05 €       3.29%        0.96 €         3.00%
Vanguard S&P 500               fra:vusa  52.98 €         0.80 €       1.51%        0.72 €         1.36%
Vanguard FTSE Dev. Asia ex J.  fra:vgej  23.03 €         0.77 €       3.33%        0.71 €         3.10%
Vanguard FTSE Japan            fra:vjpn  28.63 €         0.48 €       1.69%        0.46 €         1.60%
Vanguard FTSE Emerg. Markets   fra:vfem  53.13 €         1.33 €       2.50%        1.27 €         2.39%
iShares STOXX Europe 600       fra:exsa  40.22 €         1.38 €       3.42%        1.01 €         2.50%

What stands out?

That the ETF on the American S&P 500 has the lowest distributions in percentage terms doesn't surprise me. US companies are considered diligent dividend payers, but prices have run so hot in recent years that the dividends couldn't keep up. That's one of the reasons why, apart from a few individual stocks, I've held back strongly in the US market.

What astonishes me is the good showing of the Asia/Pacific ETF, at well over three percent yield. I didn't exactly remember Samsung & Co. as distinguished dividend payers (at Vanguard, unlike with MSCI-based indices, South Korea belongs to Asia/Pacific rather than to the emerging markets). The Asia/Pacific index does, however, also contain Australian mining companies like BHP, which in turn pay out comparatively generously.

Also remarkable is the distinctly different result of the iShares and Vanguard ETFs on European stocks. The indices aren't the same (FTSE vs. Stoxx), but the stocks included and their weightings aren't that different. The iShares Stoxx Europe 600 comes in slightly ahead of the Vanguard Developed Europe over the trailing 12 months, but for 2018 it paid out half a percentage point less. The differences may also have to do with the funds' different distribution patterns. While Vanguard pays out like clockwork in March, June, September, and December, the payout behavior of other providers is rather erratic.

What's the point?

For a disciplined passive investor who splits his money strictly by regional weighting across North America, Europe, Asia, and the emerging markets, looking at the dividend yield gains nothing. The whole idea is to invest according to a fixed percentage allocation, regardless of the current outlook or the level of dividend payments. If, however, you swing wildly between passive ETF investing and a dividend strategy, as I do, the yield comparison is quite interesting. For me, what matters at the end of the day is how well my money is "earning interest", knowing full well that dividends are not interest.

The plan is to update this analysis here on the blog quarterly, to see how the dividend performance in the different regions develops over time.

16 May 09:24

Who Will Own Your Data If the Tech Bubble Bursts?

by Kaveh Waddell

Imagine that Silicon Valley’s nightmare comes true: The bubble bursts. Unicorns fall to their knees. The tech giants that once fought to attract talented developers with mini-golf and craft beer scramble to put out fires.

This is the setting of a cyber-doomsday scenario developed by researchers at Berkeley’s Center for Long-Term Cybersecurity and published last month. They gamed out five different scenarios based on current trends in online security—and this one is by far the most alarming.

If stock prices plunge, the researchers ask, what will be left of the Facebooks and Twitters of the world? Like a broken-down car that can only be scrapped for parts, the only thing worth salvaging from the shells of former tech companies may be user data.

In their report, the researchers imagine a slow start to the tailspin that eventually leads to the collapse of our current Internet Age. It starts with a disillusionment with Silicon Valley (“When did we stop trying to change the world and instead just make indulgent products for rich 30-year-old singles?”) and subsequent developer exodus to Asia. Europe starts regulating technology even more aggressively, and investors start rolling their eyes at buzzwords like “innovation.” Finally, some outside event—a revolution overseas, a contentious election—shakes up markets, and the collapse begins. Stock prices soon fall by 90 percent.

Desperate companies will resort, if they can, to selling the detailed data they’ve meticulously collected about their users—whether it’s personally identifiable information, data about preferences, habits, and hobbies, or national-security files. That data, formerly walled-off and spoon-fed only to paying advertisers, would be attractive to both licit and criminal buyers. Easily searchable datasets could generate new innovations and investments—but it would be difficult to know who’s buying up sensitive datasets, and why.

If contracts and privacy policies prevent a floundering company from selling user data, there’s still another way to profit. Most privacy policies that promise not to sell user data include a caveat in case of bankruptcy or sale. In fact, a New York Times analysis of the top 100 websites in the U.S. last year found that 85 of them include clauses in their privacy policies like this one from Facebook:

If the ownership or control of all or part of our Services or their assets changes, we may transfer your information to the new owner.

This is the virtual equivalent of a beater getting hauled to the junkyard. If a Facebook-like social media company can’t legally sell off its data, then it may just sell itself in order to cut its losses. Among the post-crash rubble, the principal value that a potential buyer might see in snapping up the company is its data. It’s like an acquisition hire, but for a huge and detailed dataset.

But it’s not just social-networking, online shopping, and other technology companies that have to plan for this eventuality. Just about every company holds user data now, in one form or another. Even our own privacy policy here at The Atlantic says that a sale, merger, or bankruptcy may lead to a transfer of personally identifiable information. (To be sure, the data that a magazine maintains doesn’t measure up to the trove of private tidbits that people share with their social-networking apps—it’s mainly information about print subscribers.)

Even without doomsday scenarios, there’s already evidence of what the demise of a data-rich company would look like. When RadioShack filed for bankruptcy last year, one of the assets it put up for sale was its meticulously compiled database of information on millions of its customers. This set off a scramble of opposition from all sides: AT&T and Apple claimed to be the rightful owner of some of the data, and officials in a handful of states warned that the sale could violate state laws. The Federal Trade Commission stepped in, too, suggesting to a judge that RadioShack should only be able to sell the data to a company “substantially in the same line of business,” and that the buyer should be bound by the same privacy policy that was in place when consumers shared their personal data with RadioShack. If the buyer wanted to use the data differently, the FTC said, it should have to get the consent of the consumers.

This wasn’t the FTC’s first such intervention, either: In 2000, the commission sued a website called Toysmart.com for deceptive data practices. That case is what led many companies to add language about selling data to their privacy policies in the first place.

It’s one thing for federal regulators to keep an eye out for consumer data when a big retailer or tech company folds every few years. But in the event of a crash, it’s unlikely the FTC would be able to keep up with the sheer number of previously overvalued data-rich companies offering themselves up for sale. If that’s the case, the post-bubble technology industry will take your data down with it as it slips beneath the waves.

15 Dec 11:57

The New Armor That Lets You Sense Surveillance Cameras

by Robinson Meyer

We pass under surveillance cameras every day, appearing on perhaps hundreds of minutes of film. We rarely notice them. London-based artist James Bridle would like to remind us.

Bridle has created a wearable device he calls the “surveillance spaulder.” Inspired by the original spaulder—a piece of medieval plate armor that protected “the wearer from unexpected and unseen blows from above”—the surveillance spaulder alerts the wearer to similarly unseen, if electronic, attacks. Whenever its sensor detects the type of infrared lighting commonly used with surveillance cameras, it sends an electric signal to two “transcutaneous electrical nerve stimulation” pads, which causes the wearer to twitch.

The plating that wraps around the armor’s shoulder? It’s a spaulder.

That is: Whenever the spaulder detects a security camera, it makes your shoulder jump a little. You can see the spaulder in action in the video above.

The surveillance spaulder isn’t the only project that explores how hard-to-see surveillance might be countered. In October, a Dutch artist claimed to have invented a shirt that confused facial-recognition algorithms; before that, the American designer Adam Harvey explored make-up, hair-dos, and shawls that could confuse the facial- or body-recognition software used in drones. And many of these ideas hail back to science fiction writer William Gibson’s “ugly t-shirt,” a theoretical garment so hideous that surveillance cameras couldn’t stand to look at it.

But Bridle’s spaulder has a slightly different goal. Instead of obstructing cameras and algorithms, it merely alerts the wearer to their presence. It’s a technology—and an art project—of reminding. The surveillance spaulder provides “a tap on the shoulder,” Bridle writes, “every time one comes under the gaze of power.”

15 Dec 11:55

The Only Thing Weirder Than a Telemarketing Robot

by Alexis C. Madrigal
The Turk Chess Automaton (Bibliodyssey)

Sometimes, you have to think like a scammer.

So, when I saw that an apparent robot telemarketer named Samantha West had randomly called a Time writer and denied she/it/they was a robot, I wondered: where could I buy such an interactive voicebot? 

This query led me down a strange rabbit hole. And along the way, I discovered that Samantha West may be something even stranger than a telemarketing robot. Samantha West may be a human sitting in a foreign call center playing recorded North American English through a soundboard.

I know. It's weird. But let me explain.

First, let's hear "Samantha" (audio clip in the original post):

Clearly, this is not human conversation: there are repeated laughs and weird phrases. "She failed several other [humanity] tests," Time wrote. "When asked 'What vegetable is found in tomato soup?' she said she did not understand the question. When asked multiple times what day of the week it was yesterday, she complained repeatedly of a bad connection."

It seems so open and shut.

So, Time's story ran with the plausible headline, "Meet the Robot Telemarketer Who Denies She's a Robot." And many other blogs went with that explanation, too.

But if this kind of robotic telemarketing is possible, why don't we see it more often? Every other kind of spam, if it is technically possible, becomes pervasive.

* * *

The first step to acquiring a voicebot like this was to figure out what the people selling it might call it. Certainly they would not refer to their services as "robot telemarketing."

I started looking for the right jargon to Google. As it turns out, there are two key phrases: "interactive voice response" and "outbound." Interactive voice response refers to telephone systems that can process what you're saying and respond appropriately (even intelligently at times). Outbound call centers make calls; the inverse, inbound, refers to systems that receive calls from customers.

So, put them together and you have "Outbound IVR," which Datamonitor projected should be a half-billion-dollar market by now.

Outbound IVR, though, is not generally supposed to be used for telemarketing. It's supposed to be used to deliver automated messages and provide just a smidgen of interactivity. So, a common use case might be to call a debtor up and ask them to pay a bill. Then the machine can take that payment without transferring you to a human. Or automated scheduling: a doctor's office could confirm that a patient has an appointment with the voicebot. 

Why isn't outbound IVR used for telemarketing?

Well, primarily because IVR is really, really hard. It is widely recognized that voice-to-text with your phone (e.g., Siri) is far from reliable. And Siri actually has a lot better data to work with. An IVR bot has to work with the low-quality audio that's transmitted through the public switched telephone network (PSTN). Quality, in this case, being a quantitative measure of how much data is in the audio. 

That's why the voice recognition on company telephone systems is a target for mockery. ("I said three. No, no, I said THREE. THREE!") And when someone is calling into a company, the company severely restricts the scenarios that the IVR bot has to work within. The bot knows what it's listening for. And it's still just OK.

Now, Samantha West actually uses a bunch of different responses as it tries to pose as a general-purpose salesperson. The queries that the editors launch at Samantha are pretty complex, and yet she comes back with an appropriate (if limited) response. 

When I contacted outbound marketing companies and showed them the story with the clips, they all said they don't or can't do this sort of interactive voice response.

One source, who agreed to explain the problem on the condition that they would not be associated with this marketing bot, gave a fascinating explanation of why the telemarketing robot probably was not possible:

Getting this to work so quickly would be very difficult to achieve automatically as the audio on PSTN calls is 8000 Hz mono.  For reference that is one less channel and 120,000 less hertz than the low quality mp3s in your music collection.  This is why voice recognition is so aggravating over the phone - there is very little signal upon which to perform feature recognition.  Even the fastest in the business (Nuance) doesn't respond this quickly.

Even provided you could do the recognition under 50 milliseconds, the answers the gentleman is giving on the call are very fuzzy.  These aren't boolean "yes" and "no" - they are simple and complex sentences wildly divergent from the prompt of the robot.  So some [natural language processing] would need to be performed on the human's response to translate what was being said then fuzzy match the result against what an appropriate response would be.
 
Doing all that in a delay for natural conversation doesn't sound possible to me.  The only product that might have a shot at it are Nuance, but even then I don't think they are fast enough.

Other sources also suggested that Nuance might be the only company whose technology could do it. But when I contacted Nuance, a representative told me that they were not involved directly in the design of the software, nor did they know of anyone who was doing such a thing. (They did admit that there are people who could have gotten their hands on the software through resellers, but to their knowledge, this had not happened.)
 
So, if it's not a robot, what gives then? Because clearly someone is giving canned responses. 
 
The theory I heard — and keep in mind it is just a hypothesis to explain a perplexing situation — goes like this:
 
Samantha West is a human being who understands English but who is responding with a soundboard of different pre-recorded messages. So a human parses the English being spoken and plays a message from Samantha West. It is IVR, but the semantic intelligence is being provided by a human. You could call it a cyborg system. Or perhaps an automaton in that 18th-century sense.
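
To make the theory concrete, here is a hypothetical sketch of such a soundboard rig. Everything in it, from the clip files to the player command, is invented for illustration; nothing is known about the actual Samantha West setup:

    # Hypothetical soundboard for a "cyborg" telemarketing agent: a human
    # listens to the call and picks the response; the software only plays
    # pre-recorded North American English. 'afplay' is the macOS player;
    # swap in aplay or ffplay on other systems.
    import subprocess

    CLIPS = {
        "1": "clips/greeting.wav",        # "Hi, this is Samantha West!"
        "2": "clips/not_a_robot.wav",     # "I am a real person."
        "3": "clips/bad_connection.wav",  # "Sorry, bad connection."
        "4": "clips/pitch.wav",           # the insurance pitch
    }

    while True:
        key = input("clip> ").strip()   # the human's semantic judgment
        if key == "q":
            break
        if key in CLIPS:
            subprocess.run(["afplay", CLIPS[key]])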

If you're reading this, you must be wondering: WHY?!?!

Well, while Americans accept customer service and technical help from people with non-American accents, they do not take well to telemarketing calls from non-Americans. The response rates for outbound marketing via call center are apparently abysmal. 

So Samantha West could be the rather strange solution to this set of circumstances and technical capabilities. Perhaps a salesperson like this doesn't have to say too many things to figure out if someone might be interested in buying insurance. 

I tried to contact the company that Samantha West was working for. They hung up on me after I said I was an editor with The Atlantic.

22 Sep 08:45

A Response to “Falling with Helium”

by Jason Martinez
lilgator

Yes.

Recently the author of xkcd, Randall Munroe, was asked how long someone would need to fall in order to jump out of an airplane, fill a large balloon with helium while falling, and land safely. Randall unfortunately ran into some difficulties with completing his calculation, including getting his IP address banned by Wolfram|Alpha. (No worries: we received his request and have already fixed that.)

While I don’t pretend to give as rigorous a calculation as he attempted, let’s see if we can’t come up with a rough answer of our own.

The first things to determine are the initial and final weights for the person, the balloon, the helium, and, very importantly, the helium tanks that the person carries.

The average weight of a human being is 62 kilograms. Let’s round this up to 70 kilograms to account for clothing as well as whatever safety gear and harnesses are necessary.

The weight of the balloon depends on how much helium we’re using. We’re going to assume for the purposes of this calculation that we want to achieve neutral buoyancy for the balloon and the person (we will presume that the tanks are jettisoned once the balloon is full).

We’ll also assume we’re using some sort of modified weather balloon. The larger weather balloons can weigh 600 grams with an inflated diameter of 177 centimeters. Assuming a roughly spherical balloon, this yields a surface area of 98,423 square centimeters, which gives a surface density of 0.00609614 grams per square centimeter.

The buoyancy of the balloon and person is equal to the difference between the mass of the air displaced and the mass of the helium inside the balloon as well as the balloon itself and the person. We want these to balance out, so we need to solve this equation:

\[ \tfrac{4}{3}\pi r^{3}\rho_{\text{air}} = \tfrac{4}{3}\pi r^{3}\rho_{\text{He}} + 4\pi r^{2}\, s_{\text{balloon}} + m_{\text{person}} \]

where ρ is the density of the gas indicated by the subscript (air or helium), r is the radius of the balloon, s_balloon is the surface density of the balloon calculated above, and m_person is the mass of the person.

Air density varies with altitude. At sea level it’s 1.225 kilograms per cubic meter, while at 5,000 feet it’s 1.1 kilograms per cubic meter. Since we are mostly interested in stopping near the ground, we will assume a value of 1.2 kilograms per cubic meter. The density of helium at normal pressure is only 1.785*10^-4 grams per cubic centimeter. This would also decrease with altitude, but for our purposes we will treat it as constant.

Now we can solve for r:

\[ r \approx 2.6\ \text{m} \]

A 2.6 meter radius sphere yields a volume of 74 cubic meters, or 2,613 cubic feet. This is how much helium will be needed.
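
As a quick sanity check of that radius against the mass balance, using the densities above:

\[ \tfrac{4}{3}\pi (2.6\ \text{m})^3 \times 1.2\ \text{kg/m}^3 \approx 88.3\ \text{kg of displaced air} \]
\[ 88.3\ \text{kg} - \underbrace{73.6 \times 0.1785}_{\approx 13.1\ \text{kg helium}} - \underbrace{4\pi (2.6)^2 \times 0.061}_{\approx 5.2\ \text{kg balloon}} \approx 70\ \text{kg} \]

which matches the 70-kilogram person, as neutral buoyancy requires.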

Our final mass will then be:

\[ m_{\text{final}} = m_{\text{person}} + 4\pi r^{2}\, s_{\text{balloon}} + \tfrac{4}{3}\pi r^{3}\rho_{\text{He}} \approx 88\ \text{kg} \]

Now, to determine the initial mass, we need to know how much mass the containers have. 250 cubic feet is at the upper end of standard-sized helium containers. Some searching around suggests that an empty container weighs 108 pounds, or 49 kilograms, has an internal volume of 43 liters, and is roughly 23 centimeters in diameter. The person will need at least 11 of them to fill the balloon. (I hope it’s not a commercial flight, since checking the canisters as luggage at today’s rates would cost roughly $600. Before tax.)
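
That container count is just the helium requirement divided by the tank capacity, rounded up:

\[ \left\lceil \frac{2613\ \text{ft}^3}{250\ \text{ft}^3} \right\rceil = \lceil 10.45 \rceil = 11 \]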

\[ m_{\text{initial}} = m_{\text{final}} + 11 \times 49\ \text{kg} \approx 627\ \text{kg} \]

Of crucial importance is how fast the balloon can be inflated. Too fast and the deceleration becomes dangerously high; too slow and our person hits the ground.

The time a container takes to release a percentage of its mass can be approximated by the following equation:

\[ t = \left(F^{\frac{1-k}{2}} - 1\right)\cdot c\big(V, C, A, \rho_0, P_0, k\big) \]

R. B. Bird, W. E. Stewart, and E. N. Lightfoot, Transport Phenomena, New York: John Wiley & Sons, 1960

where F is the fraction of the gas remaining in the container, k is the heat capacity ratio (1.667 for helium), C is the discharge coefficient, A is the cross-sectional area of the opening, V is the volume of the container, ρ0 is the initial gas density, and P0 is the initial pressure; everything except F and the exponent enters only through the constant c. This represents an uncontrolled release of gas, but we do want to fill the balloon as quickly as possible.

The discharge coefficient is the ratio of the actual discharge to the theoretical discharge. For this example, we will assume the value is 0.72. If the nozzle is 2 millimeters in diameter, that yields a cross-sectional area of 3.141 square millimeters.

To determine the initial pressure and density, we can treat the helium as an ideal gas and use the ideal gas law:

\[ P V = n R T \]

where V is the volume of the container, n is the number of moles of helium, R is the molar gas constant, and T is the temperature. Assuming a constant temperature of 0 °C, we can work out that the initial number of moles in 250 cubic feet of helium gas under standard pressure and temperature is 315.839. Thus, putting these values into the ideal gas law, we can solve to find that the initial pressure is 16.681 megapascals.
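
Spelled out, with R = 8.314 J/(mol·K):

\[ P_0 = \frac{nRT}{V} = \frac{315.839\ \text{mol} \times 8.314\ \tfrac{\text{J}}{\text{mol K}} \times 273.15\ \text{K}}{0.043\ \text{m}^3} \approx 16.68\ \text{MPa} \]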

Density is just the mass of the helium divided by the volume of the container:

\[ \rho_0 = \frac{m_{\text{He}}}{V} = \frac{315.839\ \text{mol} \times 4.0026\ \text{g/mol}}{43\ \text{L}} \approx 29.4\ \text{kg/m}^3 \]

We will assume that the person wants to empty 90% of each container before switching to a new one. The process of diminishing returns in emptying a container of gas means you can never really empty it. Placing these numbers into our equation, we find the time to be:

[Mathematica result: the time for a single container to release 90% of its gas]

That is obviously too long to wait for a single container to empty. In fact, we can see that as time progresses, the outflow of the container gets slower and slower:

[Plot: fraction of gas remaining in the container versus time]

We can speed this up by making a bigger hole. We might also use some apparatus to release the gas from all the containers at once. If we do this and increase the nozzle diameter of each container by a factor of 5, to 10 millimeters, then the time to empty all the containers is only:

[Mathematica result: roughly 16 seconds to empty all 11 containers through the widened nozzles]

So the person falling could in theory fill the balloon in time by rapidly (and dangerously) evacuating the compressed gas containers. Then how fast would the person decelerate?

Well, that depends on the drag created. The equation for drag force is:

\[ F_d = \tfrac{1}{2}\,\rho\, u^{2}\, C_d\, A \]

where Fd is the drag force, Cd is the drag coefficient, ρ is the mass density of the surrounding air, u is the speed relative to the air, and A is the cross-sectional area. Cd depends on the shape of the object. Initially the drag coefficient would be quite low, since the assembly would consist of a ring of gas containers and a hurriedly working occupant. Ultimately we can consider the entire system a sphere, with a person tethered to the bottom, once the containers are dropped.

So, how long would it take for our jumper to fall without the balloon being inflated? Assuming the jump is at 12,500 feet, which is a common height for parachuting, we need to determine the drag coefficient and the cross-sectional area. We will assume the drag coefficient is comparable to that of a long streamlined body like the individual compressed gas containers, which would be around 0.1.

We can approximate the cross-sectional area by considering the 11 containers arranged in a ring as a circle. With a circumference of 11 × 23 centimeters, or 2.53 meters, the circle has an area of roughly 0.509 square meters. Turning to Wolfram|Alpha, we find that the time to fall would be 29 seconds and that the final velocity would be 250 meters per second, or 570 miles per hour.
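
For the record, that area follows from the circumference alone:

\[ A = \frac{C^2}{4\pi} = \frac{(2.53\ \text{m})^2}{4\pi} \approx 0.509\ \text{m}^2 \]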

So, the person has under 29 seconds to get the balloon filled, and we have worked out that it will take about 16 seconds once the valve on the contraption is turned on. If the balloon is inflated by 7,500 feet, the person would be moving at 170 meters per second. How quickly would the balloon slow the jumper down?

The cross-sectional area of a 2.6 meter radius sphere is 21.2 square meters. Taking our conditions, we can write a function for the acceleration from the drag force as a function of velocity by taking equation 4 and dividing both sides by the initial mass:

\[ a(u) = \frac{F_d}{m_{\text{initial}}} = \frac{C_d\,\rho\,A\,u^{2}}{2\,m_{\text{initial}}} \]

A safe landing speed would be something less than 10 miles per hour, or 4.47 meters per second. To find when the person would reach that speed, we need to solve the equation of motion for our system:

\[ m\,\frac{dv}{dt} = -\tfrac{1}{2}\,C_d\,\rho\,A\,v^{2} + m g - F_{\text{buoyancy}} \]

The left-hand side of the equation is our usual mass times acceleration. On the right-hand side, we have our drag force plus the force of gravity pulling down and the buoyancy forces pushing up. The last two terms cancel out in our case. Plugging this into NDSolve, we can get a numerical solution to the problem:

[NDSolve result: numerical solution for the velocity]

This yields a curve with very rapid deceleration that tails off slowly. It would take roughly 3.5 seconds to slow down to a safe speed.

[Plot: velocity versus time after the containers are dropped]

If this acceleration occurred instantly, it would be dangerously high:


1701.84 meters per second squared is over 170 Gs, and is likely instantly fatal.

Luckily the jumper would have already begun decelerating while the balloon was being inflated. Let’s try to approximate that.

We will start by going back to our equation for the time it takes to empty a gas container. We can break this into a constant term and a portion based on F and t.

[Definition of the constant term, collecting V, C, A, ρ0, P0, and k]

We are then left with the equation:

\[ t = \left(F(t)^{\frac{1-1.667}{2}} - 1\right)\cdot \text{constant} \]

Rearranging this, we find that the equation for the gas remaining in the containers is:

\[ F(t) = \left(\frac{t}{\text{constant}} + 1\right)^{\frac{2}{1-1.667}} \]

But we will need the equation for the gas being released. We will also want to compensate for the fact that the total gas volume (2,750 cubic feet) is greater than the balloon’s volume (2,613 cubic feet).

[Equation: fraction of the balloon filled as a function of time]

This gives us an equation for how full the balloon is.

Next we derive the area in terms of time:

[Equation: cross-sectional area of the balloon as a function of time]

From this we can get the drag force in terms of time and velocity at the initial point of the drop. We will assume that the partially inflated balloon along with the containers has a drag coefficient of 0.2.

[Equation: drag force as a function of time and velocity]

We will also need to determine the buoyancy as a function of time. This is just a matter of finding how much air has been displaced by the partially inflated balloon. We take the ratio of the final mass and the initial mass and multiply it by gravity and the percentage of the balloon filled:

\[ a_{\text{buoyancy}}(t) = \frac{m_{\text{final}}}{m_{\text{initial}}}\, g \times \big(\text{fraction of balloon filled at } t\big) \]

Finally, we return to our equation to solve for the velocity starting at the moment of the drop:

[NDSolve result: velocity from the moment of the jump]

So now we can see that the jumper accelerates until the balloon becomes large enough to stop acceleration, which happens after about 10 seconds.

[Plot: velocity and distance over the first 10 seconds of the jump]

From the above calculations, we find the person has traveled only about 298 meters, or 1,000 feet, and is now moving at 39.8 meters per second, 89 miles per hour. If the containers were dropped right now, the deceleration would be:

[Result: deceleration if all 11 containers were dropped at this point]

This is over 9 Gs, which is still dangerously high. Luckily the ground is still a long way away. If, for example, 8 of the containers were dropped, the deceleration would be:

[Result: deceleration with 8 of the 11 containers dropped]

Or a bit under 2 Gs. Once the speed was stabilized, the person could drop the rest of the containers to reach a safe speed.

So, after 10 seconds from the initial jump, 8 containers are dropped, with 3 remaining.

[Plot: velocity during the second stage of the descent]

At the end of another 10 seconds (20 seconds into the jump), the speed has been reduced to under 20 meters per second. Now the person can drop the rest.

[Result: peak deceleration after dropping the last 3 containers]

The jumper suffers a momentary deceleration of 2.2 Gs, then quickly slows to a safe velocity.

[Plot: velocity during the final stage of the descent]

Putting these stages together, we find a fast drop in speed at the beginning of each stage, but that after 22 seconds, the velocity has been reduced to a safe level.

[Plot: velocity across all stages, reaching a safe level after 22 seconds]

The total distance fallen is 598 meters, or only 1,962 feet.

Our jumper would now be floating several thousand feet in the air. The descent would gradually slow with drag, at least until the force of the wind exceeded the jumper’s inertia and blew the person across the sky. Now the jumper just has to figure out how to get down.

Download this post as a Computable Document Format (CDF) file.

08 Sep 07:34

Age of city buildings

by Nathan Yau
lilgator

Now waiting for someone to do this for many more cities.

[Map: Brooklyn buildings shaded by year of construction]

When we think about the age of cities, it's common to think of when it was founded or established. However, the growth of a city is often more organic, as buildings and homes spring up at different times and different areas. So when you map buildings by when they were built, you get a sense of that growth process. Thomas Rhiel did this for Brooklyn.

The borough's a patchwork of the old and new, but traces of its history aren’t spread evenly. There are 320,000-odd buildings in Brooklyn, and I’ve plotted and shaded each of them according to its year of construction. The result is a snapshot of Brooklyn’s evolution, revealing how development has rippled across certain neighborhoods while leaving some pockets unchanged for decades, even centuries.

Inspired by Rhiel's map, here's one of almost ten million buildings in the Netherlands:

[Map: buildings in the Netherlands shaded by age]

Now waiting for someone to do this for many more cities.

08 Sep 07:31

WTF visualizations

by Nathan Yau


There are a lot of poorly conceived graphics that make little sense or do the opposite of what they're supposed to do. You know what I'm talking about. We see them often. You can either (1) get upset and overreact a bit, or (2) laugh. The latter is more fun, and that is the premise of the new Tumblr WTF Visualizations. Enjoy.

17 Aug 09:30

Sauceress: 1956

by Dave
1956. "General Motors Technical Center, Warren, Michigan. Design Center interior with stair in background. Eero Saarinen, architect." Our second look at the reception disk and its pilot. Kodachrome by Balthazar Korab. View full size.
17 Aug 08:13

Listening to Zen-like Wikipedia edits

by Nathan Yau

[Screenshot: Listen to Wikipedia]

It's easy to think of online activity as a whirlwind of chatter and battles for the loudest voice, because, well, a lot of it is that. We saw it just recently with the burst of emojis and what happens in just one second online. But maybe that's because people tend to present the bits that way. Stephen LaPorte and Mahmoud Hashemi approached it differently in Listen to Wikipedia.

The project is an abstract visualization and sonification of the Wikipedia feed for recent changes, which includes additions, deletions, and new users. Bells, strings, and a rich tone represent those activities, respectively. Unlike other projects that try to hit you with an overwhelming feeling, Listen oddly provides a calm. I left the tab open in the background for half an hour.
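
For the curious, the same classification can be sketched in a few lines against Wikimedia's public EventStreams feed. The endpoint and field names below are my assumption about the current public API, not necessarily what LaPorte and Hashemi built on:

    # Sketch: tag Wikipedia's recent-changes stream the way Listen to
    # Wikipedia does (bells for additions, strings for deletions, a rich
    # tone for new users). Assumes Wikimedia's public EventStreams API.
    import json
    import requests

    URL = "https://stream.wikimedia.org/v2/stream/recentchange"

    with requests.get(URL, stream=True) as resp:
        for raw in resp.iter_lines():
            if not raw or not raw.startswith(b"data: "):
                continue  # skip SSE ids, comments, and keep-alives
            event = json.loads(raw[len(b"data: "):])
            if event.get("type") == "log" and event.get("log_type") == "newusers":
                print("rich tone: new user", event.get("user"))
            elif event.get("type") == "edit":
                length = event.get("length") or {}
                delta = (length.get("new") or 0) - (length.get("old") or 0)
                sound = "bell (addition)" if delta >= 0 else "string (deletion)"
                print(sound, event.get("title"), f"{delta:+d} bytes")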

Listen is open source.

11 Aug 08:26

Booze Is Back: 1934

by Dave
Washington, D.C., circa 1934. "Leon's Delicatessen, 1131 14th Street NW. Window display of whiskey." Courtesy of Leon Slavin (1893-1975), who, according to his obituary, "obtained the first off-sale retail liquor license in Washington after the repeal of Prohibition." 8x10 negative by Theodor Horydczak. View full size.
11 Aug 08:26

Ahead of the Curve: 1955

by Dave
March 30, 1955. "Fontainebleau Hotel, Miami Beach. Over pool to hotel. Morris Lapidus, client." The luxe hostelry's first "season" after its opening. Large-format acetate negative by Gottscho-Schleisner. View full size.
11 Aug 08:23

Cabanarama: 1955

by Dave
March 30, 1955. "Fontainebleau Hotel, Miami Beach. Roof view of pool, cabanas and garden. Morris Lapidus, architect." The valet will be happy to park your Cadillac. Large-format acetate negative by Samuel H. Gottscho. View full size.
11 Jul 09:15

Overtime

by Greg Ross

http://commons.wikimedia.org/wiki/File:Armand_Kohl22a.jpg

Suppose that, at a given moment, a certain number of people are engaged in the manufacture of pins. They make as many pins as the world needs, working (say) eight hours a day. Someone makes an invention by which the same number of men can make twice as many pins: pins are already so cheap that hardly any more will be bought at a lower price. In a sensible world, everybody concerned in the manufacturing of pins would take to working four hours instead of eight, and everything else would go on as before. But in the actual world this would be thought demoralizing. The men still work eight hours, there are too many pins, some employers go bankrupt, and half the men previously concerned in making pins are thrown out of work. There is, in the end, just as much leisure as on the other plan, but half the men are totally idle while half are still overworked. In this way, it is insured that the unavoidable leisure shall cause misery all round instead of being a universal source of happiness. Can anything more insane be imagined?

– Bertrand Russell, “In Praise of Idleness,” 1935

10 Jul 06:34

The heart of a computer is now the network connection

by Matt Cutts
lilgator

Yes, but "the cloud" is so damn intangible, elusive, undefined... it's just #insecurity at another level.

Back in the 90s, the heart of a computer was the CPU. The faster the CPU, the better the computer was–you could do more, and the speed of the CPU directly affected your productivity. People upgraded their computers or bought new ones whenever they could to take advantage of faster CPU speeds.

I remember the point when computers got “fast enough” though. Around 1997 or 1998, computers started hitting 166 MHz or 200 MHz and you could feel the returns diminishing. At some point, the heart of a computer switched from being a CPU to the hard drive. What mattered wasn’t the speed of your Intel or AMD chip, but the data that you had stored on your computer.

The era of the hard drive lasted for a decade or so. Now I think we’re shifting away from the hard drive to the network connection. Or at least the heart of a computer has shifted for me. In 2006 I contemplated a future where “documents sat in a magic Writely [note: now Google Docs] cloud where I could get to them from anywhere.” Sure enough, I keep all my important files in Google Docs now. At this point, if I have a file that sits only on a local hard drive, I get really nervous. I’ve had local hard drives fail. By 2008, I was spending 98% of my time in a web browser.

Don’t get me wrong. Local hard drives are great for caching things. Plus sometimes you want to run apps locally. But for most people, the heart of a computer will soon be its network connection. Ask yourself: could you get by with a minimal hard drive? Sure. Plenty of people store their files on Dropbox, Box, Google Drive, iCloud, or SkyDrive. Or they back up their data with CrashPlan, SpiderOak, Carbonite, or Mozy. But would you want a computer that couldn’t browse the web, do email, or watch YouTube videos? Not likely.

10 Jul 06:26

Global migration and debt

by Nathan Yau

[Screenshot: GED Viz explorer]

Global Economic Dynamics, by the Bertelsmann Foundation in collaboration with 9elements, Raureif, and Boris Müller, provides an explorer that shows country relationships through migration and debt. Inspired by a New York Times graphic from a few years ago, which was a static look at debt, the GED interactive allows you to select among 46 countries and browse data from 2000 through 2010.

Each outer bar represents a country, and each connecting line indicates either migration between two countries or bank claims, depending on which you choose to look at. You can also select several country indicators, which are represented with bubbles. (The image above shows GDP.) That part of the visualization is tough to read with multiple indicators and countries, though.

The strength of the visualization is in the connections and the ability to browse the data by year. The transitions are smooth so that it's easy to follow along through time. The same goes for when you select and deselect countries.

30 Jun 06:44

House of Glass: 1939

by Dave
June 12, 1939. "House of Glass No. 4, New York World's Fair. Master bath. Landefeld & Hatch, architect." You know what they say about people in glass bathrooms. Large-format negative by Samuel H. Gottscho. View full size.
30 Jun 06:39

Voyager 1 Discovers Bizarre and Baffling Region at Edge of Solar System

by Adam Mann
Not content with simply being the man-made object that has traveled farthest from Earth, NASA’s Voyager 1 spacecraft recently entered a bizarre new region at the solar system’s edge that has physicists baffled.

30 Jun 06:37

A Eulogy For AltaVista, The Google Of Its Time

by Danny Sullivan
lilgator

#ancienthistory, I was there

Goodbye AltaVista. You deserved better than this. Better than the one-sentence send-off Yahoo gave you today, when announcing your July 8 closure date. But then again, you always were the bright child neglected by your parents.

The Amazing AltaVista

You appeared on the search engine scene in December 1995. You made us go “woah” when you arrived. You did that by indexing around 20 million web pages, at a time when indexing 2 million web pages was considered to be big.

Today, of course, pages get indexed in the billions, the tens of billions or more. But in 1995, 20 million was huge. Existing search engines like Lycos, Excite & InfoSeek (to name only a few) didn’t quite know what hit them. With so many pages, you seemed to find stuff they and others didn’t.

As a result, you were a darling of reviews and word-of-mouth praise. You grew in popularity. In fact, I’d say you were the Google of your time, but it would be more accurate to say Google was the AltaVista of its time. That’s because Google didn’t even exist when you were ascendant. That’s also because you helped pave some of the way for Google.

It was a brief ascendency, however. You were headed upward, but your parent, Digital Equipment, didn’t quite know what to do with you. You started out as an experiment, and then got used as a poster child for Digital to prove why companies should buy super-computers.

Never Nurtured

Then Digital got bought by Compaq in January 1998. You finally got a parent that at least, later that year, would buy you the domain name of altavista.com, saving us from typing in www.altavista.digital.com (yes, kids, really) to reach you.

But the next year, you were sold off to CMGI, which put you down the portal path that so many other search engines had gone down, since search was seen as a loss leader. There would be an IPO! You’d finally have the success you deserved!

Alas, next came the dotcom crash. The IPO was cancelled in January 2001. Layoffs. I remember visiting your offices around the time and finding them empty, so empty that some employee had put a skeleton in a chair, at one of the many darkened workstations.

You hung in there, long enough for Overture to buy you in 2003. Then Yahoo bought Overture later that year, and really, you were done. You became part of Yahoo, and your search technology became part of the in-house search technology that Yahoo built. But as a brand, your glory days were finally over.

The Google-AltaVista X

You were loved. You really were. People did not want to leave you. But despite adding new features, some of which Google copied, you couldn’t keep up with the pace and innovation of that company, which decided against becoming a portal like your corporate masters ordered for you.

People who wanted search, who came to you for it, eventually went over to Google. It’s what I termed at the time the “Google-AltaVista X,” which looked like this:

[Chart: the Google-AltaVista X, comScore audience reach]

The ratings we had at the time were fairly rudimentary, but these figures from comScore showed the percentage of people in the US reported to be going to a particular search engine at least once in a given month. You were climbing, then Google came along and the serious searchers started flocking toward it.

“I Used To Use AltaVista, But Now I Use Google.”

As I said, they didn’t want to go. When I would ask people at the time what search engine they used, it was extremely common that they’d preface the answer by referencing having used you in the past. “I used to use AltaVista, but now I use Google.” I heard that over and over. It was like talking to someone who had broken up with a partner they loved but ultimately had to leave. “I used to be with this person, but now I’m with someone else.” There was real regret.

Google didn’t stop in its ascendency, of course. Having bypassed you, it went on to bypass the portals that you never beat. Indeed, it grew so successful that an entire new generation of searchers seemed to have no idea there was anything other than Google to search with. They used Google’s very name as a synonym for searching. They “googled” for things.

Given the right parent, perhaps you might have hired Larry Page and Sergey Brin when the Google cofounders came calling. Perhaps if Yahoo or Microsoft had understood the desire for better search that the demand for Google was showing, either of them would have purchased you early on and allowed you to thrive.

You Deserved A Better Send-Off

You deserved better — and better than this eulogy, too. I should go on and on explaining how innovative and groundbreaking you were, for your time. I’m sorry for that, AltaVista. I’ll beg a little forgiveness that I’m on a plane, and it’s not the best place to be writing.

For those reading, and wanting more, I highly recommend John Battelle’s “The Search.” It’s an outstanding history of the early days of search, and how Google rose during that time, but it covers the other players as well. Get it. If you want to continue what I consider to be the “Search Trilogy,” get Ken Auletta’s “Googled” and Steven Levy’s “In The Plex.” Both pick up where John leaves off; all three are excellent.

As for Yahoo’s send-off in announcing your death today — “Please visit Yahoo! Search for all of your searching needs” — that’s just shameful. It really is. Yes, it was time for you to be retired. But you deserved your own post, not having your closure mixed in among the many other products being axed.

You deserved from Yahoo, itself one of the old-time brands of the web, to have more attention paid to your role.

Rest in peace, AltaVista.

AltaVista, May 1997

AltaVista, born December 1995, pictured as of May 1997

AltaVista 2013

AltaVista from June 2013, shortly before its death.

Postscript: And now AltaVista is officially gone. See our follow-up story, AltaVista Officially Closes — What Will Pawnee Do!

24 Jun 14:37

Duck Duck Go’s Post-PRISM Growth Actually Proves No One Cares About “Private” Search

by Danny Sullivan


Look out, Google! Duck Duck Go is on the rise, posting a 50% traffic increase in just eight days. Is this proof people want a “private” search engine, in the wake of allegations the PRISM program allows the US government to read search data with unfettered access? Nope. Google has little to worry about. People don’t care about search privacy, and Duck Duck Go’s growth demonstrates this.

Don’t get me wrong. If you ask people about search privacy, they’ll respond that it’s a major issue. Big majorities say they don’t want to be tracked or receive personalized results. But if you look at what people actually do, virtually none of them make efforts to have more private search.

Duck Duck Go’s growth is an excellent case study to prove this. Despite it growing, it’s not grown anywhere near the amount to reflect any substantial or even mildly notable switching by the searching public.

Duck Duck Go’s Growth, In Perspective

Duck Duck Go maintains a traffic page where anyone can see how it has grown, and in the last few days, it’s been dramatic:

[Chart: Duck Duck Go’s daily search traffic]

Using that data, here’s Duck Duck Go’s traffic versus Google before the PRISM news came out:

[Chart: Duck Duck Go vs. Google, searches per month, pre-PRISM]

That uses the 2 million searches per day that Duck Duck Go was at just before the PRISM news broke on June 6. Actually, Duck Duck Go had come close to, but never actually reached, 2 million searches per day before PRISM; it first crossed that mark four days after the news came out. But it’s close enough for the purposes of this article. Duck Duck Go was at 2 million searches per day, or 60 million searches per month. That compares to 13,317 million searches per month — 13.3 billion — for Google.

I’ll explain more about those Google figures in a bit. But next, here’s the post-PRISM change: 11 days after the PRISM news broke, with even more revelations of US National Security Agency spying, Duck Duck Go cracked the 3 million searches per day mark, putting it on course for 90 million searches per month. How does all that new growth compare to Google?

[Chart: Duck Duck Go vs. Google, searches per month, post-PRISM]

In comparison to Google, Duck Duck Go’s growth might as well not even count. It’s nowhere near close. It’s not close to Bing or Yahoo, either. At 90 million searches per month, Duck Duck Go still needs to triple that figure to reach the search traffic of AOL, 266 million per month, according to comScore.
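
To put the comparison in one number: at its new run rate, Duck Duck Go handles

\[ \frac{90\ \text{million}}{13{,}317\ \text{million}} \approx 0.7\% \]

of Google’s monthly volume, and that is a back-of-envelope figure that compares worldwide Duck Duck Go traffic to US-only Google traffic.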

That’s also not counting any worldwide traffic AOL has. Similarly, that 13 billion figure that Google handles is only for searches in the United States, whereas Duck Duck Go’s data is for worldwide traffic. And while the Google traffic is for May 2013, and so potentially doesn’t reflect any post-PRISM loss, it’s pretty clear from Duck Duck Go’s figures that hundreds of millions of people haven’t left Google for it. Tens of millions haven’t. Maybe, at best, one million have.

People Don’t Actually Seek Out Private Search

Over the past few weeks, I’ve done several press interviews about Duck Duck Go, where the issue of whether it can beat Google by being more “private” has come up. My answer has consistently been “no,” because that’s been the experience of search engines before that have tried this.

I can imagine some on Reddit or Hacker News or elsewhere arguing about how this time, it’s different. This time, with all the NSA allegations, privacy is front and center. This is the right time for a private search engine to emerge.

I doubt it. Having covered the search engine space for 17 years now, and having seen the privacy flare-ups come and go, I’d be very surprised if this time it somehow causes more change than in the past.

Past Privacy Moves

Here’s a good example. Back in 2007, Google decided it would start to anonymize its search data. After 18 to 24 months, Google said, it would break the connections between what was searched for and particular IP addresses, to increase privacy. It wasn’t forced to do this. It wasn’t a reaction because some government body sent it a letter. Google itself decided that was a good, voluntary move to help increase privacy.
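
To be clear about what “anonymize” means here: severing the stored link between a query and the IP address it came from. The article doesn’t detail Google’s exact procedure, but a minimal Python sketch of one common scheme, zeroing the final octet of a logged IPv4 address, looks like this:

def anonymize_ip(ip):
    # Zero the last octet so the address maps to a block of 256 hosts,
    # not a single machine. One common scheme; not necessarily Google's.
    octets = ip.split(".")
    octets[-1] = "0"
    return ".".join(octets)

print(anonymize_ip("203.0.113.42"))  # hypothetical log entry -> 203.0.113.0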

In response to this, the European Union decided that Google voluntarily cutting its search data retention policy from forever to two years wasn’t enough. Its main privacy body jumped in, demanding that data be retained for even less time, without understanding that other EU regulations prevented this.

That fracas kicked off an industry competition to be more private. Sensing a weakness where it might win against Google, Microsoft declared it would anonymize data after 18 months. Yahoo said it would cut retention to only 90 days. Ask launched Ask Eraser, promising instant privacy, for those who wanted it.

All this happened in an environment where there was much media focus on search privacy. How’d it work out? None of the Google competitors trying to win with a “private” feature made any impact on Google’s share. Yahoo rolled back and started keeping data for up to 18 months. Even Startpage, based out of Europe and with a long-time focus on promising private searching, found its efforts in 2009 didn’t pay off. It took until now for Startpage to get to 3 million searches per day.

The Privacy Google Already Provides

That was then, this is now? Microsoft has been spending millions on its “Scroogled” attacks on Google since late last year, viewing privacy as “Google’s kryptonite.” So far, that kryptonite has had no measurable impact on the Search Engine Of Steel.

Sure, there’s always the chance that this time, it’s different. That this time, people will decide that search privacy is so important that they do abandon Google, Bing and Yahoo for tiny, virtually unknown search engines that haven’t been named in allegations that they somehow provide easy, direct and unfettered access to search data — allegations the major players have all denied.

Maybe Duck Duck Go and Startpage will be seen as somehow “safer” options by the masses, even though that also means people have to trust that the NSA isn’t somehow breaking the encryption that Duck Duck Go and Startpage use — something that Google also uses.

Google is protecting privacy? Yes. Since October 2011, Google has moved to encrypt more and more of the searches that happen on its site, even if the searchers themselves haven’t thought about this or requested it. Millions more have been protected by this move than have ever used Duck Duck Go or Startpage.

In fact, when Duck Duck Go claims on its Don’t Track Us privacy site that when people search on Google, “your search term is usually sent to that site,” there’s an excellent chance now that this is usually not the case at all. Publishers who have seen the rise of “Dark Google” and “not provided” in their analytics know that Google’s encryption has kept much data from flowing out.
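
The mechanism behind that claim is the HTTP Referer header: click a result on an unencrypted search page and the browser sends the destination site the full results-page URL, query string and all. A small Python sketch of what a site’s analytics could once extract (the URLs are illustrative, not captured data):

from urllib.parse import urlparse, parse_qs

def search_term_from_referrer(referrer):
    # Pull the "q" parameter out of a search-engine referrer URL, if present.
    params = parse_qs(urlparse(referrer).query)
    return params.get("q", [None])[0]

# Unencrypted search leaked the query to the clicked site:
print(search_term_from_referrer("http://www.google.com/search?q=some+private+query"))
# Encrypted search omits it, leaving analytics with "not provided":
print(search_term_from_referrer("https://www.google.com/"))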

Google, of course, does retain the data internally, unless people switch off search history or purge it from time to time. That means Google can use it in various ways, including ways that help improve searches. That also means it’s available if Google is served with a legal request to deliver it. It also means, if you want to believe the PRISM allegations, that the NSA has a direct line into everything happening at Google. Again, that’s something the company has continued to deny.

As For Duck Duck Go

Don’t get me wrong about Duck Duck Go. I love that there’s a plucky little competitor out there like it, just like I’m happy to have Blekko out there, which actually does more “heavy lifting” in search by indexing the web rather than relying on the search results from others.

Duck Duck Go, which got funding at the end of last year from Union Square Ventures, has done an outstanding job of punching above its weight in attracting press attention. It has also rightfully helped focus attention on privacy issues that people should be aware of, so they can make informed decisions. That type of pressure can help improve the major players by compelling change.

But being a darling of media stories has only translated into one search engine that I know of becoming a real giant. That was Google. Google got there in large part because of a serious investment in core search infrastructure.

Duck Duck Go is relying largely on results from other search engines, with a smart algorithm to sort through those answers and a privacy pitch, to go up against a company that actually harvests information directly and can literally have a conversation with you, because it understands more than matching word patterns.

AOL Search might be endangered by Duck Duck Go, and that’s an impressive achievement. And Duck Duck Go, with limited staff and expenses, might make it as a profitable business. But I just don’t see it as a serious threat to Google, even with the current privacy climate. Search privacy as a selling point hasn’t worked before; I’d be surprised if it works now.

22 Jun 14:05

IBM Noir: 1962

by Dave
Circa 1962. "International Business Machines Corp., Thomas J. Watson Research Center, Yorktown Heights, New York, 1956-61. Exterior. Eero Saarinen, architect." Large format negative by Balthazar Korab.
09 Jun 05:48

Sunset on the British Empire

lilgator

and then there's continental drift...

Sunset on the British Empire

When (if ever) did the Sun finally set on the British Empire?

—Kurt Amundson

It hasn't. Yet. But only because of a few dozen people living in an area smaller than Disney World.

The world's largest empire

The British Empire spanned the globe. This led to the saying that the Sun never set on it, since it was always daytime somewhere in the Empire.

It's hard to figure out exactly when this long daylight began.  The whole process of claiming a colony (on land already occupied by other people) is awfully arbitrary in the first place. Essentially, the British built their empire by sailing around and sticking flags on random beaches.[1] This makes it hard to decide when a particular spot in a country was "officially" added to the Empire.

The exact day when the Sun stopped setting on the Empire was probably sometime in the late 1700s or early 1800s, when the first Australian territories were added.[2]

The Empire largely disintegrated in the early 20th century, but—surprisingly—the Sun hasn't technically started setting on it again.

Fourteen territories

Britain has fourteen overseas territories, the direct remnants of the British Empire.[3]

(Many newly-independent British colonies joined the Commonwealth of Nations. Some of them, like Canada and Australia, have Queen Elizabeth as their official monarch. But they are independent states which happen to have the same queen; they are not part of any empire that they know of.)

The Sun never sets on all fourteen British territories at once (or even thirteen, if you don’t count the British Antarctic Territory). However, if the UK loses one tiny territory, it will experience its first Empire-wide sunset in over two centuries.

Every night, around midnight GMT, the Sun sets on the Cayman Islands, and doesn't rise over the British Indian Ocean Territory until after 1:00 AM. For that hour, the little Pitcairn Islands in the South Pacific are the only British territory in the Sun.
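
That one-territory hour is easy to sanity-check with a rough solar-position calculation. The Python sketch below uses a standard approximation for solar declination and hour angle; the coordinates are approximate, and accuracy is only a degree or so:

import math
from datetime import datetime, timezone

def sun_is_up(lat_deg, lon_deg, when_utc):
    # Approximate solar declination for the day of the year (radians).
    day = when_utc.timetuple().tm_yday
    decl = math.radians(-23.44 * math.cos(math.radians(360 / 365 * (day + 10))))
    # Local solar time in hours; east longitude is positive.
    solar_time = when_utc.hour + when_utc.minute / 60 + lon_deg / 15
    hour_angle = math.radians(15 * (solar_time - 12))
    lat = math.radians(lat_deg)
    # Sine of the solar elevation above the horizon.
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return sin_elev > 0

when = datetime(2013, 6, 21, 0, 30, tzinfo=timezone.utc)  # 00:30 GMT
for name, lat, lon in [("Cayman Islands", 19.3, -81.4),
                       ("British Indian Ocean Territory", -7.3, 72.4),
                       ("Pitcairn Islands", -25.07, -130.1)]:
    print(name, "-> sun up" if sun_is_up(lat, lon, when) else "-> sun down")

Run it and only the Pitcairn Islands report the Sun above the horizon at that moment.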

The Pitcairn Islands have a population of a few dozen people, the descendants of the mutineers from the HMS Bounty. The islands became notorious in 2004 when a third of the adult male population, including the mayor, were convicted of child sexual abuse.[4][5]

As awful as the islands may be, they remain part of the British Empire, and unless they're kicked out, the two-century-long British daylight will continue.

Will it last forever?

Well, maybe.

Four hundred years from now, in April of 2432, the island will experience its first total solar eclipse since the mutineers arrived.[6]

Luckily for the Empire, the eclipse happens at a time when the Sun is over the Cayman Islands in the Caribbean. Those areas won't see a total eclipse; the Sun will even still be shining in London.

In fact, no total eclipse for the next thousand years will pass over the Pitcairn Islands at the right time of day to end the streak. If the UK keeps its current territories and borders, it can stretch out the daylight for a long, long time.

But not forever. Eventually—many millennia in the future—an eclipse will come for the island, and the Sun will finally set on the British Empire.

30 May 14:00

House on Fire: 1936

by Dave
lilgator

Maybe fires have accelerated in the past decades, but I don't remember seeing so many rescued things these days...

November 1936. "Residence on fire in Aledo, Illinois." Now where'd that bucket go? Photo by Russell Lee for the Resettlement Administration.
26 May 07:48

Big Government.

by Andy in Germany

Our state government has decided it wants to investigate sustainability and tell us all about how to have a small carbon footprint.


The best way to do this is with a big truck, so we can see they are really, really serious about sustainability. As long as it doesn’t mean changing anything.

Remember: Infinite growth is possible with finite resources. We will discover a cheap recoverable energy source to replace oil. Technology will save us.



25 May 16:08

Training Baggage Screeners

by schneier
lilgator

"results suggest firm limits on human rationality", but be extra careful when calling for robots instead

The research in G. Giguère and B.C. Love, "Limits in decision making arise from limits in memory retrieval," Proceedings of the National Academy of Sciences v. 110, no. 19 (2013), has applications in training airport baggage screeners.

Abstract: Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.
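
The paper's core claim, that limited stochastic memory retrieval caps accuracy unless training data is idealized, is easy to reproduce in a toy simulation. The Python sketch below is my own construction, not the paper's code: a decision maker retrieves a few stored exemplars and votes, after training on either the actual noisy labels or an idealized, majority-label-only version of the same task.

import random

random.seed(0)

def make_training(idealized, n=1000):
    # Stimuli 1..10; the probability the outcome is category "A" rises with x.
    items = []
    for _ in range(n):
        x = random.randint(1, 10)
        p_a = x / 11
        if idealized:
            label = "A" if p_a > 0.5 else "B"              # majority label only
        else:
            label = "A" if random.random() < p_a else "B"  # actual, noisy
        items.append((x, label))
    return items

def decide(x, memory, k=3):
    # Stochastically retrieve k stored items matching the stimulus and vote.
    matches = [label for (xi, label) in memory if xi == x]
    sample = random.choices(matches, k=k)
    return max(set(sample), key=sample.count)

def accuracy(memory, trials=20000):
    correct = 0
    for _ in range(trials):
        x = random.randint(1, 10)
        best = "A" if x / 11 > 0.5 else "B"  # the optimal choice
        correct += decide(x, memory) == best
    return correct / trials

print("trained on actual distribution:   ", accuracy(make_training(False)))
print("trained on idealized distribution:", accuracy(make_training(True)))

With these parameters the idealized learner is essentially perfect, while the one trained on the actual distribution hovers around 80% no matter how large its memory: exactly the bottleneck the abstract describes.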
19 May 09:26

What Half a Second of High-Frequency Stock Trading Looks Like

The movie shown below, developed by Nanex, a real-time trading software developer, shows the stock trading activity in Johnson & Johnson (JNJ) as it occurred during one particular half second on May 2, 2013.

Each colored box represents one exchange. The white box at the bottom of the screen shows the National Best Bid/Offer, which often changes drastically within a fraction of a second. The moving shapes represent quote changes resulting from a change to the top of the book at each exchange. The clock at the bottom of the screen shows Eastern Time in HH:MM:SS:mmm format, slowed down so that you can observe what goes on at the millisecond level (1/1000th of a second).
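
For reference, the NBBO is simply the highest bid and lowest ask across every exchange's top-of-book quotes. A minimal Python sketch with made-up numbers:

# Hypothetical top-of-book quotes for one symbol: exchange -> (bid, ask).
quotes = {
    "NYSE":   (85.50, 85.53),
    "NASDAQ": (85.51, 85.54),
    "BATS":   (85.49, 85.52),
    "EDGX":   (85.50, 85.55),
}

def nbbo(quotes):
    # National Best Bid/Offer: highest bid, lowest ask across all exchanges.
    best_bid = max(bid for bid, _ in quotes.values())
    best_ask = min(ask for _, ask in quotes.values())
    return best_bid, best_ask

bid, ask = nbbo(quotes)
print(f"NBBO: {bid:.2f} x {ask:.2f}")  # a single quote update can move this

A single quote update on any one exchange can shift this value, which is why the white box in the movie flickers so quickly.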

In the movie, one can observe how high-frequency traders (HFTs) jam thousands of quotes through at the millisecond level, and how every exchange must process every quote from the others to maintain trade-through price protection. This complex web of technology must run flawlessly every millisecond of the trading day, or arbitrage (HFT profit) opportunities will appear. It is easy for HFTs to cause delays in one or more of the connections between exchanges, and whenever a connection is not running perfectly, HFTs tend to profit from the price discrepancies that result.

More detailed information about this project can be found here. Via Huffington Post.

12 May 09:20

Dear the Oatmeal, I see your Mantis Shrimp post, and I raise you my favorite animal…

by DOGHOUSE DIARIES


The Oatmeal’s post is here. Pretty darn good too. You have to watch this NOVA program on Cuttlefish, if just to see the ‘Broadclub’ Cuttlefish hypnosis strobe effect.