Shared posts

02 Oct 08:50

George Carlin The Tank Engine

by jwz

Apparently George Carlin narrated Thomas the Tank Engine -- this is a true thing that actually happened -- and someone has fixed it.

20 Sep 07:55

I simulated California housing and learned... about simulators


felt this for real

Here's a post I've been thinking about for a while. I shared it privately with a few people last year, but wanted to find a way to present it that wouldn't be wildly misconstrued. No luck so far.

Summary: Inspired by some conversations at work, I made a variant of my previously popular SimSWE (software engineer simulator) that has our wily engineers trying to buy houses and commute to work. The real estate marketplace is modeled based on Silicon Valley, a region where I don't live but my friends do, and which I define as the low-density Northern Californian suburban area including Sunnyvale, Mountain View, Cupertino, etc, but excluding San Francisco. (San Francisco is obviously relevant, since lots of engineers live in SF and commute to Silicon Valley, or vice versa, but it's a bit different so I left it out for simplicity.)

Even more than with the earlier versions of SimSWE (where I knew the mechanics in advance and just wanted to visualize them in a cool way), I learned a lot by making this simulation. As with all software projects, the tangible output was what I expected, because I kept debugging it until I got what I expected, and then I stopped. But there were more than the usual surprises along the way.

Is simulation "real science"?

Let's be clear: maybe simulation is real science sometimes, but... not when I do it.

Some of my friends were really into Zen and the Art of Motorcycle Maintenance back in high school. The book gets less profound as I get older, but one of my favourite parts is its commentary on the scientific method:

    A man conducting a gee-whiz science show with fifty thousand dollars’ worth of Frankenstein equipment is not doing anything scientific if he knows beforehand what the results of his efforts are going to be. A motorcycle mechanic, on the other hand, who honks the horn to see if the battery works is informally conducting a true scientific experiment. He is testing a hypothesis by putting the question to nature. [...]

    The formation of hypotheses is the most mysterious of all the categories of scientific method. Where they come from, no one knows. A person is sitting somewhere, minding his own business, and suddenly - flash! - he understands something he didn’t understand before. Until it’s tested the hypothesis isn’t truth. For the tests aren’t its source. Its source is somewhere else.

Here's the process I followed. I started by observing that prices in Silicon Valley are unexpectedly high, considering how much it sucks to live there, and rising quickly. (Maybe you like the weather in California and are willing to pay a premium; but if so, that premium has been rising surprisingly quickly over the last 15 years or so, even as the weather stays mostly the same.)

Then I said, I have a hypothesis about those high prices: I think they're caused by price inelasticity. Specifically, I think software engineers can make so much more money living in California, compared to anywhere else, that it would be rational to move there and dramatically overpay for housing. The increase in revenue will exceed the increase in costs.

I also hypothesized that there's a discontinuity in the market: unlike, say, New York City, where prices are high but tend to gently fluctuate, prices in Silicon Valley historically seem to have two states: spiking (eg. dotcom bubble and today's strong market) or collapsing (eg. dotcom crash).

Then I tried to generate a simulator that would demonstrate those effects.

This is cheating: I didn't make a simulator from first principles to see what would happen. Instead, I made a series of buggy simulators and discarded all the ones that didn't show the behaviour I was looking for. That's not science. It looks similar. It probably has a lot in common with p-hacking. But I do think it's useful, if you use the results wisely.

If it's not science, then what is it?

It's part of science. This approach is a method for improved hypothesis formulation - the "most mysterious" process described in the quote above.

I started with "I think there's a discontinuity," which is too vague. Now that I made a simulator, my hypothesis is "there's a discontinuity at the point where demand exceeds supply, and the market pricing patterns should look something like this..." which is much more appropriate for real-life testing. Maybe this is something like theoretical physics versus experimental physics, where you spend some time trying to fit a formula to data you have, and some time trying to design experiments to get specific new data to see if you guessed right. Except worse, because I didn't use real data or do experiments.

Real science in this area, by the way, does get done. Here's a paper that simulated a particular 2008 housing market (not California) and compared it to the actual market data. Cool! But it doesn't help us explain what's going on in Silicon Valley.

The simulator

Okay, with all those disclaimers out of the way, let's talk about what I did. You can find the source code here, if you're into that sort of thing, but I don't really recommend it, because you'll probably find bugs. Since it's impossible for this simulation to be correct in the first place, finding bugs is rather pointless.

Anyway. Imagine a 2-dimensional region with a set of SWEs (software engineers), corporate employers, and various homes, all scattered around randomly.

Luckily, we're simulating suburban Northern California, so there's no public transit to speak of, traffic congestion is uniformly bad, and because of zoning restrictions, essentially no new housing ever gets built. Even the 2-dimensional assumption is accurate, because all the buildings are short and flat. So we can just set all those elements at boot time and leave them static.
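Setting up that static world takes only a few lines. Here's a minimal sketch (all names, sizes, and the uniform scattering are my assumptions, not the actual source):

```python
import random

random.seed(42)  # reproducible world

REGION = 10.0  # arbitrary units; the world is a REGION x REGION square


def make_world(n_homes, n_employers):
    """Scatter homes and employers uniformly at random.

    Because housing stock, roads, and zoning are all frozen in this
    region, neither list ever changes after boot.
    """
    homes = [(random.uniform(0, REGION), random.uniform(0, REGION))
             for _ in range(n_homes)]
    employers = [(random.uniform(0, REGION), random.uniform(0, REGION))
                 for _ in range(n_employers)]
    return homes, employers


homes, employers = make_world(n_homes=500, n_employers=20)
```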

What does change is the number of people working, the amount companies are willing to pay them, the relative sizes of different companies, and exactly which company employs a given SWE at a given time. Over a period of years, this causes gravity to shift around in the region; if an engineer buys a home to be near Silicon Graphics (RIP), their commute might get worse when they jump over to Facebook, and they may or may not decide it's time to move homes.

So we have an array of autonomous agents, their income, their employer (which has a location), their commute cost, their property value (and accumulated net worth), and their property tax.
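That per-agent state might look like the following sketch (field names and the straight-line commute metric are mine, not the actual source's):

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class SWE:
    income: float                     # annual salary at current employer
    employer_xy: Tuple[float, float]  # employer location
    home_xy: Optional[Tuple[float, float]] = None  # None until they buy
    property_value: float = 0.0       # current market value of their home
    tax_basis: float = 0.0            # assessed value, frozen at purchase
    net_worth: float = 0.0            # savings plus home equity

    def commute_cost(self) -> float:
        """Commute pain scales with straight-line distance.

        With no transit and uniformly bad congestion, Euclidean
        distance is as good a proxy as any.
        """
        if self.home_xy is None:
            return 0.0
        dx = self.home_xy[0] - self.employer_xy[0]
        dy = self.home_xy[1] - self.employer_xy[1]
        return (dx * dx + dy * dy) ** 0.5


alice = SWE(income=200_000.0, employer_xy=(0.0, 0.0), home_xy=(3.0, 4.0))
```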

(I also simulated the idiotic California "property tax payments don't change until property changes owners" behaviour. That has some effect, mainly to discourage people from exchanging equally-priced homes to move a bit closer to work, because they don't want to pay higher taxes. As a result, the market-distorting law ironically serves to increase commute times, thus also congestion, and make citizens less happy. Nice work, California.)
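The tax quirk is easy to model: the assessed value is set at purchase and never updated, so the annual bill depends on the old purchase price, not the current market price. A sketch (the rate and the dollar figures are assumptions for illustration):

```python
PROP_TAX_RATE = 0.01  # roughly the California base rate


def annual_tax(tax_basis: float) -> float:
    # Tax is charged on the frozen assessed value, not market value.
    return PROP_TAX_RATE * tax_basis


# Two identical homes, both worth $1.5M today. The long-time owner
# bought at $300k; a newcomer pays tax on the full current price.
old_owner_tax = annual_tax(300_000)    # 3,000/yr
new_owner_tax = annual_tax(1_500_000)  # 15,000/yr

# Swapping equally-priced homes resets the basis, so a lateral move
# to shorten a commute costs this much extra every year thereafter:
penalty = new_owner_tax - old_owner_tax
```

That recurring penalty is exactly the disincentive that keeps simulated agents in homes far from their jobs.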

The hardest part of the simulator was producing a working real estate bidding system that acted even halfway believably. My simulated SWEs are real jerks; they repeatedly exploited every flaw in my market clearing mechanics, leading to all kinds of completely unnatural looking results.

Perversely, the fact that the results in this version finally seem sensible gives me confidence that the current iteration of my bidding system is not totally wrong. A trained logician could likely prove that my increased confidence is precisely wrong, but I'm not a logician, I'm a human, and here we are today.

The results

Let's see that plot again, repeated from up above.

The x axis is time, let's say months since start. The top chart shows one dot for every home that gets sold on the open market during the month. The red line corresponds to the 1.0 crossover point of the Demand to Supply Ratio (DSR) - the number of people wanting a home vs the number of homes available.

The second plot shows DSR directly. That is, when DSR transitions from <1.0 to >1.0, we draw a vertical red line on all three plots. For clarity there's also a horizontal line at 1.0 on the second plot.
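Finding where that red line goes is simple once you have the DSR time series; a sketch (function name and toy data are mine):

```python
def dsr_crossovers(dsr_series):
    """Months where DSR crosses 1.0 from below: where the red line goes."""
    return [t for t in range(1, len(dsr_series))
            if dsr_series[t - 1] < 1.0 <= dsr_series[t]]


# Toy series: demand overtakes supply in month 3, then collapses later.
series = [0.8, 0.9, 0.95, 1.2, 1.5, 1.4, 0.9]
crossings = dsr_crossovers(series)  # [3]
```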

The third plot, liquidity, shows the number of simulated homes on the market (but not yet sold) at any given moment. "On the market" means someone has decided they're willing to sell, but the price is still being bid up, or nobody has made a good enough offer yet. (Like I said, this part of the simulator was really hard to get right. In the source it just looks like a few lines of code, but you should see how many lines of code had to die to produce those few. Pricing wise, it turns out to be quite essential that you (mostly) can't buy a house which isn't on the market, and that bidding doesn't always complete instantaneously.)

So, what's the deal with that transition at DSR=1.0?

To answer that question, we have to talk about the rational price to pay for a house. One flaw in this simulation is that our simulated agents are indeed rational: they will pay whatever it takes as long as they can still make net profit. Real people aren't like that. If a house sold for $500k last month, and you're asking $1.2 million today, they will often refuse to pay that price, just out of spite, even though the whole market has moved and there are no more $500k houses. (You could argue that it's rational to wait and see if the market drops back down. Okay, fine. I had enough trouble simulating the present. Simulating my agents' unrealistic opinions of what my simulator was going to do next seemed kinda unwieldy.)

Another convenient aspect of Silicon Valley is that almost all our agents are engineers, who are a) so numerous and b) so rich that they outnumber and overwhelm almost all other participants in the market. You can find lots of news articles about how service industry workers have insane commutes because they're completely priced out of our region of interest.

(Actually there are also a lot of long-term residents in the area who simply refuse to move out and, while complaining about the obnoxious techie infestation, now see their home as an amazing investment vehicle that keeps going up each year by economy-beating percentages. In our simulator, we can ignore these people because they're effectively not participating in the market.)

To make a long story short, our agents assume that if they can increase their income by X dollars by moving to Silicon Valley vs living elsewhere, then it is okay to pay mortgage costs up to R*X (where R is between 0 and 100%) in order to land that high-paying job. We then subtract some amount for the pain and suffering and lost work hours of the daily commute, proportionally to the length of the commute.
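That affordability rule can be sketched directly (the values of R and the commute penalty are hypothetical parameters, not the source's):

```python
R = 0.4  # fraction of the income gain an agent will spend on housing


def max_monthly_payment(x_income_gain_monthly, commute_dist,
                        commute_penalty_per_unit=200.0):
    """Rational ceiling on housing spend: R*X, minus commute pain.

    x_income_gain_monthly: extra monthly income from working in the valley
    commute_dist: distance from a candidate home to the agent's employer
    """
    ceiling = R * x_income_gain_monthly
    pain = commute_penalty_per_unit * commute_dist
    return max(0.0, ceiling - pain)


# A home next door to the office supports a higher bid than one far away.
near = max_monthly_payment(10_000, commute_dist=1.0)   # 3,800
far = max_monthly_payment(10_000, commute_dist=10.0)   # 2,000
```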

As a result of all this, housing near big employers is more expensive than housing farther away. Good.

But the bidding process depends on whether DSR is less than one (fewer SWEs than houses) or more than one (more SWEs than houses). When it's less than one, people bid based on, for lack of a better word, the "value" of the land and the home. People won't overpay for a home if they can buy another one down the street for less. So prices move, slowly and smoothly, as demand changes slowly and smoothly. There's also some random variation based on luck, like occasional employer-related events (layoffs, etc). Market liquidity is pretty high: there are homes on the market that are ready to buy, if someone will pay the right price. It's a buyer's market.

Now let's look at DSR > 1.0, when (inelastic) demand exceeds supply. Under those conditions, there are a lot of people who need to move in, as soon as possible, to start profiting from their huge wages. But they can't: there aren't enough homes. So they get desperate. Every month they don't have a house, they forfeit at least (1-R)*X in net worth, and that makes them very angry, so they move fast. Liquidity goes essentially to zero. People pay more than the asking price. Bidding wars. Don't stop and think before you make an offer: someone else will buy it first, at a premium. It's a seller's market.
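The two regimes can be caricatured in a tiny bidding rule (entirely my sketch; the actual clearing mechanics in the simulator are much messier):

```python
def bid(comparable_price: float, ceiling: float, dsr: float) -> float:
    """What a rational agent offers for a home it wants.

    comparable_price: what similar homes nearby recently sold for
    ceiling: the agent's R*X affordability limit
    dsr: demand-to-supply ratio for the whole market
    """
    if dsr > 1.0:
        # Seller's market: every month of waiting forfeits (1-R)*X,
        # so bid the ceiling immediately and beat the other buyers.
        return ceiling
    # Buyer's market: no reason to overpay when a similar home down
    # the street is cheaper; anchor on comparables instead.
    return min(comparable_price, ceiling)


tight = bid(comparable_price=800_000, ceiling=1_200_000, dsr=1.3)
slack = bid(comparable_price=800_000, ceiling=1_200_000, dsr=0.9)
```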

When this happens, prices settle at, basically, R*X. (Okay, R*X is the mortgage payment, so convert the annuity back to a selling price. The simulator also throws in some variably sized down payments depending on the net worth you've acquired through employment and previous real estate flipping. SWEs gonna SWE.)
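Converting the monthly ceiling R*X back into a selling price is just the present value of an annuity; a sketch with assumed mortgage terms:

```python
def price_from_payment(monthly_payment, annual_rate=0.04, years=30,
                       down_payment=0.0):
    """Present value of a fixed-rate mortgage annuity, plus cash down."""
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # number of payments
    principal = monthly_payment * (1 - (1 + r) ** -n) / r
    return principal + down_payment


# If everyone's ceiling is R*X = $4,000/month, prices peak near:
peak = price_from_payment(4_000)
```

With these terms the peak lands in the mid-$800k range; a larger down payment or a lower rate shifts it up, which is one reason the simulated prices still wobble around the R*X ceiling.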

Why R*X? Because in our simulator - which isn't too unlike reality - most of our engineers make roughly the same amount of income. I mean, we all know there's some variation, but it's not that much; certainly less than an order of magnitude, right? And while there are a few very overpaid and very underpaid people, the majority will be closer to the median income. (Note that this is quite different from other housing markets, where there are many kinds of jobs, the income distribution is much wider, and most people's price sensitivity is much greater.)

So as a simplification, we can assume R and X are the same for "all" our engineers. That means they simply cannot, no matter how much they try, pay more than R*X for a home. On the other hand, it is completely rational to pay all the way up to R*X. And demand exceeds supply. So if they don't pay R*X, someone else will, and prices peak at that level.

When DSR dips back below 1.0: liquidity goes up and prices go back down. Interestingly, the simulated prices drop a lot slower than they shot up in the first place. One reason is that most people are not as desperate to sell as they were to buy. On the other hand, the people who do decide to sell might have a popular location, so people who were forced to buy before - any home at any price - might still bid up that property to improve their commute. The result is increasing price variability as people sell off not-so-great locations in exchange for still-rare great locations.

What does all this mean?

First of all, unlike healthier markets (say, New York City) where an increase in demand translates to higher prices, and demand can increase or decrease smoothly, and you can improve a property to increase its resale price, Silicon Valley is special. It has these three unusual characteristics:

  1. Demand is strictly greater than supply
  2. Most buyers share a similar upper limit on how much they can pay
  3. Other than that limit, buyers are highly price insensitive

That means, for example, that improving your home is unlikely to increase its resale value. People are already paying as much as they can. Hence the phenomenon of run-down homes worth $1.5 million in ugly neighbourhoods with no services, no culture, and no public transit, where that money could buy you a huge mansion elsewhere, or a nice condo in an interesting neighbourhood of a big city.

It means raising engineer salary to match the higher cost of living ("cost of living adjustment") is pointless: it translates directly to higher housing prices (X goes up for everyone, so R*X goes up proportionally), which eats the benefit.

Of course, salaries do continue to rise in Silicon Valley, mostly due to continually increasing competition for employees - after all, there's no more housing so it's hard to import more of them - which is why we continue to see a rise in property values at all. But we should expect it to be proportional to wages and stock grants, not housing value or the demand/supply ratio.

In turn, this means that a slight increase in housing supply should have effectively no impact on housing prices. (This is unusual.) As long as demand exceeds supply, engineers will continue to max out the prices.

As a result though, the market price provides little indication of how much more supply is needed. If DSR > 1.0, this simulation suggests that prices will remain about flat (ignoring wage increases), regardless of changes in the housing supply. This makes it hard to decide how much housing to build. In a healthier market, you can see prices drop a bit (or rise slower) when new housing comes on the market, and extrapolate to see how much more housing is appropriate.

At this point we can assume "much more" housing is needed. But how much? Are we at DSR=2.5 or DSR=1.001? If the latter, a small amount of added housing could drop us down to DSR=0.999, and then the market dynamics would change discontinuously. According to the simulation - which, recall, we can't necessarily trust - the prices would drop slowly, but they would still drop, by a lot. It would pop the bubble. And unlike my simulation, where all the engineers are rational, popping the bubble could cause all kinds of market panic and adjacent effects, way beyond my area of expertise.

In turn, what this means is that the NIMBYs are not all crazy. If you try to improve your home, the neighbourhood, or the region, you will not improve the property values, so don't waste your money (or municipal funds); the property values are already at maximum. But if you build more housing, you run the risk of putting DSR below 1.0 and sending property values into free fall, as they return to "normal" "healthy" market conditions.

Of course, it would be best globally if we could get the market back to normal. Big tech companies could hire more people. Service industry workers could live closer to work, enjoy better lives, and be less grumpy. With more market liquidity, engineers could buy a home they want, closer to work, instead of just whatever was available. That means they could switch employers more easily. People would spend money to improve their property and their neighbourhood, thus improving the resale value and making life more enjoyable for themselves and the next buyer.

But global optimization isn't what individuals do. They do local optimization. And for NIMBYs, that popping bubble could be a legitimate personal financial disaster. Moreover, the NIMBYs are the people who get to vote on zoning, construction rules, and improvement projects. What do you think they'll vote for? As little housing as possible, obviously. It's just common sense.

I would love to be able to give advice on what to do. It's most certainly a housing bubble. All bubbles pop eventually. Ideally you want to pop the bubble gently. But what does that mean? I don't know; an asset that deteriorates to 30% of its current price, slowly, leaves the owner just as poor as if it happened fast. And I don't know if it's possible to hold prices up to, say, 70% instead of 30%, because of that pesky discontinuity at DSR=1.0. The prices are either hyperinflated, or they aren't, and there seems to be no middle.

Uh, assuming my simulator isn't broken.

That's my hypothesis.

19 Sep 04:41

Saturday Morning Breakfast Cereal - Modern Epic



It occurred to me after drawing this that it's basically a summary of The End of History.

05 Sep 23:58

Clamdy Canes, clam-flavored candy canes

by Rusty Blazenhoff

I know, I know, we shouldn't be talking about the holidays just yet. But, keep clam, Archie McPhee went ahead and released Clamdy Canes -- yes, candy canes that taste like clams -- and I couldn't resist sharing the news with you.
    ONE SHELL OF A CANDY: From the personified clam on the package to the clam taste, you’ll wonder how Christmas existed without Clamdy Canes. They’re a candy clamity! We all celebrate holidays in our own way and if your holiday tastes like the sea, this is for you. Add a little sand for extra clam realness. If anyone complains, just tell them to clam up. Each candy cane is 5-1/4" tall with gray and white stripes.

A box of six is available for $4.95.

01 Sep 02:27

Culture Shock in Silicon Valley

by Elisabeth Handler

Recently, City Hall hosted a group of Swiss college students participating in an “International Program Experience” – a six-week work/live immersion into the US tech world. IPE brings students and a professor to the US for six weeks, and in addition to learning about the local area, teams of students engage in pro bono work on research/development projects with local companies. This is part of the students’ third and final year studies at Lucerne School of Information Technology in Switzerland.

Two of this year’s students, Ursulina Kolbener and Matthias Perrolaz, posted their impressions of the difference between the US and European ways of doing things. The post is in German, so we will summarize some of their observations. We found this an interesting lesson in cross-cultural communication!

  • The gaps are enormous — America has homeless people, and also provides food in enormous portions, in restaurants and for sale in grocery stores.
  • Courtesy on American freeways allows merging cars and last-minute lane-changes, but also allows people to cancel business meetings at the last minute.
  • Americans are friendly when you meet, but might not remember you the next day.
  • Where the Swiss concentrate on details and delivering a perfect product with modesty, Americans lead with their strongest pitch and focus on the positives, not the challenges.

Their conclusion? The diversity of San Jose contrasted with their initial perception of Silicon Valley as solid nerd country. But in fact, the authors did see a unifying principle: Be the next unicorn!

We look forward to what their projects bring to the participating companies they will be working with – Twitter, Swisscom, Varian, Valora and some early-stage start-ups.

The post Culture Shock in Silicon Valley appeared first on City of San Jose.

30 Aug 15:53

Major Open Source Project Revokes Access to Companies That Work with ICE

by jwz
"Apologies to any contributors who aren't employees of Palantir, but to those who are, please find jobs elsewhere and stop helping Palantir do horrible things"

On Tuesday, the developers behind a widely used open source code-management software called Lerna modified the terms and conditions of its use to prohibit any organization that collaborates with ICE from using the software. Among the companies and organizations that were specifically banned were Palantir, Microsoft, Amazon, Northeastern University, Motorola, Dell, UPS, and Johns Hopkins University. [...]

"Recently, it has come to my attention that many of these companies which are being paid millions of dollars by ICE are also using some of the open source software that I helped build," Jamie Kyle, an open source developer and one of the lead programmers on the Lerna project, wrote in a statement. "It's not news to me that people can use open source for evil, that's part of the whole deal. But it's really hard for me to sit back and ignore what these companies are doing with my code." [...]

Before he changed the license, Kyle left a comment on Palantir's Github asking the company to stop using the software. "Apologies to any contributors who aren't employees of Palantir, but to those who are, please find jobs elsewhere and stop helping Palantir do horrible things," Kyle wrote last week, linking to an article in The Intercept about the company's collaboration with ICE. "Also, stop using my tools. I don't support you and I don't want my work to benefit your awful company." [...]

After Kyle discussed his concerns with some of the other lead developers on the Lerna project, they assented to a change to the Lerna license that would effectively bar any organization that collaborates with ICE from continuing to use the software. This led to some developers calling the change illegitimate and lamenting that it technically meant the project was no longer open source. [...]

"I've been around the block enough to know how every company affected is going to respond," Kyle told me. "They're not going to try and find a loophole. I kinda hope they do try to keep using my tools though -- I'm really excited about the idea of actually getting to take Microsoft, Palantir or Amazon to court."

As for the hate he has received online about how open source projects shouldn't be politicized, Kyle said this misses the point.

"I believe that all technology is political, especially open source," he told me. "I believe that the technology industry should have a code of ethics like science or medicine. Working with ICE in any capacity is accepting money in exchange for morality. I am under no obligation to have a rigid code of ethics allowing everyone to use my open source software when the people using it follow no such code of ethics."


29 Aug 21:13

Dunkin' Donuts comes to San Jose

by Joshua Santos

lol big things happening

The famous east-coast doughnut chain has finally made its way to San Jose. A former Arby's at 5519 Snell Ave has become the first Dunkin' Donuts in Silicon Valley. I'm sure transplants will appreciate the location, and they might nab a few new converts as well (sorry, my personal favorite is still Psycho Donuts).

This is the first of several new locations. Next up will be a second San Jose shop on Winchester Boulevard towards the end of the year or early 2019. Franchise owners are also scouting Milpitas, Sunnyvale, and other Silicon Valley cities for further expansion.

The San Jose Dunkin' Donuts is now open every day from 5am to 10pm.

Source: SVBJ

22 Aug 04:33

Prenda Lawyer Pleads Guilty in Pirate Bay Honeypot Case

by Ernesto

Over the past several years, so-called copyright trolls have been accused of various dubious schemes and actions, with one group as the frontrunner.

The now-defunct Prenda Law grabbed dozens of headlines, mostly surrounding negative court rulings over identity theft, misrepresentation and even deception.

Most controversial was the shocking revelation that Prenda uploaded their own torrents to The Pirate Bay, creating a honeypot for the people they later sued over pirated downloads.

The accusation was first published here on TorrentFreak. While some disregarded it as a wild conspiracy theory, the US Department of Justice took it rather seriously. These and other allegations ultimately resulted in a criminal indictment, which was filed in 2016.

The US Government accused two of the leading Prenda lawyers of various crimes, including money laundering, perjury, mail and wire fraud. This week one of the defendants, Paul Hansmeier, pleaded guilty to two of the counts.

Hansmeier signed a plea agreement admitting that he is guilty of conspiracy to commit mail fraud and wire fraud, as well as conspiracy to commit money laundering.

The plea agreement comes with a statement of facts which includes a description of the Pirate Bay honeypot scheme. In addition, it describes how Hansmeier and his colleague John Steele generated millions of dollars by threatening BitTorrent users who allegedly downloaded pirated porn videos.

“Beginning no later than in or about April 2011, HANSMEIER and Steele caused P.H. to upload their clients’ pornographic movies to BitTorrent file-sharing websites, including a website named the Pirate Bay in order to entice people to download the movies and make it easier to catch those who attempted to obtain the movies.

“As defendants knew, the BitTorrent websites to which they uploaded their clients’ movies were specifically designed to aid copyright infringement by allowing users to share files, including movies, without paying any fees to the copyright holders,” the agreement reads.

From the plea agreement

After extracting IP-addresses of account holders who allegedly shared the files Prenda created and uploaded, they asked courts for subpoenas to obtain the personal info of their targets from ISPs. This contact information was then used to coerce victims to pay settlements of thousands of dollars.

Prenda Law went to great lengths to hide its direct involvement in the uploading of the material as well as its personal stake in the lawsuits and settlements, according to the plea agreement.

Both attorneys obscured their involvement by creating several companies, which were then used to file lawsuits against alleged pirates. In addition to running a honeypot, Prenda also began creating their own porn movies, which were then shared on file-sharing sites as bait.

“Shortly after filming the movies, HANSMEIER instructed P.H. to upload the movies to file-sharing websites such as the Pirate Bay in order to catch, and filed lawsuits against, people who attempted to download the movies,” the plea agreement reads.

Hansmeier’s guilty plea applies to a count of wire fraud and mail fraud, as well as a count of money laundering. Both come with a potential jail sentence of 20 years as well as hundreds of thousands of dollars in criminal fines.

Previously, the Prenda attorney filed a motion to dismiss, which was denied. That decision is currently under appeal, and the present plea agreement is conditional, meaning that Hansmeier has the right to withdraw it if he wins the appeal.

This plea agreement comes after fellow Prenda attorney John Steele agreed to a similar deal last year.

It’s rather unique that information provided by The Pirate Bay team is being used to help build a criminal case in the US. And with both lawyers having personally signed a statement of facts confirming the honeypot scheme, there can be little doubt that Pirate Bay’s allegations were indeed true.

Finally, there is also some good news for the victims of the Prenda copyright-trolling scheme. The plea agreement specifically states that those who were hurt by the scheme are entitled to get the maximum restitution possible.

“Defendant understands and agrees that the Mandatory Restitution Act […] applies and that the Court is required to order the defendant to pay the maximum restitution to the victims of his crimes as provided by law,” it reads.

A copy of Hansmeier’s plea agreement is available here (pdf).

Source: TF.

20 Aug 02:05

Life After Death on Wikipedia

by slaporte
17 Aug 06:27

Story Circle, Three Act Structure, and more. Writing for TV in UNDER A MINUTE!

by Phil Matarese

some good points here

Animals on HBO Fridays at 11:30pm
15 Aug 08:08

Dr. Pat | Adult Swim

by Adult Swim

dada goin strong 2k18

Created by Sam Hochman
15 Aug 03:13

Saturday Morning Breakfast Cereal - The Event


this is the real reason the fallout series is as popular as it is


Why are you so favored as to get a comic drawn by Abby Howard? She has a new book out! See the blog below the comic.

Today's News:
Dinosaur Empire 2 is out!
11 Aug 23:11

ultrafacts: Source: [x]


lol terrible lullaby

11 Aug 08:44

How Turkey's Currency Crisis Came To Pass

by b
Updated below --- President Erdogan of Turkey often asserts that 'foreign powers' (meaning the U.S.) want to bring him down. He says that the 'interest lobby' (meaning (Jewish) bankers), wants to damage Turkey. He is somewhat right on both points....
09 Aug 05:42

How Zildjian Cymbals Were Created by an Alchemist in the Ottoman Empire, Circa 1618

by Josh Jones

definitely one of my favorite stories, some issues with the video's retelling of it. for one, it wasn't _all_ about the gold, just general transmutation.

When it comes to musical instruments, there are brands and then there are legacies—names so unquestionably indicative of quality and craftsmanship that players swear by them for life. Martin Guitars, for example, have inspired this kind of loyalty among musicians like Willie Nelson and Johnny Cash. Martin's story—dating back to 1833—inspires book-length histories and documentaries. In the drum world, the longest-lived and most-storied brand would have to be Zildjian, the famed cymbal maker known the world over, beloved by the best drummers in the business.

But Zildjian is far older than Martin Guitars, or any other contemporary instrument manufacturer. Indeed, the company may be the world’s oldest existing manufacturer of almost any product. Though incorporated in the U.S. in 1929, Zildjian was actually founded 400 years ago in Constantinople by Armenian metalworker Avedis, who in 1622 “melted a top-secret combination of metals,” writes Smithsonian, “to create the perfect cymbal.” The short film above recreates in dramatic fashion the alchemy of Avedis’ discovery and the global history of Zildjian.

The brief Smithsonian history can seem a little sensational and may not be entirely accurate at points. Lara Pellegrinelli, writing at The New York Times, dates Avedis’ “secret casting process” to four years earlier, 1618. (The company itself dates its founding to 1623.) Pellegrinelli notes that Avedis' “new bronze alloy” pleased the Sultan, Osman II, who “granted the young artisan permission to make instruments for the court and gave him the Armenian surname Zildjian (meaning ‘son of cymbal maker’). The family set up shop in the seaside neighborhood of Samatya in Constantinople, where metal arrived on camel caravans and donkeys powered primitive machines.”

Zildjian cymbals were admired by Mozart and his contemporaries, and “what came to be known simply as 'Turkish cymbals’ were assimilated by European orchestras and, in the first half of the 19th century, into new military and wind band styles” of the East and West. In 1851, Zildjian cymbals set sail on a 25-foot schooner bearing the family name, bound for London’s Great Exhibition. Kerope Zildjian introduced the K Zildjian line of cymbals in 1865, still in production and widely in use today. (The old K's can still be heard in several major symphony orchestras.)

As the jazz scene took off in the 1920’s, many music shops exclusively carried Zildjians, and drummers like Gene Krupa helped refine and develop the famous instruments even further, making them thinner, more responsive, and able to cut through the big band sound. The story of Zildjian is the story of Western music and its unmistakable Eastern influence, an incredible history four centuries in the making, full of intrigue and brilliant innovation, and containing at its heart an alchemical mystery, a secret recipe still closely guarded by the Zildjian family.

Related Content:

Visit an Online Collection of 61,761 Musical Instruments from Across the World

Watch a Musician Improvise on a 500-Year-Old Music Instrument, The Carillon

What Makes the Stradivarius Special? It Was Designed to Sound Like a Female Soprano Voice, With Notes Sounding Like Vowels, Says Researcher

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness

How Zildjian Cymbals Were Created by an Alchemist in the Ottoman Empire, Circa 1618 is a post from: Open Culture.

09 Aug 01:52

Zuckerbubble expands by $10M/year.

by jwz


Your security is very important to us. My. My security is very important to us. To me.

Facebook Inc. spent $7.33 million last year protecting its chief executive officer at his homes and during his tour across the U.S. Last week, the social-media giant said it would provide an additional $10 million a year for him to spend on personal security.

The cost for this year far exceeds what other firms will spend for their bosses and probably surpasses the average annual compensation for S&P 500 CEOs.

Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.

05 Aug 23:20

mapsontheweb: How America Uses Its Land.


I did not realize there was this much timberland. The cows are less of a surprise, but still yeah, a lot.



04 Aug 09:01

Woman Actually Licks the Chef's Fingers

by Just For Laughs Gags

right hat, and you can get away with anything

Wow, this woman must really love the food. Why else would she willingly put the chef's fingers in her mouth? WATCH FULL EPISODES:


04 Aug 07:04

Value Pack: Home Improvement | adult swim

by Adult Swim

Created by Lord Spew
29 Jul 05:18

DC statehood gains support

by Sam Smith

only 27 democratic senators care about basic political survival of their party

There are now 27 Democratic senators who support DC statehood, a movement this journal helped start 48 years ago.
18 Jul 05:22

Big Bag in the French Concession

by stylites_admin

power moves


Here is one of Pawnstar's regular clients outside of our shop with a rather massive, custom-made bag.


14 Jul 04:10

More millennials are moving to San Jose than SF

by Joshua Santos
Based on 2016 in- and out-migration data, it looks like San Jose is getting more traction among millennials. This generation is currently defined as those aged 22 to 37. In 2016, San Jose gained a net 5,496 millennials (immigration minus emigration). At the top of the list for net migration was Seattle, followed by Columbia and the only other Northern California city in the top ten, Sacramento.

Source: SVBJ

14 Jul 02:59

Today in Landfill Capitalism: Realistic Marketing, Inc.

by jwz
09 Jul 18:39

Wikimedia is against European Parliament's Copyright Directive

by slaporte

Wikimedia puts the EU in their place. Also love the phrasing in this article "The new restrictions, if passed, would possibly lead to a loss of revenue for the giant as well..." like they can't get away from corporate/market rhetoric.

09 Jul 18:38

Current state of webdesign

by slaporte

to get riled up before the big meeting.

08 Jul 22:37


by CDTcrew

doesn't end strong, but 0:55 is a good fever pitch.

07 Jul 22:24

2001 | Joe Veazey's Elongated Coins | adult swim

by Adult Swim

Welcome to America's #1 elongated coin show. Adult Swim’s Joe Veazey shares his collection of elongated coins. One at a time.
Watch Adult Swim livestreams:


07 Jul 17:34

How not to structure your database-backed web applications: a study of performance bugs in the wild

by adriancolyer

How not to structure your database-backed web applications: a study of performance bugs in the wild Yang et al., ICSE’18

This is a fascinating study of the problems people get into when using ORMs to handle persistence concerns in their web applications. The authors study real-world applications and distil a catalogue of common performance anti-patterns. There are a bunch of familiar things in the list, and a few that surprised me with the amount of difference they can make. By fixing many of the issues that they find, Yang et al. are able to quantify how many lines of code it takes to address each issue, and what performance improvement the fix delivers.

To prove our point, we manually fix 64 performance issues in [the latest versions of the applications under study] and obtain a median speed-up of 2x (and up to 39x max) with fewer than 5 lines of code change in most cases.

The Hyperloop website provides access to a tool you can use to identify and solve some of the common performance issues in your own (Rails) apps.

I’m going to skip the intro parts about what ORMs do and how a typical web app is structured, on the assumption that you probably have a good handle on that already. Note that fundamentally a lot of the issues stem from the fact that the ‘O’ in ORM could just as easily stand for ‘Opaque.’

On one hand, it is difficult for application compilers or developers to optimize the interaction between the application and the underlying DBMS, as they are unaware of how their code would translate to queries by the ORM. On the other hand, ORM framework and the underlying DBMS are unaware of the high-level application semantics and hence cannot generate efficient plans to execute queries.

I remember in the early days of Spring (when Hibernate was young too, and JPA didn’t exist) a framework called iBATIS was quite popular amongst people I considered to be good developers. It turns out iBATIS is still going strong, though now it’s called MyBatis. The key selling point is that you retain control over the SQL, without the overheads of directly using an API like JDBC.

Finding and profiling real-world applications

The study focuses on Ruby on Rails applications, for which many large open source examples are available: "compared to other popular ORM frameworks such as Django and Hibernate, Rails has 2x more applications on GitHub with 400 or more stars than Django and Hibernate combined." Six popular application categories (covering 90% of all Rails apps with more than 100 stars on GitHub) are selected, and then the two most popular applications in each category, resulting in the following twelve apps:

They have been developed for 5-12 years, are all in active use, and range from 7Kloc to 145Kloc (Gitlab). To generate realistic data for these apps, the team collected real-world statistics based on either the app in question where available, or similar apps otherwise, and implemented a crawler that fills out forms on the application’s websites following the real-world stats. For each application, three sets of data were created: with 200, 2000, and 20,000 records in the main database table.

When we discuss an application’s scalability we compare its performance among the three above settings. When we discuss an application’s performance we focus on the 20,000-record setting, which is a realistic setting for all the applications under study. In fact, based on the statistics we collect, the number of main table records of every application under study is usually larger than 20,000 in public deployments…

(E.g., more than 1 million records).

With databases in hand, the applications are then profiled by visiting links randomly for two hours and the resulting logs produced by Chrome and the Rails Active Support Instrumentation API are processed to obtain average end-to-end loading time for every page, the detailed performance breakdown, and the issued database queries. For each app, the 10 pages with the worst loading time are plotted in the figure below.

11 out of 12 applications have pages whose average end-to-end loading time (i.e. from browser sending the URL request to page finishing loading) exceeds 2 seconds; 6 out of 12 applications have pages that take more than 3 seconds to load…. Note that our workload is smaller, or for some applications, much smaller, than today’s real-world workload.

Looking at where the time goes for these queries, server time (app server + DBMS) contributes at least 40% of this latency in at least 5 out of the top 10 pages for 11 of the 12 apps.

In total, 40 problematic actions are identified from among the top 10 most time-consuming actions of every application. 34 of these have scalability problems, and 28 take more than 1 second of server time. 64 performance issues were found in these 40 actions, as summarised in the table below.

In addition to studying these actions, the authors also look at performance issues reported in the application’s bug trackers.

The causes of inefficiency are categorised into three main areas: ORM API misuse, database design, and application design.

ORM API Misuse

Inefficient computation (IC) occurs when the same operation on persistent data can be implemented via different ORM calls. These often look similar on the surface, but can have very different performance implications. For example, the difference between any? (which scans all records in the absence of an index) and exists?, as shown here. Another example is the use of Object.where(c).first instead of Object.find_by(c).

There are also opportunities to move computation to the DBMS, for example, replacing pluck(:total).sum with sum(:total). Sometimes we can also take advantage of data already in memory (e.g. replacing Object.count, a DB query, with Object.size, which will count in-memory objects if they have already been loaded).
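The pluck-then-sum difference is really about how much data crosses the app/DB boundary. Here is a plain-Ruby sketch of that idea (no ActiveRecord involved; `pluck_then_sum` and `db_side_sum` are made-up names that just count how many values a query would transfer):

```ruby
# Toy model of data transfer for an aggregation query.
# pluck(:total).sum ships every row value to the app; sum(:total) ships one aggregate.
ROWS = [12, 7, 30, 1]

# Mimics Order.pluck(:total).sum — every value is transferred, then summed in Ruby.
def pluck_then_sum(rows)
  transferred = rows.length
  [rows.sum, transferred]
end

# Mimics Order.sum(:total) — the DBMS computes the aggregate, one value comes back.
def db_side_sum(rows)
  transferred = 1
  [rows.sum, transferred]
end

pluck_then_sum(ROWS) # => [50, 4]  (same answer, 4 values moved)
db_side_sum(ROWS)    # => [50, 1]  (same answer, 1 value moved)
```

Same result either way; the fix only changes where the arithmetic runs and how many values move.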

Unnecessary Computation (UC) occurs when redundant or useless queries are issued. A classic example is a query inside a loop body, that could be computed once outside the loop body.
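The loop-body case is easy to see with a counting stub. A minimal sketch (`FakeProject` is a stand-in for an ORM model, not real ActiveRecord; it just counts how many "queries" hit the database):

```ruby
# Stand-in for an ORM model whose class-level query always hits the database.
class FakeProject
  class << self
    attr_accessor :query_count
  end
  @query_count = 0

  def self.count
    self.query_count += 1 # each call is one round-trip to the "database"
    42
  end
end

users = %w[alice bob carol]

# Anti-pattern: the loop-invariant query runs once per iteration.
FakeProject.query_count = 0
per_iteration = users.map { |_u| FakeProject.count }
queries_in_loop = FakeProject.query_count # 3 queries for 3 users

# Fix: hoist the query out of the loop and reuse the result.
FakeProject.query_count = 0
total = FakeProject.count
hoisted = users.map { |_u| total }
queries_hoisted = FakeProject.query_count # 1 query, same rendered values
```

The two versions produce identical output; only the query count differs, and it differs by a factor of N.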

Inefficient data accessing (ID) results in slow data transfers by either not getting enough data (resulting in the familiar N+1 selects problem), or getting too much data. This can be due to inefficient lazy loading (still prevalent in this study), or too-eager eager loading. Here’s an example of a patch in Lobsters that replaces 51 database queries with one:

A similar inefficient updating issue occurs when N queries are issued to update N records separately, instead of using update_all.
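The N-updates-versus-one pattern can be sketched the same way (`FakeOrders` below is illustrative, not ActiveRecord; `update_each` stands for a per-record save loop and `update_all` for the single bulk UPDATE):

```ruby
# Illustrative stand-in: compare N per-record UPDATEs with one bulk update,
# counting the "queries" each approach issues.
class FakeOrders
  attr_reader :rows, :queries

  def initialize(rows)
    @rows = rows
    @queries = 0
  end

  # Anti-pattern: one UPDATE statement per record.
  def update_each(status)
    @rows.each do |r|
      @queries += 1
      r[:status] = status
    end
  end

  # Fix: one UPDATE ... WHERE covering all records (the update_all idea).
  def update_all(status)
    @queries += 1
    @rows.each { |r| r[:status] = status }
  end
end

slow = FakeOrders.new([{}, {}, {}])
slow.update_each("shipped") # 3 queries

fast = FakeOrders.new([{}, {}, {}])
fast.update_all("shipped")  # 1 query, same final state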

Unnecessary data retrieval (UD) occurs when an application retrieves persistent data that it doesn’t then use.

Inefficient Rendering (IR). This one was a surprise to me in the measured impact. It’s very common in Rails to do something like this, which calls link_to within a loop to generate an anchor:

Replacing it with one call to link_to outside of the loop, and the use of gsub within the loop is faster.

There’s a readability trade-off here, but with complex rendering functions it is clearly worth it. 5 problematic actions in the study dropped their time by over half as a result.
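The rendering fix above can be mimicked without Rails: call the helper once to render a template with placeholders, then substitute per-row values. Everything here is illustrative; `link_to` below is a local stand-in for the Rails view helper:

```ruby
require 'cgi'

# Stand-in for a Rails view helper that is normally called once per row.
def link_to(name, path)
  %(<a href="#{path}">#{CGI.escapeHTML(name)}</a>)
end

items = [{ id: 1, name: "Lobsters" }, { id: 2, name: "Gitlab" }]

# The slow pattern from the paper: a helper call inside the loop.
per_row = items.map { |i| link_to(i[:name], "/items/#{i[:id]}") }

# The fix: render the anchor once with placeholders, substitute per row.
template = link_to("NAME", "/items/ID")
templated = items.map do |i|
  template.sub("ID", i[:id].to_s).sub("NAME", CGI.escapeHTML(i[:name]))
end

per_row == templated # => true — identical markup, one helper call instead of N
```

The output is byte-for-byte the same; the saving comes from paying the helper's overhead once rather than per row, which is exactly the readability trade-off the paper describes.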

Database design issues

Missing Fields (MF) occurs when an application repeatedly computes a value that it could just store.

Missing Database Indexes (MI). Need I say more?

… missing index is the most common performance problem reported in ORM application’s bug tracking systems. However, it only appears in three out of the 40 problematic actions in latest versions.
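What a missing index costs is easy to picture as scan-versus-hash-lookup. A toy sketch in plain Ruby (the hash plays the role an index plays for the query planner; the data is made up):

```ruby
# 10,000 fake rows; looking one up by email without an index means scanning.
records = (1..10_000).map { |i| { id: i, email: "user#{i}@example.com" } }

# No index: examine rows until a match is found (a full table scan).
scan = ->(email) { records.find { |r| r[:email] == email } }

# "Index": build the lookup structure once, then each lookup is O(1),
# roughly what CREATE INDEX buys the equivalent SQL query.
by_email = records.each_with_object({}) { |r, h| h[r[:email]] = r }
lookup = ->(email) { by_email[email] }

scan.call("user9999@example.com")[:id]   # => 9999 (after ~9999 comparisons)
lookup.call("user9999@example.com")[:id] # => 9999 (one hash probe)
```

Both return the same row; the index only changes how much work finding it takes, which is why the fix is usually one line in a migration.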

Application design issues

Content Display Trade-offs (DT). Typically returning all records instead of using pagination. This becomes a problem as the number of records in the database grows.
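Pagination itself is just LIMIT/OFFSET arithmetic. A minimal sketch with a hypothetical `page` helper (plain Ruby, 1-based page numbers):

```ruby
# Hypothetical pagination helper mirroring
# LIMIT page_size OFFSET (n - 1) * page_size in SQL.
def page(records, n, page_size)
  records[(n - 1) * page_size, page_size] || []
end

records = (1..45).to_a
page(records, 1, 20).length # => 20  (first full page)
page(records, 3, 20).length # => 5   (last, partial page)
page(records, 4, 20)        # => []  (past the end)
```

Returning a fixed-size page keeps response time flat as the table grows, which is the whole point of the trade-off: the work per request stops scaling with the number of records.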

Application Functionality Trade-offs (FT). Sometimes an application has a side information on a page that is actually quite expensive to compute. Removing this when it is not essential to the user’s task in-hand can make things go faster.

Fixing inefficiencies

The authors study all 40 of the problematic actions and manually fix 39 of them (the remaining one spends its time on file-system actions). This leads to 64 fixes being applied.

Many fixes are very effective. About a quarter of them achieve more than 5x speedup, and more than 60% of them achieve more than 2x speedup. Every type of fix has at least one case where it achieves more than 2x speedup.

40 of the 64 fixes alter neither the display nor the functionality of the original application, achieving an average speed-up of 2.2x (maximum 9.2x).

Per a recent comment from Steve Powell, it's actually quite confusing to talk about a '2x speed-up' when you're talking about the time something takes! I guess we could interpret this as either 2x requests-per-second (that's a speed metric), or that an action takes on average half the time it used to (a '/2' latency reduction?). With a single thread of execution, they both amount to the same thing.

The average server time is reduced from 3.57 seconds to 0.49 seconds, and the end-to-end page load times from 4.17 seconds to 0.69 seconds. More than 78% of fixes require fewer than 5 lines of code.

In other words, by writing code that contains the anti-patterns discussed earlier, developers degrade the performance of their applications by about 6x.

The authors wrote a static analyser (you can find it here) that looks for some of the simple API misuse patterns and ran it on the latest versions of the 12 ORM applications. There are still plenty of incidences! (Some of these in actions that weren’t identified as worse performing actions during profiling of course).

By profiling the latest versions of 12 representative ORM applications and studying their bug-tracking systems, we find 9 types of ORM performance anti-patterns and many performance problems in the latest versions of these applications. Our findings open up new research opportunities to develop techniques that can help developers solve performance issues in ORM applications.

Now if you’ll excuse me, I’ve got a Ruby codebase I just need to go and look at…


07 Jun 09:39

Downtown San Jose BART Station renders

by Joshua Santos

aw man just give us the trains. no need to heighten the contrast with sf/oakland stations.

Now that the construction methodology has been finalized for the BART subway in San Jose (single bore), let's have a quick look at the stunning station that is being planned for Downtown San Jose. To call the current design "open" would be a serious understatement. From the lowest point you can look up to the ceiling 145 feet or so above. The layout is modern and welcoming with high tech flourishes throughout. Check out the renders below of what will become one of the most iconic stations in the BART network.

Source: Robertee from the San Jose Development Forum