Shared posts

21 Jun 19:03

Sharing a story on World Refugee Day

by Dennis Woodside

Today is World Refugee Day, recognized by the United Nations as a time to reflect and take action to help people who have been displaced from their home countries. It can be difficult to comprehend the scale of what’s happening in places like Syria or Iraq. So we’d like to share the story of one family.

Through our customer, the International Refugee Assistance Project (IRAP), we met Abdullah Hussein. Abdullah served as a translator for the US Army in his home country of Iraq. He was then targeted by extremists, and reached out to IRAP for help. This is his story:


Abdullah’s journey is one that’s close to our hearts. The parents of our co-founder, Arash Ferdowsi, came to the US during the Iranian Revolution. He’s just one of many Dropbox employees whose families immigrated to America, so we wanted to learn more about IRAP’s work around the world. We hosted Becca Heller, co-founder of IRAP, and Farah Alkhafaji, another former IRAP client who has since resettled in the US, at Dropbox HQ yesterday. We chatted about their backgrounds and next steps to alleviate this crisis. You can check out the recorded event here.

To learn more about IRAP’s work with Dropbox Business, continue reading here.

21 Jun 19:03

Twitter Favorites: [lesley_mak] Acknowledgement of the men out there who want to become fathers. This day can be very conflicting and difficult. ❤️

Sleeve Fierce @lesley_mak
Acknowledgement of the men out there who want to become fathers. This day can be very conflicting and difficult. ❤️
21 Jun 19:03

Twitter Favorites: [LindsayTedds] Also good day to remember lots of wanna be Dads out there for whom today is a day of a reminder of what they have lost or can't yet have

21 Jun 19:03

Twitter Favorites: [jeffjedras] @sillygwailo Every few weeks I see someone on the bus playing Pokemon Go. What a time that was.

Jeff Jedras @jeffjedras
@sillygwailo Every few weeks I see someone on the bus playing Pokemon Go. What a time that was.
21 Jun 19:03

Twitter Favorites: [danudey] @sillygwailo This was always me at the end of a quiet pager shift

Wile E. Cyrus @danudey
@sillygwailo This was always me at the end of a quiet pager shift
21 Jun 19:03

Twitter Favorites: [JodiesJumpsuit] Moral of the story: don't take your case of the Mondays out on other people. You may come down with a double case of the Mondays.

ur killing me, jump @JodiesJumpsuit
Moral of the story: don't take your case of the Mondays out on other people. You may come down with a double case of the Mondays.
21 Jun 17:22

Recommended on Medium: Introducing Forest 1.0

Rigetti Computing headquarters in Berkeley, CA.

Today, I’m extremely excited to announce the public beta availability of Forest 1.0, the world’s first full-stack programming and execution environment for quantum/classical computing. You can use Forest to develop algorithms for quantum/classical hybrid computing, and you can use it to learn how quantum computers and algorithms really work. You can simulate those algorithms on up to 30 qubits using our Quantum Virtual Machine, or QVM™, running in the cloud. And you can interact with real quantum chips using simple function calls that execute on an active system.
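
The 30-qubit ceiling for cloud simulation makes sense from a quick back-of-envelope calculation (our own arithmetic, not from the announcement): a full statevector needs memory that doubles with every added qubit.

```python
# A full statevector for n qubits stores 2**n complex amplitudes.
# At 16 bytes per amplitude (double-precision complex), memory
# doubles with each qubit added.
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (20, 30, 40):
    print(n, statevector_bytes(n) / 2**30, "GiB")
# 30 qubits already needs 16 GiB; 40 would need 16 TiB.
```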

Rigetti Computing is a full-stack quantum computing company, and Forest is a full-stack product. So I’d like to highlight some of the key technical milestones that underpin Forest at both the hardware and software level.


We’ve developed a highly coherent and scalable quantum integrated circuit architecture. Two key ingredients are a new fab process for superconducting through-silicon vias, and a low-temperature bonding process for 3D integration.

To further accelerate our progress in design and manufacturing capabilities, about a year ago we decided to make a strategic investment to build the world’s first commercial quantum integrated circuit fab, called Fab-1. Today, we are officially marking the opening of this new facility (see photo below).

Fab-1 is based on the latest semiconductor processing tools and technology, but, unlike traditional fabs, operates in rapid-iteration mode. Today, we can produce an entirely new design for a 3D integrated quantum circuit in about 2 weeks. Because of the exponential value of iteration cycles in the development of any complex technology, I expect Fab-1 to be a key driver and strategic advantage in our march towards ever greater processing power (more qubits, faster gate times) and performance (longer coherence times, lower error rates, greater connectivity) from quantum chips over the coming years.

Nagesh Vodrahalli has joined the company in the role of VP of Process Technology, leading Fab-1 and our overall fab strategy. Nagesh brings decades of executive and technology leadership in advanced semiconductor manufacturing and 3D integration, including time at Intel, HP, and Altera.

Rigetti Computing’s Fab-1, the world’s first commercial quantum chip fab.

New Two-Qubit Gates

We’ve developed and demonstrated a new two-qubit gate scheme based on direct parametric modulation of qubit frequencies. This gate scheme can be faster and more selective than previous methods, making it better suited for scaled-up chips with many qubits. Details are available in 3 papers describing the theory and experimental implementations on 2-qubit and 8-qubit processors. A software upgrade to Forest to unlock this functionality will be available later this year.

Characterizing the performance of quantum chips is a complex and subtle process. To ensure that we are using the most rigorous and efficient characterization routines — and that those routines are adapted or extended to practical, cutting edge hardware — we have entered into a collaboration with the Quantum Characterization, Verification, and Validation (QCVV) team at Sandia National Labs. The team, led by Robin Blume-Kohout, is one of the leading independent groups in the world in QCVV. The collaboration is underway, and we expect initial results this year.

We are excited to announce the addition of two leading quantum computing researchers. Marcus da Silva has joined the company, managing our device design and theory team, and Colm Ryan has joined our quantum engineering team.

Nagesh Vodrahalli (left) and Jeff Cordova (right) have joined our leadership team.
Colm Ryan (left) and Marcus da Silva (right) have joined our quantum engineering team.

Software and Applications

Forest is built on top of Quil™, the first instruction language for hybrid quantum/classical computing. Hybrid quantum/classical algorithms take advantage of the best aspects of a classical computer and a quantum computer simultaneously. Classical computers are best at rote sequences of arithmetic, while quantum computers are best at manipulating extremely large ensembles of information at once.

Quil is an open and portable instruction set, using a shared memory model that is optimized for near-term algorithms and hardware. Forest 1.0 includes pyQuil, a set of open-source Python tools for building and running Quil programs. You can see more about how Quil and pyQuil work in this video.
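
To make the quantum side concrete, here is a minimal stdlib-only sketch (our own illustration, not pyQuil or the QVM) of what executing a tiny gate program such as `H 0; CNOT 0 1` means on a statevector simulator:

```python
import math

def apply_h(state, q):
    """Hadamard on qubit q of a statevector (qubit 0 = least significant bit)."""
    s = 1 / math.sqrt(2)
    new = list(state)
    for i in range(len(state)):
        if (i >> q) & 1 == 0:          # pair each index with its bit-q partner
            j = i | (1 << q)
            new[i] = s * (state[i] + state[j])
            new[j] = s * (state[i] - state[j])
    return new

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1 (CNOT is its own inverse)."""
    return [state[i ^ (1 << target)] if (i >> control) & 1 else state[i]
            for i in range(len(state))]

state = [1.0, 0.0, 0.0, 0.0]           # |00>
state = apply_h(state, 0)              # Quil: H 0
state = apply_cnot(state, 0, 1)        # Quil: CNOT 0 1
print(state)                           # ~[0.707, 0, 0, 0.707]: a Bell state
```

Real Quil programs layer classical memory and control flow on top of gate sequences like this one, which is what makes the hybrid model more than a flat gate list.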

To help illustrate how quantum computers might one day be used to solve problems that are currently impossible, we have built an interactive demo showing how an example algorithm, the Quantum Approximate Optimization Algorithm, uses quantum mechanics to find optimization-based solutions to NP-hard problem types, such as MAX-CUT.
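
For readers unfamiliar with MAX-CUT, this tiny brute-force sketch (our illustration; the graph is made up) shows the objective QAOA approximates: partition a graph's vertices into two sets so that as many edges as possible cross the cut.

```python
from itertools import product

# A small example graph: a 5-cycle plus the chord (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]

def cut_size(assignment, edges):
    # Count edges whose endpoints land on opposite sides of the partition.
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Exhaustive search over all 2**5 partitions; QAOA targets the same optimum
# without enumerating, which matters once brute force becomes infeasible.
best = max(product([0, 1], repeat=5), key=lambda a: cut_size(a, edges))
print(best, cut_size(best, edges))  # the best cut crosses 5 of the 6 edges
```

The triangle formed by vertices 0, 1, and 2 means the graph is not bipartite, so no partition can cut all six edges; five is optimal here.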

Developing quantum computing software is one of the most fascinating and challenging emerging fields of engineering. Today, that field is at the foundational stage, where learning and discovery are at a premium. Our full-stack strategy allows us to run faster, more tightly coupled iteration cycles between hardware, software, and applications.

Jeff Cordova has joined the company as VP of Software Engineering to lead the development of our quantum operating system and quantum cloud services infrastructure.

8-qubit quantum processors manufactured by Rigetti Computing.

What’s Next?

Products are built by systems of people functioning together as a team. Existing companies, when faced with profound technological change, must adapt an existing machine to produce a new kind of work output. Startups have the opportunity and imperative to build a fundamentally new kind of machine to produce that new work output.

Rigetti Computing is a new kind of machine, and Forest 1.0 is an example of its work output. We are centered around rapid iteration design-fab-test cycles, powered by incredible scientists and engineers who further build and refine the machine every day.

In the past 18 months, the output from those internal iteration cycles has accelerated. We’ve gone from single-qubit devices to fully-functional 8-qubit systems that are now in the final stages of validation. These systems will be made available on Forest later this year.

Introducing Forest 1.0 was originally published in Rigetti on Medium, where people are continuing the conversation by highlighting and responding to this story.

21 Jun 17:22

En vrac for a scorching Tuesday

by Tristan

Tristan on his motorcycle at the Col du Mont Cenis

Featured: thundering declarations and liberty-crushing laws passed to look like something is being done about terrorism

it's clear: if human rights laws get in the way of tackling the problems of extremism and terrorism, we will change those laws to keep the British people safe;

En vrac

if we are going to draw a model of government from the technology of our era, why borrow it from platforms and startups? (…) After all, if we must take a model from what the machine makes possible, there are other options. Why not look at what forms of government we could draw from free software ("the free-software State" — that would have a ring to it, no?)? Or even from the wiki, as a space of sharing and co-creation ("The wiki-state" — not bad either, no?)?

21 Jun 17:21

Apple Posts Two Videos Highlighting Photos Memories

by John Voorhees

Apple posted two videos highlighting the Memories feature of its iOS Photos app. One, called ‘The Archives,’ is part of Apple’s ‘practically magic’ series of videos and features an elderly man creating a film called ‘Together.’ When the man pulls a photograph out of a cabinet with dozens of drawers, it comes alive with a short snippet of video like a Live Photo.

After gathering a cart-load of photos and film, the man begins the laborious process of splicing them together into a film. The heartwarming spot captures the time, care, and attention needed to painstakingly create a movie from analog photos and videos, making the unstated point of how easy it is to do the same thing in Photos.

The second spot is in stark contrast to the first. It demonstrates the three steps to using Photos’ Memories feature:

  1. Open the Photos app
  2. Go to the Memories Tab
  3. Choose a Memory

The two spots, which both feature Memories called 'Together,' are a clever one-two punch intended to convey how simple Photos has made it to create photo and video montages that once would have taken hours of work.

Support MacStories Directly

Club MacStories offers exclusive access to extra MacStories content, delivered every week; it’s also a way to support us directly.

Club MacStories will help you discover the best apps for your devices and get the most out of your iPhone, iPad, and Mac. Plus, it’s made in Italy.

21 Jun 17:21

Acorn 6 Public Beta

It's been a long time since we've done a public beta of a major app release. In fact, we've never done this before for Acorn. But we feel now is a great time to open up Acorn 6's beta to the public.

So we're happy to say that we've got Acorn 6 in public beta for you to try out today.

We're introducing some great new features and refinements, including text on a path, cloning across layers and images, improvements to web export and smart layer export, and new shape stroke options.

We've also added some great new tools for working with color profiles and wide gamut images, which is becoming more important every day for iPhone and pro photographers alike. You can now load and export color profiles from the Image ▸ Color Profile… menu. And when you're exporting for the web, you can highlight the areas of your Display P3 or other wide gamut image which are out of the range of sRGB.

And there's more of course, so why not grab the beta?

We're doing something new with trials in Acorn 6. The direct version will get its usual 14 day trial, but after that's up you'll still be able to use Acorn to view your images. The only change is that the tools are disabled if you choose not to purchase it. We're doing the same for the App Store version of Acorn 6. When Acorn officially ships you'll be able to download it for free and "purchase" a 14 day trial for $0.00. When that's up you can keep on using Acorn to view your images, or you can unlock Acorn at the usual price. This will make it really easy for App Store customers to download Acorn and try it out.

Finally, one of the reasons we're doing a public beta, and one of the reasons that we still love working on Acorn, is hearing about what you like and what you think needs improvement. So if you like something, let us know! If there's something you'd like to see improved, let us know! And if you hate it, let us know that too. The feedback we get from our customers is the main driver for changes in Acorn.

So go grab the Acorn 6 beta already.

Mini FAQ:
If you purchase Acorn 5 directly from us between now and Acorn 6's release, you'll be emailed an Acorn 6 license when it ships. We aren't able to do this for the App Store, however (sorry).
Acorn 6 requires OS X 10.11.4 or later, including macOS 10.12 Sierra. Acorn will of course support 10.13 when it comes out in the fall.

P.S., I'm also using this as an opportunity to kick off our new forums, which you can find at

20 Jun 19:18

The JPEG Format’s Days May Be Numbered

by Ryan Christoffel

Kelly Thompson writes for 500px about Apple's upcoming transition from JPEG to the HEVC-based HEIF for photos across all its platforms:

JPEG is 25 years old and showing its age. Compression has become a big deal as we’ve moved to 4K and HDR video, and HEVC was developed to compress those huge video streams. Luckily HEVC also has a still image profile. The format doesn’t just beat JPEG, JPEG 2000, JPEG XR, and WebP—it handily crushes them. It claims a 2 to 1 increase in compression over JPEG at similar quality levels. In our tests, we’ve seen even better levels, depending on the subject of the image.

By using it internally on the camera, it means storing twice as many images in the same space. People with full iPhones are weeping with joy.

Think about it for a second—if we could reduce every picture delivered on the web by two times and have it look the same (or better)… game changer.

A move away from JPEG is significant, but Apple clearly has good reason for making the transition now. The massive recent increase in the number of photos taken by the average user has led to persistently scarce storage space. Apple has responded in the past year by increasing the base storage of new iPhones and iPads, but storage bumps are only a band-aid fix – adopting HEIF should make a long-term difference.

→ Source:

20 Jun 19:18

iPhone App Size Is Increasing at a Breakneck Pace

by John Voorhees

According to a Sensor Tower report, the total space required for the top 10 most installed iPhone apps has increased more than 1000% since May 2013, increasing from 164 MB to a whopping 1.8 GB. During that period, Apple has raised the maximum app size from 2 GB to 4 GB and the minimum storage capacity of iPhones to 32 GB, but the size of the most popular iPhone apps has far outstripped those increases. iOS 11 will address the issue in part, with a feature that can offload apps that aren’t used often, while saving settings and user data.
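
The headline figure checks out (our arithmetic, not Sensor Tower's):

```python
old_mb = 164
new_mb = 1.8 * 1024          # 1.8 GB expressed in MB
growth = (new_mb - old_mb) / old_mb * 100
print(round(growth))         # ~1024%, consistent with "more than 1000%"
```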

Sensor Tower explains that,

We see the often sudden growth in size exhibited by apps such as Facebook and Snapchat as directly tied to the intense competition between them, which necessitates a steady rollout of new and more space-intensive features. Of course, some apps have likely grown in size simply due to a reduced need (or perception of a need) for optimization.

I'm sure that competition and perception play a role in the increases noted by Sensor Tower, but it would also be interesting to see how deep this trend extends beyond the ten most popular iPhone apps. I suspect it runs deeper than many people realize. Scrolling through the Updates tab of the App Store, I have many recent updates on my iPhone that exceed 100 MB, including Apple’s own Keynote, which is 675 MB.

→ Source:

20 Jun 19:18

The Story of a New Brain

by Ava Kofman

In the beginning, my plan seemed perfect. I would meditate for five minutes in the morning. Each evening before bed, I would do the same. And instead of having to rely on my own feelings, a biofeedback device would study my brainwaves to tell me whether I was actually focused, anxious, or asleep. By placing a few electrodes on my scalp to measure its electrical activity, I could use an electroencephalography (EEG) headset to monitor my mood and help me meditate. And unlike “quantified self” devices like the Fitbit or Apple Watch, which save your data for later, the headset would loop my brainwaves back to me in real time so that I could better control and regulate them. What could be more relaxing?

Basic EEG technology has been around since the early 20th century, but only recently has it become available in affordable, Bluetooth-ready packages. In the past five years, several startups — with hopeful names like Thync, Melon, Emotiv, and Muse — have tried to bring the devices from clinical and countercultural circles into mainstream consumer markets. Their sales pitch is undeniably attractive: In the comfort of our own homes, without psychotropic meds, psychoanalysis, or an invasive operation, we could bring to light what had previously been unconscious. That, in any case, is the dream.

When I first placed the Muse on my head one Sunday evening in late October, I felt as though I was greeting myself in the future. A thin black band, lightweight and plastic, stretched across my forehead. Its wing-like flanks fit snugly behind my ears. On the launch screen of its accompanying iPhone app, clouds floated by. The Muse wasn’t just a meditation device, the app explained, but a meditation assistant. For some minutes, my initial signal was poor, but eventually the Muse accurately “sensed” my brain. It would now be able to interpret my brainwaves and translate their frequencies into audio cues, which I would hear throughout my meditation session.

Inward bound, I sat at my desk and closed my eyes. Waves crashed loudly on the shore, which indicated that I was thinking too much. But from time to time, I could hear a few soft splashes of water and, farther in the distance, the soft chirping of birds. After what seemed like forever, the session was over. As with all self-tracking practices (and unlike conventional meditation), the post-game seemed to be as important as the practice itself, so I made a good-faith effort to pore over my “results.”

They were, at first, second, and third glance, impenetrable. I had earned 602 “calm points,” which the app had mysteriously multiplied by a factor of three. My “neutral points,” by contrast, had been multiplied by a factor of only one. Birds, I was told, had “landed” 16 times.

Equally inscrutable were the two awards I had earned, after a total of seven minutes scanning my brain. One was for tranquility — “Being more than 50 percent calm in a single session must feel good,” the app told me. The other was a “Birds of Eden Award”: I earned this because at least two birds chirped per minute, “which must have felt a bit like being at Birds of Eden in South Africa — the largest aviary in the world.” Not really, I thought. But then again, I had never been to South Africa.

It felt great to meditate for the first time only to be told that I was already off to a good start. But I knew — or, at least, I thought I knew — that I had not felt calm during any part of the session. I had to either accept that I did not know myself, in spite of being myself, or insist on my own discomfort to prove the machine wrong. It seemed that what the brain tracker wanted was less for me to know myself better than for me to know myself the way that it knew me.

The second morning of my experiment, I went to see Dr. Kamran Fallahpour, the founder of the Brain Resource Center, which provides patients with maps and other measures of their cognitive activity so that they can, ideally, learn to alter it. Some of Fallahpour’s patients suffer from severe brain trauma, autism, PTSD, or cognitive decline, but many others — athletes, opera singers, attorneys, actors, students (some as young as five years old) — come to him to improve their concentration, reduce stress, and “achieve peak performance” in their respective fields.

Before turning to brain stimulation technologies, Fallahpour worked for many years as a psychotherapist, treating patients with traditional talk therapy. His supervisors thought he was doing a good job, and he saw many of his patients improve. But the results were slow-going. He often got the feeling that he was only “scratching the surface” of their problems. Medication worked more quickly, but it, too, was imprecise. Pills could mask the symptoms of those suffering from a brain injury, but they did little to improve the brain’s long-term health.

Fallahpour started to become interested in how to improve the brain through conditioning, electrical and magnetic stimulation, and visual feedback. He began to work with an international group of neuroscientists, clinicians, and researchers developing a database of the “typical” brain. They interviewed thousands of “normal” patients — what was regarded as normal was determined by tests showing the absence of known psychological disorders — and measured their brainwaves, among other physiological responses, to establish a gigantic repository of the normative brain’s function.

Neuroscience has always had a double aim: to know the brain and to be able to change it. Its method for doing so — “screen and intervene” — is part of the larger trend toward personalized medicine. Advance testing, like genomics, can target patients at risk for diabetes, cancer, and other diseases. With the rise of these increasingly sophisticated diagnostic technologies, individuals can not only be treated for current symptoms but prescribed a course of therapy to prevent future illnesses.

Under the 21st century paradigm of personalized medicine, everyone becomes a potential patient. This is why the Brain Resource Center sees just as many “normal” minds as symptomatic ones. And it’s why commercial EEG headsets are being sold to both epileptics trying to monitor their symptoms and employees hoping to work better, faster.

Brain training like this is seductive because its techniques coincide with the prevailing neoliberal approach to care: health is framed as a product of personal responsibility, while economic and environmental etiologies are ignored. Genetics may hardwire us in certain ways, the logic of neuro-liberalism goes, but hard work and data can make us healthy.

Consider Fallahpour’s boot camp for elementary-school kids. For a few hours each day during school vacations, the small rooms of his low-ceilinged offices are swarmed with well-behaved wealthy children playing games to “improve brain health,” “unlock better function,” and acquire a “competitive advantage.” “We tune their brain to become faster and more efficient,” he explained. “The analogy is they can have Windows 3.1 or upgrade it to 10.” Before I had time to contemplate the frightening implications of this vision, the phone began to ring. Fallahpour exchanged pleasantries for a few minutes, asking about the caller’s weekend. No, he told them, he did not take insurance.

The more I thought about the kind of cognitive enhancement Fallahpour promised, the more trouble I had remembering the last time I felt clear-eyed and focused. Had I ever been? Would I ever be? For a few days I was in a fog. I sensed a dull blankness behind my eyes. I wondered if it was a head cold, or sleep deprivation, or a newfound gluten allergy. On a good day, I convinced myself, there was no way I was operating above 60 percent, maybe 65. Sixty percent of what, I wasn’t sure. But I knew I could do better.

I started to resent those who had achieved mythical “peak performance,” and redoubled my commitment to self-improvement. The headset continued to flatter. “Whatever you’re experiencing right now is perfect,” my meditation assistant whispered in my ear, after another tedious sitting.

Still, I couldn’t help comparing each session’s score to the last’s. Was I hearing fewer birds? Was it easier to focus with or without caffeine? As suspicious as I was about the underlying accuracy of the headset’s metrics, I still wanted to beat my previous score. The more elusive peak performance seemed, the more I came to realize that it was an essentially nostalgic feeling. It preyed on the fear that the younger, sharper, more clear-eyed version of yourself once existed and had now disappeared. And it relied on the hope that someday, with practice, such peak selfhood could be rediscovered.

When I went to see Dr. Fallahpour for a follow-up visit, we decided I should try to take a snapshot of my brain. I tried a calm protocol first, to test my brain’s ability to relax, followed by a setting that rewarded my brain for its ability to focus. While he gelled the electrodes and placed them on my scalp, I asked him about some of the skepticism surrounding EEG headsets — namely, the fact that many people, myself included, found it difficult to tell what exactly was being measured.

“EEG is a crude tool and it isn’t the best we have, but it’s the most convenient in many ways,” he explained. “It’s prone to a lot of ‘garbage in and out.’” But when done correctly, he added, it could be “useful and quite powerful.” For one of the protocols we tried, I was asked to modulate my mind’s frequencies in order to trigger classical music to play, even if I did not quite know what those patterns meant or how to generate them.

It soon became clear that deciphering signals from the noise required the trained judgment of an expert like Fallahpour. In this sense, the EEG’s biofeedback wasn’t as seamless as, say, going to the gym with your Fitbit. You still needed someone to help you help yourself.

The next day at dinner, I mentioned these experiments to a friend, who recommended that I watch “Online Shopping Center.” In the performance, the conceptual programmer Sam Lavigne trains a homemade EEG device to identify whether his brain is thinking about online shopping or his own mortality. He sleeps at night with the headset on, hooked up to a computer that either fills carts on Amazon or provides the notification, “You are thinking about your own death.” Having a brain that was either “shopping-like” or “death-like” was not so different, it seemed, from the Muse telling me whether I was calm or active, focused or restless. In both cases, the binaries were reductive, the exercise absurd.

By the end of my week with the Muse, my results were as inscrutable as they had been at the start. Thousands of birds had chirped in my ear. An infinity of waves had crashed upon an endless shore. I had earned quite a few more badges, some by the sheer virtue of persisting: adjusting the signal, continuing the exercise day after day, not quitting in the face of a great and useless mystery.

The more I parsed my graphs and charts, though, the more obscure they seemed. As anyone who has taken more than a passing glance at the mind already knows, our tools aren’t good enough. At least not yet. And the inadequate and embarrassing analogies we use to describe our brains do little to help us see ourselves. In the course of the week, mine had been compared to a loom, a digital machine, an obsolete Windows operating system.

What had I been expecting? That a toy would illuminate the fog? Average EEG devices like the Muse have been shown to have trouble distinguishing between the signals of a relaxed brainwave, stray thought, skin pulse, and furrowed brow. And several studies have disproven the efficacy of related “brain training” games, which don’t augment intelligence so much as make people better at playing their specific games. The Muse helped me score calm points and charm songbirds, but how all this was connected to unlocking inner bliss remained unclear.

I had learned very little about myself. This in itself wasn’t surprising. But if my EEG adventure taught me anything, it was a contradictory lesson ripped straight out of Silicon Valley’s playbook: Know thyself, and know your data knows better.

This piece originally appeared in Logic, a new magazine about technology published in San Francisco. Read their manifesto, subscribe, and check out their new book, Tech Against Trump, at Their upcoming issue is on Scale. Contact with pitches.

20 Jun 19:18

Samsung to unveil the Samsung Galaxy Note 8 on August 26, says report

by Dean Daley
Samsung Galaxy Note 7 in water

The rumour mill keeps spinning with information about the Samsung Galaxy Note 8.

Today, a date for the device’s unveiling has been reported by the Korea Herald and the Korean publication Naver. While the specifics reported by the two Korean publications differ slightly, both state that the next Samsung flagship device will arrive near the end of August.

Specifically, Naver indicates that Samsung will reveal the Note 8 on August 26th at an Unpacked event in New York, noting that the date may vary by a day or two depending on the location in which the Korean company decides to hold the event. Naver attributes its leak to an unnamed Samsung Electronics official.

The Korea Herald also notes that Samsung plans to unveil the Note 8 in New York, but its date isn’t as specific: the third or fourth week of August. Past leaks suggested that the Korean flagship was to launch in the first week of September at IFA, but the publication notes that Samsung’s desire for a competitive edge over Apple has resulted in a schedule change.

It’s also possible that Samsung changed the date so that it can be ahead of the launch for the LG V30, as recent reports say that LG will unveil the V30 in August.

Rumours suggest that the Galaxy Note 8 will feature a 6.3-inch Infinity Display, an upgraded S Pen and a dual rear camera setup, but will not include an embedded fingerprint scanner.

Source: Korea Herald, Naver


The post Samsung to unveil the Samsung Galaxy Note 8 on August 26, says report appeared first on MobileSyrup.

20 Jun 19:18

#RidgeMeadows #RCMP responding to a report of someone chasing a black bear with a remote control car #MapleRidgeBC

by ScanBC
mkalus shared this story from scanbc on Twitter.

#RidgeMeadows #RCMP responding to a report of someone chasing a black bear with a remote control car #MapleRidgeBC

Posted by ScanBC on Tuesday, June 20th, 2017 4:31am

434 likes, 342 retweets
20 Jun 19:17

Firefox 55 Beta 4 Testday, June 23rd

by Camelia Badau

Hello Mozillians,

We are happy to let you know that Friday, June 23rd, we are organizing Firefox 55 Beta 4 Testday. We’ll be focusing our testing on the following new features: Screenshots and Simplify Page.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

20 Jun 19:17

Amazon’s New Customer | Ben Thompson

Amazon’s New Customer | Ben Thompson:

Thompson is one of the smartest people thinking deeply about the new economics of our era. He disassembles the Whole Foods acquisition by Amazon with cutting logic: Amazon wins in its many markets by employing a consistent approach, which is to construct a high fixed cost services capability that allows for high scalability. For example, AWS was motivated by the need to support Amazon’s book (and later everything else) e-commerce business, which allowed the company to offer the various modules of that service to others. Voilà, a $6B/year business on top of the high efficiency of the e-commerce operation, which itself is a service for others. 40% of products sold on Amazon are offered by other vendors.

He parses the Whole Foods acquisition as following the same pattern: it is building Amazon Grocery Services, and needs Whole Foods (or an alternative) as the first-and-best customer:


This is the key to understanding the purchase of Whole Foods: to the outside it may seem that Amazon is buying a retailer. The truth, though, is that Amazon is buying a customer — the first-and-best customer that will instantly bring its grocery efforts to scale.

Today, all of the logistics that go into a Whole Foods store are for the purpose of stocking physical shelves: the entire operation is integrated. What I expect Amazon to do over the next few years is transform the Whole Foods supply chain into a service architecture based on primitives: meat, fruit, vegetables, baked goods, non-perishables (Whole Foods’ outsized reliance on store brands is something that I’m sure was very attractive to Amazon). What will make this massive investment worth it, though, is that there will be a guaranteed customer: Whole Foods Markets.

In the long run, physical grocery stores will be only one of the Amazon Grocery Services’ customers: obviously a home delivery service will be another, and it will be far more efficient than a company like Instacart trying to layer on top of Whole Foods’ current integrated model.

I suspect Amazon’s ambitions stretch further, though: Amazon Grocery Services will be well-placed to start supplying restaurants too, gaining Amazon access to another big cut of economic activity. It is the AWS model, which is to say it is the Amazon model, but like AWS, the key to profitability is having a first-and-best customer able to utilize the massive investment necessary to build the service out in the first place.

Go read the whole thing.

20 Jun 19:16

"It’s their own fault. They should have never given us uniforms if they didn’t want us to..."

“It’s their own fault. They should have never given us uniforms if they didn’t want us to be an army.”

- Margaret Atwood, The Handmaid’s Tale
20 Jun 19:16

Tech Leaders Met With Trump, And The Looks On Their Faces Said It All

20 Jun 19:16

Lowering the GWAS threshold would save millions of dollars

A recent publication (pay-walled) by Boyle et al. introducing the concept of an omnigenic model has generated much discussion. It reminded me of a question I’ve had for a while about the way genetics data is analyzed. Before getting into this, I’ll briefly summarize the general issue.

With the completion of the human genome project, human geneticists saw much promise in the possibility of scanning the entire genome for the genes associated with a trait. Inherited diseases were of particular interest. The general idea is to genotype individuals with the disease of interest and a group to serve as controls, then test each genotype for association with disease outcome using, for example, a chi-squared test. These are referred to as genome-wide association studies (GWAS).
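The per-variant test described above is easy to make concrete. Here is a sketch of the chi-squared test and odds ratio for a single hypothetical variant (this example is mine, not the post's, and the counts are invented for illustration; it is in Python rather than the R used in the post's simulations):

```python
import math

# Hypothetical 2x2 table of allele-carrier counts for one variant
# (rows: cases, controls; columns: carrier, non-carrier).
cases = [300, 700]
controls = [240, 760]

# Odds ratio for a 2x2 table: (a*d) / (b*c).
odds_ratio = (cases[0] * controls[1]) / (cases[1] * controls[0])

# Pearson chi-squared statistic (no continuity correction):
# sum over cells of (observed - expected)^2 / expected.
n = sum(cases) + sum(controls)
row_totals = [sum(cases), sum(controls)]
col_totals = [cases[0] + controls[0], cases[1] + controls[1]]
observed = [cases[0], cases[1], controls[0], controls[1]]
expected = [row_totals[r] * col_totals[c] / n
            for r in range(2) for c in range(2)]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# With 1 degree of freedom, the chi-squared survival function
# reduces to the complementary error function.
p_value = math.erfc(math.sqrt(chi2 / 2))

print(f"OR = {odds_ratio:.2f}, chi-squared = {chi2:.2f}, p = {p_value:.4f}")
```

A GWAS repeats this test once per genotyped variant, which is what creates the multiple comparison problem discussed below.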

Since then, thousands of GWAS have been performed, and they have mostly failed to identify variants that explain observed variation: odds ratios for binary disease outcomes tend to be lower than 1.5. In contrast, odds ratios for eye color are above 20. This has generated much debate and speculation about why. Boyle et al. show data pointing to the possibility that most inherited traits are so complex that they may involve cumulative effects of a very large proportion of, or even most, variants, each with a very small effect. If this is the case, it has several implications for how we analyze and interpret GWAS data.

Because current technologies permit millions of variants to be tested, current GWAS apply multiple comparison corrections after obtaining a p-value for each variant. The Bonferroni correction for an error rate of 0.05 is the standard, which implies that a p-value on the order of \(10^{-8}\) is needed to reach genome-wide significance. Because the effect sizes are so small, sample sizes in the thousands are needed to obtain genome-wide significance. As a result, GWAS tend to be expensive relative to other research projects. Furthermore, the better the high-throughput technologies get, the more tests are run; as a result, smaller p-values are required and larger sample sizes are needed. In some cases, after a first GWAS does not yield any variant reaching significance, more samples are collected to increase power.

This brings me to my question. If we know a trait is inherited, then the current hypothesis testing approach may not be appropriate. In particular, controlling the family-wise error rate at 0.05 is too conservative. Why not require a false discovery rate of 0.05, 0.10 or even 0.25? For a typical GWAS, how much would conclusions change if we halved the sample sizes and relaxed the significance threshold?
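To see how much the choice of threshold matters, here is a small sketch (my illustration, not the post's analysis; the mixture of null and signal p-values is invented) comparing the Bonferroni cutoff with a Benjamini-Hochberg false discovery rate procedure:

```python
import random

random.seed(0)
m = 1_000_000   # number of variants tested

# Bonferroni: controlling family-wise error at 0.05 across a million
# tests means each p-value must clear 0.05 / m = 5e-8.
bonferroni = 0.05 / m

# Simulated p-values: mostly null (uniform on [0, 1]), plus 50
# hypothetical variants with small effects (p-values near zero).
p = [random.random() for _ in range(m - 50)]
p += [random.uniform(0, 1e-6) for _ in range(50)]

# Benjamini-Hochberg at FDR q = 0.10: sort the p-values and find the
# largest k with p_(k) <= (k / m) * q; reject the k smallest.
q = 0.10
p_sorted = sorted(p)
n_bh = 0
for k, pk in enumerate(p_sorted, start=1):
    if pk <= k / m * q:
        n_bh = k

n_bonf = sum(pv <= bonferroni for pv in p)
print(f"Bonferroni discoveries: {n_bonf}; BH (FDR 10%) discoveries: {n_bh}")
```

Under this toy mixture, the Bonferroni cutoff recovers only the handful of signals that happen to fall below 5e-8, while the FDR procedure recovers essentially all of them at the cost of a small, controlled fraction of false positives.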

Lacking access to GWAS data (genotypes are hard to get), to illustrate my point I ran a simulation with several genomic regions having a small effect. I simulated data for 5,000 cases and 5,000 controls. A total of 468 genotypes were tested.

spike <- function(x, A=0.02, mu=0, theta=.015, phi=2000)
  A*log((1 - theta)^(abs(x-mu)/theta) * phi + 1)/log(phi)
chr <- rep( 1:9, round(seq(87, 17, len = 9)))
chr_start <- c(which(!duplicated(chr)), length(chr))
chr_start <- (chr_start[-length(chr_start)] + chr_start[-1])/2
P <- length(chr) ## number of features
loc <- 1:P
centers <- c(0.05,0.25,0.55,.8,0.94)*P
As <- c(0.1,0.1,0.125,0.1,0.05)
tmp <- sapply(seq_along(centers), function(i) spike(loc, mu = centers[i], A=As[i]))
effect <- 1 + rowSums(tmp)
baseline_p <- rmutil::rbetabinom(P, 100, 0.25, 100) / 100
N <- 5000
disease <- sapply(effect*baseline_p, function(p) rbinom(N, 1, p = p))
control <- sapply(baseline_p, function(p) rbinom(N, 1, p = p))
X <- rbind(control, disease)
if(any(colMeans(X) ==0 | colMeans(X)==1)) stop("One column with MAF=0")
y <- rep(c(0,1), each=nrow(X)/2)

We can obtain odds ratios and p-values and create a Manhattan plot by simply using this code:

res <- apply(X, 2, function(x){
  tab <- table(x, y)
  c(tab[1,1]*tab[2,2] / (tab[1,2]*tab[2,1]), chisq.test(tab)$p.value)
})
plot(loc, -log10(res[2,]), pch=16, xaxt="n", xlab="Chromosome", ylab="-log (base 10) p-value", ylim=c(0, -log10(0.05/P)))
axis(1, chr_start, seq_along(chr_start), tick=FALSE)
abline(h=-log10(0.05/P), lty=2)

Note that no variant achieves genome-wide significance (dashed line). However, we do see peaks in the plot. If we smooth the odds ratio data, these peaks become even clearer:

logodds <- log(res[1,])
fit <- predict(loess(logodds~loc, span = 0.1), se=TRUE)
mat <- fit$fit + cbind(-2*fit$se.fit, 0, 2*fit$se.fit)
matplot(loc, mat, type="l", col=c("grey","black","grey"), lty=1, ylim=max(mat)*c(-1,1), ylab="Log odds", xlab="Chromosome", xaxt="n")
axis(1, chr_start, seq_along(chr_start), tick=FALSE)
abline(h=0, lty=2)

However, in the GWAS world, genome-wide significance at a 0.05 is required. So after obtaining the results above, the next step would be to obtain more funding and double the sample size.

disease2 <- sapply(effect*baseline_p, function(p) rbinom(N, 1, p = p))
control2 <- sapply(baseline_p, function(p) rbinom(N, 1, p = p))
X <- rbind(control, control2, disease, disease2)
if(any(colMeans(X) ==0 | colMeans(X)==1)) stop("One column with MAF=0")
y <- rep(c(0,1), each=nrow(X)/2)

With this larger sample size, we now find four regions achieving genome-wide significance:

res2 <- apply(X, 2, function(x){
  tab <- table(x, y)
  c(tab[1,1]*tab[2,2] / (tab[1,2]*tab[2,1]), chisq.test(tab)$p.value)
})
plot(loc, -log10(res2[2,]), pch=16, xaxt="n", xlab="Chromosome", ylab="-log (base 10) p-value")
axis(1, chr_start, seq_along(chr_start), tick=FALSE)
abline(h=-log10(0.05/P), lty=2)

We find four out of the five regions simulated to have effects with no false positives. Here is a plot of the simulated effects:

plot(loc, effect, col=chr, pch=16, xaxt="n", xlab="Chromosome", ylab="Effect")
axis(1, chr_start, seq_along(chr_start), tick=FALSE)

However, note that the shape of the Manhattan plot did not change much with the new data (see figure below). Increasing the sample size mostly moved points up. But we could have identified those same regions with a relatively low false positive rate by, for example, looking at the top 10 variants, or simply lowering the threshold. In fact, statisticians have been advocating for other formal statistical approaches.

In the following plot we compare the results using two different sample sizes. We denote the regions simulated to have the strongest effects with black points while the rest are grey.

plot(loc, -log10(res2[2,]), col=ifelse(effect>1.01, "black", "grey"), pch=16, xaxt="n",
     xlab="Chromosome", ylab="-log (base 10) p-value", main="Double the sample size")
axis(1, chr_start, seq_along(chr_start), tick=FALSE)
abline(h=-log10(0.05/P), lty=2)
o <- order(res[2,])[1:25] ##top 25 variants
plot(loc, -log10(res[2,]), col=ifelse(effect>1.01, "black", "grey"), pch=16, ylim=c(0, -log10(0.05/P)), xaxt="n", xlab="Chromosome", ylab="-log (base 10) p-value", main="Lower the threshold")
axis(1, chr_start, seq_along(chr_start), tick=FALSE)
abline(h = -log10(max(res[2,o])), lty=2)

If Boyle et al. are correct, then hypothesis testing is definitely not an appropriate statistical approach for GWAS: we know the null hypothesis is false for most variants. Going forward, to make sense of GWAS data, we should reinvest the millions of dollars currently spent satisfying the Bonferroni correction into measuring other endpoints and developing new statistical approaches that can help us understand the molecular cause of disease and improve treatment outcomes.

20 Jun 16:04

Firefox Focus New to Android, blocks annoying ads and protects your privacy

by Barbara Bermes

Last year, we introduced Firefox Focus, a new browser for the iPhone and iPad, designed to be fast, simple and always private. A lot has happened since November, and more than ever before, we’re seeing consumers play an active role in trying to protect their personal data and save valuable megabytes on their data plans.

While we knew that Focus provided a useful service for those times when you want to keep your web browsing to yourself, we were floored by your response – it’s the highest-rated browser from a trusted brand for the iPhone and iPad, earning a 4.6 average rating on the App Store.

Today, I’m thrilled to announce that we’re launching our Firefox Focus mobile app for Android.

Like the iPhone and iPad version, the Android app is free of tabs and other visual clutter, and erasing your sessions is as easy as a simple tap. Firefox Focus allows you to browse the web without being followed by tracking ads, which are notorious for slowing down your mobile experience. Why do we block these ad trackers? Because they not only track your behavior without your knowledge, they also slow down the web on your mobile device.

Check out this video to learn more:


New Features for Android

For the Android release of Firefox Focus, we added the following features:

  • Ad tracker counter – For the curious, there’s a counter to list the number of ads that are blocked per site while using the app.
  • Disable tracker blocker – For sites that are not loading correctly, you can disable the tracker blocker to quickly take care of it and get back to where you left off.
  • Notification reminder – When Focus is running in the background, we’ll remind you through a notification and you can easily tap to erase your browsing history.

For Android users we also made Focus a great default browser experience. Since we support both custom tabs and the ability to disable the ad blocking as needed, it works great with apps like Facebook when you just want to read an article without being tracked. We built Focus to empower you on the mobile web, and we will continue to introduce new features that make our products even better. Thanks for using Firefox Focus for a faster and more private mobile browsing experience.


Firefox Focus Settings View

You can download Firefox Focus on Google Play and in the App Store.

The post Firefox Focus New to Android, blocks annoying ads and protects your privacy appeared first on The Mozilla Blog.

20 Jun 16:01

Traffic trouble: Growing concern about Vancouver's Prior-Venables replacement options

mkalus shared this story from CTV News - Vancouver:
How about a fourth option: work with TransLink to create a feeder bus route from a central parking location? Yeah, no, not going to happen. That would be un-BC.

As the city moves closer to removing Vancouver's viaducts, the focus is turning to which side roads will become main arteries for traffic.

Getting in and out of the city is difficult to begin with, but without the Georgia and Dunsmuir viaducts, and with big changes coming to False Creek Flats, drivers are going to have to find another route.

The requirements: The feeder route must run east-west, and connect with the downtown core. It also needs to be able to support rush-hour traffic.

"There is no easy option," said City of Vancouver director of transportation Lon LaClaire. "All the options require trade-offs."

The city is working on a major project that will see the replacement of Prior and Venables as a main east-west route.

With the city looking at three different options for what will become a busy thoroughfare carrying thousands of vehicles a day, people who live and work near the three proposed routes are voicing concerns about what the changes will mean for the neighbourhood.

City of Vancouver False Creek plan


One is Malkin Avenue, a road that hugs the south side of Strathcona Park, and is also known as "Produce Row." The avenue hosts several produce distribution warehouses, and those who work on the Row worry a traffic takeover would stall trucks trying to head in and out of the area for deliveries.

"If we put in another 15,000 or 20,000 cars a day down Malkin it would make most of our businesses impossible to run," said Discovery Organics manager Damien Bryan.

A city report estimates the project would cost between $80 million and $130 million, with the bulk of the budget going to land acquisition and overpass structure.

The route would also run alongside community gardens. David Tracey has gardened in the area for years.

“These kind of places you can’t put a dollar price on,” he explained. “These kind of places you can’t strike off on the map when they’re inconvenient according to how the economy and traffic has to go. I hope the city takes a look at it from the wider point of view, and takes an ecological look at the big picture of these things.”



Another option just one block south of Malkin is National Avenue, but repurposing the roadway for high volumes of vehicle traffic would require a Vancouver fire training facility to move. The National Avenue option is the most expensive, with a price tag of up to $230 million. Overpass structure alone could cost up to $90 million, and acquiring the land is expected to cost between $75 million and $105 million.

National is also close to Terminal Avenue, which LaClaire said "doesn't create a very good arterial spacing."



The third option is William Street – an alternative that was initially ruled out but brought back into consideration. William runs from Strathcona Park to Boundary Road and into Burnaby, but in many places the road is split by crossroads, continuing on to the north or south in a step-like pattern rather than straight across the city.

The street would first have to be realigned. Officials would then have to route it either through the park or around it, making this a complex and controversial option.

"The big concern we have is it cuts through an existing park, and park space is irreplaceable once it's gone," said Dan Jackson of the Strathcona Residents' Association.

Strathcona Park falls under the jurisdiction of the Vancouver Park Board, so city staff have requested the board consider the option.

The city is asking for feedback on all three routes. There is no clear timeline on when a decision will be made.

An animated video from the city shows the False Creek Flats traffic plan west of Main Street.

20 Jun 15:58

Netflix rolls out children’s interactive storytelling content to iOS devices and smart TVs

by Dean Daley
Netflix on iPhone

Netflix has launched a new interactive children’s show, “Puss in Book: Trapped in an Epic Tale,” in which users are able to choose the paths that the characters take.

The programming is available today, June 20th, on iOS and smart TV platforms globally — support on the Netflix website, Android devices, Chromecast and Apple TV is yet to come.

This form of interactive storytelling will allow viewers to watch the same show over and over again, with a variety of endings. Netflix’s use of the form makes choose-your-own-adventure storytelling — usually a hallmark of video games — more accessible to kids.

Following Puss in Book, Netflix is also planning to release “Buddy Thunderstruck: The Maybe Pile” in July and “Stretch Armstrong: Breakout” in 2018.

This isn’t the first time interactive programming has been on Netflix; senior vice president of original series Cindy Holland noted in March that the Netflix Original cartoon Kong, which debuted in April 2016, also followed the choose-your-own-adventure format.

At the time, Holland stated that the feature is not something they are experimenting with for adult dramas.

While we shouldn’t expect shows like Marvel’s Defenders to feature interactive storytelling anytime soon, for now, the feature will allow children to have even more fun doing what kids do best: watching the same programs, over and over again.

The post Netflix rolls out children’s interactive storytelling content to iOS devices and smart TVs appeared first on MobileSyrup.

20 Jun 15:57

DNA Replication Has Been Filmed For The First Time, and it's not what we expected

mkalus shared this story from ScienceAlert - Latest.

Here's proof of how far we've come in science - in a world-first, researchers have recorded up-close footage of a single DNA molecule replicating itself, and it's raising questions about how we assumed the process played out.

The real-time footage has revealed that this fundamental part of life incorporates an unexpected amount of 'randomness', and it could force a major rethink into how genetic replication occurs without mutations.

"It's a real paradigm shift, and undermines a great deal of what's in the textbooks," says one of the team, Stephen Kowalczykowski from the University of California, Davis.

"It's a different way of thinking about replication that raises new questions."

The DNA double helix consists of two intertwining strands of genetic material made up of four different bases - guanine, thymine, cytosine, and adenine (G, T, C and A).

Replication occurs when an enzyme called helicase unwinds and unzips the double helix into two single strands.

A second enzyme called primase attaches a 'primer' to each of these unravelled strands, and a third enzyme called DNA polymerase attaches at this primer, and adds additional bases to form a whole new double helix.

You can watch that process in the new footage below:

The fact that double helices are formed from two strands running in opposite directions means that one of these strands is known as the 'leading strand', which winds around first, and the other is the 'lagging strand', which follows the leader.

The new genetic material that's attached to each one during the replication process is an exact match to what was on its original partner.

So as the leading strand detaches, the enzymes add bases that are identical to those on the original lagging strand, and as the lagging strand detaches, we get material that's identical to the original leading strand.

Scientists have long assumed that the DNA polymerases on the leading and lagging strands somehow coordinate with each other throughout the replication process, so that one does not get ahead of the other during the unravelling process and cause mutations.

But this new footage reveals that there's no coordination at play here at all - somehow, each strand acts independently of the other, and still results in a perfect match each time.

The team extracted single DNA molecules from E. coli bacteria, and observed them on a glass slide. They then applied a dye that would stick to a completed double helix, but not a single strand, which means they could follow the progress of one double helix as it formed two new double helices.

While bacterial DNA and human DNA are different, they both use the same replication process, so the footage can reveal a lot about what goes on in our own bodies.

The team found that on average, the speed at which the two strands replicated was about equal, but throughout the process, there were surprising stops and starts as they acted like two separate entities on their own timelines.

Sometimes the lagging strand stopped synthesising, but the leading strand continued to grow. Other times, one strand could start replicating at 10 times its regular speed - and for seemingly no reason.

"We've shown that there is no coordination between the strands. They are completely autonomous," Kowalczykowski says.

The researchers also found that because of this lack of coordination, the DNA double helix has had to incorporate a 'dead man's switch', which would kick in and stop the helicase from unzipping any further so that the polymerase can catch up.

The question now is that if these two strands "function independently" as this footage suggests, how does the unravelling double helix know how to keep things on track and minimise mutations by hitting the brakes or speeding up at the right time?

Hopefully that's something more real-time footage like this can help scientists figure out. And it's also an important reminder that while we humans love to assume that nature has a 'plan' or a system, in reality, it's often a whole lot messier.

The research has been published in Cell.

20 Jun 15:57

Innovation Starts At Home…?


Tony Hirst, OUInfo, Jun 21, 2017


This commentary from Tony Hirst is true not only of the OU but also of every large organization - public sector and private sector - I have ever encountered. With size comes control. Here's Tony Hirst: "The OU was innovative because folk understood technologies of all sorts and made creative use of them. Many of our courses included emerging technologies that were examples of the technologies being taught in the courses. We ate the dogfood we were telling students about. Now we’ve put the dog down and just show students cat pictures given to us by consultants."

[Link] [Comment]
20 Jun 15:57

Whither Moodle?


Phil Hill, e-Literate, Jun 23, 2017


"The trajectory of Moodle new implementations (higher education degree-granting institutions moving from another LMS to Moodle as the primary LMS) is striking," writes Phil Hill. What's striking, of course, is the downward momentum, trending toward zero. "In 2012 and 2014 an astounding 76% of new implementations were movements towards Moodle. But we might be seeing a change. In 2016 the number was down to a still-healthy 49%, but for the first quarter of 2017 it is only 3%."

[Link] [Comment]
20 Jun 15:57

Meet Matter Seven


Ben Werdmuller, Medium, Jun 23, 2017


Ben Werdmuller: "Unlike most journalism, these stories are two-way: you can reply to the journalist and have a conversation. And unlike most conversational platforms, you’re always talking to a real person, not a bot. The result is strong audience trust and a loyal audience in a world where media companies are struggling to find either."

[Link] [Comment]
20 Jun 15:56

Facebook and Twitter are being used to manipulate public opinion – report


Alex Hern, The Guardian, Jun 23, 2017


From where I sit, this is a case of the latest centralized news media being used for the same purpose centralized news media have always been used: to, um, educate the public. The reports "cover nine nations including Brazil, Canada, China, Germany, Poland, Ukraine, and the United States. They found 'the lies, the junk, the misinformation' of traditional propaganda is widespread online and 'supported by Facebook or Twitter’s algorithms,'" according to Philip Howard, Professor of Internet Studies at Oxford.

[Link] [Comment]
20 Jun 15:56

Attention Trendoids — Latest News HERE

by Ken Ohrn

Now that hot yoga has receded into background noise, if it hasn’t disappeared completely, here’s a viable replacement: beer yoga. Perhaps also a way to practice looking reverent while upending a bottle.

20 Jun 15:56

On the Rise of Kotlin

by Joe Kutner

It’s rare when a highly structured language with fairly strict syntax sparks emotions of joy and delight. But Kotlin, which is statically typed and compiled like other less friendly languages, delivers a developer experience that thousands of mobile and web programmers are falling in love with.

The designers of Kotlin, who have years of experience with developer tooling (IntelliJ and other IDEs), created a language with very specific developer-oriented requirements. They wanted a modern syntax, fast compile times, and advanced concurrency constructs while taking advantage of the robust performance and reliability of the JVM. The result, Kotlin 1.0, was released in February 2016 and its trajectory since then has been remarkable. Google recently announced official support for Kotlin on Android, and many server-side technologies have introduced Kotlin as a feature.

The Spring community announced support for Kotlin in Spring Framework 5.0 last month, and the Vert.x web server has worked with Kotlin for over a year. Kotlin integrates with most existing web applications and frameworks out of the box because it's fully interoperable with Java, making it easy to use your favorite libraries and tools.

But ultimately, Kotlin is winning developers over because it’s a great language. Let’s take a look at why it makes us so happy.

A Quick Look at Kotlin

The first thing you’ll notice about Kotlin is how streamlined it is compared to Java. Its syntax borrows from languages like Groovy and Scala, which reduce boilerplate by making semicolons optional as statement terminators, simplifying for loops, and adding support for string templating among other things. A simple example in Kotlin is adding two numbers inside of a string like this:

val sum: String = "sum of $a and $b is ${a + b}"

The val keyword is a feature borrowed from Scala. It defines an immutable variable, which in this case is explicitly typed as a String. But Kotlin can also infer that type. For example, you could write:

val x = 5

In this case, the type Int is inferred by the compiler. That’s not to say the type is dynamic though. Kotlin is statically typed, but it uses type inference to reduce boilerplate.

Like many of the JVM languages it borrows from, Kotlin makes it easier to use functions and lambdas. For example, you can filter a list by passing it an anonymous function as a predicate:

val positives = list.filter { it > 0 }

The it variable in the function body references the first argument to the function by convention. This is borrowed from Groovy, and eliminates the boilerplate of defining parameters.

You can also define named functions with the fun keyword. The following example creates a function with default arguments, another great Kotlin feature that cleans up your code:

fun printName(name: String = "John Doe") {
  println(name)
}

But Kotlin does more than borrow from other languages. It introduces new capabilities that other JVM languages lack. Most notable are null safety and coroutines.

Null safety means that a Kotlin variable cannot be set to null unless it is explicitly defined as a nullable variable. For example, the following code would generate a compiler error:

val message: String = null

But if you add a ? to the type, it becomes nullable. Thus, the following code is valid to the compiler:

val message: String? = null

Null safety is a small but powerful feature that prevents numerous runtime errors in your applications.

Coroutines, on the other hand, are more than just syntactic sugar. Coroutines are chunks of code that can be suspended to prevent blocking a thread of execution, which greatly simplifies asynchronous programming.

For example, the following program starts 100,000 coroutines using the launch function. The body of the coroutine can be paused at a suspension point so the main thread of execution can perform some other work while it waits:

fun main(args: Array<String>) = runBlocking<Unit> {
  var number = 0
  val random = Random()
  val jobs = List(100_000) {
    launch(CommonPool) {
      delay(10)
      number += random.nextInt(100)
    }
  }
  jobs.forEach { it.join() }
  println("The answer is: $number")
}

The suspension point is the delay call. Otherwise, each coroutine simply adds a random number to a running total, which is printed at the end.

Coroutines are still an experimental feature in Kotlin 1.1, but early adopters can use them in their applications today.

Despite all of these great examples, the most important feature of Kotlin is its ability to integrate seamlessly with Java. You can mix Kotlin code into an application that’s already based on Java, and you can consume Java APIs from Kotlin with ease, which smooths the transition and provides a solid foundation.

Kotlin Sits on the Shoulders of Giants

Behind every successful technology is a strong ecosystem. Without the right tools and community, a new programming language will never achieve the uptake required to become a success. That’s why it’s so important that Kotlin is built into the Java ecosystem rather than outside of it.

Kotlin works seamlessly with Maven and Gradle, which are two of the most reliable and mature build tools in the industry. Unlike other programming languages that attempted to separate from the JVM ecosystem by reinventing dependency management, Kotlin is leveraging the virtues of Java for its tooling. There are attempts to create Kotlin-based build tools, which would be a great addition to the Kotlin ecosystem, but they aren't a prerequisite for being productive with the language.

Kotlin also works seamlessly with popular JVM web frameworks like Spring and Vert.x. You can even create a new Kotlin-based Spring Boot application from the Spring Initializer web app. There has been a huge increase in adoption of Kotlin for apps generated this way.

Kotlin has great IDE support too, thanks to its creators. The best way to learn Kotlin is by pasting some Java code into IntelliJ and allowing the IDE to convert it to Kotlin code for you. All of these pieces come together to make a recipe for success. Kotlin is poised to attract both new and old Java developers because it's built on solid ground.

If you want to see how well Kotlin fits into existing Java tooling, try deploying a sample Kotlin application on Heroku using our Getting Started with Kotlin guide. If you're familiar with Heroku, you'll notice that it looks a lot like deploying any other Java-based application on our platform, which helps make the learning curve for Kotlin relatively flat. But why should you learn Kotlin?

Why Kotlin?

Heroku already supports five JVM languages that cover nearly every programming language paradigm in existence. Do we need another JVM language? Yes. We need Kotlin as an alternative to Java just as we needed Java as an alternative to C twenty years ago. Our existing JVM languages are great, but none of them have demonstrated the potential to become the de facto language of choice for a large percentage of JVM developers.

Kotlin has learned from the JVM languages that preceded it and borrowed the best parts from those ecosystems. The result is a well-rounded, powerful, and production-ready platform for your apps.