Last November I went to a discussion on what should be done about downtown buses, and in particular the impact of the closure of one block of Robson behind the Art Gallery. You can now review TransLink’s proposed bus route alternatives and comment on them. As I very rarely use any of these routes, and I do not consider myself any kind of expert in the field of bus routing in general (I leave that to Jarrett Walker) or downtown Vancouver (that’s Gordon Price), I will refrain from further comment.
As it happens you can also see what is proposed for Robson Street tonight at the Art Gallery.
I am not happy that this blog post is so short, so thanks to the Georgia Straight here is some more information about TransLink’s recent performance:
Last year, Metro Vancouver’s transportation authority received a total of 36,390 complaints, up from 32,617 in 2012.
The number of transit-related complaints went up about 10 percent, to 31,595 in 2013 from 28,408 in the previous year.
Transit service and ridership both decreased in 2013. Service hours were tightened [I think that means reduced] to 6.792 million from 6.927 million. Boarded passengers declined to 355.2 million from 363.2 million.
By the way, of the transit-related complaints, 28,494 pertained to the Coast Mountain Bus Company, 1,526 to HandyDART, 599 to the West Coast Express, 551 to the Expo and Millennium lines, and 425 to the Canada Line.
…”less than half (44 per cent) of TransLink’s customers feel they are getting good-to-excellent value for the money they spend on transit, this also is down slightly from 2012 (48 per cent).”
But in case that is all too negative for you here’s Daryl’s latest take on improved efficiency.
First, let’s dispel the notion that commercial photographers have a camera in their hands every day. This will vary for individuals, and by season, but I would guess that I spend a good 75-80% of my working hours in front of a computer – not out shooting. I consider that to be a pretty successful ratio. No one starting out really thinks about it, but digital workflow, retouching, billing, marketing, pre-production, post-production, accounting, taxes, etc… and the plethora of general business paperwork takes up a ton of time.
Many retailers, for example, seek to save money by understaffing. The result is rushed, overworked and mistake-prone employees and higher turnover, which leads to unhappy, antagonized customers. The counterintuitive solution, Ms. Ton says, is to increase “slack” — meaning to have more employees available than are absolutely required at any given time of day.
QuikTrip, in contrast with most of its competitors, purposely overstaffs stores so that they can accommodate employees with emergencies or who are sick or on vacation. The result is happier employees and better-served customers. Ms. Ton cites one study of a 500-store retailer that found that every additional $1 spent on employee salaries resulted in an increase of anywhere from $4 to $28 in sales.
Sarah Green, looking into “the daily routines of geniuses” from the book Daily Rituals: How Artists Work by Mason Currey. One finding:
A clear dividing line between important work and busywork. Before there was email, there were letters. It amazed (and humbled) me to see the amount of time each person allocated simply to answering letters. Many would divide the day into real work (such as composing or painting in the morning) and busywork (answering letters in the afternoon). Others would turn to the busywork when the real work wasn’t going well. But if the amount of correspondence was similar to today’s, these historical geniuses did have one advantage: the post would arrive at regular intervals, not constantly as email does.
It would be great to go back to such a world, where interruptions arrived but once a day. But I do wonder if there’s a way to simulate that through scheduling and discipline…
I love everything about this post.
Apple QuickTake 200 Digital Camera
Apple QuickTake 200 Digital Camera. Manufactured for Apple by Fujifilm from 1996-1997. The QuickTake camera line was killed by Steve Jobs when he returned to Apple.
Still sort of crazy that Apple made this. Though in hindsight, it seems well ahead of the curve.
In the end, Gmail ended up running on three hundred old Pentium III computers nobody else at Google wanted. That was sufficient for the limited beta rollout the company planned, which involved giving accounts to a thousand outsiders, allowing them to invite a couple of friends apiece, and growing slowly from there.
“I am often asked at what point in my love affair with films I began to want to be a director or a critic. Truthfully, I don’t know. All I know is that I wanted to get closer and closer to films.”
François Truffaut, The Films in My Life
David Undercoffler on Tesla’s plans to debut a cheaper model in early 2015:
The newest model debuting in 2015 will be the third step, as its platform will differ significantly from anything else Tesla has built so far.
The plan for a mainstream model follows a strategy that Tesla Chief Executive Elon Musk laid out in a 2006 blog post.
“The strategy of Tesla is to enter at the high end of the market, where customers are prepared to pay a premium,” Musk wrote in a post titled “The Secret Tesla Motors Master Plan (just between you and me).” “Then drive down market as fast as possible to higher unit volume and lower prices with each successive model.”
That post, written nearly eight years ago, laid out exactly what Tesla aimed to do.
In the world of startups, eight years seems like an eternity. It’s hard enough to think one year out, let alone a decade. Yet in the automotive industry, eight years seems like nothing. Very little used to change in that span.
It’s amazing not only how far Tesla has come in the past eight years, but just how well they’ve been able to execute on that original plan.
I’ve been in the midst of thinking through a web host / server move for vanderwal.net for a while. I started running a personal site in 1995 and have been running it under vanderwal.net since 1997. During this time it has gone through six or seven different hosts. The blog has been on three different hosts, and on the same host since January 2006.
I’ve been wanting better email hosting, SSH access back, and more current updates to the OS; scripting for PHP, Ruby, and Python; MySQL; and other smaller elements. A lot has changed in the last two to three years in web and server hosting.
The current shift is to a fourth generation. The first generation started with simple web page hosting with limited scripting options, but often had some SSH and command line access to run cron jobs. The second generation usually had a few scripting options and a database to run a light CMS or other dynamic pages, but the hosting didn’t give you access to anything below the web directory (problematic when trying to keep your login credentials out of the web directory, run more than one version of a site (dev, production, etc.), or store essential includes that for security are best left out of the web directory). In the second generation we also often lost SSH and command line access, as those coming in lacked the skills to work at the command line and could cripple a server with ease through a minor accident. The third generation has been more robust hosting with a proper web directory setup and access to sub-directories, multiple scripting resources, SSH and command line access back (usually after proven competence), the ability to set up your own databases and subdomains at will, and more. The third generation was often still hosting many sites on one server, and a runaway script or a site getting hammered with traffic impacted the whole server. These hosts also often didn’t have the RAM to run current generations of tools (such as Drupal, which can be a resource hog if not using command line tools like drush, which thankfully made Drupal easier to configure in tight constraints from 2006 forward).

Today’s Options
Today we have a fourth generation of web host that replicates upgraded services like your own private server or virtual private server, but at lighter web hosting prices. I’ve been watching Digital Ocean for a few months, and a couple of months back I figured that for $5 per month it was worth giving it a shot for some experiments and quick modeling of ideas. Digital Ocean starts with 512 MB of RAM, 20GB of SSD space (yes, you read that right, an SSD hard drive), and 1TB of transfer. The setup is essentially a virtual private server, which makes experimentation easier and safer (if you mess up you only kill your own work, not the work of others - and if it is that bad, you can wipe and rebuild quickly). Digital Ocean also lets you set up your server as you wish in about a minute of creation time, with OS, scripting, and database options there for your choosing.
Recently Marco Arment wrote up the lay of the land for hosting options from his perspective, which is a great overview. I’ve also been following Phil Gyford’s change of web hosting, and like Phil I am dealing with a few domains and projects. I began looking at WebFaction and am liking what is there too. WebFaction adds email into the equation and 100GB of storage on RAID 10. Like Digital Ocean it has full shell access and a wide array of tools to select from to add to your server. This likely would be a good replacement for my core web existence here at vanderwal.net and its related services. WebFaction provides some good management interfaces and smooths some of the rough edges.
There are two big considerations in all of this: 1) Email; 2) Server location.

Email
Email is a huge pain point for me. It should be relatively bulletproof (as it was years ago). To get bulletproof email, the options boil down to going to a dedicated mail service like FastMail, a hosted Exchange server, or Google Apps. Having to pester the mail host to kick a server isn’t really acceptable, and that has been a big reason I am considering moving my hosting. Sitting on servers that get their IP addresses in blocks of blacklisted (or potentially blacklisted) email servers makes things really painful as well. I have ruled out Exchange as an option due to cost, the many open scripts I rely on that don’t play well with Exchange, and the price of having someone maintain it.
Google Apps is an option, but I don’t need all the other pieces that Google Apps offers. I am looking at about 10 email addresses, with one massive account in that set, along with 2 or 3 other domains with one or two email accounts each that are left open to catch the stray emails that drift in (often highly important). The cost of Google for this adds up quickly, even with the use of aliases. I think having one of my light-traffic domains on Google Apps would be good for the price, though, and it would keep me access to Google Apps for experimentation (Google Apps always arises in business conversations as a reference).
FastMail pricing is yearly, and I know a lot of people who have been using it for years and rave about it. Having my one heavy-traffic email account there, as well as tucking the smaller accounts with lower traffic there, would be a great setup. Keeping email separate from hosting helps uptime as well. FastMail is also testing calendar hosting with CalDAV, which is really interesting (I ran a CalDAV server for a while and it was really helpful and rather easy to manage, but like all things calendar it comes with goofy headaches, often related to timezones and that bloody daylight saving time, that I prefer others to deal with).
The last option is email bundled with web hosting. This has long been my experience, and it is mostly a good solution, but rarely great. When dealing with many domains and multitudes of accounts, email bundled with web hosting is a decent option. But mail hosting is rarely a deep strength of a web hosting company, and it is often these providers you have to pester to kick the mail server to get your mail flowing again (not only my experience; darned near everybody I know has this problem, and it should never work this way). I am wondering whether the benefit of relatively inexpensive mail hosting bundled into web hosting is worth the pain.
I am likely to split my mail hosting across different solutions (the multiple web hosts and email hosts would still cost less than the relatively low all-in-one web hosting I currently have).

Server Location
I have had web hosting in the US, UK, and now Australia, and at a high level I really don’t care where the servers are located, as the internet is mostly fast and self-healing, so the performance difference from location is negligible for me (working with a live shell session at a point nearly on the opposite side of the globe is rather mind-blowing in how instantaneous this internet is).
My considerations related to where in the globe the servers are hosted come down to local law (or the lack of laws that are enforced). Sites sitting on European hosts require cookie notifications. The pull-down / take-down laws are rather different from country to country. As a person with USA citizenship paperwork hosting elsewhere, the laws that apply, and how they apply, get goofy. The revelations of USA spying on its own people and servers have me not so keen to host in the US again, not that I have ever had anything that comes close to running afoul of laws or could ever be misconstrued as something that should draw attention. I have no idea what the laws are in Australia, which has been a bit of a concern for a while, but the host has also had servers in the US as well.
My options seem to be US, Singapore, UK, Netherlands, and Nordic-based hosting. Nearly all the hosting options for web, applications, and mail provide options for location (the non-US options have grown like wildfire in the post-Edward Snowden era). Location isn’t a deciding point, but it is something I will think through. I chose Australia because the host had great, highly recommended hosting that has lived up to that reputation for that generation of hosting options. It didn’t matter where the server was hosted eight years ago, as the laws and implications were rather flat. Today the laws and implications are far less flat, so it will require some thinking through.
This week on the #openbadgesMOOC, New Currency for Professional Credentials, Sunny Lee walked through Mozilla BadgeKit, the new set of open, foundational tools to support the entire badging process currently available in private beta for select partners developing badges for their communities. You can view Sunny’s slides here.
As many of our community members have already read, seen or listened to many different presentations introducing BadgeKit, we will take this opportunity to address some more frequently asked questions that we didn’t get to in our BadgeKit announcement.
BadgeKit is a set of open, foundational tools to make the badging process easy. It includes tools to support the entire process, including badge design, creation, assessment and issuing, remixable badge templates, milestone badges to support leveling up, and much more. The tools are open source and have common interfaces to make it easy to build additional tools or customizations on top of the standard core, or to plug in other tools or systems.
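For context on what the issuing step ultimately produces: an Open Badges 1.0 hosted assertion is a small JSON document, with the recipient’s email typically salted and hashed so assertions can be published openly. A rough sketch of building one (the URLs, `uid`, salt, and timestamp below are hypothetical, not BadgeKit output):

```python
import hashlib
import json

def hashed_recipient(email, salt):
    # Open Badges 1.0 identity format: "sha256$" + hex(sha256(email + salt))
    digest = hashlib.sha256((email + salt).encode("utf-8")).hexdigest()
    return {"type": "email", "hashed": True, "salt": salt,
            "identity": "sha256$" + digest}

assertion = {
    "uid": "abc123",                                          # issuer-local id
    "recipient": hashed_recipient("learner@example.org", "s3cr3t"),
    "badge": "https://example.org/badges/html-basics.json",   # badge class URL
    "verify": {"type": "hosted",
               "url": "https://example.org/assertions/abc123.json"},
    "issuedOn": 1396310400,                                   # Unix timestamp
}
print(json.dumps(assertion, indent=2))
```

Hashing the recipient is what lets an issuer host assertions at public URLs without exposing learners’ email addresses.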
BadgeKit builds on existing technologies that have evolved out of several years of work and user testing, including the Chicago Summer of Learning. In fact, specific tools within BadgeKit are currently being used by key partners within the badges ecosystem (e.g., Connected Educators).
BadgeKit is open source, so improvements made by community members benefit everyone, from bug fixes to new features and more. It is also easily extendable, working seamlessly with other open tools and systems as they emerge.
While open badges technology has been gaining momentum - with more than 2,000 organizations issuing badges that align with the Open Badges system - there are still ways we can make it easier for organizations to join the ecosystem.
Today, there are too many gaps in the badging experience and many of the existing options are too closed, too expensive or too big. In fact, given the current options for organizations interested in issuing badges, it can be harder to make an open badge than a closed badge!
BadgeKit provides modular and open options (standards) for the community of badge makers to use and build upon within their existing sites or systems. Currently, BadgeKit supports key points in the badging experience, including:
Throughout 2014, we will be adding additional tools to BadgeKit, including:
Mozilla BadgeKit is now available in private beta for select partner organizations that meet specific technical requirements. And anyone can download the code from GitHub and implement it on their own servers.
DOWNLOAD: Easily download the code from https://github.com/mozilla/openbadges-badgekit and install the tools on your own server.
For the private beta version of BadgeKit, all the backend pieces are hosted, supported and updated by Mozilla, while you still have complete control over the experience of your end users on your own sites through our APIs.
And BadgeKit code is currently available on GitHub, with additional features set to be added in the coming months. To download the tools, visit GitHub: https://github.com/mozilla/openbadges-badgekit.
Mozilla BadgeKit is available in private beta for select partner organizations that meet specific technical requirements.
To apply for private beta access, visit www.badgekit.org. If your organization meets the specific hosted version requirements, you will receive a follow-up email with full details on how to get started.
The technical requirements necessary for private beta access to the hosted version of BadgeKit are:
Right now we’re making that decision based on each organization’s technical resources and capacity. But by the end of 2014, the hosted version will be available to any organization looking to implement a badging system!
BadgeKit is currently in private beta and can be used by any issuing organization that meets specific technical requirements. It is aimed at organizations that are building full badge systems and want to leverage their own sites and systems on the front end, as well as have access to technology resources. Tool providers might also be interested in leveraging BadgeKit to extend their own tools, or build additional customizations on top of BadgeKit.
Not yet. BadgeKit is currently in private beta and meant for organizations that have access to technology resources and are looking to implement a full badge system. We are exploring ways to create a lighter-weight version of BadgeKit that could be used by individuals, and hope to have it ready later this year. In the meantime, check out the additional community-driven issuing platforms at http://bit.ly/platform-chart to help you get started.
Not yet - but we’re working on it! You can track progress on GitHub here: https://github.com/mozilla/openbadges-badgekit/issues/205.
We have a variety of ways we can help. You can simply select the option that best meets your needs:
Practically everyone knows that LED and fluorescent light bulbs are better than inefficient incandescents, but many consumers just can’t resist the familiar shape and soft, warm glow. Philips plans on changing all that with an LED bulb that not only looks like the old reliable incandescent bulb, but also gives off a similar glow, which may just convince the hold-outs to convert.
SkyTrain guideways are a common sight in Vancouver, particularly along the Lougheed Highway – and now up North Road as the Evergreen Line construction proceeds:
Since we’ve lived with them for about 30 years, it would be an exaggeration to say these concrete behemoths have been universally detrimental. Indeed, I recall one study done after Expo Line construction which documented essentially no change in property values to those homes overshadowed by the elevated guideway – something that would not likely be true if it were a freeway overpass.
Is it just the absence of excessive noise and pollution of a SkyTrain line, or is it that we get used to something once it’s developed - regardless of scale - and becomes a familiar presence in the neighbourhood?
Sun columnist Barbara Yaffe tackles a topic that’s gaining increasing traction as the civic election approaches: the loss of Vancouver’s pre-1940s stock of character homes:
… the wrecking ball these days takes out 70 a month …
These older homes, with their pitched roofs and leaded glass windows, French doors and narrow-slat oak floors, often are architecturally charming, part of the city’s history, a positive for tourism, deserving of refurbishment.
Of course, it’s personal too.
The beautifully appointed and lovingly tended 80-year old character home I once owned in Dunbar now awaits the wrecker’s ball.
Last December I moved to Kitsilano, only to find the diminutive house next to mine was headed for the dump; it got demolished this week. A big new duplex is taking its place.
And she’s not alone:
That’s certainly the view of Caroline Adderson (interviewed here) … The writer, who lives in Mackenzie Heights, on Vancouver’s west side between Kerrisdale and Dunbar, says: “Delay action for a year, and we will be down another 850 (homes), by which time city staff may be hard pressed to find a concentration of character homes.”
Adderson launched her “Vancouver Vanishes” Facebook page in February, featuring photos of homes that once were, along with a petition urging the city to take fast action to stop the demolitions.
More than 2,500 signatures have been gathered and Adderson told me she aims to make her cause an election issue in November’s municipal vote.
There’s also “Disappearing Dunbar” that maps the loss of character homes in just this one small part of the city.
Unfortunately, few address (other than to bemoan) the underlying issue: land values so high they cannot be realized without the demolition of the smaller, older house, combined with the cost, regulation and complications of upgrading a character home to contemporary standards. Or the even more difficult issue of ‘offshore’ money (whether from Asia or Alberta) sustaining a real-estate market that does not or cannot incorporate intangible values.
So let’s blame the politicians.
Having been there, let me articulate the challenge:
Who is willing to take a loss on the sale of their property - if the City could indeed come up with a way to lower land values?
Who will take less than the market would pay by constraining a subsequent owner to ensure the preservation of the existing home?
Or to say it another way, who is willing to be taxed on the unearned increment (the difference between what they bought the property for and the escalation of value separate from improvements) – if, for instance, that could create a fund to purchase the character homes of the City?
Or yet another way: Who is willing to have their property taxes raised sufficiently to allow the City to compensate the difference between what the character home is worth on the market and the value if it were designated and protected as a heritage property? Which is what the law requires.
Or yet another way: Who is willing to rezone neighbourhoods or other parts of the city so drastically that it would flood the market with housing sufficient to make the character homes competitive?
Or, especially in Dunbar, who is willing to support the scale of density bonusing or infill required to make retention of the existing house sufficiently attractive?
Who, in fact, is willing to run for office on a platform of lowering property values or increasing taxes enough to protect homes almost a century old? Or to put in place regulations so onerous it effectively prohibits demolition? Or do anything that would negatively affect the current owners before they can cash out?
Ironically, Yaffe’s column is on the business pages, and yet is devoid of any hard-nosed analysis or alternatives.
If we can’t take on the big questions, we’ll only be left with small answers.
Other stuff I like:
My two minor (emphasis on minor) missed opportunities:
Let me close with this: building products is hard. Building really good products is very, very hard. To get a 1.0 product out of the gate in such a strong state is impressive work, and while many might call it a “me-too”, I don’t. I think this is the exact right first step for Amazon in the living room, and I will keep an eye on where it goes from here.
Prediction: Amazon Fire TV is the #2 Internet STB on the market (behind Apple) within 2 years.
Also, and again, Gary Busey.
We’re excited to announce today that we’re taking Authful — our API for Two Factor Authentication (2FA), which we believe is the easiest way to implement 2FA — open source. Open source announcements can be greeted with a mix of welcome and puzzlement, so we wanted to take a minute to introduce Authful, and explain why we built it.
How We Did It — the Authful Truth
We originally created Authful as an internal application to provide MongoHQ customers with the option to enable and configure two-factor authentication, increasing the level of security required when accessing your MongoHQ accounts. Recognizing that a large part of the value we offer you is the ability to minimize risk in deploying MongoDB, it’s on us to leverage every security-enhancing feature available.
Why We Created Our Own 2FA App
Why, with several viable 2FA options available, did we create our own implementation? After evaluating some of the pre-built 2FA services, we noticed that they required users to use their specific mobile apps.
Since our goal is to make our customers’ lives easier, we didn’t want to require the use of a proprietary app to get the most out of our 2FA feature. So, we created our own, with the goal of making it as easy to implement as possible. Authful allows you to store a unique user key with an optional SMS number and use a very simple API to:
• Create users
• Generate QR Code for app use
• Validate OTP tokens
• Manage recovery codes
Authful offers support for multiple mobile apps, fallback numbers and recovery codes, and SMS integration (we went with Twilio for SMS because of its ease of implementation and reliability). And, to make sure we offered the most tested product possible, we put Authful through a security audit by Matasano.
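The OTP-validation step described above is standard TOTP (RFC 6238), which layers a time-based counter on top of RFC 4226’s HOTP. Here is a minimal, stdlib-only sketch of that logic — the function names and the ±1-step drift window are illustrative assumptions, not Authful’s actual code:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret, at=None, step=30, digits=6):
    # RFC 6238: HOTP with a counter derived from Unix time
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter, digits)

def verify(secret, token, window=1, step=30):
    # Accept tokens from adjacent time steps to tolerate clock drift,
    # comparing in constant time to avoid timing leaks
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * step, step), token)
               for i in range(-window, window + 1))
```

Against the RFC 4226 test vectors, `hotp(b"12345678901234567890", 0)` yields `"755224"`. The same shared secret, base32-encoded, is what a QR code hands to the user’s authenticator app.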
Why Open Source?
In a nutshell, we have open sourced this tool because security features should be simple to implement, and we’re all better off if it’s easy for developers to do the right thing.
We had informal Saturday brunch with families of kids in our second-grader’s class in Old Chinatown at The Emerald, once a dim-sum joint, now a hipster supper club. The old-Chinatowners are aging out and some of the people moving in look Chinese because hey, this is Vancouver, but they’re younger and single-er and probably don’t speak much 廣州話. Whatever it’s becoming will probably be interesting, but not the same.
Old Chinatown is still full of life and bustle and color, but to me there’s something of a museum-piece feeling. I wonder how long there’ll still be this kind of shop? Lots of fish!
It’s not just Vancouver; hipsters are invading Chinatowns up and down the West Coast. And last time I was in Shanghai, I was shocked at how plastic-wrapped and cosmopolitan much of the shopping had become.
Anyhow, this lady seems happy with her choice. And my hope is that my grandkids will still be able to go shopping where you can pick through the fish to find the one you want to take home.
You might want to enlarge the picture because that is one great-looking fish.
A great idea shouldn’t have to wait for you to get back to a particular device. An impromptu call with a customer shouldn’t be delayed because you don’t have the right data on hand. Life moves too fast to put limits on where and how you work. Just as the best camera is the one you have with you, sometimes the right device is the one closest at hand. Simply put, our vision is to deliver the best cloud-connected experience on every device.
A few buzzwords aside, this is a great post. Clear and fairly concise. It seems like he gets it.
It goes without saying that Mozilla is an open organization: they promote the open web, promote open source software, advocate for open learning, open journalism, and even have a pretty badass manifesto. Given the enormous number of companies involved in the badging ecosphere (see above), who do you want to develop the plumbing that keeps all this together? A company that sees every eyeball as a dollar sign? Or a foundation built on the principles of open source?
I wrote the above soon after joining Mozilla. I still agree with it. Mozilla is the best community I’ve ever been a part of. I care deeply for it. On Thursday, March 27th, almost exactly two years after I wrote the above – and a short time after participating in a Mozilla Foundation discussion about the appointment of Brendan Eich as CEO of Mozilla, I tweeted:
— Chris McAvoy (@chmcavoy) March 27, 2014
The wording of the tweet was provocative. “I am an employee of @mozilla” makes it clear that my voice in this situation has more power and more responsibility than a non-employee. I chose that wording. I wanted to be clear and unequivocal. Other wording was just as intentional. I didn’t ask Brendan to resign, I asked him to step down as CEO. I didn’t demand, I asked.
When Brendan was announced as CEO, my hope was that he would explain himself, maybe apologize or recant his actions of 2008. Six years is a long time; maybe now he understood the larger context of Proposition 8 and its terrible effect on thousands of Californians. Instead, he was wholly unprepared to speak about the issue. We waited, being told he would write a blog post that would clear things up. The post came, but was underwhelming, and he neither apologized nor offered an explanation for the donation.
In the meantime, Christie Koehler wrote an amazing post explaining that though she disagreed with Eich on the subject of same-sex marriage, she trusted him, and the organization, not to let his personal views cause harm to Mozilla employees. Fundamentally, I agree with Christie – it would be impossible for Brendan to discriminate within Mozilla; he wouldn’t be allowed to exercise his personal views of same-sex marriage in a way that discriminates against employees. There are plenty of checks, both within Mozilla and legally, that would protect employees from Brendan’s personal views.
I didn’t ask for Brendan to step down because I was worried he’d discriminate against those in his reporting chain. If that was the case, I would have asked in 2012 when this story originally broke (just a few short weeks after I joined the organization).
So why tweet at all? This morning, Mark Surman, one of the key people who make me proud to be a Mozillian, wrote “I worry that Mozilla is in a tough spot now. I worry that we do a bad job of explaining ourselves, that people are angry and don’t know who we are or where we stand. And, I worry that in the time it takes to work this through and explain ourselves the things I love about Mozilla will be deeply damaged. And I suspect others do too.” I agree with him; we as Mozilla are in a tough spot now. So again – why did I tweet? Why risk damaging a community I love so much? I want to be absolutely clear: I never meant to hurt this community. Everything I’ve done has come from a place of love: love for this organization, love for the community it built, and most importantly, love for the people who make it possible.
This very public debate about Brendan’s appointment points to a divide in Mozilla’s identity – Mozilla as tech company versus Mozilla as activist organization – and that divide is the fundamental reason why I believe the Brendan Eich who contributed to Prop 8 isn’t the CEO that Mozilla needs. Our power as an organization comes from our ability to assert technology as activism: Webmaker, Open Badges, Web Literacy, a smartphone that puts the web in every hand, the protection of privacy and identity in the face of attacks from every corner.
Mozilla is a leading organization in the fight for an open web. That’s well established. Less known is Mozilla’s role in open education, open journalism, open research and science, and web literacy. An open web is a tool to empower individuals. To paraphrase the slogan on Woody Guthrie’s guitar, “This [internet] machine kills fascists.” That’s the open web we’re fighting for: a machine that ends human suffering, a machine that won’t let a government stop our sons and daughters from loving who they choose. An open web not tied to a mission like essential human freedom and empowerment is an empty web.
Our manifesto is vague on this point, “The Mozilla project is a global community of people who believe that openness, innovation, and opportunity are key to the continued health of the Internet.” We promote the health of the internet, but never make the bigger connection to the health of humanity. We interact with organizations who accept that goal, but take it one step further, their goals aren’t just the health of the Internet, their goals are the health of humankind, of the planet, of our place in history.
Mark once quoted Mitchell as describing projects as rockets. A rocket is a thing, but in every rocket, there’s a payload. The payload can be anything, a belief that the web should be open, that open source is a fundamentally different paradigm of work. Paraphrasing, “a rocket without a payload isn’t worth anything.” The payload of Mozilla is human freedom through technological empowerment. Those are my words, my interpretation of the Manifesto, but I can’t imagine anyone in the organization, even Brendan, disagreeing with them.
In a blog post yesterday, we go beyond the Manifesto and state, “Mozilla’s mission is to make the Web more open so that humanity is stronger, more inclusive and more just. This is why Mozilla supports equality for all, including marriage equality for LGBT couples. No matter who you are or who you love, everyone deserves the same rights and to be treated equally.” Maybe Brendan can lead us on that path by showing how a person can change, as an example of how a community can change.
Let’s get one thing out of the way: there is nothing about Box’s S-1 filing that suggests tech is in a bubble. Indeed, the fact Aaron Levie and company are not yet profitable is a good thing.
To understand why, you must read Should Startups Focus on Profitability or Not by VC Mark Suster:
There are certain topics that even some of the best journalists can’t fully grok. One of them is profitability. I find it amusing when a journalist writes an article about a prominent startup (either privately held or preparing for an IPO) and decries that, “They’re not even profitable!”
I mention journalists here because they perpetuate the myth that focusing on profits is ALWAYS the right answer and then I hear many entrepreneurs (and certainly many “normals”) repeating the same mantra.
There is a healthy tension between profits & growth. To grow faster businesses need resources in today’s financial period to fund growth that may not come for 6 months to a year.
The basic gist is that in situations where costs come before revenue (like, say, a sales force selling to the enterprise), chasing growth over near-term profits increases long-term profitability. Seriously, read the whole thing.1
Suster’s article was not about Box specifically; for that I refer you to Dave Kellogg’s piece, Burn Baby Burn: A Look at the Box S-1. He concludes that the Box numbers are very reasonable and that the business is scaling well:
In many ways you see a typical “go big or go home” cloud computing firm, burning boatloads of cash but acquiring customers in a reasonably efficient manner and doing a nice job with retention/cross-sell/up-sell as judged by their retention numbers. When you look big picture, I believe they see themselves in a winner-take-all battle vs. DropBox and in this case, the strategy — while amazingly cash consumptive — does make sense.
It’s a great analysis, and I also very much recommend it, but I think he got one thing wrong: Box isn’t (just) focused on beating Dropbox in storage. In fact, they are making a play to be the new Enterprise platform, and that means taking on Microsoft.
Back when all computers were PCs, the dominant platform was Windows. Office obviously ran (better) on Windows, but so did an untold number of 3rd-party apps and custom-built line-of-business (LOB) apps. Because enterprises bought most PCs, Windows dominated.
The browser began to break this hegemony apart,2 especially when it came to LOB apps, but the true fracturing has happened in just the last few years with the advent of smartphones and tablets. Now, only a portion of computing devices run Windows:
Rather a clear trend at Microsoft. pic.twitter.com/tjsDxtS4w9 — Benedict Evans (@BenedictEvans), February 12, 2014
While this chart covers the entire industry, it’s reflective of what is happening in the enterprise as well. Multiple devices with multiple operating systems are in daily use, but, at the end of the day, they all need to access the same data.
Pure storage isn’t a great business. The cost is trending towards zero, as noted by Levie himself:
Just as you don't worry about database rows when using Twitter or bandwidth on Youtube, cloud storage will eventually be free and infinite. — Aaron Levie (@levie), March 13, 2014
Data, though, is priceless; it can’t be replaced, and it’s the essence of what makes a particular organization unique. For this reason, and for regulatory ones, there are all kinds of specialized controls that IT departments need for data. This is where Box has worked diligently to differentiate themselves from consumer-focused competitors like Dropbox (for more, see my article from January, Battle of the Box).
At the same time, Box has embraced smartphones and tablets, building and updating apps on all the platforms, often well before competitors. This results in a service that looks something like this:
By handling the data that needs to be available everywhere, Box is well-placed to be the new platform.
This image explains why the arguably more significant news from Box last week was not the unveiling of their S-1, but rather the first Box developer’s conference. Just because the operating system is no longer the platform does not mean that the need – and opportunity – for a platform does not exist. Something needs to tie together all those computing devices, and data, which needs to be everywhere, is the logical place to start.
This ups the stakes considerably. Platforms are multi-sided; in the case of Box, they need to have all the data, serve all the devices, and, most critically, have developers. Developers, though, are very pragmatic: they care about opportunity, and opportunity is a function of market size and ability to monetize. The latter is much less of an issue in the enterprise as compared to the consumer, which leaves scale as the most important differentiator from a developer perspective when they decide which platform to support.
Spending a whole lot of money to scale quickly suddenly doesn’t seem like such a bad idea.
Last week, though, was not all good news for Box; on the same day as their developer conference, Microsoft held its own event to announce Office for iPad. Until now, Microsoft has been largely absent from the iPhone and especially the iPad, leaving some of the most important enterprise data – Office docs – available on basic viewers or 3rd-party editors only. This worked in Box’s favor, as their excellent iPad support made Office docs accessible, if not particularly usable.
Office for iPad, though, is designed to work exclusively with Microsoft’s cloud services. Now, the best solution for dealing with Office docs anywhere is to use Microsoft’s data layer. In this way Apple’s sandboxed approach and lack of inter-app communication is working very much in Microsoft’s favor; you can open files stored in Box or other cloud services with the Office apps, but the communication is one-way. Any changes you make can only be saved to your iPad or to your Microsoft cloud account (OneDrive, OneDrive Pro, or SharePoint).
To be clear, SharePoint is a pain to use, particularly for end-users, and especially relative to Box. Less-than-full access to some of your most important data, though, is very painful as well, and it’s here that Microsoft just played their trump card. Office still matters for a whole lot of businesses, and the best Office experience is only available in conjunction with the Microsoft cloud/platform.
The opportunity that Box is pursuing is the exact reason I have been so outspoken about Microsoft’s misplaced devices strategy. Steve Ballmer and his Windows obsession missed the fact that operating systems as a whole were increasingly irrelevant; Satya Nadella, whose background is in Microsoft’s cloud business, is actually pursuing the same old Microsoft strategy – use Office to prop up the Microsoft platform – he’s just leveraging it for the platform of the future, not the past.
What is tricky is that future almost certainly includes fewer and fewer Office documents; the degree to which enterprises have transitioned away from ready-to-print documents to constant communication and collaboration will determine if Microsoft’s new strategy is successful – and, by extension, the degree to which Box realizes the growth they have so extravagantly invested in.
My monthly review of Firefox for Android performance measurements. March highlights:
- 3 throbber start/stop regressions
- Eideticker not reporting results for the last couple of weeks.
This section tracks Perfomatic graphs from graphs.mozilla.org for mozilla-central builds of Native Fennec (Android 2.2 opt). The test names shown are those used on tbpl. See https://wiki.mozilla.org/Buildbot/Talos for background on Talos.
This test runs the third-party CanvasMark benchmark suite, which measures the browser’s ability to render a variety of canvas animations at a smooth framerate as the scenes grow more complex. Results are a score “based on the length of time the browser was able to maintain the test scene at greater than 30 FPS, multiplied by a weighting for the complexity of each test type”. Higher values are better.
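The scoring rule described above – time held above 30 FPS, weighted by scene complexity – can be sketched as follows. This is a hedged illustration of that general shape, not CanvasMark’s actual implementation; the scene names, durations, and weights are made up.

```python
# Sketch of a CanvasMark-style score: for each test scene, credit the time (ms)
# the browser kept the scene above 30 FPS, weighted by the scene's complexity.
# All numbers below are illustrative, not taken from the real suite.

def canvasmark_style_score(scenes):
    """scenes: list of (ms_above_30fps, complexity_weight) tuples."""
    return sum(ms * weight for ms, weight in scenes)

scenes = [
    (4000, 1.0),   # simple shapes
    (2500, 1.5),   # bitmap sprites
    (1500, 2.0),   # physics-heavy scene
]
print(canvasmark_style_score(scenes))  # 10750.0
```

Under a weighted-sum scheme like this, sustaining a complex scene at a smooth framerate contributes more to the score than sustaining a simple one for the same length of time, which matches the “higher values are better” reading of the results.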
7200 (start of period) – 6300 (end of period).
Regression of March 5 – bug 980423 (disable skia-gl).
Measure of “checkerboarding” during simulation of real user interaction with page. Lower values are better.
24 (start of period) – 24 (end of period)
Panning performance test. Value is square of frame delays (ms greater than 25 ms) encountered while panning. Lower values are better.
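One plausible reading of the metric described above: for each frame rendered while panning, take the amount (in ms) by which the frame delay exceeds the 25 ms budget, square it, and sum across frames. This is a sketch of that interpretation, not the harness’s actual code; the frame delays are hypothetical.

```python
# Squared-excess frame-delay penalty: frames within the 25 ms budget contribute
# nothing; frames over budget contribute the square of the overage, so one long
# pause is penalized far more than several small jitters. Example delays are
# made up for illustration.

def pan_penalty(frame_delays_ms, budget_ms=25):
    return sum((d - budget_ms) ** 2 for d in frame_delays_ms if d > budget_ms)

delays = [16, 16, 30, 16, 80, 16, 40]
print(pan_penalty(delays))  # (30-25)**2 + (80-25)**2 + (40-25)**2 = 3275
```

Squaring the overage is a common choice for jank metrics because user-perceived smoothness degrades disproportionately with long pauses, which fits the “lower values are better” scale of this test.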
110000 (start of period) – 110000 (end of period)
Performance of the history and bookmarks provider. Reports the time (ms) taken to perform a group of database operations. Lower values are better.
375 (start of period) – 425 (end of period).
Regression of March 29 – bug 990101. (test modified)
An SVG-only test that measures SVG rendering performance. About half of the tests are animations or iterations of rendering. This ASAP test (tsvgx) runs in unlimited frame-rate mode, thus reflecting the maximum rendering throughput of each test. The reported value is the page load time or, for animations/iterations, the overall duration the sequence/animation took to complete. Lower values are better.
7600 (start of period) – 7300 (end of period).
Generic page load test. Lower values are better.
710 (start of period) – 750 (end of period).
No specific regression identified.
Startup performance test. Lower values are better.
3600 (start of period) – 3600 (end of period).
Throbber Start / Throbber Stop
These graphs are taken from http://phonedash.mozilla.org. Browser startup performance is measured on real phones (a variety of popular devices).
3 regressions were reported this month: bug 980757, bug 982864, bug 986416.
:bc continued his work on noise reduction in March. Changes in the test setup have likely affected the phonedash graphs this month. We’ll check back at the end of April.
These graphs are taken from http://eideticker.mozilla.org. Eideticker is a performance harness that measures user perceived performance of web browsers by video capturing them in action and subsequently running image analysis on the raw result.
More info at: https://wiki.mozilla.org/Project_Eideticker
Eideticker results for the last couple of weeks are not available. We’ll check back at the end of April.
I was working on Glassboard when I wrote that post, and Glassboard hadn’t become a Google Play “staff pick” yet so it still had a fairly small audience. These days I work on WordPress for Android which has a much larger audience, so I figured I’d revisit the fragmentation topic.
I’ve seen a lot more fragmentation-related problems since working on WordPress, but I still maintain that fragmentation is less of an issue than is commonly believed.
That’s not to say, of course, that it isn’t a problem at all.
I think the biggest problem is that unless your app is relatively new, you probably have to continue supporting Android 2.3. Making sure your app works on that ugly, buggy OS is a massive pain. Every Android developer I know will dance in the streets the day they can drop support for pre-ICS versions of Android.
Another problem is the number of inexpensive, low-powered Android devices in use – especially outside the US. If you want your apps to run well on them, you have to be extra-cautious about memory consumption and performance (but then, you should be anyway).
Overall, though, I haven’t found the number of devices to be as big an issue as the number of OS versions. Here’s a breakdown of the different Android versions our customers are running:
This isn’t as big a deal as you might think, but it’s not unusual to find that your app works flawlessly on the latest version of Android yet breaks on an earlier one due to an OS bug that was fixed between releases. The Android Issue Tracker is a big help in these situations.
The only other oft-recurring problem I’ve encountered is differences between phone and tablet versions of your app, but these are usually self-inflicted (often caused by having too many screen-specific layouts and not syncing changes between them).
I’m sure these issues look horrific to iOS developers, who are blessed by only needing to support a few devices and OS versions. But they’re certainly not as horrific as the press often makes fragmentation sound, and they’re far easier to deal with than all the fragmentation problems I encountered back when I developed for Windows.