Shared posts

22 Nov 13:08

On dependencies, dependency injection, and sanity

What's interesting about debacles such as the recent left-padding madness is how they can get you to re-think seemingly obvious concepts and challenge some basic assumptions. Here, in particular, is a little reflection that occurred to me on the way between the bathroom and my office, and that may seem super-obvious to some of you. It wasn't to me, so here goes, I'm sharing…

Whether you go all theoretical, call it fancy and talk about "inversion of control", or you just pragmatically do constructor injection like God intended, at this point almost everyone agrees that dependency injection is a Good Idea. You inject a contract, and the black magic inside your IoC container finds the right implementation and gives you an instance, with the right scope, lifetime, etc. Coupling is low, applications are more robust, more modular, more flexible, and unicorns flutter around happily.
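To make the constructor-injection flavour concrete, here is a minimal sketch (written in TypeScript purely for illustration; the interface, class, and wiring are invented for the example, not taken from any particular container or framework):

// The consumer asks for a contract; the container decides which implementation,
// scope, and lifetime it gets.
interface IMessageSender {
  send(to: string, body: string): void;
}

class SmtpSender implements IMessageSender {
  send(to: string, body: string): void {
    console.log(`SMTP -> ${to}: ${body}`);
  }
}

class WelcomeService {
  // The service never names SmtpSender; it only states the contract it needs.
  constructor(private readonly sender: IMessageSender) {}

  greet(user: string): void {
    this.sender.send(user, "Welcome aboard!");
  }
}

// An IoC container would normally do this wiring; it is done by hand here.
new WelcomeService(new SmtpSender()).greet("alice@example.com");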

Now when you look at the wonderful world of package managers (a clear progress over what the hell we thought we were doing before), what are we doing exactly? Well, we need a component that performs a certain task (in other words, it fulfils a specific contract). We then go on to ask The One True Package Manager to hand us a very specific version of a very specific package. And we do so USING A STRONG NAME. Pardon me for screaming, but can you see the parallel here?

We are "injecting" a "dependency" into our application, but it doesn't for one second occur to us that the concept is almost exactly the same. Let's go over the comparison one more time.

  • Dependency injection: ask for a contract; get a managed and scoped instance with members that you can call into.
  • Package "dependency" "injection": ask for a strong and versioned package name; get a set of namespaces with stuff in them that you can call into.

So what is good practice on one side (resolution by contract) is completely out of the picture on the other. Why can't we ask for packages by contract? I don't know exactly what that would look like, but I have the nagging feeling that there's something to invent here. Or does something like that exist? What do you think?

22 Nov 05:38

Easy custom RSS in Orchard

Orchard adds RSS automatically to everything that looks like a list of content items: blog posts, of course, but also comments, projections, lists, search results, tags, etc. I’ve explained before how RSS works internally, but adding a new feed from code can look like a daunting task if you’re not an experienced Orchard developer. Fortunately, there is another feature related to RSS that makes it possible to create custom RSS without writing a line of code.

On a default installation of Orchard, let's create a new query that lists all pages in the system (that would be just the home page unless you created more):

(Screenshot: Creating a new query for pages)

Then, let's use that in a new projection page:

(Screenshot: Creating a new projection page)

Once this is done, we can navigate to the new projection, and if we look at the rendered HTML, we'll see the following tag in the HEAD that gives us the path to the RSS feed for the query (ID may vary):

<link rel="alternate" type="application/rss+xml" title="Pages" href="/OrchardLocal/rss?projection=13" />

If we follow that URL, we'll find the default RSS feed:

<rss version="2.0">
  <channel>
    <title>Pages</title>
    <link>http://localhost:30321/OrchardLocal/pages</link>
    <description>Pages</description>
    <item>
      <title>Welcome to Orchard!</title>
      <link>http://localhost:30321/OrchardLocal/</link>
      <description>Welcome to Orchard!</description>
      <pubDate>Fri, 26 Feb 2016 16:19:26 GMT</pubDate>
      <guid isPermaLink="true">http://localhost:30321/OrchardLocal/</guid>
    </item>
  </channel>
</rss>

If we want to customize this feed, say we want to add taxonomy terms as RSS categories, it is actually possible to do so just from the admin. I'll assume that we have a taxonomy called "Category" in the system, and that we've added the corresponding taxonomy field to the type definition for Page.

(Screenshot: Configuring the Category taxonomy)

First, we need to enable the Feeds Tokens feature:

(Screenshot: Enabling the Feeds Tokens feature)

Let's head over to the content definition for the Page type and add the RSS part:

(Screenshot: Adding the Rss part to the Page content type)

Now we can configure the part to use our taxonomy as RSS categories:

(Screenshot: Configuring the RSS part)

And voilà, our feed now shows categories:

<item>
  <title>Welcome to Orchard!</title>
  <link>http://localhost:30321/OrchardLocal/</link>
  <description>Welcome to Orchard!</description>
  <pubDate>Fri, 26 Feb 2016 16:19:00 GMT</pubDate>
  <guid isPermaLink="true">http://localhost:30321/OrchardLocal/</guid>
  <category>Home</category>
  <category>Orchard</category>
  <category>RSS</category>
</item>

19 Nov 08:04

Understanding ES5, ES2015 and TypeScript

by John Papa

What is the difference between ES5, ES2015 (formerly known as ES6), and TypeScript? Which should we learn and use?

First, let’s create a foundation for our discussion of each of these. TypeScript is a superset of JavaScript. ES2015 is the evolution of ES5. This relationship makes it easier to learn them progressively.

ES5 to ES2015 to TypeScript

We want to understand the differences between them, but first we must understand what each of these are and why they exist. We’ll start with ES5.

ES5

ES5 is what most of us have used for years. Functional programming at its best, or worst, depending on how you view it. I personally love programming with ES5. All modern browsers support it. It’s extremely flexible but also has many factors that contribute to apps that can become train wrecks. Scoping, closures, IIFEs, and good guard logic are required to keep our train on the rails with ES5. Despite this, its flexibility is also a strength that many of us have leaned on.

This chart shows the current compatibility of browsers for ES5.

Perhaps the most difficult problem that ES5 poses for us is the difficulty of identifying issues at development time. Tooling for ES5 is lacking, as it is complicated, at best, for a tool to work out how to inspect ES5 for concerns. We’d like to know what properties an object in another file contains, what an invalid parameter to a function may be, or to be warned when we use a variable in an improper scope. ES5 makes these things difficult for developers and for tooling.

ES6/ES2015 Leaps Forward

ES2015 is a huge leap forward from ES5. It adds a tremendous amount of functionality to JavaScript. These features address some of the issues that made ES5 programming challenging. They are optional, as we can still use valid ES5 (including functions) in ES2015.

Here are some of the ES2015 features as seen in Luke Hoban’s reference. A full list can be seen here at the spec.

  • arrows
  • classes
  • enhanced object literals
  • template strings
  • destructuring
  • default + rest + spread
  • let + const
  • iterators + for..of
  • generators
  • unicode
  • modules
  • module loaders
  • map + set + weakmap + weakset
  • proxies
  • symbols
  • subclassable built-ins
  • promises
  • math + number + string + array + object APIs
  • binary and octal literals
  • reflect api
  • tail calls
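To make a few of these concrete, here is a small illustrative snippet (my own sketch, not taken from the reference) touching arrows, template strings, default parameters, destructuring, rest/spread, let/const, and for..of:

// A handful of ES2015 features in one place (illustrative only).
const greet = (name = "world") => `Hello, ${name}!`; // const, arrow, default parameter, template string

let numbers = [1, 2, 3];
let [first, ...rest] = numbers;                      // destructuring with a rest element
let merged = [...rest, 4, 5];                        // spread

for (const n of merged) {                            // for..of iteration
  console.log(greet(String(n)), first);
}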

This is a dramatic leap forward from ES5, and modern browsers are racing to implement all of the features. This chart shows the current compatibility of browsers for ES2015.

Node.js is built against modern versions of the V8 engine. Node has implemented much of ES2015, according to its docs.

Node 4.x labels itself as Long Term Support (LTS). The LTS label indicates their release line. All even numbered major versions focus on stability and security. All odd numbered major versions (e.g. 5.x) fall under Short Term Support (STS), which focuses on active development and more frequent updates. In short, I recommend you stay on Node 4 for production development and use Node 5 for research into features that may land in future LTS versions. You can read the official Node guidelines for versioning here.

Bringing it back to ES2015, we now have an incredible amount of functionality that we can optionally use to write code.

How Do Developers Consider ES2015?

We might wonder who might be interested in ES2015 and who might not. There are many ES5 developers who are well versed in the pros and cons of the language. After over a decade in JavaScript, we may feel very comfortable with ES5. Once we master a language it can be difficult to justify leaping to a new version if we do not see the value. What are we gaining? What problem are we solving? This is a natural way of thinking. Once we decide if there is value in moving to ES2015, then we can decide to make the move.

There are also many ES5 developers who couldn’t wait to use ES2015. The point is that many folks who have used ES5 are already on to ES2015, while many more are still making that decision to migrate.

There are many JavaScript developers today, but even more are coming. I believe the number of people now considering learning JavaScript, plus those still on their way, will dwarf the number using it today. JavaScript is growing and not everyone will have had a solid ES5 background. Some are coming from Java and C# and other popular languages and frameworks. Many of these already have the features that ES2015 recently introduced, and have had them for years. This makes ES2015 a much easier transition for them than ES5. And it’s good timing too, as many modern browsers and Node are supporting ES2015.

So there are many of us, all with different perspectives, all leading to an eventual ES2015 (or beyond) migration.

Supporting ES5 Browsers

How do we run ES2015 in browsers that do not yet support it? We can write ES2015 and transpile to ES5 using a tool like Babel. Babel makes it easy to write ES2015 (and, in the future, ES2016 and beyond) and still compile down to an older version of JavaScript. Pretty cool!
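As a rough illustration of what the transpiler does (the exact output depends on the Babel version and the presets you enable), an ES2015 arrow function like this:

const double = (n) => n * 2; // ES2015 input

comes out the other side as ES5 along these lines:

var double = function (n) { // approximate ES5 output
  return n * 2;
};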

TypeScript

Where does TypeScript fit in? Should we even bother with it?

First, I think the name throws people off. The word Type in TypeScript indicates that we now have types. These types are optional, so we do not have to use them. Don’t believe me? Try pasting your ES5 code into the TypeScript playground. Look mom! No types needed! So shouldn’t we optionally call it Type?Script or [Type]Script? Kidding aside, the types are just one piece of TypeScript. Perhaps a better name is simply ES+.
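To illustrate "the types are optional" (my own sketch, assuming the default compiler settings rather than strict mode), both functions below are valid TypeScript; the second simply gives the tooling more to work with:

// Plain, untyped code: valid TypeScript exactly as written.
function add(a, b) {
  return a + b;
}

// The same function with optional annotations, so mistakes can be flagged as we type.
function addTyped(a: number, b: number): number {
  return a + b;
}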

Let’s step back for a moment and revisit one of the concerns I mentioned previously that many developers have with writing JavaScript: the difficulty in identifying mistakes at development time.

What if we could identify scoping issues as we type them? What if we could identify mismatched parameters in our tool with red underlines? What if our editors and IDEs could tell us when we use other people’s code, or our own, improperly? This is what we generally rely on tooling for.

Identifying Issues Early

Whether we use Atom, VS Code, Visual Studio, WebStorm, or Sublime Text, we enjoy a plethora of innate features or extensions to our tool of choice that help us write better code faster. These tools should (and can) help us identify problems early.

Is it more fun to find an issue right away as we code it, so we can fix it there … or to get called at 5am due to a production outage when traffic cranked up on our app and hit our hidden bug? I prefer to be home at 5 with my family :)

These tools today try their best to help identify problems, and they do an admirable job with what they have to work with. But what if we could give them a little more help? What if we could give them the same types of help that other languages like C# and Java provide today? Then these tools can really help us identify issues early and often.

This is where TypeScript shines.

The value of TypeScript is not in writing less code. The value of TypeScript is in writing safer code. Over the long haul, it helps us to write code more efficiently as we take advantage of tooling for identifying issues and automatically filling in parameters, properties, functions, and more (often known as autocomplete and intellisense).

You can try out TypeScript here in their playground.

ES+

I joke that TypeScript should be called ES+, but when we examine it more closely, that is what it really is.

So what does TypeScript offer over ES2015? I’ll focus on the three main additions I feel add the most value:

  1. Types
  2. Interfaces
  3. Future ES2016+ features (such as Annotations/Decorators and async/await)

TypeScript is ES plus features like these.

Types and interfaces give the tooling what it needs to identify problems early, as we type them. With these features our editors don’t have to guess whether we used a function properly or not. The information is readily available for the tool to raise a red flag so we can fix the issues right away. In some cases, these tools can also help recommend and refactor for us!
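For instance, a small sketch (the names are invented for illustration) of the kind of mistake the tooling can now catch as we type:

interface User {
  name: string;
  email: string;
}

function sendWelcome(user: User): void {
  console.log(`Sending a welcome mail to ${user.email}`);
}

sendWelcome({ name: "Ada", email: "ada@example.com" }); // fine
// sendWelcome({ name: "Ada" });                        // flagged immediately in the editor:
//                                                      // property 'email' is missing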

TypeScript promises to be forward thinking. It helps bring the agreed-upon features of the future ECMAScript spec to us today. For example, features like decorators (used in Angular 2) and async/await (a popular technique for making async programming easier, familiar from C#). Decorators are available now in TypeScript, while async/await is coming soon in v2.0 according to the TypeScript roadmap.
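As a small, illustrative sketch of the decorator syntax (the names are invented, and the experimentalDecorators compiler option is assumed to be enabled):

// A minimal method decorator that logs each call before delegating to the original method.
function log(target: any, key: string, descriptor: PropertyDescriptor) {
  const original = descriptor.value;
  descriptor.value = function (...args: any[]) {
    console.log(`calling ${key} with`, args);
    return original.apply(this, args);
  };
  return descriptor;
}

class Greeter {
  @log
  greet(name: string): string {
    return `Hello, ${name}`;
  }
}

console.log(new Greeter().greet("decorators"));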

Is TypeScript Deviating from JavaScript?

From the top of the TypeScript website’s front page we find this statement:

TypeScript is a typed superset of JavaScript that compiles to plain JavaScript.

This is hugely important. TypeScript is not a shortcut language. It doesn’t deviate from JavaScript. It doesn’t take us in another direction. Its purpose is to let us use features from future versions of JavaScript today, and to provide a better and safer experience.

Why Not Just use ES2015?

That’s a great option! Learning ES2015 is a huge leap from ES5. Once you master ES2015, I argue that going from there to TypeScript is a very small step. So, circling back: once you learn ES2015, try TypeScript and take advantage of its tooling.

What About Employability?

Does learning ES2015 or TypeScript hurt my employability? Absolutely not. But it also doesn’t mean that you shouldn’t understand ES5. ES5 is everywhere today. That will taper off eventually, but there is a lot of ES5 code out there, and it’s good to understand the language both to support it and to understand what problems ES2015 and TypeScript help solve. Plus, we can use our knowledge of ES5 to help us debug issues using sourcemaps in the browsers.

Keeping Up with the Emerging Technology

For a long time we didn’t need transpilers. The Web used JavaScript, and most folks who wrote in ES3 and ES5 used jQuery to handle any cross-browser issues. When ES5 came along, not much changed there. For many years of Web development we had a stable set of JavaScript features that most browsers understood. Where there were issues we used things like es5-shim.js and even jQuery to work around them. Things have changed.

The Web is moving at a fast pace. New Web standards are emerging. Libraries like Angular 2, Rx.js, React, and Aurelia are pushing the Web forward. More developers are coming to JavaScript via the web and Node.js.

The ECMAScript team is now adopting a new name for the language versions, using the year as an identifier. No more ES6; now we call it ES2015. The next version is targeted as ES2016. The intention is to drive new features into JavaScript more frequently. It takes time for all browsers to adopt the standards across desktop and mobile devices.

What does this all mean? Just when we have browsers that support ES2015, ES2016 may be out. Without help, this could be awful if we want to support all of the browsers in common use and still take advantage of the new features! Unless, that is, we have a way to use the new features today while supporting the browsers we need.

This is why the emergence of transpilers has become so important in the Web today. TypeScript and Babel (the major players in transpiling) both supported ES2015 before it was in the browsers. They both plan to support (and already do in some cases) ES2016 features. These tools are the current answer to how we move forward without leaving behind our customers.

How Do We Transpile?

We can use tools like Gulp, Grunt, WebPack, and SystemJS with JSPM to transpile with Babel or TypeScript. Many editors connect directly to these tasks to transpile for us as we code. Many IDEs now support automatic transpilation with a click of a button. We can even use TypeScript from the command line to watch our files and transpile as we go.

No matter how or where we code, there are many ways to transpile.

What It All Means

A fact in our chosen profession is that technology changes. It evolves. Sometimes it happens much faster than we can absorb it. That’s why it is important to take advantage of tools that can help us absorb and adapt to the changes, like TypeScript and Babel for ES2015 (and beyond). In this case, we’re using technology to keep up with technology. Seems like a paradox, but at the core it’s simply using our time effectively to keep up.

26 Jul 09:43

The Basics of Web Application Security

Modern web development has many challenges. Of course, you need to write code that fulfills customer functional requirements. It needs to be fast. Further, you are expected to write this code to be comprehensible and extensible.

Somewhere, way down at the bottom of the list of requirements, behind fast, cheap, and flexible, is "secure". That is, until something goes wrong, until the system you build is compromised; then suddenly security is, and always was, the most important thing.

Specialized techniques, such as threat analysis, are increasingly recognized as essential to any serious development. But Cade Cairns and Daniel Somerfield explore how security can be significantly enhanced with some basic practices that every developer can and should be doing as a matter of course.


15 Jul 08:49

Bliki: InfrastructureAsCode

Infrastructure as code is the approach to defining computing and network infrastructure through source code that can then be treated just like any software system. Such code can be kept in source control to allow auditability and ReproducibleBuilds, subject to testing practices, and the full discipline of ContinuousDelivery. It's an approach that's been used over the last decade to deal with growing CloudComputing platforms and will become the dominant way to handle computing infrastructure in the next.

I grew up in the Iron Age, when releasing a new server application meant finding some physical hardware to run it on, configuring that hardware to support the needs of the application, and deploying that application to the hardware. Getting hold of that hardware was usually expensive, but also long-winded, usually a matter of months. But now we live in the Cloud Age, where firing up a new server is a matter of seconds, requiring no more than an internet connection and a credit card. This is a dynamic infrastructure where software commands are used to create servers (often virtual machines, but they can be installations on bare metal), provision them, and tear them down, all without going anywhere near a screwdriver.


Practices

Infrastructure as Code is based on a few practices:

  • Use Definition Files: all configuration is defined in executable configuration definition files, such as shell scripts, Ansible playbooks, Chef recipes, or Puppet manifests. At no time should anyone log into a server and make on-the-fly adjustments. Any such tinkering risks creating SnowflakeServers, and so should only be done while developing the code that acts as the lasting definition. This means that applying an update with the code should be fast. Fortunately computers execute code quickly, allowing them to provision hundreds of servers faster than any human could type.
  • Self-documented systems and processes: rather than instructions in documents for humans to execute with the usual level of human reliability, code is more precise and consistently executed. If necessary, other human readable documentation can be generated from this code.
  • Version all the things: Keep all this code in source control. That way every configuration and every change is recorded for audit and you can make ReproducibleBuilds to help diagnose problems.
  • Continuously test systems and processes: tests allow computers to rapidly find many errors in infrastructure configuration. As with any modern software system, you can set up DeploymentPipelines for your infrastructure code which allows you to practice ContinuousDelivery of infrastructure changes.
  • Small changes rather than batches: the bigger the infrastructure update, the more likely it is to contain an error and the harder it is to detect that error, particularly if several errors interact. Small updates make it easier to find errors and are easier to revert. When changing infrastructure FrequencyReducesDifficulty.
  • Keep services available continuously: increasingly systems cannot afford downtime for upgrades or fixes. Techniques such as BlueGreenDeployment and ParallelChange can allow small updates to occur without losing availability.

Benefits

All of this allows us to take advantage of dynamic infrastructure by starting up new servers easily, and safely disposing of servers when they are replaced by newer configurations or when load decreases. Creating new servers is just a case of running the script to create as many server instances as needed. This approach is a good fit with PhoenixServers and ImmutableServers.

(Kief Morris's book is due to be published later this year.)

Using code to define the server configuration means that there is greater consistency between servers. With manual provisioning different interpretations of imprecise instructions (let alone errors) lead to snowflakes with subtly different configurations, which often leads to tricky faults that are hard to debug. Such difficulties are often made worse by inconsistent monitoring, and again using code ensures that monitoring is consistent too.

Most importantly using configuration code makes changes safer, allowing upgrades of applications and system software with less risk. Faults can be found and fixed more quickly and at worst changes can be reverted to the last working configuration.

Having your infrastructure defined as version-controlled code aids with compliance and audit. Every change to your configuration can be logged and isn't susceptible to faulty record keeping.

All of this increases in importance as you need to handle more servers, making infrastructure as code a necessary capability if you're moving to a serious adoption of microservices. Infrastructure as Code techniques scale effectively to manage large clusters of servers, both in configuring the servers and specifying how they should interact.

Further Reading

My colleague Kief Morris has spent the last year working on a book that goes into more detail about infrastructure as code, which is currently in the final stages of production. The list of practices is taken directly from this book.

Acknowledgements

This post is based on the writing and many conversations with Kief Morris.

Ananthapadmanabhan Ranganathan, Danilo Sato, Ketan Padegaonkar, Piyush Srivastava, Rafael Gomes, Ranjan D Sakalley, Sina Jahangirizadeh, and Srivatsa Katta discussed drafts of this post on our internal mailing list.

18 Apr 12:22

Benchmarking .NET code

by Scott Hanselman
(Photo: "You've got a fast car..." by Robert Scoble, used under CC)

A while back I did a post called Proper benchmarking to diagnose and solve a .NET serialization bottleneck. I also had Matt Warren on my podcast and we did an episode called Performance as a Feature.

Today Matt is working with Andrey Akinshin on an open source library called BenchmarkDotNet. It's becoming a very full-featured .NET benchmarking library being used by a number of great projects. It's even been used by Ben Adams of "Kestrel" benchmarking fame.

You basically attribute benchmarks similarly to tests; for example (assuming the surrounding class has already set up the sha256, md5, and data fields):

[Benchmark]
public byte[] Sha256()
{
    return sha256.ComputeHash(data);
}

[Benchmark]
public byte[] Md5()
{
    return md5.ComputeHash(data);
}

The result is lovely output like this in a table you can even paste into a GitHub issue if you like.

(Image: BenchmarkDotNet output, a table of Method, Median, and StdDev)

Basically it's doing the boring bits of benchmarking that you (and I) will likely do wrong anyway. There are a ton of samples for Frameworks and CLR internals that you can explore.

Finally it includes a ton of features that make writing benchmarks easier, including csv/markdown/text output, parametrized benchmarks and diagnostics. Plus it can now tell you how much memory each benchmark allocates, see Matt's recent blog post for more info on this (implemented using ETW events, like PerfView).

There's some amazing benchmarking going on in the community. ASP.NET Core recently hit 1.15 MILLION requests per second.

That's pushing over 12.6 Gbps. Folks are seeing nice performance improvements with ASP.NET Core (formerly ASP.NET 5) even just with upgrades.

It's going to be a great year! Be sure to explore the ASP.NET Benchmarks on GitHub https://github.com/aspnet/benchmarks as we move our way up the TechEmpower Benchmarks!

What are YOU using to benchmark your code?


14 Apr 05:43

The Joy of Live Coding - CodePen, REPLs, TOPLAP, Alive, and more

by Scott Hanselman

A few weeks ago I talked about Interactive Coding with C# and F# REPLs. There's a whole generation that seemingly missed out on LIVE CODING. By that, I mean, writing and running code at the same time.

Lots of folks used C, C++, Delphi, C#, Java, etc over the last 15-20-30 years and had a pretty standard Write, Compile, Walk Away, Run process working for them. Twenty years ago I was often waiting 30 min or more for stuff that takes seconds now. Many of you today may have to wait hours for compilation.

However, there are so many environments available today that allow us to write code while it runs. Instant satisfaction...and the browser is becoming a fantastic IDE for Live Coding.

When I use the term "Live Coding" though, there's actually a couple of definitions. I'm conflating them on purpose. There's Live Coding as in "coding LIVE while people watch" and there's "coding and watching your program change as you type." Of course, you can do them both, hence my conflating.

Live Coding - Music and Art

Mike Hodnick mentioned Live Coding to me in the context of music and art. Live Coders use a wide array of languages and tech stacks to make music and art, including JavaScript, Ruby, Haskell, Clojure, and a number of DSL's. Here is a YouTube video of Mike - Live Coding music using Tidal, a language for musical improvisation.

Resources

  • Overtone - Collaborative Programmable Music.
  • TOPLAP - An organization dedicated to live coding.
  • Cyril - Live Coding Visuals
  • SuperCollider - Real time audio synthesis
  • Tidal - Live Coding Music

Some prominent live coders:

Live Coding - JavaScript and Experimentation

There's another kind of live coding that makes me happy, and that's things like CodePen. Sometimes you just want to write some HTML, CSS, and/or some JavaScript. No IDE, no text editor...AND you want it to be running as you type.

Code and Watch. That's it.

Some of you LIVE in CodePen. It's where most of your work and prototyping happens, truly. Others who read this blog may be learning of CodePen's existence this very moment. So don't knock them! ;)

CodePen is lovely

CodePen is a "playground for the front-end side of the web." There have been a number of Live Coding Playgrounds out there, including...

But it's fair to say that CodePen has stayed winning. The community is strong and the inspiration you'll find on CodePen is amazing.

Oh, and just to end this blog post on a high note, ahem, and combine Live Coding of Music with CodePen: here's a Roland 808 (that's a Rhythm Controller) written entirely in CodePen. Ya, so. Ya. And it works. AWESOME. Here's the code you can play with; it's by Gregor Adams.

Magical Roland 808 written in CodePen

There's even Live Coding in Visual Studio now with the "Alive" plugin at https://comealive.io.

What kinds of Live Coding tools or experiences have YOU seen, Dear Reader? Share in the comments!


03 Apr 08:14

We Hire the Best, Just Like Everyone Else

by Jeff Atwood

One of the most common pieces of advice you'll get as a startup is this:

Only hire the best. The quality of the people that work at your company will be one of the biggest factors in your success – or failure.

I've heard this advice over and over and over at startup events, to the point that I got a little sick of hearing it. It's not wrong. Putting aside the fact that every single other startup in the world who heard this same advice before you is already out there frantically doing everything they can to hire all the best people out from under you and everyone else, it is superficially true. A company staffed by a bunch of people who don't care about their work and aren't good at their jobs isn't exactly poised for success. But in a room full of people giving advice to startups, nobody wants to talk about the elephant in that room:

It doesn't matter how good the people are at your company when you happen to be working on the wrong problem, at the wrong time, using the wrong approach.

Most startups, statistically speaking, are going to fail.

And they will fail regardless of whether they hired "the best" due to circumstances largely beyond their control. So in that context does maximizing for the best possible hires really make sense?

Given the risks, I think maybe "hire the nuttiest risk junkie adrenaline addicted has-ideas-so-crazy-they-will-never-work people you can find" might actually be more practical startup advice. (Actually, now that I think about it, if that describes you, and you have serious Linux, Ruby, and JavaScript chops, perhaps you should email me.)

Okay, the goal is to increase your chance of success, however small it may be, therefore you should strive to hire the best. Seems reasonable, even noble in its way. But this pursuit of the best unfortunately comes with a serious dark side. Can anyone even tell me what "best" is? By what metrics? Judged by which results? How do we measure this? Who among us is suitable to judge others as the best at … what, exactly? Best is an extreme. Not pretty good, not very good, not excellent, but aiming for the crème de la crème, the top 1% in the industry.

The real trouble with using a lot of mediocre programmers instead of a couple of good ones is that no matter how long they work, they never produce something as good as what the great programmers can produce.

Pursuit of this extreme means hiring anyone less than the best becomes unacceptable, even harmful:

In the Macintosh Division, we had a saying, “A players hire A players; B players hire C players” – meaning that great people hire great people. On the other hand, mediocre people hire candidates who are not as good as they are, so they can feel superior to them. (If you start down this slippery slope, you’ll soon end up with Z players; this is called The Bozo Explosion. It is followed by The Layoff.) — Guy Kawasaki

There is an opportunity cost to keeping someone when you could do better. At a startup, that opportunity cost may be the difference between success and failure. Do you give less than full effort to make your enterprise a success? As an entrepreneur, you sweat blood to succeed. Shouldn’t you have a team that performs like you do? Every person you hire who is not a top player is like having a leak in the hull. Eventually you will sink. — Jon Soberg

Why am I so hardnosed about this? It’s because it is much, much better to reject a good candidate than to accept a bad candidate. A bad candidate will cost a lot of money and effort and waste other people’s time fixing all their bugs. Firing someone you hired by mistake can take months and be nightmarishly difficult, especially if they decide to be litigious about it. In some situations it may be completely impossible to fire anyone. Bad employees demoralize the good employees. And they might be bad programmers but really nice people or maybe they really need this job, so you can’t bear to fire them, or you can’t fire them without pissing everybody off, or whatever. It’s just a bad scene.

On the other hand, if you reject a good candidate, I mean, I guess in some existential sense an injustice has been done, but, hey, if they’re so smart, don’t worry, they’ll get lots of good job offers. Don’t be afraid that you’re going to reject too many people and you won’t be able to find anyone to hire. During the interview, it’s not your problem. Of course, it’s important to seek out good candidates. But once you’re actually interviewing someone, pretend that you’ve got 900 more people lined up outside the door. Don’t lower your standards no matter how hard it seems to find those great candidates. — Joel Spolsky

I don't mean to be critical of anyone I've quoted. I love Joel, we founded Stack Overflow together, and his advice about interviewing and hiring remains some of the best in the industry. It's hardly unique to express these sort of opinions in the software and startup field. I could have cited two dozen different articles and treatises about hiring that say the exact same thing: aim high and set out to hire the best, or don't bother.

This risk of hiring not-the-best is so severe, so existential a crisis to the very survival of your company or startup, the hiring process has to become highly selective, even arduous. It is better to reject a good applicant every single time than accidentally accept one single mediocre applicant. If the interview process produces literally anything other than unequivocal "Oh my God, this person is unbelievably talented, we have to hire them", from every single person they interviewed with, right down the line, then it's an automatic NO HIRE. Every time.

This level of strictness always made me uncomfortable. I'm not going to lie, it starts with my own selfishness. I'm pretty sure I wouldn't get hired at big, famous companies with legendarily difficult technical interview processes because, you know, they only hire the best. I don't think I am one of the best. More like cranky, tenacious, and outspoken, to the point that I wake up most days not even wanting to work with myself.

If your hiring attitude is that it's better to be possibly wrong a hundred times so you can be absolutely right one time, you're going to be primed to throw away a lot of candidates on pretty thin evidence.

Perhaps worst of all, if the interview process is predicated on zero doubt, total confidence … maybe this candidate doesn't feel right because they don't look like you, dress like you, think like you, speak like you, or come from a similar background as you? Are you accidentally maximizing for hidden bias?

One of the best programmers I ever worked with was Susan Warren, an ex-Microsoft engineer who taught me about the People Like Us problem, way back in 2004:

I think there is a real issue around diversity in technology (and most other places in life). I tend to think of it as the PLU problem. Folk (including MVPs) tend to connect best with folks most like them ("People Like Us"). In this case, male MVPs pick other men to become MVPs. It's just human nature.

As one reply notes, diversity is good. I'd go as far as to say it's awesome, amazing, priceless. But it's hard to get to -- the classic chicken and egg problem -- if you rely on your natural tendencies alone. In that case, if you want more female MVPs to be invited you need more female MVPs. If you want more Asian-American MVPs to be invited you need more Asian-American MVPs, etc. And the (cheap) way to break a new group in is via quotas.

IMO, building diversity via quotas is bad because they are unfair. Educating folks on why diversity is awesome and how to build it is the right way to go, but also far more costly.

Susan was (and is) amazing. I learned so much working under her, and a big part of what made her awesome was that she was very much Not Like Me. But how could I have appreciated that before meeting her? The fact is that as human beings, we tend to prefer what's comfortable, and what's most comfortable of all is … well, People Like Us. The effect can be shocking because it's so subtle, so unconscious – and yet, surprisingly strong:

  • Baseball cards held by a black hand consistently sold for twenty percent less than those held by a white hand.

  • Using screens to hide the identity of auditioning musicians increased women's probability of advancing from preliminary orchestra auditions by fifty percent.

  • Denver police officers and community members were shown rapidly displayed photos of black and white men, some holding guns, some holding harmless objects like wallets, and asked to press either the "Shoot" or "Don't Shoot" button as fast as they could for each image. Both the police and community members were three times more likely to shoot black men.

It's not intentional, it's never intentional. That's the problem. I think our industry needs to shed this old idea that it's OK, even encouraged to turn away technical candidates for anything less than absolute 100% confidence at every step of the interview process. Because when you do, you are accidentally optimizing for implicit bias. Even as a white guy who probably fulfills every stereotype you can think of about programmers, and who is in fact wearing an "I Rock at Basic" t-shirt while writing this very blog post*, that's what has always bothered me about it, more than the strictness. If you care at all about diversity in programming and tech, on any level, this hiring approach is not doing anyone any favors, and hasn't been. For years.

I know what you're thinking.

Fine, Jeff, if you're so smart, and "hiring the best" isn't the right strategy for startups, and maybe even harmful to our field as a whole, what should we be doing?

Well, I don't know, exactly. I may be the wrong person to ask because I'm also a big believer in geographic diversity on top of everything else. Here's what the composition of the current Discourse team looks like:

I would argue, quite strongly and at some length, that if you want better diversity in the field, perhaps a good starting point is not demanding that all your employees live within a tiny 30 mile radius of San Francisco or Palo Alto. There's a whole wide world of Internet out there, full of amazing programmers at every level of talent and ability. Maybe broaden your horizons a little, even stretch said horizons outside the United States, if you can imagine such a thing.

I know hiring people is difficult, even with the very best of intentions and under ideal conditions, so I don't mean to trivialize the challenge. I've recommended plenty of things in the past, a smorgasbord of approaches to try or leave on the table as you see fit:

… but the one thing I keep coming back to, that I believe has enduring value in almost all situations, is the audition project:

The most significant shift we’ve made is requiring every final candidate to work with us for three to eight weeks on a contract basis. Candidates do real tasks alongside the people they would actually be working with if they had the job. They can work at night or on weekends, so they don’t have to leave their current jobs; most spend 10 to 20 hours a week working with Automattic, although that’s flexible. (Some people take a week’s vacation in order to focus on the tryout, which is another viable option.) The goal is not to have them finish a product or do a set amount of work; it’s to allow us to quickly and efficiently assess whether this would be a mutually beneficial relationship. They can size up Automattic while we evaluate them.

What I like about audition projects:

  • It's real, practical work.
  • They get paid. (Ask yourself who gets "paid" for a series of intensive interviews that lasts multiple days? Certainly not the candidate.)
  • It's healthy to structure your work so that small projects like this can be taken on by outsiders. If you can't onboard a potential hire, you probably can't onboard a new hire very well either.
  • Interviews, no matter how much effort you put into them, are so hit and miss that the only way to figure out if someone is really going to work in a given position is to actually work with them.

Every company says they want to hire the best. Anyone who tells you they know how to do that is either lying to you or to themselves. But I can tell you this: the companies that really do hire the best people in the world certainly don't accomplish that by hiring from the same tired playbook every other company in Silicon Valley uses.

Try different approaches. Expand your horizons. Look beyond People Like Us and imagine what the world of programming could look like in 10, 20 or even 50 years – and help us move there by hiring to make it so.

* And for the record, I really do rock at BASIC.

27 Mar 07:33

Docker for Windows Beta announced

by Scott Hanselman

(Image: Docker Desktop App)

I'm continuing to learn about Docker and how it works in a developer's workflow (and DevOps, and production, etc. as you move downstream). This week Docker released a beta of their new Docker for Mac and Docker for Windows. They've included OS-native apps that run in the background (the "tray") that make Docker easier to use and set up. Previously I needed to disable Hyper-V and use VirtualBox, but this new Docker app handles Hyper-V automatically, which more easily fits into my workflow, especially if I'm using other Hyper-V features, like the free Visual Studio Android Emulator.

I signed up at http://beta.docker.com. Once installed, when you run the Docker app with Hyper-V enabled, Docker automatically creates the Linux "mobylinux" VM you need in Hyper-V, sets it up, and starts it up.

"Moby" the Docker VM running in Hyper-V

After Docker for Windows (Beta) is installed, you just run PowerShell or CMD and type "docker" and it's already set up with the right PATH and environment variables, and it just works. It gets set up on your local machine as http://docker but the networking goes through Hyper-V, as it should.

The best part is that Docker for Windows supports "volume mounting," which means the container can see your code on your local device (they have a "wormhole" between the container and the host), which means you can do "edit and refresh" type scenarios for development. In fact, Docker Tools for Visual Studio uses this feature; there are more details on this "Edit and Refresh" support in Visual Studio here.

The Docker Tools for Visual Studio can be downloaded at http://aka.ms/dockertoolsforvs. It adds a lot of nice integration like this:

Docker in VS

This makes the combination of Docker for Windows + Docker Tools for Visual Studio pretty sweet. As far as the VS Tools for Docker go, support for Windows is coming soon, but for now, here's what version 0.10 of these tools supports with a Linux container:

  • Docker assets for Debug and Release configurations are added to the project
  • A PowerShell script added to the project to coordinate the build and compose of containers, enabling you to extend them while keeping the Visual Studio designer experiences
  • F5 in Debug config, launches the PowerShell script to build and run your docker-compose.debug.yml file, with Volume Mapping configured
  • F5 in Release config launches the PowerShell script to build and run your docker-compose.release.yml file, with an image you can verify and push to your docker registry for deployment to other environments

You can read more about how Docker on Windows works at Steve Lasker's Blog and also watch his video about Visual Studio's support for Docker in his video on Ch9 and again, sign up for Docker Beta at http://beta.docker.com.


22 Mar 06:46

Understanding ES5, ES2015 and TypeScript

What is the difference between ES5, ES2015 (formerly known as ES6), and TypeScript? Which should we learn and use?

First, let’s create a foundation for our discussion for each of these. TypeScript is a superset of JavaScript. ES2015 is the evolution of ES5. This relationship makes it easier to learn them progressively.

ES5 to ES2015 to TypeScript

We want to understand the differences between them, but first we must understand what each of these are and why they exist. We’ll start with ES5.

ES5

ES5 is what most of us have used for years. Functional programming at its best, or worst, depending on how you view it. I personally love programming with ES5. All modern browsers support it. It’s extremely flexible but also has many factors that contribute to apps that can become train wrecks. Scoping, closures, IIFE’s and good guard logic are required to keep our train on the rails with ES5. Despite this, its flexibility is also a strength that many of us have leaned on.

This chart shows the current compatibility of browsers for ES5.

Perhaps the most difficult problem that ES5 poses for us is the difficulty in identifying issues at development time. Tooling for ES5 is lacking as it is complicated, at best, for a tool to decipher how to inspect ES5 for concerns. We’d like to know what properties an object in another file contains, what an invalid parameter to a function may be, or let us know when we use a variable in an improper scope. ES5 makes these things difficult on developers and on tooling.

ES6/E2015 Leaps Forward

ES2015 is a huge leap forward from ES5. It adds a tremendous amount of functionality to JavaScript. These features address some of the issues that made ES5 programming challenging. They are optional, as we can still use valid ES5 (including functions) in ES2015.

Here are some of the ES2015 features as seen in Luke Hoban’s reference. A full list can be seen here at the spec. - arrows - classes - enhanced object literals - template strings - destructuring - default + rest + spread - let + const - iterators + for..of - generators - unicode - modules - module loaders - map + set + weakmap + weakset - proxies - symbols - subclassable built-ins - promises - math + number + string + array + object APIs - binary and octal literals - reflect api - tail calls

This is a dramatic leap forward to ES5 and modern browsers are racing to implement all of the features. This chart shows the current compatibility of browsers for ES2015.

Node.js is built against modern versions of the V8 engine. Node has implemented much of ES2015, according to its docs.

Node 4.x labels itself as Long Term Support (LTS). The LTS label indicates their release line. All even numbered major versions focus on stability and security. All odd numbered major versions (e.g. 5.x) fall under Short Term Support (STS), which focus on active development and more frequent updates. In short, I recommend you stay on node 4 for production development and node 5 for future research of features that may be in future LTS versions. You can read the official node guidelines for versioning here.

Bringing it back to ES2015, we now have an incredible amount of functionality that we can optionally use to write code.

How Do Developers Consider ES2015?

We might wonder who might be interested in ES2015 and who might not. There are many ES5 developers who are well versed in the pros and cons of the language. After over a decade in JavaScript, we may feel very comfortable with ES5. Once we master a language it can be difficult to justify leaping to a new version if we do not see the value. What are we gaining? What problem are we solving? This is a natural way of thinking. Once we decide if there is value in moving to ES2015, then we can decide to make the move.

There are also many ES5 developers who couldn’t wait to use ES2015. The point is that many folks who have used ES5 are already on to ES2015, while many more are still making that decision to migrate.

There are many JavaScript developers today, but even more are coming. I believe the number now who are considering learning JavaScript and those still on their way, will dwarf that number using it today. JavaScript is growing and not everyone will have had a solid ES5 background. Some are coming from Java and C# and other popular languages and frameworks. Many of these already already have the features that ES2015 recently introduced, and have had them for years. This makes ES2015 a much easier transition for them, than ES5. And it’s good timing too, as many modern browsers and Node are supporting ES2015.

So there are many of us, all with different perspectives, all leading to an eventual ES2015 (or beyond) migration.

Supporting ES5 Browsers

How do we run ES2015 in browsers that do not yet support ES2015? We can use ES2015 and transpile to ES5 using a tool like Babel. Babel makes it easy to write ES2015 (an din the future ES2016 and beyond), and still compile down to an older version of JavaScript. Pretty cool!

TypeScript

Where does TypeScript fit in? Should we even bother with it?

First, I think the name throws people off. The word Type in TypeScript indicates that we now have types. These types are optional, so we do not have to use them. Don;t believe me? Try pasting your ES5 code into the TypeScript playground. Look mom! No types needed! So shouldn’t we optionally call it Type?Script or [Type]Script ? Kidding aside, the types are just once piece of TypeScript. Perhaps a better name is simply ES+.

Let’s step back for a moment and revisit one of the concerns I mentioned previously that many developers have with writing JavaScript: the difficulty in identifying mistakes at development time.

What if we could identify scoping issues as we type them? What if we could identify mismatched parameters in our tool with red underlines? What if our editors and IDEs could tell us when we make a mistake in using the other people’s or our own code improperly? This is what we generally rely on tooling for.

Identifying Issues Early

Whether we use Atom, VS Code, Visual Studio, Web Storm, or Sublime Text we enjoy a plethora of innate features or extensions to our tool of choice that help us write better code faster. These tools should (and can) help use identify problems early.

Is it more fun to find an issue right away as we code it, so we can fix it there … or to get called at 5am due to a production outage when traffic cranked up on our app and hit our hidden bug? I prefer to be home at 5 with my family :)

These tools today try their best to help identify problems, and they do an admirable job with what they have to work with. But what if we could give them a little more help? What if we could give them the same types of help that other languages like C# and Java provide today? Then these tools can really help us identify issues early and often.

This is where TypeScript shines.

The value in TypeScript is not in the writing less code. The value of TypeScript is in writing safer code. Over the long haul, it helps us to write code more efficiently as we take advantage of tooling for identifying issues and automatically filling in parameters, properties, functions, and more (often known as autocomplete and intellisense).

You can try out TypeScript here in their playground.

ES+

I joke that TypeScript should be called ES+, but when we examine it more closely, that is what is really is.

So what does TypeScript offer over ES2015? I’ll focus on the three main additions I feel add the most value:

  1. Types
  2. Interfaces
  3. Future ES2016+ features (such as Annotations/Decorators and async/await)

TypeScript is ES plus features like these.

Types and interfaces help provide the tooling it needs to identify problems early as we type them. With these features our editors don’t have to guess whether we used a function properly or not. The information is readily available for the tool to raise a red flag to us so we can fix he issues right away. In some cases, these tools can also help recommend and refactor for us!

TypeScript promises to be forward thinking. It helps bring the agreed upon features in the future ECMAScript spec to us today. For example features like decorators (used in Angular 2) and async/await (a popular technique to make async programming easier in C#). Decorators are available now in TypeScript while async/await is coming soon in v 2.0 according to the TypeScript roadmap.

Is TypeScript Deviating from JavaScript?

From the top of the TypeScript website’s front page we find this statement:

TypeScript is a typed superset of JavaScript that compiles to plain JavaScript.

This is hugely important. TypeScript is not a shortcut language. It doesn’t deviate from JavaScript. It doesn’t take us in another direction. It’s purpose is to allow us to use features in the future versions of JavaScript today, and to provide a better and safer experience.

Why Not Just use ES2015?

That’s a great option! Learning ES2015 is a huge leap from ES5. Once you master ES2015, I argue that going from their to TypeScript is a very small step. So I suggest back, once you learn ES2015, try TypeScript and take advantage of its tooling.

What About Employability?

Does learning ES2015 or TypeScript hurt my employability? Absolutely not. But it also doesn’t mean that you shouldn’t understand ES5. ES5 is everywhere today. That will curve down eventually, but there is a lot of ES5 code and it’s good to understand the language both to support it and to understand what problems ES2015 and TypeScript help solve. Plus we can use our knowledge of ES5 to help use debug issues using sourcemaps in the browsers.

Keeping Up with the Emerging Technology

For a long time we didn’t need transpilers. The Web used JavaScript and most folks who wrote in ES3 and ES5 used jQuery to handle any cross browser issues. When ES5 came along, not much changed there. For a long period of years in Web development we had a stable set of JavaScript features that most browsers understood. Where there were issues we used things like es5-shim.js and even jQuery to work around them. Things have changed.

The Web is moving at a fast pace. New Web standards are emerging. Libraries like Angular 2, Rx.js, React, and Aurelia are pushing the Web forward. More developers are coming to JavaScript via the web and Node.js.

The ECMAScript team is now adopting a new name for the language versions using the year as an identifier. No more ES6, now we call it ES2015. The next version is targetted as ES2016. The intention is to drive new features into JavaScript more frequently. It takes time for all browsers to adopt the standards across the desktop and mobile devices.

What does this all mean? Just when we have browsers that support ES2015, ES2016 may be out. Without help, this could be awful if we want to support all ubiquitous browsers and use the new features! Unless we have a way to use the new features today and support the browsers we need.

This is why the emergence of transpilers has become so important in the Web today. TypeScript and Babel (the major players in transpiling) both supported ES2015 before it was in the browsers. They both plan to support (and already do in some cases) ES2016 features. These tools are the current answer to how we move forward without leaving behind our customers.

How Do We Transpile?

We can use tools like Gulp, Grunt, WebPack, and SystemJS with JSPM to transpile with Babel or TypeScript. Many editors connect directly to these tasks to transpile for us as we code. Many IDEs now support automatic transpilation with a click of a button. We can even use TypeScript from the command line to watch our files and transpile as we go.

No matter how or where we code, there are many ways to transpile.
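As a rough illustration (simplified, and not exact compiler output), this is the kind of translation a transpiler performs so that older browsers can run code written with the newer syntax:

const double = (n: number) => n * 2;   // arrow function and const
const values = [1, 2, 3].map(double);
console.log(`doubled: ${values}`);     // template literal

// After transpiling to ES5, the emitted JavaScript looks roughly like:
//   var double = function (n) { return n * 2; };
//   var values = [1, 2, 3].map(double);
//   console.log("doubled: " + values);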

What It All Means

A fact in our chosen profession is that technology changes. It evolves. Sometimes it happens much faster than we can absorb it. That’s why it is important to take advantage of tools that can help us absorb and adapt to the changes, like TypeScript and Babel for ES2015 (and beyond). In this case, we’re using technology to keep up with technology. Seems like a paradox, but at the core it’s simply using our time effectively to keep up.

17 Mar 22:01

Orange will extend its LoRa network with a connected lamp

by Geoffray

At Orange's fourth 'Show Hello' event, Stéphane Richard, the group's CEO, unveiled the group's new Livebox as well as its ambitions in the Internet of Things.

In front of 900 hand-picked guests (decision-makers, partners, journalists, group employees, and so on), Orange's boss Stéphane Richard hosted a roughly one-hour keynote presenting the group's vision and new products for the Internet of Things.

Among the major announcements were the new Livebox and its simplified Bluetooth remote, but also numerous partnerships around the Data Share service, which lets users control how their personal data is shared.


Orange doubles down

Focusing on connected devices, Stéphane Richard reiterated the group's ambitions in the sector, stating that Orange expects 25 billion connected devices worldwide by 2020.

The head of France's largest telecom operator recalled that Orange aims to connect 17 large French metropolitan areas to LoRa in 2016, and that this rollout will be accompanied by two initiatives: 'Mon Réseau LoRa' and the launch of a connected lamp embedding a LoRa antenna.

The first initiative offers tags connected to Orange's LoRa network to provide a key-finder service able to locate a lost pet or object anywhere within the network's coverage area.

That is not especially innovative (Objenious, the Bouygues subsidiary, offers a similar service) and did not catch our attention as much as Orange's announcement of a collaborative deployment phase for its LoRa network.


A collaborative deployment

To make the coverage of Orange's IoT network as broad as possible, speed up its rollout, and address 'deep indoor' use cases, Orange has come up with an innovative way of densifying the network: employees who wish to can equip themselves with a LoRa lamp.

Besides providing light, these lamps act as so many additional relay points for Orange's LoRa network. Stéphane Richard announced that, as a first step, 5,000 group employees will host this LoRa-connected lamp at home along with tags, letting them test the new services that use the new network dedicated to connected devices.

 

Beyond lighting, this lamp should thus contribute to the emergence of a new family of connected devices that are useful in consumers' everyday lives.

At this stage, however, it is still unclear whether this announcement is meant to compete with Archos's PicoWAN initiative, which aims to build a collaborative network of LoRa gateways hidden inside power outlets.

12 Mar 05:54

[Op-ed] The battle of low-throughput IoT networks: Sigfox vs LoRa

by Geoffray

In the unforgiving world of connected devices, two philosophies of low-throughput IoT networks are squaring off. Sigfox's global vision faces the local strategy of the LoRa community. Cyril Masson sketches a strategic landscape still taking shape in LPWAN networks.

According to numerous forecasts, around 50 billion devices should be connected by 2022. One third of these devices will keep using today's GSM protocols, one third will use very short-range networks, notably for home automation, and one third should be connected to low-throughput, long-range networks, giving a total potential addressable market of 5 billion connected devices.

One can also expect that part of home-automation usage will shift to these low-throughput networks: configuring connected devices in the smart home is currently made difficult by the use of a Wi-Fi network (per-sensor configuration by the user).


Low-throughput IoT networks

In this market, two main kinds of low-throughput network are currently competing:

  • the Sigfox network, which bets on very short, lightweight messages (12 bytes, 140 messages per day) and on the rapid deployment of a "seamless" network on a global scale;
  • the LoRa network, which allows larger messages (50 kbytes) and relies on existing mobile operators to build out its network.

As a result, the Sigfox network, through the constraints its message size imposes on devices, shrinks its "addressable" market:

  1. the message size only allows uplink communication (from the device to the network) and thus rules out downlink messages from the network, such as maintenance messages;
  2. the message size leaves no room to encrypt the messages, potentially exposing the communication and access to the data;
  3. the message size does not really allow movement above 10 to 20 km/h, beyond which information loss risks limiting transmission.

Given these constraints, it is realistic to assume that only 50% of the market is addressable, i.e. around 2.5 billion devices by 2022.

Although ambitious, Sigfox's plan to capture 20% of that addressable market, roughly 500 million connected devices by 2022, nevertheless seems realistic given the head start of the Toulouse-based company, the French state's support in making it the standard-bearer for a technology deemed "strategic" at the European level, and the investments already made.

The business models, and therefore the financial implications, are quite different.

The cost of an antenna on the Sigfox network is estimated at around €2,500 (capex) plus €1,000 per year (opex), along with 5% maintenance fees. Sigfox is estimated to have deployed around 1,500 antennas to date to cover France, though doubts remain over such a network's ability to meet all the needs of the Internet of Things. To this must be added the site rental cost for each antenna, which can amount to roughly €2,500 per year.


Sigfox's strategy

To ensure the rollout of its network, Sigfox has set up partnerships in the various countries, with deployment carried out by telecom players (for example TDF in France, Arqiva in the United Kingdom, Albertis in Spain) on a revenue-sharing basis that guarantees an estimated 50% of the revenue billed per connected device to whoever deploys and operates the network.


There is of course a risk in betting on such a model: if the revenue generated is not enough to cover the deployment costs mentioned above, the partners are weakened economically, and that is a real risk.

Sigfox could thus end up having to buy out these partners and bear the network's investment and operating costs directly, or even deploy it itself, as is already the case in some countries. Indeed, the weight of these outlays seems to explain the enormous amounts raised by the Internet of Things operator.

For its part, Bouygues Télécom has said it plans to deploy 4,000 antennas to cover France. Compared with the roughly 1,500 antennas announced for the Sigfox network, some questions arise: does the Sigfox network cover all use cases, and is it sufficient?

If not, this could pose a risk of further narrowing the addressable market discussed above.

Another reading is a clash of market visions between Sigfox and LoRa: the former favoring international coverage with a sparse mesh so as to quickly establish itself as an international operator, even at the cost of leaning on headline announcements; the latter betting on a denser local mesh.

Remember that Bouygues Télécom and Orange are deploying a LoRa network, but that Archos, another French player, is also deploying one, with the ambition of rolling out 2,000 relay antennas in France.

How the LoRa network works

Moreover, the LoRa network appears to carry lower deployment costs. Each antenna would represent an investment of €1,500 per month including installation (capex) plus 5% maintenance fees (opex). One of the fundamental things setting LoRa apart here is that it relies mainly on existing operators that already have their GSM antennas and the operational know-how.

Such costs make it possible to envisage a national rollout with denser coverage, though over a longer timeframe. Where Sigfox claims to have already covered the country, LoRa is announcing French coverage for the end of 2016. It is also not impossible that the planned tie-up between Orange and Bouygues Télécom will speed up the process.


But two distinct visions of the market itself are also clashing: while Sigfox positions itself as a global player and says it can offer a single billing model worldwide, LoRa seems to be betting on a model in which each country, each geography, will have its own players and its own IoT strategy.

At this stage, the only sizeable projects announced by the two players are mostly local ones backed by each competitor's shareholders: Engie Cofely for Sigfox and Colas for LoRa are very localized projects.

Given that industrial players already risk facing delays in rolling out their IoT strategies and producing devices, how soon can we realistically expect IoT projects deployed on a worldwide scale? In that case Sigfox may well end up with a global network... but with local customers, facing competition that is admittedly less global but stronger in each market.

 

Note: in response to this piece, Sigfox wished to correct the author's arguments, notably on the following points:

  1. The Sigfox network supports both uplink and downlink communication. The company already has customers on its commercial network using the bidirectional feature.
  2. The encryption Sigfox is responsible for concerns the identity of the device. Encrypting customer data is entirely possible within the 12 bytes of the message; that encryption is the customer's decision.
  3. The Sigfox network architecture does allow mobility above 20 km/h. The company already has commercially available mobile applications running on its network.
  4. Sigfox also points out that in France in particular, its service is commercially available nationwide, with a public coverage map and a service-level commitment.
07 Mar 06:39

E-mail inventor Ray Tomlinson, who popularized @ symbol, dies at 74

by Cyrus Farivar

(credit: Steve Snodgrass)

If you’ve ever sent an e-mail, you can thank Raymond Samuel Tomlinson for putting the @ symbol there.

On Friday, Tomlinson died of suspected heart failure. He was 74.

Tomlinson was born in Amsterdam, New York in 1941, and he earned a master’s degree from the Massachusetts Institute of Technology. In 1967, he joined Bolt Beranek and Newman (BBN), a company that played a key role in the development of the ARPANET, a precursor to the modern Internet.


05 Mar 13:09

Geneva 2016: Goodyear Eagle-360, a spherical tire concept

by Flavien Robert
At the Geneva motor show, alongside the carmakers, some suppliers also made the trip. Goodyear is presenting a new spherical tire concept there for autonomous vehicles. With the expected development of autonomous cars, we should see many innovations arrive, particularly in tires. The Goodyear Eagle-360 is a rather unusual concept. […]
03 Mar 12:04

Thanks to a Fitbit, he discovers his wife is pregnant

by Geoffray

Thanks to a Fitbit connected device, a man learned before his wife that she was pregnant. The tracker recorded a rise in her heart rate, symptomatic of the start of a pregnancy.

As we discuss in episode 24 of the SmartShow podcast (on iTunes) this week, this American's story is both surprising and amusing.

David Trinidad's story made the rounds of the Internet after he discovered some good news thanks to the connected device worn by his wife, Ivonne.


Discovering a pregnancy online

She routinely wore a connected wristband to track her physical activity from day to day.

Using Fitbit's mobile app, and surprised to see a sharp rise in his wife's heart rate over the previous few days, the husband raised his concerns on the Reddit forum.

Thinking it might be a sensor defect or possibly an illness, he first asked the community for help fixing the malfunction, asking: "Is there a way to reset or recalibrate the device?"

Reddit interprets the Fitbit data

But another user quickly pointed him toward a different hypothesis:

"Has she been through a period of major stress recently, or is it possible she's pregnant?"

Intrigued by this message, David and Ivonne Trinidad eventually made a doctor's appointment to get to the bottom of it. They finally found out that she was indeed pregnant.

Fitbit, for its part, was quick to celebrate the news: it was the first time a user had learned of her pregnancy thanks to one of its connected devices.

Trackers from the American startup Fitbit account for the majority of the wearables market, estimated at several tens of millions of units sold per year worldwide.


01 Mar 15:52

A Subtle Case Sensitivity Gotcha with Regular Expressions

Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems. - Jamie Zawinski

Other people, when confronted with writing a blog post about regular expressions, think "I know, I'll quote that Jamie Zawinski quote!"

It's the go-to quote about regular expressions, but it's probably no surprise that it's often taken out of context. Back in 2006, Jeffrey Friedl tracked down the original context of this statement in a fine piece of "pointless" detective work. The original point, as you might guess, is a warning against trying to shoehorn regular expressions into solving problems they're not appropriate for.

As XKCD noted, regular expressions used in the right context can save the day!

XKCD - CC BY-NC 2.5 by Randall Munroe

If Jeffrey Friedl’s name sounds familiar to you, it’s probably because he’s the author of the definitive book on regular expressions, Mastering Regular Expressions. After reading this book, I felt like the hero in the XKCD comic, ready to save the day with regular expressions.

The Setup

This particular post is about a situation where Jamie's regular expressions prophecy came true. In using regular expressions, I discovered a subtle unexpected behavior that could have led to a security vulnerability.

To set the stage, I was working on a regular expression to test to see if potential GitHub usernames are valid. A GitHub username may only consist of alphanumeric characters. (The actual task I was doing was a bit more complicated than what I’m presenting here, but for the purposes of the point I’m making here, this simplification will do.)

For example, here's my first take at it: ^[a-z0-9]+$. Let's test this expression against the username shiftkey (a fine co-worker of mine). Note, these examples assume you import the System.Text.RegularExpressions namespace like so: using System.Text.RegularExpressions; in C#. You can run these examples online using CSharpPad, just be sure to output the statement to the console. Or you can use RegexStorm.net to test out the .NET regular expression engine.

Regex.IsMatch("shiftkey", "^[a-z0-9]+$"); // true

Great! As expected, shiftkey is a valid username.

You might be wondering why GitHub restricts usernames to the latin alphabet a-z. I wasn’t around for the initial decision, but my guess is to protect against confusing lookalikes. For example, someone could use a character that looks like an i and make me think they are shiftkey when in fact they are shıftkey. Depending on the font or whether someone is in a hurry, the two could be easily confused.

So let’s test this out.

Regex.IsMatch("shıftkey", "^[a-z0-9]+$"); // false

Ah good! Our regular expression correctly identifies that as an invalid username. We’re golden.

But no, we have another problem! Usernames on GitHub are case insensitive!

Regex.IsMatch("ShiftKey", "^[a-z0-9]+$"); // false, but this should be valid

Ok, that’s easy enough to fix. We can simply supply an option to make the regular expression case insensitive.

Regex.IsMatch("ShiftKey", "^[a-z0-9]+$", RegexOptions.IgnoreCase); // true

Ahhh, now harmony is restored and everything is back in order. Or is it?

The Subtle Unexpected Behavior Strikes

Suppose our resident shiftkey imposter returns again.

Regex.IsMatch("ShİftKey", "^[a-z0-9]+$", RegexOptions.IgnoreCase); // true, DOH!

Foiled! Well that was entirely unexpected! What is going on here? It’s the Turkish İ problem all over again, but in a unique form. I wrote about this problem in 2012 in the post The Turkish İ Problem and Why You Should Care. That post focused on issues with Turkish İ and string comparisons.

The tl;dr summary is that the uppercase for i in English is I (note the lack of a dot) but in Turkish it’s dotted, İ. So while we have two i’s (upper and lower), they have four.
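To see those four forms side by side, here is a small sketch of my own (not from the original investigation) using JavaScript's locale-aware casing, which modern engines support:

console.log("i".toLocaleUpperCase("en-US")); // "I"  dotless uppercase
console.log("i".toLocaleUpperCase("tr-TR")); // "İ"  dotted uppercase
console.log("I".toLocaleLowerCase("tr-TR")); // "ı"  dotless lowercase
console.log("İ".toLocaleLowerCase("tr-TR")); // "i"  dotted lowercase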

This feels like a bug to me, but I’m not entirely sure. It’s definitely a surprising and unexpected behavior that could lead to subtle security vulnerabilities. I tried this with a few other languages to see what would happen. Maybe this is totally normal behavior.

Here's the regular expression literal I'm using for each of these test cases: /^[a-z0-9]+$/i. The key thing to note is that the /i at the end is a regular expression option that specifies a case-insensitive match. First up, JavaScript:

/^[a-z0-9]+$/i.test('ShİftKey'); // false

The same with Ruby. Note that the double negation is to force this method to return true or false rather than nil or a MatchData instance.

!!/^[a-z0-9]+$/i.match("ShİftKey")  # false

And just for kicks, let’s try Zawinski’s favorite language, Perl.

if ("ShİftKey" =~ /^[a-z0-9]+$/i) {
  print "true";    
}
else {
  print "false"; # <--- Ends up here
}

As I expected, these did not match ShİftKey but did match ShIftKey, contrary to the C# behavior. I also tried these tests with my machine set to the Turkish culture, just in case something else weird was going on.

It seems like .NET is the only one that behaves in this unexpected manner. Though to be fair, I didn’t conduct an exhaustive experiment of popular languages.

The Fix

Fortunately, in the .NET case, there’s two simple ways to fix this.

Regex.IsMatch("ShİftKey", "^[a-zA-Z0-9]+$"); // false
Regex.IsMatch("ShİftKey", "^[a-z0-9]+$", RegexOptions.IgnoreCase | RegexOptions.CultureInvariant); // false

In the first case, we just explicitly specify capital A through Z and remove the IgnoreCase option. In the second case, we use the CultureInvariant regular expression option.

Per the documentation,

By default, when the regular expression engine performs case-insensitive comparisons, it uses the casing conventions of the current culture to determine equivalent uppercase and lowercase characters.

The documentation even notes the Turkish I problem.

However, this behavior is undesirable for some types of comparisons, particularly when comparing user input to the names of system resources, such as passwords, files, or URLs. The following example illustrates such a scenario. The code is intended to block access to any resource whose URL is prefaced with FILE://. The regular expression attempts a case-insensitive match with the string by using the regular expression $FILE://. However, when the current system culture is tr-TR (Turkish-Turkey), “I” is not the uppercase equivalent of “i”. As a result, the call to the Regex.IsMatch method returns false, and access to the file is allowed.

It may be that the other regular expression engines are culturally invariant by default when ignoring case. That seems like the correct default to me.

While writing this post, I used several helpful online utilities to help me test the regular expressions in multiple languages.

Useful online tools

  • https://repl.it/languages provides a REPL for multiple languages such as Ruby, JavaScript, C#, Python, Go, and LOLCODE among many others.
  • http://www.tutorialspoint.com/execute_perl_online.php is a Perl REPL since that last site did not include Perl.
  • http://regexstorm.net/tester is a regular expression tester that uses the .NET regex engine.
  • https://regex101.com/#javascript allows testing regular expressions using PHP, JavaScript, and Python engines.
  • http://rubular.com/ allows testing using the Ruby regular expression engine.
29 Feb 20:35

Geneva 2016: Bugatti Chiron

by Gautier Bottet
Bugatti is rejoining the race for power and speed with the new Chiron. No hybridization in sight, but 1,500 hp from its quad-turbo W16 and a top speed limited to 420 km/h. The design of the new Bugatti Chiron had already been widely revealed through prototype photos […]
29 Feb 12:04

Raspberry Pi 3 has Wi-Fi and Bluetooth, 64-bit chip, still just $35

by Jon Brodkin

The Raspberry Pi 3. (credit: Raspberry Pi Foundation)

The third major version of the Raspberry Pi will go on sale Monday, with the $35/£30 credit card-sized Raspberry Pi 3 Model B now sporting a 64-bit processor and embedded Wi-Fi and Bluetooth.

In previous versions, the Pi needed USB adapters to get Wi-Fi and Bluetooth connectivity. Raspberry Pi 3 supports 802.11n Wi-Fi (2.4GHz only) and Bluetooth 4.0 without an adapter, freeing up its four USB ports for other purposes.

The computer will be on sale Monday from “all the usual resellers,” the Raspberry Pi Foundation told us. That would likely include Element14, Think Allied, and RS Components.


27 Feb 13:22

Pitfalls of Unlimited Vacations

Vacation, All I ever wanted
Vacation, Had to get away
Vacation, Meant to be spent alone
Lyrics by The Go Go’s

Beatnik Beach CC BY-SA 2.0 Photo by Ocad123

When I joined GitHub four years ago, I adored its unlimited paid time off benefit. It’s not that I planned to take a six month trek across Nepal (or the more plausible scenario of playing X-Box in my pajamas for six months), but I liked the message it sent.

It told me this company valued its employees, wanted them to not burn out, and trusted them to behave like stakeholders in the company and be responsible about their vacation.

And for me, it’s worked out well. This, in tandem with our flexible work hours, helps me arrange my work schedule so that I can be a better spouse and parent. I walk my son to the bus stop in the mornings. I chaperone every field trip I can. I take the day off when my kids have no school. It’s great!

I also believe it’s a tool to help recruit great people.

For example, in their famous Culture Deck, Netflix notes that…

Responsible People Thrive on Freedom and are Worthy of Freedom

They go on…

Our model is to increase employee freedom as we grow, rather than limit it, to continue to attract and nourish innovative people, so we have a better chance of sustained success

In one slide they note that “Process-focus Drives More Talent Out”

The most talented people have the least tolerance for processes that serve to curtail their freedom to do their best work. They’d rather be judged by the impact of their work than when and how much they worked.

This is why Netflix also has a policy that there is no vacation policy. They do not track vacation in the same way they do not track hours worked per day or week.

Pitfalls

As you might expect, there are some subtle pitfalls to such a policy or lack thereof. I believe such policies that rely on the good judgment of individuals are well intentioned, but often ignore the very real psychological and sociological factors that come into play with such policies.

Only the pathologically naïve employee would believe they can go on a world tour for twelve months and expect no repercussions when they return to work.

In the absence of an explicit policy, there’s an implicit policy. But it’s an implicit policy that in practice becomes a big game of Calvinball where nobody understands the rules.

But unlike Calvinball where you make the rules as you go, the rules of vacationing are driven by subtle social cues from managers and co-workers. And the rules might even be different from team to team even in a small company because of different unspoken expectations.

At GitHub, this confusion comes into sharp relief when you look at our generous parental policy. GitHub provides four months of paid time off for either parent when a new child enters the family through birth or adoption. I love how family friendly this policy is, but it raises the question: why is it necessary when we already have unlimited paid time off?

Well, one benefit of this policy, even if it seems redundant, is that it sets the right expectations of what is deemed reasonable and acceptable.

Travis CI (the company) realized this issue in 2014.

When everyone keeps track of their own vacation days, two things can happen. They either forget about them completely, or they’re uncertain about how much is really okay to use as vacation days.

They also noted that people at companies with unlimited time off tend to take less time off or work here and there during their vacations.

A short-sighted management team might look at this as a plus, but it’s a recipe for burnout among their most committed employees. Humans need to take a break from time to time to recharge.

Travis CI took the unusual step of instituting a minimum vacation policy. It sets a shared understanding of what is considered an acceptable amount of time to take.

In talking about this with my friend Drew Miller, he had an astute observation. He noted that while such a policy is a good start, it doesn't address the root cause. A company with no vacation policy where people don't take vacation should take a deep look at its culture and ask itself, "What about our culture causes people to feel they can't take time off?"

For example, and the Travis CI post notes this, leaders at a company have to model good behavior. If the founders, executives, and managers take very little vacation, they unconsciously communicate to others that going on vacation is not valued or important at this company.

His words struck me. While I like the idea of a minimum vacation to help people feel more comfortable taking vacation, I feel such a move has to be in tandem with a concerted effort to practice what we preach.

Ever since then, as a manager, I’ve tried to model good responsible vacation behavior. Before I take off, I communicate to those who need to know and perform the necessary hand-offs. And more importantly, while on vacation, I disconnect and stay away from work. I do this even though I sometimes want to check work email because I enjoy reading about work. I abstain because I want the people on my team to feel free to do the same when they are on vacation.

Apparently I’ve done a good job of vacationing because somebody at GitHub noticed. We have an internal site called Team that’s sort of like an internal Twitter and then some. One fun feature is that anybody at GitHub can change your team status. At one point, I returned from vacation and noticed my status was…

I can live with that!

It’s since been changed to “Supreme Vice President.” It’s a long story.

27 Feb 13:11

The real silly season: Formula 1’s new rules

by Jonathan M. Gitlin

Mercedes F1

Nico Rosberg puts some miles on his Mercedes W07 at the first preseason test in Barcelona.


This week, the people that control Formula 1 racing got together in Geneva, Switzerland, to come up with some ideas to fix the sport. At first glance, it appears they might instead have broken the one bit of the show—qualifying on Saturday afternoons—that still holds any real excitement for fans. Changes to 2017's technical rules are also coming. The introduction of better head protection for drivers is welcome, but the rest of the tweaks appear—to almost everyone outside of the F1 Strategy Group and the F1 Commission—to be exactly what the doctor didn't order.

Silly season is a name often given to that time of year in sports calendars when news is slow and so, to fill pages or screens, the media reports on stories that wouldn't otherwise merit the attention. In the racing world that normally coincides with late summer, there are gaps in the schedules, people take vacations, and the media is left to speculate on rumors about who's changing teams and the like. You normally wouldn't think of F1's preseason ramp-up in this way. After all, the new-for-2016 cars are currently being unveiled, and some of the teams are testing in Barcelona this week—that stuff actually matters.

However, "silly" accurately describes the proposed changes to the qualifying process this year.


27 Feb 08:30

Microsoft confirms: Android-on-Windows Astoria tech is gone

by Peter Bright

(credit: Sean Gallagher)

At its Build developer conference last year, Microsoft announced four "bridges" designed to help developers bring applications into the Windows Store. Three of these are still around: "Westminster" for porting Web apps, "Centennial" for Win32 apps, and "Islandwood" for iOS apps. But the company confirmed on Thursday that the fourth bridge, "Astoria," intended to help bring Android apps to Windows, is no longer in development.

Early builds of Windows 10 Mobile included a version of Astoria that largely succeeded in enabling Android apps to be run on Windows phones. But last November, the Android layer was quietly removed, with Microsoft saying that it was "not ready yet."

Thursday's announcement suggests that it's never going to be ready. The company writes, rather peculiarly, that choosing between Astoria and Islandwood "could be confusing" and that having two systems for porting non-Windows applications was "unnecessary." Accordingly, Islandwood is the only bridge, and Astoria is being abandoned.


27 Feb 08:27

Microsoft at last buys .NET-for-iOS, Android vendor Xamarin

by Peter Bright

Xamarin Studio, Xamarin's development environment. (credit: Xamarin)

Microsoft will buy Xamarin, maker of .NET tooling that can build apps for iOS, Android, and OS X, for an undisclosed sum.

When Microsoft first launched .NET in the early 2000s, it promised a cross-platform environment that could reach beyond Windows. The company did publish an early FreeBSD-compatible version of .NET named Rotor, and it produced versions of its Silverlight plugin for OS X, but functionally, .NET was a Windows-only affair, with the other platforms distant memories.

In parallel with Microsoft's efforts, an open source version of .NET named Mono was created by Ximian, an open source company founded by Miguel de Icaza and Nat Friedman. Ximian was acquired by Novell in 2003, and Novell was bought by Attachmate in 2011. Attachmate laid off all Mono staff shortly after the acquisition, and de Icaza and Friedman founded Xamarin later that same year to continue their work with Mono.


19 Feb 10:40

Inside the 2016 New York Toy Fair: Every kid’s dream expo

by Valentina Palladino

Video shot/edited by Jennifer Hahn. (video link)

NEW YORK—There exists a place in the world where toys rule, wall to wall, and that place is the annual New York Toy Fair. It's where all the biggest names in fun (think Lego, Mattel, and similar companies) gather to announce the next big things hitting toy store shelves. Much to the dismay of kids everywhere, the Toy Fair is an industry-only event, so we wanted to give you an inside look at the newest toys that may make it into your living room.

The toys shown off at the fair ran the gamut from educational and interesting to fun and lighthearted. Of course there was a new Barbie Dreamhouse that's bigger than many dog houses, and new Star Wars Lego sets featuring Stormtroopers, Rey, and Kylo Ren. Mattel took a break from Barbie hype to also announce the $300 ThingMaker 3D printing toy machine, which lets kids make their own toys by using a simplified 3D printing app with designs that can be sent to the ThingMaker for creation.


19 Feb 10:33

Original 1977 Star Wars 35mm print has been restored and released online

by Mark Walton

A restored HD version of the original Star Wars Episode IV: A New Hope 35mm print has appeared online. While this isn't the first time that attempts have been made to restore Star Wars to its original theatrical version—that's the one without the much-maligned CGI effects and edits of later "special" editions—it is the first to have been based entirely on a single 35mm print of the film, rather than cut together from various sources.

The group behind the release, dubbed Team Negative 1, is made up of Star Wars fans and enthusiasts who spent thousands of dollars of their own cash to restore the film without the blessing of creator George Lucas, or franchise owner Disney. Lucas has famously disowned the original theatrical version of Star Wars, telling The Today Show back in 2004:

The special edition, that’s the one I wanted out there. The other movie, it’s on VHS, if anybody wants it. ... I’m not going to spend the—we’re talking millions of dollars here, the money and the time to refurbish that, because to me, it doesn’t really exist anymore. It’s like this is the movie I wanted it to be, and I’m sorry you saw half a completed film and fell in love with it.

Lucasfilm later claimed that the original negatives of Star Wars were permanently altered for the special edition releases, making restoration next to impossible. How Team Negative 1 got its hands on a 35mm print of the 1977 release of the movie is a mystery. But for fans who don't want to see ropey CGI, a pointless Jabba the Hutt scene, and know for a fact that Han shoots first, this restored version of the film—even with some pops, scratches, and colour issues—is the one to watch.


19 Feb 08:53

Headshot: A visual history of first-person shooters

by Ars Staff

Many of us are familiar with the first-person shooter (FPS) creation myth—that it materialized fully formed in the minds of id Software founders John Carmack and John Romero shortly before they developed Wolfenstein 3D. Afterward, it was pushed forward only by id until Valve's Half-Life came along.

But the reality behind FPS evolution is messier. Innovations came from multiple sources and often took years to catch on. Even Wolfenstein 3D had numerous predecessors within and without id. And like the genres we've previously explored—a list including city builders, graphic adventures, kart histories, and simulation games—there have been many high and low points throughout this long, violent, gory history.

Minus '90s cult favorite Descent (because I personally consider it a flight combat shooter), these are the shooters that pushed the genre forward or held it back. Many of us encountered at least one that truly spoke to us, but together, these titles made it cool to shoot pixel-rendered dudes, dudettes, mutants, and weird alien creatures in the face.


19 Feb 07:56

The fall… and rise and rise and rise of chat networks

by Ars Staff

At the end of October 2014, something very important came to an end. After 15 years of changing the way people communicated forever, Microsoft closed down its MSN Windows Live service.

Originally named MSN Messenger, its demise was not an overnight failure. Microsoft’s acquisition of Skype for £5.1 billion in 2012 meant it was only a matter of time before it was finally closed. China was the last territory to migrate the service to Skype; other countries did so 12 months earlier.

At its height, MSN Messenger had more than 330 million users after originally being launched to rival the emerging chat networks of AOL's AIM service and ICQ, followed by the entry of Yahoo Messenger. It was the social network of its day and as influential and dominant as Facebook is today.


19 Feb 07:00

France says Facebook must face French law in nudity censorship case

by Megan Geuss

(credit: Spencer E Holtaway)

Facebook will have to face a censorship lawsuit over a 19th century oil painting of a woman's genitalia, a Paris appeals court ruled on Friday.

The ruling favored a French teacher whose Facebook account was suspended when he posted an image (NSFW) of a famous Gustave Courbet painting called L’Origine du monde. The portrait depicts a woman naked from the waist down at a graphic angle, and it hangs in the Musée d’Orsay in Paris.

The teacher claimed that Facebook censored him, and he is asking for €20,000 (or about $22,500) in damages. Facebook countered that the man’s lawsuit was invalid because Facebook's Terms of Service stipulate (section 15) that all users must resolve disputes with the social network, "in the US District Court for the Northern District of California or a state court located in San Mateo County.”


30 Jan 13:20

Interactive Coding with C# and F# REPLs (ScriptCS or the Visual Studio Interactive Window)

by Scott Hanselman

REPLs are great! REPL stands for Read–eval–print loop and is pronounced "REP-L" quickly, like "battle." Lots of languages and environments have interactive coding and REPLS at their heart and have for years. C# and F# do also, but a lot of people don't realize there are REPLs available!

ScriptCS

In 2013 once the Roslyn open source C# compiler started to mature, Glenn Block and many friends made ScriptCS. It now lives at http://scriptcs.net and has a great GitHub and active community. The Mono project has also had a REPL for a very long time.

The C# Interactive Shell CSI

You can install ScriptCS in minutes with the Chocolatey Package Manager or OneGet with Chocolatey on Windows 10. In the screenshot above I'm writing code at the command prompt, making mistakes, and fixing them. It's a great way to learn and play with C#, but it's also VERY powerful. You can create C# Scripts (.csx files) kind of like PowerShell but it's just C#!

Visual Studio's REPLs - CSI and FSI

The Visual Studio team meets/met with the ScriptCS folks in the open and even publishes their meeting notes on GitHub! In May of last year they got ScriptCS working in OmniSharp and Visual Studio Code, which is amazing. There's a great set of directions here on how to set up ScriptCS in Visual Studio Code and the code is moving fast on GitHub.

Visual Studio 2015 Update 1 has REPLs within the IDE itself. If you have Visual Studio 2015, make sure you've updated to Update 1. If you don't have VS, you can get the free Visual Studio Community at http://visualstudio.com/free.

VS ships a command line REPL called "CSI" that you can use to run ".csx" scripts as well. Turns out the source code for CSI is basically nothing! Check it out at http://source.roslyn.io/#csi/Csi.cs and you can see how easy it would be for you to add scripting (interactive or otherwise) to your own app.

C# Interactive REPL inside Visual Studio

There's a great C# Interactive Walkthrough by Kasey Uhlenhuth that you should take a moment and play with. She's the Program Manager on this feature and also has a great video on Channel 9 on how to use the C# Interactive REPL.

Introducing the Visual Studio 'C# REPL'

Of course, F# has always had a REPL called "fsi.exe" that also ships with VS. You may have this on your PATH and not realize it, in fact. F# script files are ".fsx" so there's a nice symmetry with scripting and REPLs available in both languages, either in VS itself, or at the command line.

The F# Interactive Shell

F#'s REPL is also inside VS, right here next to the C# Interactive Window.

C# Interactive and F# Interactive in VS

These are all great options for learning and exploring code in a more interactive way than the traditional "write, compile, wait, run" that so many of us are used to.

Let's hear in the comments how (or if!) you're using REPLs like these to make your programming life better.


Sponsor: Big thanks to Wiwet for sponsoring the feed this week. Build responsive ASP.NET web apps quickly and easily using C# or VB for any device in 1 minute. Wiwet ASP.Net templates are integrated into Visual Studio for ease of use. Get them now at Wiwet.com.



© 2016 Scott Hanselman. All rights reserved.
     
30 Jan 13:10

Azure Stack, Microsoft’s on-premises cloud service, is now available as a preview

by Peter Bright

A block diagram that is supposed to clarify what Azure Stack does and is. (credit: Microsoft)

Microsoft today released a preview of Azure Stack, a version of the Azure services and infrastructure that you can run in your own datacenters.

Azure Stack was announced at the Ignite conference last year. It's an Azure-flavored counterpart to OpenStack, offering enterprises the ability to use the same services and management systems for both local on-premises deployments and true cloud deployments.

Currently, the Azure Stack offers only a subset of Azure services, and it runs on just a single server. Its full release is planned for the fourth quarter, but even this will not have parity with the full Azure service. Microsoft's aim for the initial release is to provide all the major parts to support deploying platform-as-a-service Web Apps and infrastructure-as-a-service virtual machines. It will also include components for storage and virtualized networking. The Azure Portal front-end for managing the service will also be included.


29 Jan 20:04

Paul Ricard a candidate to host a French GP?

by Flavien Robert
There has been no French GP on the Formula 1 world championship calendar since 2009. Attempts to bring the sport back to the country have since come and gone without success. During the winter tests organized by Pirelli, Stéphane Clair, managing director of the Paul Ricard circuit, gave an update on the situation […]