Shared posts

20 Nov 18:22

Unimportant egocentric rant: don’t .ToUpper() your titles

Here’s a minor inconvenience that I’m going to put out there in the hope that one or two of the horrible people responsible for it may realize the errors of their ways and make the world a marginally better place.

So you like your titles to be all caps. Fine. I’m not sure why you insist on screaming like that, but that’s your choice, and it’s your web site/blog. If you are going to do this, however, please don’t do this:

@post.Title.ToUpper()

And not just because you neglected to specify a culture (you monster). No, this is wrong because next time the guy who builds the week in .NET wants to mention your site and copies the title over, he’s going to copy this:

HOW TO USE ENTITY FRAMEWORK TO KILL PUPPIES!!!

instead of this:

How to use Entity Framework to make a rainbow

He’s going to curse loudly, his kids may hear him, and they’ll grow up to become as obnoxious as he is, all because of you. Well done.

This is particularly infuriating because there is a perfectly good way to do this bad thing that you insist on doing. In your CSS, just do something like this:

h1 { text-transform: uppercase; }

There. Was that so hard?

Except that no. When you copy text that has been styled this way, you’ll still get the all caps text instead of the unstyled underlying text. F@$#!

OK.

Fine.

Just don’t .ToUpper your titles. Please. It looks like crap and it’s less readable. And you sound like a Facebook comment from that uncle who ruins Thanksgiving every year.

14 Jul 07:01

Basics of Web Application Security: Hash and Salt Your Users' Passwords

If you need to store your users' passwords, it's essential that you never store them plainly. Instead you must store a cryptographic hash of them, so that people who get access to your database don't get the passwords. Cade and Daniel explain how to do this properly: salting the hash to avoid lookup table attacks, and using an appropriate hashing algorithm to defend against well-equipped attackers.
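
Cade and Daniel's article has the details; as a rough sketch of the idea in C# (my own example, not the article's code), using a random per-user salt and PBKDF2 via Rfc2898DeriveBytes:

using System.Security.Cryptography;

static class PasswordHashing
{
    // Sketch only: derive a hash from the password plus a per-user random salt,
    // using many PBKDF2 iterations to slow down brute-force and lookup-table attacks.
    public static byte[] HashPassword(string password, out byte[] salt)
    {
        salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt);
        }

        using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000))
        {
            return kdf.GetBytes(32); // store the salt and this hash, never the password itself
        }
    }
}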

more…

11 Jun 12:22

Blue-Green Deployment

by Gabriel Schenker

Introduction

These days our customers expect their applications to be up and running all the time, with essentially no down-time at all. At the same time we need to be able to add new features to the existing application or fix existing defects. Is that even possible? Yes, it is, but it does not come for free. We have to make a certain effort to achieve what are called zero-downtime deployments.

This post is part of my series about Implementing a CI/CD pipeline. Please refer to this post for an introduction and a full table of contents.

Zero down-time

If we need to achieve zero down-time then we cannot use the classical way of deploying new versions of our application, where we would stop the current version and put up a maintenance page for any potential users who wanted to use the application while we were deploying. We would tell them something along the lines of

“Sorry, but our site is currently down for maintenance. We apologize for the inconvenience. Please come back later.”

But we cannot afford to do that anymore. Every minute of down-time means a lot of missed opportunities, and with that, potential revenue. So what we have to do is install the new version of the application while the current version is still up and running. For that we either need additional servers at hand on which we can install the new version, or we need to find a way to have two versions of the same application running on the same servers. This is also called a non-destructive deployment.

Since we leave the current version running while we deploy the new bits, we (usually) have more time to execute the deployment. Once the new version is installed we can run some tests against it – also called smoke tests – to make sure the application is working as expected. We also use this opportunity to warm up the new application and potentially pre-fill the caches (if we use any) so that once it is hit by the public it is operating at maximum speed.

It is important to note that during this time the new application is not visible to the public; we can only reach it internally, e.g. to test it. Once we’re sure the new version is working as expected, we reconfigure the routing from the current version to the new version. This reconfiguration happens (nearly) instantaneously, and all public traffic is now funneled through the new version. We can keep the previous version around for a while, until we are sure that no unexpected fatal error was introduced with the new version of the application.

Rollback

Bad things happen, it’s just a sad truth. Sometimes we introduce a severe bug with a new release of our software. This can happen no matter how well we test. In this situation we need a way to roll the application back to the previous known good version. This rollback needs to be foolproof and quick.

When we use zero-downtime or non-destructive deployment we get this possibility for free. Since the new version has been deployed without destroying or overwriting the previous version, the latter is still around for as long as we want after we have switched the public traffic over to the new version. We thus “only” need to redirect the traffic back to the old version if bad things happen. Again, re-routing traffic is a near-instantaneous operation and can be fully automated, so that a rollback is essentially risk-free.

Compare this with a rollback of a classically deployed application, which used to be destructive. Once the new version was in place, the old version was gone. One had to find the correct previous bits and re-install them… a nightmare! Most often the steps necessary to execute a rollback were perhaps documented but never exercised. A huge risk, and nerve-wracking for everyone.

Schema Changes

The attentive reader may now say: “Wait a second, what about the case where a deployment encompasses breaking database schema changes?”

This is an interesting question and it needs some further discussion. I will go into full detail in an upcoming post dedicated to this topic. Suffice it to say at this point that there is a recommended best practice for handling breaking schema changes in situations like these. In a nutshell, we need to deploy schema changes separately from code changes and ALWAYS make the schema changes such that they are backwards compatible. I promise, I’m going to describe the how and why in much more detail in that upcoming post. Please stay tuned.

Blue-Green Deployment

OK, so far we have spoken about non-destructive deployment and zero-downtime deployment. There are various ways to achieve this. One is called Blue-Green Deployment. In this case we have a current version up and running in production; we can label this version as the blue version. Then we install a new version of our application in production, and this time we label it with the color green. Once green is completely installed, smoke tested and ready to go, we funnel all traffic through it. After waiting for some time, until we are sure that no rollback is needed, the blue version is obsolete and can be decommissioned. The blue label is now free. When we deploy yet a newer version we will call it the blue version, and so on and so on. We keep switching from blue to green to blue to green.

Implementation Details

Let’s assume we are going to use blue-green deployment for a microservice called Foo. This service Foo is currently running in production and is at version v3. In a blue-green scenario each service has a color assigned; let’s assume that we are currently blue. So far we have talked about Foo as a logical service, but now let’s look at the physical components. Since our service needs to be highly available we run at least three instances of it. We call them Foo-1-blue, Foo-2-blue and Foo-3-blue. These are 3 physical instances of the logical blue service Foo. They run on different nodes (where a node is either a VM or a physical server). We have a reverse proxy in front of the service which load-balances the traffic across the 3 instances using, e.g., a round-robin algorithm. We can use Nginx, HAProxy, etc. for this task.
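
To make the role of that color switch concrete, here is a tiny C# sketch of the idea (mine, not from this series). The instance addresses are made up, and in a real setup this decision lives in the reverse proxy (Nginx, HAProxy, …) rather than in application code; the point is simply that the cut-over – and the rollback – is a single atomic switch sitting in front of both sets of instances:

using System.Threading;

// Sketch only: hypothetical instance addresses, logic that would normally live in the proxy.
static class FooRouter
{
    static readonly string[] Blue  = { "http://foo-1-blue:5000",  "http://foo-2-blue:5000",  "http://foo-3-blue:5000"  };
    static readonly string[] Green = { "http://foo-1-green:5000", "http://foo-2-green:5000", "http://foo-3-green:5000" };

    // Whatever 'live' points at receives the public traffic; swapping the
    // reference is the (near) instantaneous cut-over or rollback.
    static string[] live = Blue;
    static int counter = -1;

    public static void CutOverToGreen() { Interlocked.Exchange(ref live, Green); }
    public static void RollBackToBlue() { Interlocked.Exchange(ref live, Blue); }

    // Round-robin across the instances of whichever color is currently live.
    public static string PickBackend()
    {
        string[] current = Volatile.Read(ref live);
        int i = Interlocked.Increment(ref counter) & int.MaxValue; // keep it non-negative
        return current[i % current.Length];
    }
}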

If another service, say Bar, needs to access Foo, it makes the request to a load balancer or reverse proxy which makes the color decision. Currently this one is configured to route the traffic to blue. For clarification please see the graphic below.

Now it is time to deploy v4 of the service Foo. This new version gets the color green assigned. Once again, for high-availability reasons, we deploy 3 instances of the new version of the service. We call those 3 instances Foo-1-green, Foo-2-green and Foo-3-green. These 3 instances can be deployed to new nodes or, if possible and desired, coexist with the blue versions on the same nodes. Once we have smoke-tested and warmed up the new instances, we reconfigure the router that controls the color. It will now funnel all public traffic through the green version, as shown in the graphic below.

As you can see, the blue version of the service is still there and can be used in case we have to exercise a rollback.

Summary

In this post I have discussed in detail what zero-downtime deployments are and how they can be achieved using the technique of non-destructive deployments. One variant of this technique is the so-called blue-green deployment. These days this is a technique frequently used in fully automated CI/CD pipelines. This post is part of a series about Implementing a CI/CD pipeline.


09 Jun 05:47

Making it easier to port to .NET Core

by Immo Landwerth [MSFT]

In my last post, I talked about porting to .NET Core and requested feedback from our community on what their experience was and what we could improve.

This sparked many great conversations with our users.

Based on these conversations as well as our experience working with first- and third-party partners, we’ve decided to drastically simplify the porting effort by unifying the core APIs with other .NET platforms, specifically the .NET Framework and Mono/Xamarin.

In this blog post, I’ll talk about our plans, how and when this work will happen, and what this means for existing .NET Core customers.

Reflecting on .NET Core

The .NET Core platform evolved from a desire to create a modern, modular, app-local, and cross-platform .NET stack. The business goals that drove its creation were focused on providing a stack for brand new application types (such as touch-based UWP apps) or modern cross-platform applications (such as ASP.NET Core web sites and services).

We are about to ship .NET Core 1.0 and we have succeeded in creating a powerful and cross platform development stack. .NET Core 1.0 is the beginning of a journey to get .NET everywhere.

While .NET Core works well for the scenarios that we set out to address, it provides access to fewer technologies than other .NET platforms, especially the .NET Framework. This is partly because not everything has been made cross-platform, and partly because we aggressively trimmed some features out of it.

It became clear that adopting .NET Core would require existing .NET developers to spend a considerable amount of time to port to it.

While there is certainly some value in presenting new customers with a cleaner API, it disproportionately penalized our existing loyal customers who have invested over many years in using the APIs and technologies we advertised to them. We want to extend the reach of the .NET platform and gain new customers, but we can’t do so at the expense of existing users.

Xamarin is a great role model in this regard. They allow .NET developers to build mobile applications for the iOS and Android platforms with minimal effort. Consider iOS. It shares many of the characteristics of the UWP platform, such as the high focus on end-user experience and the requirement of static compilation. However, in contrast to .NET Core, Xamarin did not start with reimagining the .NET stack. They took Mono as-is, removed the application model components (Windows.Forms, ASP.NET), added a new one for iOS, and made minimal changes to make it suitable for embedded use. Since Mono is virtually identical to the .NET Framework, the resulting API set is fairly comprehensive and makes porting existing code to Xamarin substantially easier.

Since its inception the key promise of .NET has been to make developers more productive and help them write robust code. It was designed from the start to support developers across a wide range of areas and scenarios, from desktop and web applications to microservices, mobile applications and gaming.

For us to deliver on our promise, it is critical that we provide a unified core API that is available everywhere. A unified core API allows developers to easily share code across these workloads and allows them to focus their skills where it matters the most – creating great services and user experiences.

.NET Core moving forward

At Build 2016, Scott Hunter presented the following slide:

NetStandard

Here is the promise we want to make to you:

Whether you need to build a desktop application, a mobile app, a web site, or a micro service: you can rely on .NET to get you there. Code sharing is as easy as possible because we provide a unified BCL. As a developer, you can focus on the features and technologies that are specific to the user experiences and platforms you’re targeting.

This is how we want to realize this promise: we will provide source and binary compatibility for applications that target the core Base Class Libraries (BCL) across all platforms with the same behavior across platforms. The Base Class Libraries are those that existed in mscorlib, System, System.Core, System.Data, and System.Xml and that are not tied to a particular application model and are not tied to a particular operating system implementation.

Whether you target the .NET Core 1.0 surface (System.Runtime-based surface), or the upcoming version of .NET Core with the expanded API (mscorlib-based surface), your existing code will continue to work.

The promise of making it easier to bring existing code extends to libraries and NuGet packages. Obviously this includes portable class libraries, regardless of whether they used mscorlib or System.Runtime.

Here are a few examples of the additions that will make your life easier when targeting to .NET Core:

  • Reflection will become the same as in the .NET Framework: no need for GetTypeInfo(), good old .GetType() is back (see the sketch after this list).
  • Types will no longer be missing members we removed for clean-up reasons (Clone(), Close() vs. Dispose(), old APM APIs)
  • Binary serialization (BinaryFormatter) will be available again
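
As a rough illustration of the reflection point above – my own example with a made-up Widget type, not code from the announcement:

using System;
using System.Reflection;

class Widget
{
    public void Frob() { }
}

class Program
{
    static void Main()
    {
        // .NET Core 1.0 style: go through TypeInfo to get at reflection members.
        MethodInfo viaTypeInfo = typeof(Widget).GetTypeInfo().GetDeclaredMethod("Frob");

        // With the expanded surface, the familiar .NET Framework form works again.
        MethodInfo direct = typeof(Widget).GetMethod("Frob");

        Console.WriteLine(viaTypeInfo.Name); // Frob
        Console.WriteLine(direct.Name);      // Frob
    }
}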

A full list of the planned additions will be made available in our corefx GitHub repo.

What does this mean for .NET Core?

From talking to our community on social media it seems there is a concern that these API additions degrade the .NET Core experience. Nothing could be further from the truth. The vast majority of the investments we made for .NET Core – app-local and XCOPY deployment, the ahead-of-time (AOT) compiler tool chain, being open source and cross-platform – remain unchanged. The same is true for all the additional features and performance improvements we made, such as the new networking component called Kestrel.

Originally, when we designed .NET Core, we talked heavily about modularization and pay-for-play, meaning you only have to consume the disk space for the features that you end up using. We believe we can still realize these goals without compromising so heavily on compatibility.

Initially, our plan to achieve minimum disk space usage relied on a manual process of splitting the functionality into tiny libraries, and we know that our users liked this. We will now provide a linking tool that will be more precise and provide better savings than any manual process could. This is similar to what Xamarin developers get today.

Timelines and process

The process to extend the API surface of .NET Core will come after we ship .NET Core 1.0 RTM. This way, those of you who have been following along with .NET Core will be able to deploy to production.

You can expect to see more details and plans over the next couple of weeks published in our corefx GitHub repository. One of the first things we will do is to publish a set of API references that list which APIs we’re planning to bring. So when porting code you will be able to tell whether you want to jump to .NET Core 1.0 or wait for the new APIs to come. We’ll also call out which APIs we don’t plan on bringing. Our desire is to provide a dashboard for our users to check on the project status and goals.

This will be an improvement over the process that we followed in the lead up to .NET Core 1.0, as we did not share enough of our internal processes with the world.

Lastly, we’re planning on releasing incremental updates to .NET Core on NuGet that extend the set of available APIs. This way, you will not have to wait until all the API additions are done in order to take advantage of them. This also allows us to incorporate your feedback on behavioral compatibility.

Over the next weeks we will publish more details in the corefx repo. You can expect this blog to communicate the status and all major decisions.

Stay tuned for more details!

08 Jun 09:47

Foosball Society: the first connected, social foosball table!

by Geoffray

Tecbak has developed Foosball Society, a connected foosball table designed with Bonzini and coupled with the first social network dedicated to this leisure activity, which is very popular inside companies.

The startup Tecbak is offering Foosball Society, which it presents as the first social network for foosball players.

Designed in partnership with the famous French brand Bonzini, the connected foosball table combines the conviviality of a game that is widespread in companies with the benefits of new technologies, thanks to electronic refereeing.

ebabyfoot_Foosball Society-vitesse de tir

This new gaming experience improves playability by introducing automatic counting and detection of goals and "gamelles", shot-speed measurement, point cancellation and real-time score display… everything is detected and tallied by the Foosball Society connected table!

ebabyfoot-Foosball-Society

A corporate social network?

A true connected social network, Foosball Society gives players access to their statistics, rankings and earned trophies, and above all lets them stay in touch with friends or colleagues after the game, creating a new form of social relationship built around the table game.

The e-foosball table is the perfect tool for boosting the engagement of:

  • Brands that want a modern way to drive traffic and build loyalty at their points of sale, with statistical tracking of visits;
  • Companies that want to invest in HR solutions that bring their employees together in a context of digital transformation of organizations.

Available for rental or for public events, Foosball Society aims to profoundly change the relationships between players, companies and brands.

The start-up will unveil its system at the Futur-en-Seine festival.

02 Jun 11:46

Monitor madness, part one

by ericlippert

Locks are tricky; I thought today I’d talk a bit about some of the pitfalls of locking that you might not have seen before.

As you probably know, the lock statement in C# is a syntactic sugar for the use of a monitor, so I’ll use the terms “lock” and “monitor” somewhat interchangeably. A monitor is an oddly-named data structure that gives you two basic operations: “enter” the monitor and “exit” the monitor. The fabulous thing about a monitor is that only one thread can enter a given monitor at a time; if threads alpha and beta both attempt to enter a monitor then only one wins the race; the other blocks at least until the winner exits. After the winner exits the monitor, the loser gets another chance to attempt to acquire the monitor; of course, there might be another race.

I give an example of a very very simple implementation of a monitor here, though of course you would never do this yourself; you’d just use the built-in support in the framework and the C# language.

The lock(x) { body } statement in C# is a syntactic sugar for

bool lockWasTaken = false;
var temp = x;
try 
{ 
  Monitor.Enter(temp, ref lockWasTaken); 
  { 
      body 
  }
}
finally 
{ 
  if (lockWasTaken) 
    Monitor.Exit(temp); 
}

The details of how the operating system implements (or fails to implement!) a system whereby every thread gets access eventually in some sort of fair manner is beyond the scope of what I want to talk about today. Rather, I want to talk about an advanced use of monitors. In addition to the straightforward “enter, do some work, exit” mechanism, monitors also provide a mechanism for temporarily exiting a monitor in the middle of a lock! That is, for the workflow:

  • block until I enter the monitor
  • do some work
  • temporarily exit the monitor until something happens on another thread that I care about
  • when the thing I care about happens, block until the monitor can be entered again
  • do some work
  • exit the monitor

Under what circumstances would we want to do this? The classic example is that we have a not-threadsafe finite-size queue of jobs to perform, and two threads called the producer and the consumer. The consumer thread sits there in a loop attempting to remove items from the queue so that the job can be performed. The producer thread runs around looking for work to do and puts it on the queue. Our code must have the following characteristics:

  • if the consumer is attempting to modify the queue then an attempt by the producer to modify the queue must block
  • and similarly vice versa
  • if the producer is attempting to put work onto a full queue, it must block and allow the consumer to clear out space
  • if the consumer is attempting to take work off of an empty queue, it must block and allow the producer to find more work

To achieve these goals, a monitor provides three operations in addition to enter and exit:

  • “wait” causes the monitor to exit and puts the current thread to sleep. More specifically, it causes a thread to enter the “wait state”.
  • “notify” — which, oddly enough is called Pulse in .NET — allows the thread which is currently in the monitor to place a single waiting thread (of the runtime’s choice) in the “ready” state. A ready thread is still blocked, but it is marked as ready to enter the monitor when the monitor becomes available. (There is no guarantee that it will do so; again, it might be racing against other threads.)
  • “notify all” pulses every thread in the wait state that is waiting for a particular monitor.

Of course there are many other uses for these operations than producer-consumer queues, but as a canonical example we’ll stick with that for the purposes of this series.

What does the code typically look like? On the producer we’d have something like:

while(true)
{
  var someWork = FindSomeWork();
  lock(myLock)
  {
    while (myQueue.IsFull)
    { 
      Monitor.Wait(myLock); 
      // We cannot do any work while the queue is full. 
      // Put the producer thread into the wait state.
      // When someone pulses us, try to acquire the lock again,
      // and then check again to see if the queue is full.
    }
    // If we got here then we have acquired the lock,
    // and the queue has room.
    myQueue.Enqueue(someWork);
    // The queue might have been empty, and the consumer might be
    // waiting. If so, put the consumer thread in the ready state.
    Monitor.Pulse(myLock); 
    // The consumer thread, if it was asleep, is now ready, but we
    // still own the lock. Let's leave the lock and give it a chance
    // to run. It might lose the race of course.
  }
}

And on the consumer side we’d see something unsurprisingly similar. I won’t labour the point with excessive comments here.

while(true) 
{
  Work someWork = null;
  lock(myLock)
  {
    while (myQueue.IsEmpty)
      Monitor.Wait(myLock);
    someWork = myQueue.Dequeue();
    Monitor.Pulse(myLock);
  }
  DoWork(someWork);
}

OK, I hope that is all very clear. This was a long introduction to what is a surprisingly not-straightforward question: why did I write a loop around each wait? Wouldn’t the code be just as correct if written

  lock(myLock)
  {
    if (myQueue.IsEmpty) // no loop!
      Monitor.Wait(myLock);
    ...

Next time on FAIC: Yeah, what’s up with that?


28 May 14:15

Brainstorming development workflows with Docker, Kitematic, VirtualBox, Azure, ASP.NET, and Visual Studio

by Scott Hanselman

Kitematic for Windows

First, a disclaimer. I have no idea what I'm talking about. I'm learning and exploring some ideas, and I wanted to see what the development process looks like today (December 2015) with Docker, ASP.NET, and Visual Studio on my Windows 10 machine. I'm also interested in your ideas in the comments, and I'll share them directly with the folks who are working on making Docker integration with Visual Studio.

This post uses the bits and stuff and hacks that are working today. Some of this is alpha, some is hacky, but it's all very interesting. What do you think?

Setting up Docker on Windows

I got a new laptop and needed to set it up. This seemed like a good time to re-discover Docker on Windows.

  • For this ASP.NET-centric example, I'm assuming you have Windows with Visual Studio, but you can get Visual Studio 2015 Community for free if you need it. You'll want ASP.NET 5 RC1 as well.
  • Go to https://www.docker.com, click Get Started, then Windows. You'll end up here: http://docs.docker.com/windows/started/.
    • Note, you'll need hardware virtualization enabled in your system's BIOS, and if you are already running Hyper-V, either turn it off (I just go to Windows Features and uncheck it; it can be quickly turned back on later) or create a boot menu to switch between Hyper-V and VirtualBox.
    • The Docker website could get to the point faster, but they are making sure you're prepped for success.
  • Download Docker Toolbox which has a great chained installer that includes:
    • Docker Client - This is the "docker" windows command you'll use at the command line, if you want to.
    • Docker Machine - Docker Machine creates Docker hosts anywhere and configures Docker to talk to those machines.
    • Docker Compose - This is a tool for defining multi-container Docker applications.
    • Docker Kitematic - Kitematic is really accessible. It's the Docker GUI and runs on Mac and Windows.
      • I like to think of Docker Kitematic as "GitHub for Windows for Docker." Just as GitHub for Windows is an attractive and functional GUI for 80% of the things you'd want to do with Git, then Kitematic is the same for Docker. I personally think that while Kitematic is in alpha, it will be the thing that gets new people using Docker. It definitely made onboarding more comfortable for me.
    • VirtualBox - Oracle's free and excellent Virtual Machine software. I use this instead of Hyper-V on the client. Hyper-V is great on the server or in the cloud, but it's not optimized for client software development or running Ubuntu VMs and remoting into them. Also, VirtualBox is extremely easy to automate, and Docker and Kitematic will be automating the creation of the VMs for you.

When you run Kitematic the first time it will automate VirtualBox and use a "boot2docker.iso" to boot up a new VM that will host your Docker containers.

VirtualBox

If you want to test things, click New in Kitematic and search for "Ghost." Kitematic will download the Dockerfile, create a VM and Container, provision everything, and run Ghost inside Docker within your (hidden from view) VM. Click Settings and you can see what port it's running on, or just click the Arrow next to Web Preview and Kitematic will launch a web browser talking to the node.js-based Ghost Blog running in Docker.

Note: Microsoft Edge is having some troubles talking to VirtualBox virtual network adapters, and I'm tracking workarounds here. Other browsers are fine.

Kitematic publishing Ghost

ASP.NET 5 and Linux and Docker

ASP.NET 5 and the .NET Core CLR are both open source and run on Windows, Mac, and Linux. We're going to make an ASP.NET app in Visual Studio and deploy it to a Linux Container via Docker. The "Dockerfile" that describes ASP.NET 5 is open source and is here on GitHub https://github.com/aspnet/aspnet-docker but you don't really need to sweat that, even if it is interesting.

NOTE: You can get and install ASP.NET here http://get.asp.net. Visit it from any OS and it will give you the details you need to install and get started.

An example Dockerfile for your basic ASP.NET 5 application would look like this:

FROM microsoft/aspnet:1.0.0-rc1-final
ADD . /app
WORKDIR /app/approot
ENTRYPOINT ["./web"]

It says, "start from this base docker file, add the files in . to ./app, and we'll be running from /app/approot. Then run ./web."

Deploy to Docker from within Visual Studio

The Visual Studio 2015 Tools for Docker are just a Preview, but they are pretty useful even in their Alpha state. Install them in Visual Studio 2015 - it just takes a second.

Make a new ASP.NET application with File | New Project. I made one without authentication.

Go into the project.json and change this line to include the --server.urls bit. The important part is the *, otherwise the Kestrel web server will only listen for localhost, and we want it to listen everywhere:

"commands": {

"web": "Microsoft.AspNet.Server.Kestrel --server.urls http://*:5000"
}

Right-click your project in Solution Explorer and click Publish, and you should see this:

Docker Tools for Visual Studio

From here, select Docker, and you will have a chance to make a VM in Azure or publish to an existing VM.

Instead, click "Custom Docker Host" because we are going to public to our local VM.

Here's what my settings look like. Yours will be different.

Custom Docker Profile in Visual Studio

In order to get the settings YOU need, go to Kitematic and click Docker CLI to get a cool preconfigured PowerShell command prompt all set up with knowledge of your system.

Type "docker-machine config default" and you'll get a command line showing where your certs are and the IP and port of your Docker setup.

Note the result is missing a carriage return there after the port 2376.

docker-machine config default

Fill out the form with the Server URL, an image name, and some ports. I mapped port 5000 inside the container because I'll have the ASP.NET Kestrel web server listening on port 5000.

Here's what my "Auth Options" text box looks like. Your paths will be different.

--tlsverify 

--tlscacert=C:\Users\scott\.docker\machine\machines\default\ca.pem
--tlskey=C:\Users\scott\.docker\machine\machines\default\server-key.pem
--tlscert=C:\Users\scott\.docker\machine\machines\default\server.pem

Click Validate Connection and you'll hopefully get a green checkbox.

WEIRD BUG: As of this writing the November 2015 version of the preview Docker Tools for Visual Studio 2015 has a bug when publishing to a custom host. The generated .ps1 in the PublishProfile is wrong. I think they'll fix it ASAP but the fix is to fake publish a Hello World ASP.NET project to a Docker container in any Azure VM and grab the .ps1 it generates. You don't need to hit publish, the file gets generated when you hit Next. Copy that file off somewhere and copy it OVER the wrong one in your actual project. You only have to do this once. I'm sure it will get fixed soon. You can confirm you have the right .ps1 because it'll say "Docker" at the top of the file.


When you hit publish, the project will build locally, and deploy into a new Docker container. You can watch Kitematic update as the deploy happens. The selected Container there is ASP.NET, and I know it worked because Kitematic puts a nice Web Preview there as well!

ASP.NET 5 in Docker in Kitematic

Brainstorming Improvements

So this is what I was able to do with existing bits. What I'd like to see is:

  • Press Ctrl-F5 in Visual Studio and have it build the project, deploy to Docker, and launch the browser all in one go. Do you agree?
    • I was thinking of making a "docker" command in the ASP.NET 5 "launchSettings.json" which would appear like this in Visual Studio.
      Docker in VS
  • Today you have to delete the container manually in Kitematic and publish again. How would you want things to work?
  • If Docker is promoting Kitematic as the best way to get started with Docker, should Visual Studio plugins know that Kitematic and Docker Machine are there and auto-configure things?

Additionally, when Windows Containers happens, Visual Studio should clearly be able to publish an ASP.NET 5 application to the container, but even better, if this Docker flow works cleanly, I should be able to publish via Docker to Linux OR Windows from the same dialog in VS. Then after a local deployment to Docker I could Right-Click Publish and publish to Docker in an Azure VM and/or Azure Container Service.

IMHO given an ASP.NET 5 app, you should be able to:

  • Publish to a folder
  • Publish to a Docker container (Linux or Windows)
    • Ctrl-F5 build AND F5 debug that container.
    • Publish to Docker in any cloud
  • Publish to an Azure VM, Web Site (App Service), or Docker within Azure Container Service
  • Editor support and syntax highlighting for Dockerfiles and Docker Compose files.
  • Docker Tools for VS should make a basic Dockerfile if one doesn't exist
  • Run xUnit and tests in the Docker Container

What do you think, Dear Reader? How much Visual should Visual Studio have? I personally like these lightweight dialogs that launch command line tools. How do you expect Docker to integrate with Visual Studio?


26 May 05:24

Your smartphone is out of battery? Uber knows, and could take advantage of it

by webmaster@futura-sciences.com (Futura-Sciences)
The battery-saving feature of the Uber mobile app can detect how much charge the phone has left. According to the ride-hailing service's own observations, customers whose smartphone is about to shut down appear to be more likely to accept...
25 May 19:57

Announcing Open Live Writer - An Open Source Fork of Windows Live Writer

by Scott Hanselman

Open Live Writer is the spiritual successor to Windows Live Writer

Meta enough for you?

Today is the day. An independent group of volunteers within Microsoft has successfully open sourced and forked Windows Live Writer. The fork is called Open Live Writer (also known as OLW) and it is part of the .NET Foundation and managed by this group of volunteers. Read the fantastic announcement at the .NET Foundation Blog! Download Open Live Writer now!

Windows Live Writer 2012 was the last version Microsoft released and can still be downloaded from http://www.windowslivewriter.com. If you're not comfortable using Open Source Software, I recommend you stick with classic WLW.

If you're willing to put up with some bugs, then join us in this brave new world: you can download Open Live Writer from http://www.openlivewriter.org. We're calling today's release version 0.5.

Here's some of the added features, the removed features, the stuff that doesn't work, and our plans for the future:

  • REMOVED: Spell Checking. The implementation was super old and used a 3rd party spell checker we didn't have a license to include in an open source release. Going forward we will add spell check using the built-in spell checker that was added in Windows 8. Open Live Writer on Windows 7 probably won't have spell check.
  • REMOVED: The Blog This API. It was a plugin to Internet Explorer and Firefox and was a mess of old COM stuff.
  • REMOVED: The "Albums" feature. It uploaded photos to OneDrive but depended on a library that was packaged with Windows Live Mail and Live Messenger and we couldn't easily get permission to distribute it in an open source project.
  • ADDING VERY SOON: Google runs the excellent Blogger blog service. We've worked with the Blogger Team within Google on this project, and they've been kind enough to keep an older authentication endpoint running for many months while we work on Open Live Writer. Soon, Google and Blogger will finally shut down this older authentication system. Blogger will use the more modern OAuth 2 and Open Live Writer will be updated to support OAuth 2. Windows Live Writer will never support this new OAuth 2 authentication system, so if you use Blogger, you'll need to use Open Live Writer.
  • BROKEN/KNOWN ISSUES: We are actively working on supporting Plugins. We have a plan in place and we are looking for your feedback on the most popular plugins that you want brought over from the Windows Live Writer ecosystem.

Our roadmap for the future is published here on GitHub.

NOTE: Open Live Writer is NOT a Microsoft product. It is an open source project under the .NET Foundation and is managed and coded by volunteers. Some of the volunteers work for Microsoft and are doing this work in their spare time.

Are you an existing user of Windows Live Writer?

We encourage you to install Open Live Writer and try it out! OLW will run side-by-side with your existing Windows Live Writer installation. Open Live Writer installs VERY quickly and updates itself automatically. Try it out! It's early but it's a start. Please bear with us as we work to improve Open Live Writer.

If you do find bugs, please share them at https://github.com/OpenLiveWriter/OpenLiveWriter/issues and be specific about what's not working. And please, be patient. We are doing this as volunteers - we are NOT representing Microsoft. Open Live Writer is no longer a Microsoft project, so while we will do our best to support you, let's all try to support one another!

Are you a developer/designer/writer?

We've got dozens of volunteers and a few dedicated core committers. Your Pull Requests and code ARE appreciated, but please talk to the team and comment on issues before submitting any major Pull Requests (PRs). Community is appreciated and we don't want to reject your hard work, so it's best you talk to the team in a GitHub Issue and get approved for large work items before you spend a lot of time on OLW. We welcome folks who are new to open source as well - see http://firsttimersonly.com. Help us with our docs, as well!

IMPORTANT HISTORICAL NOTE: Much of the code in Open Live Writer is nearly 10 years old. The coding conventions, styles, and idioms are circa .NET 1.0 and .NET 1.1. You may find the code unusual or unfamiliar, so keep that in mind when commenting and discussing the code. Before we start adding a bunch of async and await and new .NET 4.6isms, we want to focus on stability and regular updates. 

Building Open Live Writer and making your own personal copy!

To be clear, you don't need to be a programmer to run OLW. Just head over to http://www.openlivewriter.org and download now. However, if you do want to hack on OLW here's how!

  • Clone the sources: git clone https://github.com/OpenLiveWriter/OpenLiveWriter.git

At this point, you can build and run inside Visual Studio 2015 Community. It's free to download at https://www.visualstudio.com/free. A solution file for OLW is located at .\src\managed\writer.sln.

  • Alternatively, you can build at the command prompt:
    • Run .\build to compile. The binaries are dropped in .\src\managed\bin\Debug\i386\Writer\
    • Run .\run to launch Writer.

Going Forward

I know it felt like it took a long time to open source Open Live Writer. In fact, my buddy John Gallant found the first email where we started asking questions in April of 2013. There was a lot involved both legally and technically as we were breaking new ground for Microsoft. Consider this. We've successfully open sourced a previously completely proprietary piece of Windows software that shipped as part of Windows Live Essentials. This software was used by millions and contained code as old as a decade or more. Persistence pays off.

This is just the beginning! Big thanks to the team that made this possible. Specifically I want to call out Will Duff, Rob Dolin, and Robert Standefer who have been there from the beginning offering coding, logistical, and legal support. Thanks to Ben Pham for our logo, and Martin Woodward from the .NET Foundation for his support, Azure Storage account, and code signing certificate! I can't thank everyone here, there's a longer list of contributors on our home page!

We are looking forward to hearing from you and perhaps you'll join us in our open source journey.


25 May 19:39

Fixed: Microsoft Edge can't see or open VirtualBox-hosted local web sites

by Scott Hanselman

I'm using VirtualBox on a Windows 10 machine along with Docker to deploy ASP.NET websites to local Linux Containers. To be clear, this isn't accessing websites with http://localhost; this is accessing them locally over a VirtualBox virtual network.

For example, my local IP and subnet are here, but my VirtualBox is here:

Ethernet adapter Ethernet:
   IPv4 Address. . . . . . . . . . . : 192.168.0.140

Ethernet adapter VirtualBox Host-Only Network:
   IPv4 Address. . . . . . . . . . . : 192.168.99.1

Make sense? A Linux VM running Docker containers is then http://192.168.99.100, for example, on various ports.

Strangely, however, I was unable to access these VirtualBox-hosted websites with Microsoft Edge, while they worked on Chrome and Firefox. I wanted to fix this. Just saying "use another browser" isn't enough, I like to figure it out.

I ended up trying this, and oddly, I was right. Go to Start, type "Internet Options", then go to the Security tab, then click Local Intranet, then Sites. Add your Virtual Machine's IP (in this case, the Docker Host) to that list and you're golden.

Add your VirtualBox VM's IP in the local intranet list of sites

Now about the WHY....I have no idea. I'll report back as I keep poking around.


25 May 05:41

How to enable HTTP Strict Transport Security (HSTS) in IIS7+

by Scott Hanselman

I got a report of a strange redirect loop on a website I inherited but still help manage. The reports were only from Chrome and Firefox users and just started suddenly last week, but the code on this site hadn't changed in at least 3 years, maybe longer.

Chrome shows an error "this webpage has a redirect loop"

What's going on here? Well, it's a redirect loop, LOL. But what KIND of redirects?

We know about these redirects, right?

  • 302 - Object Moved - Look over here at THIS URL!
  • 301 - Moved Permanently - NEVER COME HERE AGAIN. Go over to THIS URL!

A redirect loop builds up in the Chrome Developer Tools

But there's another kind of redirect.

  • 307 - Internal Redirect or "Redirect with method" - Someone told me earlier to go over HERE, so I'm going to go there without talking to the server. Imma redirect myself and keep using the same VERB. That means you can redirect a POST without the extra insecure back and forth.

A 307 Internal Redirect

Note the reason for the 307! HSTS. What's that?

HSTS: Strict Transport Security

HSTS is a way to keep you from inadvertently switching AWAY from SSL once you've visited a site via HTTPS. For example, you'd hate to go to your bank via HTTPS, confirm that you're secure and go about your business only to notice that at some point you're on an insecure HTTP URL. How did THAT happen, you'd ask yourself.

But didn't we write a bunch of code back in the day to force HTTPS?

Sure, but this still required that we ask the server where to go at least once, over HTTP... and on every subsequent visit the user keeps hitting an insecure page first and then getting redirected.

HSTS is a way of saying "seriously, stay on HTTPS for this amount of time (like weeks). If anyone says otherwise, do an Internal Redirect and be secure anyway."

Some websites and blogs say that to implement this in IIS7+ you should just add the custom header required for HSTS, like this, in your web.config. This is NOT correct:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Strict-Transport-Security" value="max-age=31536000"/>
    </customHeaders>
  </httpProtocol>
</system.webServer>

This isn't technically to spec. The problem here is that you're sending the header ALWAYS even when you're not under HTTPS.

The HSTS (RFC 6797) spec says:

An HTTP host declares itself an HSTS Host by issuing to UAs (User Agents) an HSTS Policy, which is represented by and conveyed via the Strict-Transport-Security HTTP response header field over secure transport (e.g., TLS).

You shouldn't send Strict-Transport-Security over HTTP, just HTTPS. Send it when they can trust you.

Instead, redirect folks to a secure version of your canonical URL, then send Strict-Transport-Security. Here is a great answer on StackOverflow from Doug Wilson.

Note that the first rule redirects to a secure location from an insecure one. The second one adds the HTTP header for Strict-Transport-Security. The only thing I might change would be to formally canonicalize the www. prefix versus a naked domain.

<?xml version="1.0" encoding="UTF-8"?>

<configuration>
<system.webServer>
<rewrite>
<rules>
<rule name="HTTP to HTTPS redirect" stopProcessing="true">
<match url="(.*)" />
<conditions>
<add input="{HTTPS}" pattern="off" ignoreCase="true" />
</conditions>
<action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
redirectType="Permanent" />
</rule>
</rules>
<outboundRules>
<rule name="Add Strict-Transport-Security when HTTPS" enabled="true">
<match serverVariable="RESPONSE_Strict_Transport_Security"
pattern=".*" />
<conditions>
<add input="{HTTPS}" pattern="on" ignoreCase="true" />
</conditions>
<action type="Rewrite" value="max-age=31536000" />
</rule>
</outboundRules>
</rewrite>
</system.webServer>
</configuration>
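
If you'd rather keep this logic in application code than in IIS rewrite rules, the same "only over HTTPS" condition can be expressed in a few lines of classic ASP.NET. This is just a sketch of the idea (assuming System.Web and a Global.asax), not part of Doug's answer:

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_EndRequest(object sender, EventArgs e)
    {
        // Only advertise HSTS when the response is actually served over TLS.
        if (Request.IsSecureConnection)
        {
            Response.AppendHeader("Strict-Transport-Security", "max-age=31536000");
        }
    }
}

Either way, the important part is the condition: the header only ever goes out over a secure connection.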

Note also that HTTP Strict Transport Security is coming to IE and Microsoft Edge as well, so it's an important piece of technology to understand.

What was happening with my old (inherited) website? Well, someone years ago wanted to make sure a specific endpoint/page on the site was served under HTTPS, so they wrote some code to do just that. No problem, right? Turns out they also added an else that effectively forced everyone to HTTP, rather than just using the current/inherited protocol.

This was a problem when Strict-Transport-Security was turned on at the root level for the entire domain. Now folks would show up on the site and get this interaction:

  • GET http://foo/web
  • 301 to http://foo/web/ (canonical ending slash)
  • 307 to https://foo/web/ (redirect with method, in other words, internally redirect to secure and keep using the same verb (GET or POST))
  • 301 to http://foo/web (internal else that was dumb and legacy)
  • rinse, repeat

What's the lesson here? A configuration change that turned this feature on at the domain level of course affected all sub-directories and apps, including our legacy one. Our legacy app wasn't ready.

Be sure to implement HTTP Strict Transport Security (HSTS) on all your sites, but be sure to test and KNOW YOUR REDIRECTS.

21 May 11:23

The Golden Age of x86 Gaming

by Jeff Atwood

I've been happy with my 2016 HTPC, but the situation has changed, largely because of something I mentioned in passing back in November:

The Xbox One and PS4 are effectively plain old PCs, built on:

  • Intel Atom class (aka slow) AMD 8-core x86 CPU
  • 8 GB RAM
  • AMD Radeon 77xx / 78xx GPUs
  • cheap commodity 512GB or 1TB hard drives (not SSDs)

The golden age of x86 gaming is well upon us. That's why the future of PC gaming is looking brighter every day. We can see it coming true in the solid GPU and idle power improvements in Skylake, riding the inevitable wave of x86 becoming the dominant kind of (non-mobile, anyway) gaming for the foreseeable future.

And then, the bombshell. It is all but announced that Sony will be upgrading the PS4 this year, no more than three years after it was first introduced … just like you would upgrade a PC.

Sony may be tight-lipped for now, but it's looking increasingly likely that the company will release an updated version of the PlayStation 4 later this year. So far, the rumoured console has gone under the moniker PS4K or PS4.5, but a new report from gaming site GiantBomb suggests that the codename for the console is "NEO," and it even provides hardware specs for the PlayStation 4's improved CPU, GPU, and higher bandwidth memory.

  • CPU: 1.6 → 2.1 Ghz CPU
  • GPU: 18 CUs @ 800Mhz → 36 CUs @ 911Mhz
  • RAM: 8GB GDDR5 176 GB/s → 218 GB/s

In PC enthusiast parlance, you might say Sony just slotted in a new video card, a faster CPU, and slightly higher speed RAM.

This is old hat for PCs, but to release a new, faster model that is perfectly backwards compatible is almost unprecedented in the console world. I have to wonder if this is partially due to the intense performance pressure of VR, but whatever the reason, I applaud Sony for taking this step. It's a giant leap towards consoles being more like PCs, and another sign that the golden age of x86 is really and truly here.

I hate to break this to PS4 enthusiasts, but as big of an upgrade as that is – and it really is – it's still nowhere near enough power to drive modern games at 4k. Nvidia's latest and greatest 1080 GTX can only sometimes manage 30fps at 4k. The increase in required GPU power when going from 1080p to 4k is so vast that even the PC "cost is no object" folks who will happily pay $600 for a video card and $1000 for the rest of their box have some difficulty getting there today. Stuffing all that into a $299 box for the masses is going to take quite a few more years.

Still, I like the idea of the PS4 Neo so much that I'm considering buying it myself. I strongly support this sea change in console upgradeability, even though I swore I'd stick with the Xbox One this generation. To be honest, my Xbox One has been a disappointment to me. I bought the "Elite" edition because it had a hybrid 1TB drive, and then added a 512GB USB 3.0 SSD to the thing and painstakingly moved all my games over to that, and it is still appallingly slow to boot, to log in, to page through the UI, to load games. It's also noisy under load and sounds like a broken down air conditioner even when in low power, background mode. The Xbox One experience is way too often drudgery and random errors instead of the gaming fun it's supposed to be. Although I do unabashedly love the new controller, I feel like the Xbox One is, overall, a worse gaming experience than the Xbox 360 was. And that's sad.

Or maybe I'm just spoiled by PC performance, and the relatively crippled flavor of PC you get in these $399 console boxes. If all evidence points to the golden age of x86 being upon us, why not double down on x86 in the living room? Heck, while I'm at it … why not triple down?

This, my friends, is what tripling down on x86 in the living room looks like.

It's Intel's latest Skull Canyon NUC. What does that acronym stand for? Too embarrassing to explain. Let's just pretend it means "tiny awesome x86 PC". What's significant about this box is it contains the first on-die GPU Intel has ever shipped that can legitimately be considered console class.

It's not cheap at $579, but this tiny box bristles with cutting edge x86 tech:

  • Quad-core i7-6770HQ CPU (2.6 Ghz / 3.5 Ghz)
  • Iris Pro Graphics 580 GPU with 128MB eDRAM
  • Up to 32GB DDR4-2666 RAM
  • Dual M.2 PCI x4 SSD slots
  • 802.11ac WiFi / Bluetooth / Gigabit Ethernet
  • Thunderbolt 3 / USB 3.1 gen 2 Type-C port
  • Four USB 3.0 ports
  • HDMI 2.0, mini-DP 1.2 video out
  • SDXC (UHS-I) card reader
  • Infrared sensor
  • 3.5mm combo digital / optical out port
  • 3.5mm headphone jack

All impressive, but the most remarkable items are the GPU and the Thunderbolt 3 port. Putting together an HTPC that can kick an Xbox One's butt as a gaming box is now as simple as adding these three items together:

  1. Intel NUC kit NUC6i7KYK $579
  2. 16GB DDR4-2400 $75
  3. Samsung 950 Pro NVMe M.2 (512GB) $317

Ok, fine, it's a cool $970 plus tax compared to $399 for one of those console x86 boxes. But did I mention it has skulls on it? Skulls!

The CPU and disk performance on offer here are hilariously far beyond what's available on current consoles:

  • Disk performance of the two internal PCIe 3.0 4x M.2 slots, assuming you choose a proper NVMe drive as you should, is measured in not megabytes per second but gigabytes per second. Meanwhile consoles lumber on with, at best, hybrid drives.

  • The Jaguar class AMD x86 cores in the Xbox One and PS4 are about the same as the AMD A4-5000 reviewed here; those benchmarks indicate a modern Core i7 will be about four times faster.

But most importantly, its GPU performance is on par with current consoles. NUC Blog measured 41fps average in Battlefield 4 at 1080p and medium settings. Digging through old benchmarks I find plenty of pages where a Radeon 78xx or 77xx series video card, the closest analog to what's in the Xbox One and PS4, achieves a similar result in Battlefield 4:

I personally benchmarked GRID 2 at 720p (high detail) on all three of the last HTPC models I owned:

                          Max  Min  Avg
i3-4130T, HD 4400          32   21   27
i3-6100T, HD 530           50   32   39
i7-6770HQ, Iris Pro 580    96   59   78

When I up the resolution to 1080p, I get 59fps average, 38 min, 71 max. Checking with Notebookcheck's exhaustive benchmark database, that is closest to the AMD R7 250, a rebranded Radeon 7770.

What we have here is legitimately the first on-die GPU that can compete with a low-end discrete video card from AMD or Nvidia. Granted, an older one, one you could buy for about $80 today, but one that is certainly equivalent to what's in the Xbox One and PS4 right now. This is a real first for Intel, and it probably won't be the last time, considering that on-die GPU performance increases have massively outpaced CPU performance increases for the last 5 years.

As for power usage, I was pleasantly surprised to measure that this box idles at 15w at the Windows Desktop doing nothing, and drops to 13w when the display sleeps. Considering the best idle numbers I've measured are from the Scooter Computer at 7w and my previous HTPC build at 10w, that's not bad at all! Under full game load, it's more like 70 to 80 watts, and in typical light use, 20 to 30 watts. It's the idle number that matters the most, as that represents the typical state of the box. And compared to the 75 watts a console uses even when idling at the dashboard, it's no contest.

Of course, 4k video playback is no problem, though 10-bit 4K video may be a stretch. If that's not enough — if you dream bigger than medium detail 1080p gameplay — the presence of a Thunderbolt 3 port on this little box means you can, at considerable expense, use any external GPU of your choice.

That's the Razer Core external graphics dock, and it's $499 all by itself, but it opens up an entire world of upgrading your GPU to whatever the heck you want, as long as your x86 computer has a Thunderbolt 3 port. And it really works! In fact, here's a video of it working live with this exact configuration:

Zero games are meaningfully CPU limited today; the disk and CPU performance of this Skull Canyon NUC is already so vastly far ahead of current x86 consoles, even the PS4 Neo that's about to be introduced. So being able to replace the one piece that needs to be the most replaceable is huge. Down the road you can add the latest, greatest GPU model whenever you want, just by plugging it in!

The only downside of using such a small box as my HTPC is that my two 2.5" 2TB media drives become external USB 3.0 enclosures, and I am limited by the 4 USB ports. So it's a little … cable-y in there. But I've come to terms with that, and its tiny size is an acceptable tradeoff for all the cable and dongle overhead.

I still remember how shocked I was when Apple switched to x86 back in 2005. I was also surprised to discover just how thoroughly both the PS4 and Xbox One embraced x86 in 2013. Add in the current furor over VR, plus the PS4 Neo opening new console upgrade paths, and the future of x86 as a gaming platform is rapidly approaching supernova.

If you want to experience what console gaming will be like in 10 years, invest in a Skull Canyon NUC and an external Thunderbolt 3 graphics dock today. If we are in a golden age of x86 gaming, this configuration is its logical endpoint.

[advertisement] Find a better job the Stack Overflow way - what you need when you need it, no spam, and no scams.
20 May 18:57

.NET Core 1.0 RC2 - Upgrading from previous versions

by Scott Hanselman

.NET Core at http://dot.net

.NET Core RC2 is out, it's open source, and it's on multiple platforms. I'm particularly proud of the cool vanity domain we got for it: http://dot.net. ;) It makes me smile.

Here are the important blog posts to check out:

Head over to http://dot.net and check it out. A great aspect of .NET Core is that everything it does is side-by-side. You can work with it without affecting your existing systems. Be sure to also explore the complete .NET Downloads Page for all the manual downloads as well as SHA hashes.

The best way to develop with .NET Core on Windows is to download the Visual Studio official MSI Installer and the latest NuGet Manager extension for Visual Studio. If you don't have Visual Studio already, you can download Visual Studio Community 2015 for free.

We'll have documentation and insights on how to move from ASP.NET 4.x over to ASP.NET Core 1.0 soon, but for now I've collected these resources for folks who are upgrading from previous versions of .NET Core and ASP.NET Core (the framework formerly known as ASP.NET 5).

Enjoy!


Sponsor: Build servers are great at compiling code and running tests, but not so great at deployment. When you find yourself knee-deep in custom scripts trying to make your build server do something it wasn't meant to, give Octopus Deploy a try.



20 May 15:00

With PCM, IBM takes one more step toward a universal memory

by webmaster@futura-sciences.com (Futura-Sciences)
Big Blue has created a phase-change memory (PCM) combining the speed and endurance of DRAM with the low production cost of flash memory. This trade-off is an important step toward the creation of a universal memory that would boost the...
20 May 14:06

Fitbit acquires Coin to enable payments with its connected devices

by Geoffray

The world leader in connected sports devices, Fitbit has announced the acquisition of the startup Coin, which was developing a payment tool able to emulate several credit cards for magnetic-stripe and contactless payments.

Fitbit has just announced the acquisition of the American start-up Coin, creator of a mobile payment technology dedicated to wearables, which we first covered in 2013.

Like its historic competitor Jawbone, which bet very early on payment features for wearables by partnering with American Express, Fitbit is betting on the ability to pay for purchases with the connected devices we wear every day.

coin-1

Fitbit acquires Coin

Fitbit is making this bet by buying up most of the technologies developed by Coin, a San Francisco-based company that made its name with a system for gathering several payment cards (credit, debit, loyalty...) onto a single, versatile connected device.

In practice, the user scans their various bank cards with a reader connected to their smartphone; the card data is then transmitted over Bluetooth to the Coin card through a dedicated iOS and Android app.

fitbit-blaze-smartwatch

How Coin works

Coin's connected bank card has a display and a button for switching cards at will. Although version 2.0 of the product arrived a few months ago, adding compatibility with EMV chip cards along the way, Coin failed to convince banks to play along, which effectively doomed the system, as Engadget points out in its review of the product.

Sales of the product should stop shortly, and the developer program is going to be suspended.

What future for Coin?

On top of these incompatibilities came extra difficulties with older payment terminals and some uncooperative ATMs. In the acquisition by Fitbit, the technologies tied to this particular product were left aside.

coin-3

The American maker of quantified-self connected devices is the world leader in the wearables market according to IDC, and is more interested in the payment platform developed by Coin, which is the subject of an alliance with other manufacturers such as Moov, Omate and Atlas Wearables.

Fitbit will no doubt wait several months before integrating Coin's technology into its own services, but the news is interesting because it foreshadows a serious battle between the giants of the sector, with Swatch (and its Bellamy watch) and a few others leading the charge.

Via

18 May 05:51

Playing with TensorFlow on Windows

by Scott Hanselman

TensorFlow is a machine learning library from Google. There are no Windows builds but I wanted to run it on Windows. There are some other blog posts that show people trying to get TensorFlow running on Windows with VMs or Docker (using a VM) but they are a little complex. This seems like a great chance to see if I can just run Bash on Windows 10, build TensorFlow and run it.

TensorFlow on Ubuntu on Windows 10

I'm running Windows 10 Insiders Build 14422 as of the time of this writing. I launched Bash on Windows and followed these pip (Python) instructions, just as if I was running Linux. Note that the GPU support won't work so I followed the CPU only instructions from my Surface Pro 3.

$ sudo apt-get install python-pip python-dev

$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl

It built, then I tested it like this:

$ python

...
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
Hello, TensorFlow!
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print(sess.run(a + b))
42
>>>

Cool, but this is Hello World. Let's try the more complex example against the MNIST handwriting models. The simple demo model for classifying handwritten digits from the MNIST dataset is in the sub-directory models/image/mnist/convolutional.py. You'll need to check where your mnist folder is.

$ cd tensorflow/models/image/mnist

$ python convolutional.py
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
Initialized!
Step 0 (epoch 0.00), 9.3 ms
Minibatch loss: 12.054, learning rate: 0.010000
Minibatch error: 90.6%
Validation error: 84.6%
Step 100 (epoch 0.12), 826.7 ms
Minibatch loss: 3.289, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 7.0%
...

This setup appears to be working great and is currently on Step 1500.

There are bugs in Bash on Windows 10, of course. It's in Beta. But it's not a toy, and it's gonna be a great addition to my developer toolbox. I like that I was able to follow the Linux instructions exactly and they just worked. I'm looking forward to seeing how hard I can push Ubuntu and Bash on Windows 10.


Sponsor: Big thanks to SQL Prompt for sponsoring the feed this week! Have you got SQL fingers? TrySQL Prompt and you’ll be able to write, refactor, and reformat SQL effortlessly in SSMS and Visual Studio.Find out more.


11 May 05:56

On the stand, Google’s Eric Schmidt says Sun had no problems with Android

by Joe Mullin

Alphabet Chairman Eric Schmidt at an event in 2015. Schmidt took the stand in San Francisco today in the second Oracle v. Google trial. (credit: SeongJoon Cho/Bloomberg via Getty Images)

SAN FRANCISCO—Alphabet Chairman and former Google CEO Eric Schmidt testified in a federal court here today, hoping to overcome a lawsuit from Oracle accusing his company of violating copyright law.

During an hour of questioning by Google lawyer Robert Van Nest, Schmidt discussed his early days at Google and the beginnings of Android. Everything was done by the book, Schmidt told jurors, emphasizing his positive relationship with Sun Microsystems and its then-CEO Jonathan Schwartz.

Schmidt himself used to work at Sun Microsystems after getting his PhD in computer science from UC Berkeley in 1982. Schmidt was at Sun while the Java language was developed.

Read 32 remaining paragraphs | Comments

05 May 16:34

Microsoft unveils new effort to make its developer, IT documentation great again

by Peter Bright

Above: the new docs.microsoft.com appearance. Below: the same article in old TechNet. (credit: Microsoft)

Microsoft's developer documentation used to be the model that all others should follow. The documentation itself was thorough, combining reference material with usage guides and sample code. Its use of, at the time, novel JavaScript and XML techniques (known in those days as dynamic HTML, or DHTML) made it easy to browse through the documentation and quickly switch between related portions. But successive "updates" to MSDN Library have made it harder and harder to use, obscuring the consistent structure and organization and becoming much less useful to developers as a result. These updates had other side effects, often breaking URLs, so that both internal and external links to the documentation broke or bounced you through numerous redirects.

After years of ad hoc changes to its documentation system, Microsoft has announced a new plan to overhaul both its TechNet and MSDN documentation to make it fit for the purpose. Documentation will have a new site, docs.microsoft.com, with a new consistent look and features.

Some teams within Microsoft, such as those developing ASP.NET and .NET Core, had already open sourced their documentation. On the new site, all documentation will be handled similarly. Every article will have an "edit" button enabling changes and fixes to be proposed. These changes will be handled as pull requests on GitHub, with the documentation itself using the popular Markdown markup language. All pages will also have both commenting and annotating using LiveFyre, putting an end to the ugly commenting system currently used.

Read 4 remaining paragraphs | Comments

05 May 16:28

SNCF invests €70M in the Hyperloop supersonic train

by Geoffray

SNCF has reportedly invested around 70 million euros (according to BFM TV) in the Hyperloop supersonic train, a project initiated by Elon Musk a few years ago. The rail company declined to confirm the information.

Hyperloop could well be the TGV of the future, given how fast and innovative it is.

SNCF has apparently decided to bet on Hyperloop. The project, dreamed up by star entrepreneur Elon Musk (SpaceX, Paypal, Tesla), has just closed its second funding round to keep refining how its innovative mode of transport works.

hyperloop-1

This capital increase, totaling 80 million dollars, was reportedly also subscribed by the funds GE Ventures and 137 Ventures.

Remember that SNCF is the biggest customer of Alstom, the maker of the TGV, and an important ally when it comes to selling it internationally.

The two companies are also collaborating on the development of the next generation of the TGV, expected around 2020.

Hyperloop-1

But the state-owned rail company has apparently decided not to put all its eggs in one basket by also betting on Hyperloop:

"Hyperloop is a project that is both crazy and visionary. We are following it closely" – Guillaume Pepy to l'Obs, Sept. 2015


Two months after that glowing statement, the company's boss shared the stage with Brogan BamBrogan, co-founder of Hyperloop Technologies, and repeated his overture: "We are interested in cooperating with Hyperloop, because it is a disruptive technology (…) and we need to understand what may emerge over the next 10 to 15 years."

Those contacts have clearly just materialized into a sizeable check from SNCF.

How Hyperloop works

Already being tested in the Nevada desert, the Hyperloop transport system is not expected to reach Europe before 2025, but the first trials look promising. The way this supersonic train works is simple: capsules are moved at high speed through a 'tube' roughly 2.5 meters in diameter.

These tube sections will be hermetically sealed to one another in order to maintain a partial vacuum inside the conduit and to limit friction as the vehicles move.

In practice, passengers will ride in small shuttles resting on air cushions. These will travel at several thousand kilometers per hour inside low-pressure tubes…

As an example, Hyperloop would let you cover 4,500 km in 45 minutes… the distance from New York to Los Angeles!

A collaborative project

Elon Musk has described Hyperloop several times as "halfway between a Concorde, a railgun and an air hockey table": the Concorde for speed, the railgun for propulsion, and air hockey for the way friction is eliminated during travel.

The startup HTT, for Hyperloop Transportation Technologies, has started looking very seriously into the technical feasibility of such a project.

With students from UCLA and funding from JumpStartFund, they plan to invite volunteer engineers from Boeing, Airbus and SpaceX to join a collaborative research project in their free time, in exchange for shares in the future company.

Via

03 May 14:01

UX Design: what is different about designing connected devices

by Geoffray

User experience design (UX design) affects how connected devices are adopted and influences design practices in the Internet of Things. This variable is taken into account in the model developed within the RSOC Chair.

As part of their partnership, Aruco regularly publishes work from the Social Networks and Connected Objects Chair (Chaire RSOC) of Télécom EM and the Institut Mines-Télécom.

This article is an introduction to the complex world of User Experience Design applied to connected devices; the RSOC Chair will soon launch an in-depth study on the subject.

zen-thermostat-connecte-design

Design and connected devices

A successful user experience is one of the keys to the adoption of these new devices and services in our daily lives.

This area is all the more important to explore given that more than a third of connected devices are permanently abandoned by their users within just 6 months of use (GfK study, 2014).

However, user experience design for the Internet of Things turns out to be more complex than design limited to a simple web service.

The hardware side adds extra constraints that are hard to change after production, unlike software, which can always be modified later through an update.

It is therefore essential to think through and design a connected device as a whole, up front.

Why is UX design for IoT different?

As Claire Rowland (author of the book Designing Connected Products, 2015), a specialist in UX design for the Internet of Things, explains, designing a connected device requires a holistic approach.

Focusing on the tangible parts, such as the industrial design of the device or the design of User Interfaces (UIs), is not enough.

These two elements have an undeniable, even major, impact on the final user experience. However, many other aspects of designing a connected device must be taken into account.

facets-of-IoT-UX-Claire-Rowland
credit : Claire Rowland

IoT: a complex ecosystem

As the "I MAKE THINGS" map by Alexandra Deschamps-Sonsino (designswarm), a specialist in strategy and IoT design, illustrates, the creative industries keep overlapping.

The Internet of Things involves a multitude of players with varied visions. In this environment, where the physical and virtual worlds meet, the software, the hardware and any services associated with a connected device must complement one another.

Designers, engineers and business teams therefore need a shared understanding of the goals in order to deliver a coherent whole to the end user.

I-make-things2-Alexandra-Deschamps-Sonsino
credit : designwarm

The singular characteristics of a connected device

A designer of connected devices must keep in mind three characteristics that make these devices so particular:

  1. Their ability to bridge the physical and digital worlds
  2. The fact that most connected devices are part of a multi-device system
  3. The constraints and opportunities that networking them implies
Actuator-sensor-Claire-Rowland
credit : Claire Rowland

Interusability: what is it?

As just mentioned, most connected devices are part of multi-device systems. Functionality can be distributed across several devices with different capabilities. One of the designer's roles is to work out the best way to distribute that functionality.

They will therefore have to design the UIs and the interactions across the whole system. Keep in mind that processing the data collected through this system often leads to the creation of an online service, which must also be thought through, designed and integrated into the overall system.

The overall experience of the system is just as important as that of the device on its own.

Designing the user experience as a whole

And that's not all: going back to Claire Rowland's model, beyond interusability there are also:

  • The Conceptual Model: what does the user expect from the system? What is their overall perception of the system?
  • The Platform: an essential element for managing the complexity of these multi-device systems.
  • The Service: a connected device is rarely a "one-shot sale." It almost always comes with a service that implies an ongoing relationship with its maker.
  • Productization: turning a concept or a technology into a marketable product; in other words, managing to create an obvious value proposition for the target audience.

With all these facets, it is easier to understand why it is so tempting to "forget" certain steps when designing a connected device.

As Alexandra Deschamps-Sonsino puts it so well:

"[…] that's the trouble of the internet of things, there are too many touch points that need designing so we start with the easiest: the screen."

So be careful not to settle for designing only the tangible parts of a connected device, such as the screen, and instead think it through as a whole.

Breaking away from the norm of the screen

In the era of "Terminal Worlds," where smartphones and tablets rule, it is hard to break away from the norm of the screen. Yet the range of possibilities offered by connected devices is immense, and it would be a shame to limit ourselves to the screen as the only medium for information.


Designers such as David Rose (author of Enchanted Objects: Innovation, Design and The Future of Technology, 2014) are working on it. This MIT Media Lab professor and Silicon Valley serial entrepreneur develops connected devices that resemble the magical objects of our fairy tales and collective imagination. Aruco interviewed him in May 2015 in Montreal.

To go further, we invite you to watch the TED talk by Tom Uglow, creative director at Google, which explores what an Internet without screens could look like. In it he explains, among other things, that our "Happy Places" are built from simple, natural solutions that answer our need for information. An Internet where the screen is less present than it is today might be one way to make the Internet more joyful and human.

To follow news from the Social Networks and Connected Objects Chair, follow Chaire RSOC (@ChaireRSOC) and Maud Guillerot (@MaudGet), the author of this article.

Via / images : 1 – 2

30 Apr 06:15

Thank You For Your Pull Request

As an open source maintainer, it’s important to recognize and show appreciation for contributions, especially external contributions.

We’ve known for a while that after a person’s basic needs are met, money is a poor motivator and does not lead to better work. This seems especially true for open source projects. Often, people are motivated by other intrinsic factors such as the recognition and admiration of their peers, the satisfaction of building something that lasts, or because they need the feature. In the workplace, good managers understand that acknowledging good work is as important if not more so than providing monetary rewards.

This is why it’s so important to thank contributors for their contributions to your projects, big and small.

Seems obvious, but I was reminded of this when I read this blog post by Hugh Bellamy about his experiences contributing to the .NET CoreFX repository. In the post, he describes both his positive and negative experiences. Here’s one of his negative experiences.

In the hustle and bustle of working at Microsoft, many of my PRs (of all sizes) are merged with only a “LGTM” once the CI passes. This can lead to a feeling of lack of recognition of the work you spent time on.

Immo Landwerth, a program manager on the .NET team, gracefully responds on Twitter in a series of Tweets

.@bellamy_hugh Thanks for the valid criticism and the points raised. We’ve started to work so closely with many contributors that team…

@bellamy_hugh …members treat virtually all PRs as if coming from Microsofties. This results reduction to essence, LGTM, and micro speak.

.@bellamy_hugh Quite fair to say that we should improve in this regard!

What I found interesting though was the part where they treat PRs as if they came from fellow employees. That's very admirable! But it did make me wonder, "WHA?! You don't thank each other!" ;)

To be clear, I have a lot of admiration for Immo and the CoreFX team. They’ve been responsive to my own issues in the past and I think overall they’re doing a great job of managing open source on GitHub. In fact, a tremendous job! (Side note, Hey Immo! Would love to see a new Open Source Update)

This is one of those easy things to forget. In fact, I forgot to call it out in my own blog post about conducting effective code reviews. Recognition makes contributors feel appreciated. And often, all it takes is something small. It doesn’t require a ceremony.

GitHub Selfie to the rescue

However, if you want to add a little bit of ceremony, I recommend the third party GitHub Selfie Extension which is available in the Chrome Web Store as well as for Firefox.

One important thing to note is that this extension does a bit of HTML screen scraping to inject itself into the web page, so when GitHub.com changes its layout, it can sometimes be broken until the author updates it. The extension is not officially associated with GitHub.

I’ve tweeted about it before, but realized I never blogged about it. The extension adds a selfie button for Pull Requests that let you take a static selfie or an animated Gif. My general rule of thumb is to try and post an animated selfie for first time contributions and major contributions. In other cases, such as when I’m reviewing code on my phone, I’ll just post an emoji or animated gif along with a simple thank you.

Here’s an example from the haacked/scientist.net repository.

Phil checks the timing

My co-worker improved on it.

Phil's Head Explodes

Here’s an example where I post a regular animated gif because the contributor is a regular contributor.

Dancing machines

However, there’s a dark side to GitHub Selfie I must warn you about. You can start to spend too much time filming selfies when you should be reviewing more code. Mine started to get a bit elaborate and I nearly hurt myself in one.

Phil crash

Octocat involved review

Code review in the car. I was not driving.

Fist pump

These became such a thing that a co-worker created a new Hubot command at work, .haack me, that brings up a random one of these gifs in Slack.

Anyways, I’m losing the point here. GitHub Selfie is just one approach, albeit a fun one that adds a nice personal touch to Pull Request reviews and managing an OSS project. There are many other ways. The common theme though, is that a small word of appreciation goes a long way!

28 Apr 07:33

Visual Studio Code editor hits version 1, has half a million users

by Peter Bright

Visual Studio Code, Microsoft's no-cost and open source developer-oriented editor and debugger, has reached version 1.0.

Over its short life, the editor has made itself remarkably popular, with Microsoft saying it has been installed more than two million times, with half a million active users. It has also grown from a Web-oriented text editor geared toward JavaScript and TypeScript developers into a much more capable multi-language development and debugging tool. Extension support was added less than six months ago, and a healthy range of extensions has already been developed. These extensions have been used to greatly extend the number of languages that Code works with, expanding it from its Web origins to handle C++, Go, Python, PHP, F#, and many more options.

Visual Studio Code is arguably one of the projects that most demonstrates the "new" Microsoft. Code is MIT-licensed open source, and Microsoft is continuing to try to do its open source development the right way—not merely dumping periodic code drops on the outside world but actually working with the broader developer community to fix bugs and develop new features. Some 300 outside contributions have been merged in, making it far more than just a Microsoft project. It also continues to be a solid cross-platform app, running on Windows, OS X, and Linux.

Read 1 remaining paragraphs | Comments

18 Apr 12:15

To SQL or NoSQL? That’s the database question

by Ars Staff

It's a tangled, database web out there. (credit: Getty Images)

Poke around the infrastructure of any startup website or mobile app these days, and you're bound to find something other than a relational database doing much of the heavy lifting. Take, for example, the Boston-based startup Wanderu. This bus- and train-focused travel deal site launched about three years ago. And fed by a Web-generated glut of unstructured data (bus schedules on PDFs, anyone?), Wanderu is powered by MongoDB, a "NoSQL" database—not by Structured Query Language (SQL) calls against traditional tables and rows.

But why is that? Is the equation really as simple as "Web-focused business = choose NoSQL?" Why do companies like Wanderu choose a NoSQL database? (In this case, it was MongoDB.) Under what circumstances would a SQL database have been a better choice?

Today, the database landscape continues to become increasingly complicated. The usual SQL suspects—SQL Server-Oracle-DB2-Postgres, et al.—aren't handling this new world on their own, and some say they can't. But the division between SQL and NoSQL is increasingly fuzzy, especially as database developers integrate the technologies together and add bits of one to the other.

Read 51 remaining paragraphs | Comments

17 Apr 04:54

Here's The Programming Game You Never Asked For

by Jeff Atwood

You know what's universally regarded as un-fun by most programmers? Writing assembly language code.

As Steve McConnell said back in 1994:

Programmers working with high-level languages achieve better productivity and quality than those working with lower-level languages. Languages such as C++, Java, Smalltalk, and Visual Basic have been credited with improving productivity, reliability, simplicity, and comprehensibility by factors of 5 to 15 over low-level languages such as assembly and C. You save time when you don't need to have an awards ceremony every time a C statement does what it's supposed to.

Assembly is a language where, for performance reasons, every individual command is communicated in excruciating low level detail directly to the CPU. As we've gone from fast CPUs, to faster CPUs, to multiple absurdly fast CPU cores on the same die, to "gee, we kinda stopped caring about CPU performance altogether five years ago", there hasn't been much need for the kind of hand-tuned performance you get from assembly. Sure, there are the occasional heroics, and they are amazing, but in terms of Getting Stuff Done, assembly has been well off the radar of mainstream programming for probably twenty years now, and for good reason.

So who in their right mind would take up tedious assembly programming today? Yeah, nobody. But wait! What if I told you your Uncle Randy had just died and left behind this mysterious old computer, the TIS-100?

And what if I also told you the only way to figure out what that TIS-100 computer was used for – and what good old Uncle Randy was up to – was to read a (blessedly short 14 page) photocopied reference manual and fix its corrupted boot sequence … using assembly language?

Well now, by God, it's time to learn us some assembly and get to the bottom of this mystery, isn't it? As its creator notes, this is the assembly language programming game you never asked for!

I was surprised to discover my co-founder Robin Ward liked TIS-100 so much that he not only played the game (presumably to completion) but wrote a TIS-100 emulator in C. This is apparently the kind of thing he does for fun, in his free time, when he's not already working full time with us programming Discourse. Programmers gotta … program.

Of course there's a long history of programming games. What makes TIS-100 unique is the way it fetishizes assembly programming, while most programming games take it a bit easier on you by easing you in with general concepts and simpler abstractions. But even "simple" programming games can be quite difficult. Consider one of my favorites on the Apple II, Rocky's Boots, and its sequel, Robot Odyssey. I loved this game, but in true programming fashion it was so difficult that finishing it in any meaningful sense was basically impossible:

Let me say: Any kid who completes this game while still a kid (I know only one, who also is one of the smartest programmers I’ve ever met) is guaranteed a career as a software engineer. Hell, any adult who can complete this game should go into engineering. Robot Odyssey is the hardest damn “educational” game ever made. It is also a stunning technical achievement, and one of the most innovative games of the Apple IIe era.

Visionary, absurdly difficult games such as this gain cult followings. It is the game I remember most from my childhood. It is the game I love (and despise) the most, because it was the hardest, the most complex, the most challenging. The world it presented was like being exposed to Plato’s forms, a secret, nonphysical realm of pure ideas and logic. The challenge of the game—and it was one serious challenge—was to understand that other world. Programmer Thomas Foote had just started college when he picked up the game: “I swore to myself,” he told me, “that as God is my witness, I would finish this game before I finished college. I managed to do it, but just barely.”

I was happy dinking around with a few robots that did a few things, got stuck, and moved on to other games. I got a little turned off by the way it treated programming as electrical engineering; messing around with a ton of AND OR and NOT gates was just not my jam. I was already cutting my teeth on BASIC by that point and I sensed a level of mastery was necessary here that I probably didn't have and I wasn't sure I even wanted.

I'll take a COBOL code listing over that monstrosity any day of the week. Perhaps Robot Odyssey was so hard because, in the end, it was a bare metal CPU programming simulation, like TIS-100.

A more gentle example of a modern programming game is Tomorrow Corporation's excellent Human Resource Machine.

It has exactly the irreverent sense of humor you'd expect from the studio that built World of Goo and Little Inferno, both excellent and highly recommendable games in their own right. If you've ever wanted to find out if someone is truly interested in programming, recommend this game to them and see. It starts with only 2 instructions and slowly widens to include 11. Corporate drudgery has never been so … er, fun?

I'm thinking about this because I believe there's a strong connection between programming games and being a talented software engineer. It's that essential sense of play, the idea that you're experimenting with this stuff because you enjoy it, and you bend it to your will out of the sheer joy of creation more than anything else. As I once said:

Joel implied that good programmers love programming so much they'd do it for no pay at all. I won't go quite that far, but I will note that the best programmers I've known have all had a lifelong passion for what they do. There's no way a minor economic blip would ever convince them they should do anything else. No way. No how.

I'd rather sit a potential hire in front of Human Resource Machine and time how long it takes them to work through a few levels than have them solve FizzBuzz for me on a whiteboard. Is this interview about demonstrating competency in a certain technical skill that's worth a certain amount of money, or showing me how you can improvise and have fun?

That's why I was so excited when Patrick, Thomas, and Erin founded Starfighter.

If you want to know how competent a programmer is, give them a real-ish simulation of a real-ish system to hack against and experiment with – and see how far they get. In security parlance, this is known as a CTF, as popularized by Defcon. But it's rarely extended to programming, until now. Their first simulation is StockFighter.

Participants are given:

  • An interactive trading blotter interface
  • A real, functioning set of limit-order-book venues
  • A carefully documented JSON HTTP API, with an API explorer
  • A series of programming missions.

Participants are asked to:

  • Implement programmatic trading against a real exchange in a thickly traded market.
  • Execute block-shopping trading strategies.
  • Implement electronic market makers.
  • Pull off an elaborate HFT trading heist.

This is a seriously next level hiring strategy, far beyond anything else I've seen out there. It's so next level that to be honest, I got really jealous reading about it, because I've felt for a long time that Stack Overflow should be doing yearly programming game events exactly like this, with special one-time badges obtainable only by completing certain levels on that particular year. Stack Overflow is already a sort of game, but people would go nuts for a yearly programming game event. Absolutely bonkers.

I know we've talked about giving lip service to the idea of hiring the best, but if that's really what you want to do, the best programmers I've ever known have excelled at exactly the situation that Starfighter simulates — live troubleshooting and reverse engineering of an existing system, even to the point of finding rare exploits.

Consider the dedication of this participant who built a complete wireless trading device for StockFighter. Was it necessary? Was it practical? No. It's the programming game we never asked for. But here we are, regardless.

An arbitrary programming game, particularly one that goes to great lengths to simulate a fictional system, is a wonderful expression of the inherent joy in playing and experimenting with code. If I could find them, I'd gladly hire a dozen people just like that any day, and set them loose on our very real programming project.

[advertisement] At Stack Overflow, we put developers first. We already help you find answers to your tough coding questions; now let us help you find your next job.
14 Apr 09:27

An update on ASP.NET Core 1.0 RC2

by Scott Hanselman

What's going on with ASP.NET Core 1.0 RC2? Why is RC2 taking so long over RC1 and what's going to happen between now and the final release? I talked to architect David Fowler about this and tried to put together some clear answers.

This stuff is kind of deep and shows "how the sausage gets made" so the TL;DR version of this is "the guts are changing for the better and it's taking longer than we thought it would to swap out the guts."

That said, ASP.NET Core RC2 has some high level themes:

Re-plat on top of the .NET CLI

This is the biggest one and there are quite a few changes and tweaks made to the hosting model to support this. The way your application boots up is completely different. I'd encourage you to take a look at https://github.com/aspnet/cli-samples. Some of the changes are very subtle but important. We baked a bunch of assumptions into DNX specifically for web applications, and now we're building on top of a tool chain that doesn't assume a web application is the only target, and we have to account for that.
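
To make that boot-up change concrete, here's a minimal sketch loosely based on the aspnet/cli-samples repo mentioned above; the exact builder calls and the Startup class below follow the common RC2-era template shape and may differ from the final bits:

using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

namespace HelloWeb
{
    public class Startup
    {
        // Minimal request pipeline: respond to everything with a string.
        public void Configure(IApplicationBuilder app)
        {
            app.Run(context => context.Response.WriteAsync("Hello from ASP.NET Core RC2"));
        }
    }

    public class Program
    {
        // With the .NET CLI, the app is an ordinary console program that builds
        // and runs its own web host instead of being launched by DNX.
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()                                     // the managed HTTP server
                .UseContentRoot(Directory.GetCurrentDirectory())  // where to find content and config
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }
}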

There were a couple of fundamental things affected by this move:

  • Acquisition
    • How do you get the tool chain and shared runtime?
  • Runtime
    • The API used to find dependencies at runtime ILibraryManager
    • The API used to find compilation assemblies at runtime ILibraryExporter
  • Tooling
    • There's no dnvm replacement
    • Visual Studio Tooling (UI) support needs to use the new CLI
    • OmniSharp needs to use the new CLI
    • What's the dnx-watch successor?

The list goes on and on. I'd suggest watching the ASP.NET Community stand up as we're pretty transparent about where we are in the process. We just got everyone internally using builds of Visual Studio that have CLI support this last week.

The new .NET CLI (again, replacing DNX) will be the most de-stabilizing change in RC2. This is a good intro to where things are headed https://vimeo.com/153212604. There's been tons of changes since then but it's still a good overview.

Moving to netstandard

This has been a long time coming and is a massive effort to get class library authors to move to the next phase of PCL. This is critical to get right so that everyone can have their favorite packages working on .NET Core, and as such, working everywhere. 

https://channel9.msdn.com/Events/ASPNET-Events/ASPNET-Fall-Sessions/Class-Libraries

https://github.com/dotnet/corefx/blob/master/Documentation/architecture/net-platform-standard.md

.NET Standard Library means a modular BCL that can be used on all app models

Polish

We're looking at all of the patterns that we have invented over the last 2 years and making sure it's consistent across the entire stack. One example of that is the options API. We went through the entire stack and made sure that we were using them consistently in middleware and other places. That is a breaking change but it's an important one.
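
As a rough sketch of the pattern being unified (GreetingOptions and GreetingService are made-up names for illustration, not APIs from the post), an options class is registered in ConfigureServices and consumers ask for IOptions<T> through constructor injection rather than locating services themselves:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

// A made-up options class used only for illustration.
public class GreetingOptions
{
    public string Greeting { get; set; } = "Hello";
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register and configure the options through the common options API.
        services.AddOptions();
        services.Configure<GreetingOptions>(o => o.Greeting = "Hello from options");
        services.AddTransient<GreetingService>();
    }
}

// Consumers take IOptions<T> in the constructor instead of using a service locator.
public class GreetingService
{
    private readonly GreetingOptions _options;

    public GreetingService(IOptions<GreetingOptions> options)
    {
        _options = options.Value;
    }

    public string Greet(string name) => $"{_options.Greeting}, {name}!";
}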

Other examples of this include things like making sure we have the right extension methods in places and that it looks like they were designed in a coherent manner (logging is an example).

Other small things:

  • Remove the service locator pattern as much as we can. Some of this requires API change.
  • Making sure we have the right set of DI abstractions so that DI vendors can properly plug into the stack.
  • Taking the time to look at feedback we're receiving to make sure we're doing the right things. This is ongoing, but if there are small changes we can make that solve a common issue people are having, we'll make that change while we still have this freedom.
  • Change how we plug in and configure servers in our Hosting APIs https://github.com/aspnet/KestrelHttpServer/pull/741

Fundamentals - Stress, Security, Performance

This is always ongoing but now that most of the features are done, we have more time to spend on making things like Kestrel (the web/app server) rock solid and secure.

We're also doing more stress runs to make sure the stack is very stable memory-wise and to make sure nothing crashes.

More Performance

This is part of fundamentals but deserves to be called out specifically. We're still making changes to make sure things are very "performant." Some of these are tweaks that don't affect consuming code, others are actual design changes that affect API. MVC is getting tons of love in this area (https://github.com/aspnet/Mvc/pull/4108). HttpAbstractions and other higher level APIs are also getting lots of love https://github.com/aspnet/HttpAbstractions/pull/556 to make sure we reduce allocations for things like file upload.

We're also looking at higher level scenarios to make sure we're not only focusing on microbenchmarks. You can see some of them at https://github.com/aspnet/Performance/tree/dev/testapp.

Techempower is still on our radar and we're running the plain text benchmark on similar hardware now and comparing against the competition (we're in the top 10 right now!), and we hope to be there and official for RTM.

I hope this gives you some context. We'll cover this and more every week on the Community Standup as we move towards RC2, then on to RTM on three platforms!


Sponsor: Big thanks to RedGate and my friends on ANTS for sponsoring the feed this week! How can you find & fix your slowest .NET code? Boost the performance of your .NET application with the ANTS Performance Profiler. Find your bottleneck fast with performance data for code & queries. Try it free


14 Apr 05:33

Give yourself permission to have work-life balance

by Scott Hanselman
Stock photos by #WOCTechChat used with Attribution

I was having a chat with a new friend today and we were exchanging stories about being working parents. I struggle with kids' schedules, days off school I failed to plan for, unexpected things (cars break down, kids get sick, life happens) while simultaneously trying to "do my job."

I put do my job there in quotes because sometimes it is in quotes. Sometimes everything is great and we're firing on all cylinders, while other times we're barely keeping our heads above water. My friend indicated that she struggled with all these things and more but she expressed surprise that *I* did. We all do and we shouldn't be afraid to tell people that. My life isn't my job. At least, my goal is that my life isn't my job.

Why are you in the race?

We talked a while and we decided that our lives and our careers are a marathon, not a giant sprint. Burning out early helps no one. WHY are we running? What are we running towards? Are you trying to get promoted, a better title, more money for your family, an early retirement, good healthcare? Ask yourself these questions so you at least know and you're conscious about your motivations. Sometimes we forget WHY we work.

Saying no is so powerful and it isn't something you can easily learn and just stick with - you have to remind yourself it's OK to say no every day. I know what MY goals are and why I'm in this industry. I have the conscious ability to prioritize and allocate my time. I start every week thinking about priorities, and I look back on my week and ask myself "how did that go?" Then I optimize for the next week and try again.

Sometimes Raw Effort doesn't translate to Huge Effect.

She needed to give herself permission to NOT give work 100%. Maybe 80% is OK. Heck, maybe 40%. The trick was to be conscious about it, rather than trying to give 100% twice.

Yes, there are consequences. Perhaps you won't get promoted. Perhaps your boss will say you're not giving 110%. But you'll avoid burnout and be happier and perhaps accomplish more over the long haul than the short. 

Work Life

Look, I realize that I'm privileged here. There's a whole knapsack of privilege to unpack, but if you're working in tech you likely have some flexibility. I'm proposing that you at least pause a moment and consider it...consider using it. Consider where your work-life balance slider bar is set and see what you can say no to, and try saying yes to yourself.

I love this quote by Christopher Hawkins that I've modified by making a blank space for YOU to fill out:

"If it’s not helping me to _____ _____, if it’s not improving my life in some way, it’s mental clutter and it's out." - Christopher Hawkins

The Red Queen's Race

Are you running because everyone around you is running? You don't always need to compare yourself to other people. This is another place where giving yourself permission is important.

"Well, in our country," said Alice, still panting a little, "you'd generally get to somewhere else—if you run very fast for a long time, as we've been doing."

"A slow sort of country!" said the Queen. "Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!" - Red Queen's Race

There's lots of people I admire, but I'm not willing to move to LA to become Ryan Reynolds (he stole my career!) and I'm not willing to work as hard as Mark Russinovich (he stole my hair!) so I'm going to focus on being the best me I can be.

What are you doing to balance and avoid burnout?

* Stock photo by #WOCTechChat used with Attribution


Sponsor: Big thanks to RedGate and my friends on ANTS for sponsoring the feed this week! How can you find & fix your slowest .NET code? Boost the performance of your .NET application with the ANTS Performance Profiler. Find your bottleneck fast with performance data for code & queries. Try it free



06 Apr 19:11

Visual C++ for Linux and Raspberry Pi Development

by Scott Hanselman

It's bananas over at Microsoft. Last week they announced you can run Bash on Ubuntu on Windows 10, and now I'm seeing I missed an announcement of an extension to Visual Studio that enables Visual C++ for Linux Development.

With this extension you can author C++ code for Linux servers, desktops and devices. You can manage your connections to these machines from within VS. VS will automatically copy and remote build your sources and can launch your application with the debugger. Their project system supports targeting specific architectures, including ARM which means Raspberry Pi, folks.

ASIDE: I also noticed there's a C/C++ extension for Visual Studio Code also. I need to add that to my list of stuff to check out, it looks pretty compelling as well.

Once Visual C++ for Linux Development is installed, you go and File New Project like this. Cool to see Linux in that list along with a Raspberry Pi project.

File New | Linux App

You can pick x86, x64, and ARM, and you can see Remote GDB Debugger is an option.

Remote GDB Debugger

Here I'm running Ubuntu in a VM and connecting to it over SSH from Visual Studio. I needed to set up a few things first on the Ubuntu machine

sudo apt-get install openssh-server g++ gdb gdbserver

Once that was setup, connecting to the remote Linux machine was pretty straightforward as VS is using SSH.

Debugging C++ apps remotely talking to a Linux VM

Pretty cool.

NOTE: Today this cool extension has nothing to do with the Bash on Ubuntu on Windows announcement or that subsystem.  The obvious next question is "can I use this without a VM and talk to gdb on the local Linux subsystem?" From what I can tell, no, but I'm still trying to get SSH and GDB working locally. It's theoretically possible but I'm not sure if it's also insane. Both teams are talking, but again, this feature isn't related to the other.

This extension feels a little beta to me but it does a good job providing the framework for talking to Linux from VS. The team looks to be very serious and even has a cool demo where they code and debug a Linux desktop app.

If you're looking for another full-featured solution for Linux and Embedded Systems development with Visual Studio, be sure to download and check out VisualGDB, it's amazing.


Sponsor: Quality instrumentation is critical for modern applications. Seq helps .NET teams make sense of complex, asynchronous, and distributed apps on-premises or in the cloud. Learn more about structured logging and try Seq free for 30 days at https://getseq.net.



© 2016 Scott Hanselman. All rights reserved.
     
05 Apr 05:19

Developers can run Bash Shell and user-mode Ubuntu Linux binaries on Windows 10

by Scott Hanselman

UPDATE: I've recorded a 30 min video with developers from the project as well as Dustin from Ubuntu about HOW this works if you want more technical details.

As a web developer who uses Windows 10, sometimes I'll end up browsing the web and stumble on some cool new open source command-line utility and see something like this:

A single lonely $

In that past, that $ prompt meant "not for me" as a Windows user.

I'd look for prompts like

C:\>

or

PS C:\>

Of course, I didn't always find the prompts that worked like I did. But today at BUILD in the Day One keynote Kevin Gallo announced that you can now run "Bash on Ubuntu on Windows." This is a new developer feature included in a Windows 10 "Anniversary" update (coming soon). It lets you run native user-mode Linux shells and command-line tools unchanged, on Windows.

After turning on Developer Mode in Windows Settings and adding the Feature, you run bash and are prompted to get Ubuntu on Windows from Canonical via the Windows Store, like this:

Installing Ubuntu on Windows

This isn't Bash or Ubuntu running in a VM. This is a real native Bash Linux binary running on Windows itself. It's fast and lightweight and it's the real binaries. This is a genuine Ubuntu image on top of Windows with all the Linux tools I use like awk, sed, grep, vi, etc. The binaries are downloaded by you - using apt-get - just as on Linux, because it is Linux. You can apt-get and download other tools like Ruby, Redis, emacs, and on and on. This is brilliant for developers that use a diverse set of tools like me.

This runs on 64-bit Windows and doesn't use virtual machines. Where does bash on Windows fit in to your life as a developer?

If you want to run Bash on Windows, you've historically had a few choices.

  • Cygwin - GNU command line utilities compiled for Win32 with great native Windows integration. But it's not Linux.
  • HyperV and Ubuntu - Run an entire Linux VM (dedicating x gigs of RAM, and x gigs of disk) and then remote into it (RDP, VNC, ssh)
    • Docker is also an option to run a Linux container, under a HyperV VM

Running bash on Windows hits the sweet spot. It behaves like Linux because it executes real Linux binaries. Just hit the Windows Key and type bash.

After you're setup, run apt-get update and get a few developer packages. I wanted Redis and Emacs. I did an apt-get install emacs23 to get emacs. Note this is the actual emacs retrieved from Ubuntu's feed.

Running emacs on Windows

Of course, I have no idea how to CLOSE emacs, so I'll close the window. ;)

Note that this isn't about Linux Servers or Server workloads. This is a developer-focused release that removes a major barrier for developers who want or need to use Linux tools as part of their workflow. Here I got Redis via apt-get and now I can run it in standalone mode.

Running Redis Standalone on Windows

I'm using bash to run Redis while writing ASP.NET apps in Visual Studio that use the Redis cache. I can then later deploy to Azure using the Azure Redis Cache, so it's a very natural workflow for me.

Look how happy my Start Menu is now!

A happy start menu witih Ubuntu

Keep an eye out at http://blogs.msdn.microsoft.com/commandline for technical details in the coming weeks. There are also some great updates to the underlying console with better support for control codes, ANSI, VT100, and lots more. This is an early developer experience and the team will be collecting feedback and comments. You'll find Ubuntu on Windows available to developers as a feature in a build of Windows 10 coming soon. Expect some things to not work early on, but have fun exploring and seeing how bash on Ubuntu on Windows fits into your developer workflow!


Sponsor:  BUILD - it’s what being a developer is all about so do it the best you can. That’s why Stackify built Prefix. No .NET profiler is easier or more powerful. You’re 2 clicks and $0 away, so build on! prefix.io



© 2016 Scott Hanselman. All rights reserved.
     
04 Apr 20:09

What’s New for C# and VB in Visual Studio

by Rich Lander [MSFT]

This week at Build 2016, we released Visual Studio 2015 Update 2 and Visual Studio “15” Preview. Both releases include many new language features that you can try today. It’s safe to install both versions of Visual Studio on the same machine so that you can check out all of the new features for yourself.

New C# and VB features in Visual Studio 2015 Update 2

In Visual Studio 2015 Update 2, you’ll notice that we’ve added some enhancements to previous features as well as added some new refactorings. The team focused on improving developer productivity by cutting down time, mouse-clicks, and keystrokes to make the actions you perform every day more efficient.

Interactive Improvements (C# only in Update 2, VB planned for future)

The C# Interactive window and command-line REPL, csi, were introduced in Visual Studio Update 1. In Update 2, we’ve paired the interactive experience with the editor by allowing developers to send code snippets from the editor to be executed in the Interactive window. We’ve also enabled developers to initialize the Interactive window with a project’s context.

To play with these features:

  • Highlight a code snippet in the editor, right-click, and press Execute in Interactive (or Ctrl+E, Ctrl+E), as shown in the image below.
  • Right-click on a project in the Solution Explorer and press Initialize Interactive with project.

clip_image0029.png
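
As a tiny illustration (my own snippet, not one from the release notes), you could highlight a couple of lines like these in the editor and send them to the Interactive window to evaluate them without building the project:

using System;
using System.Linq;

var squares = Enumerable.Range(1, 5).Select(n => n * n);
Console.WriteLine(string.Join(", ", squares));   // prints: 1, 4, 9, 16, 25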

Add Imports/Using Improvements (VB and C# in Update 2)

We’ve improved the Adding Imports/Using command to support “fuzzy” matching on misspelled types and to search your entire solution and metadata for the correct type—adding both a using/imports and any project/metadata references, if necessary.

You can see an example of this feature with a misspelled “WebCleint” type. The type name needs to be fixed (two letters are transposed) and the System.Net using needs to be added.

clip_image0046.png
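
In code form, the scenario looks roughly like this sketch: the fix corrects the transposed letters and inserts the missing using directive.

// Before the fix, the editor contains an unresolved, misspelled type:
//     var client = new WebCleint();
// After accepting the "fuzzy" Add Using suggestion:

using System;
using System.Net;

class AddUsingDemo
{
    static void Main()
    {
        var client = new WebClient();          // spelling corrected from "WebCleint"
        Console.WriteLine(client.GetType());   // System.Net.WebClient
    }
}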

Refactorings

A couple refactorings we sprinkled in were:

  • Make method synchronous (VB and C# in Update 2)
  • Use null-conditional for delegate invocation (C# only in Update 2, Maybe for VB? – Read More below)

So, the killer scenario for this feature is raising events in a thread-safe way. Prior to C# 6 the proper way to do this was to copy the backing field of the event to a local variable, check the variable for null-ness and invoke the delegate inside the if. Otherwise Thread B could set the delegate to null by removing the last handler after Thread A has checked it for null resulting in Thread A unintentionally throwing a NullReferenceException. Using the null-conditional in C# is a much shorter form of this pattern. But in VB the RaiseEvent statement already raised the event in a null safe way, using the same code-gen. So the killer scenario for this refactoring really didn’t exist and worse, if we add the refactoring people might mistakenly change their code to be less idiomatic with no benefit. From time to time we review samples that don’t understand this and perform the null check explicitly anyway so this seems likely to reinforce that redundant behavior. Let us know in the comments if you think the refactoring still has tons of value for you outside of raising events and we’ll reconsider! -ADG

[Screenshot: the “Use null-conditional for delegate invocation” refactoring in the editor]
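For reference, here is a minimal sketch (type and event names are our own) of the old pattern and the null-conditional form the refactoring produces:

    using System;

    public class Stock
    {
        public event EventHandler PriceChanged;

        // Pre-C# 6 pattern: copy the backing field to a local so another thread
        // cannot null it between the check and the invocation.
        protected void OnPriceChangedOld()
        {
            EventHandler handler = PriceChanged;
            if (handler != null)
            {
                handler(this, EventArgs.Empty);
            }
        }

        // C# 6 null-conditional form: same thread-safe semantics, one line.
        protected void OnPriceChangedNew()
        {
            PriceChanged?.Invoke(this, EventArgs.Empty);
        }
    }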

Roslyn Features (VB and C# in Update 2)

We’ve added two new compiler flags to the Roslyn compiler:

  • deterministic: This switch ensures that builds with the same inputs produce the same outputs, byte for byte. Previously, PE entries such as the MVID, PDB ID, and timestamp would change on every build, but they can now be calculated deterministically from the inputs (see the command-line sketch after this list).
  • publicSign: Supports a new method of signing that is similar to delay signing except it doesn’t need to add skip verification entries to your machine. Binaries can be public signed with only the public key and load into contexts necessary for development and testing. This is also known as OSS signing.
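As a minimal command-line sketch (file and key names are placeholders), both switches can be passed straight to csc:

    csc /target:library /deterministic /publicsign /keyfile:PublicKey.snk /out:MyLib.dll MyLib.cs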

Sneak Peek: What’s in Visual Studio “15” Preview

We released a first look of Visual Studio “15” this week at Build. It is a point-in-time view of what we’ve been working on. Some features will still change and others are still coming. It’s a good opportunity to provide feedback on the next big release of Visual Studio.

Play with C# 7 Prototypes (VB 15 Prototypes planned)

The guiding theme for C# 7 language design is “working with data”. While the final feature set for C# 7 is still being determined by the Language Design Committee, you can play with some of our language feature prototypes today in Visual Studio “15” Preview.

To access the language prototypes, right-click on your project in Solution Explorer > Properties > Build and type “__DEMO__” in the “Conditional compilation symbols” text box. This will enable you to play with a preview of local functions, digit separators, binary literals, ref returns, and pattern matching.

[Screenshot: the “Conditional compilation symbols” text box in the project Build properties]
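To give a flavor of the prototypes, here is a small sketch of our own (syntax shown as previewed; details may still change before C# 7 ships):

    using System;

    class Demo
    {
        static void Main()
        {
            int mask = 0b1010_1010;     // binary literal with digit separators
            int million = 1_000_000;    // digit separator in a decimal literal

            // Local function: a helper declared inside the enclosing method.
            int Triple(int x) => x * 3;

            Console.WriteLine(Triple(million) + mask);
        }
    }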

There is a known bug related to ref return IntelliSense, which can be worked around by:

  • right-click on your project in Solution Explorer > Unload Project
  • right-click on your project after it’s been unloaded > Edit csproj
  • in the first Property Group under <AssemblyName> add: <Features>refLocalsAndReturns</Features>
  • Ignore any XML schema warnings you may see

[Screenshot: the <Features>refLocalsAndReturns</Features> element added to the csproj file]

Custom Code Style Enforcement (VB and C# in Visual Studio “15” Preview)

The feature you all have been asking for is almost here! In Visual Studio “15” Preview, you can play around and give us feedback on our initial prototype for customizable code style enforcement. To see the style options we support today, go to Tools > Options > C#/VB > Code Style. Under the General options, you can tweak “this.”/”Me.”, predefined type, and “var”/type inference preferences. With the “var” preferences, for example, you can control the severity of enforcement: I can prefer “var” over explicit types for built-in types and have any violation of that preference show up as an error squiggle in the editor.
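As a hypothetical illustration (our own snippet), with “prefer var for built-in types” set to Error, the first declaration below would get an error squiggle while the second satisfies the preference:

    int count = 0;    // flagged: the preference asks for "var count = 0;"
    var total = 0;    // OK under the same preference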

You can also add Naming rules, for instance, to require methods to be PascalCase.

[Screenshot: the Code Style options page with style and naming rules]

Please Keep up the Feedback

Thanks for all the feedback we’ve received over the last year. It’s had a big impact on the features that I’ve described here and on others we’ve been working on. Please keep it coming. The language feedback on the open source Roslyn project has been extensive, and it’s great to see a broader language community developing around the project on GitHub.

To give feedback, try one of the following places:

Thanks for using the product. I hope you enjoy using it to build your next app.

Over ‘n’ out
Kasey Uhlenhuth, Program Manager, Managed Languages Team

04 Apr 19:55

Sigfox integrates with Microsoft’s Azure IoT Hub platform

by Geoffray

Toulouse-based startup Sigfox and Microsoft have announced the integration of the Sigfox cloud with Microsoft’s Azure IoT Hub platform in France.

After announcing its partnership with SFR last week, the Toulouse startup and Microsoft France are announcing the integration of Sigfox’s cloud with Microsoft’s Azure IoT Hub platform.

This integration will give businesses a preconfigured solution for connecting, visualizing, and analyzing in real time the millions of data points generated by devices connected via Sigfox, simplifying large-scale deployment of IoT solutions for professionals.


Sigfox integrates with Azure IoT Hub

Sigfox and Microsoft aim to make it easier to deploy connected solutions by feeding the data reported by devices connected to the SIGFOX network into Microsoft’s Azure IoT Hub platform.

This partnership will let businesses quickly set up and deploy their IoT solution while accessing higher value-added offerings through the Microsoft Azure IoT Suite. Businesses will no longer need to build and manage their own database and can instead focus on the application logic of their solutions.

“The integration of the Sigfox cloud with Azure IoT Hub is an important step in the evolution of the Internet of Things, offering users a turnkey solution for collecting, storing, and managing data while ensuring integration with their existing systems.” – Stuart Lodge, VP Global Sales, Sigfox

Thanks to Microsoft Azure IoT Hub, a service that establishes reliable, secure communications between millions of IoT devices and a solution back end, companies’ data will be stored directly in the Azure IoT Suite and turned, in real time, into actionable information presented in dashboards on the platform.


For example, at Orly airport, Econocom and Servair have deployed a logistics solution that helps keep the airport’s runways safer: a network of Sigfox-connected buttons. These buttons send pickup requests as notifications to Servair agents’ smartphones, optimizing the operational costs of parcel collection and reducing vehicle movements at Orly airport.

Image: Shutterstock