Shared posts

06 Sep 19:26

In a blast from the past, Logitech releases a new trackball

by Peter Bright

Of all the pointing devices that have been invented, the most neglected is probably the trackball. Trackball fans swear by them, arguing that they're kinder to wrists and hence a good choice if you suffer from repetitive strain injury or joint problems. But they've never quite made it into the mainstream.

Logitech is hoping to bring the trackball back with its new MX Ergo trackball, announced today and shipping later this month. The $99 wireless trackball is a thumbball design, with your hand resting on it as if it were a mouse and pointing done with the thumb. The angle of the trackball can be adjusted between 0 and 20 degrees to help you find the angle that's most comfortable for your wrist. Beyond the ball, there's a tilting scroll wheel and a DPI switch to toggle between normal and high precision modes.

02 Sep 07:44

Endless Immensity of the Sea

by Phil Haack

There’s this quote about leadership that resonates with me.

If you want to build a ship, don’t drum up people together to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.

Most attribute it to the French author Antoine de Saint-Exupéry, but it’s doubtful that he wrote these exact words. For one, he’s French, so the words he wrote probably had a lot of àccênts and “le”, “la”, and “et” words in them.

This English quote appears to be one of the rare cases where a paraphrase has more impact than the original. None of that diminishes the power of the quote.

Obligatory image of the sea

The quote encourages leaders to cultivate intrinsic motivation as a means of leading people rather than an approach built on authority and command. Surprisingly, Cartman, with his incessant requests to respect his authority, is not an exemplar of good leadership.

If you question the value of intrinsic motivation, take a moment to watch this Dan Pink video. I’ve referenced it in the past, and I’ll keep referencing it until every single one of you (or perhaps more than one of you) watch it!

It’s easy to read this quote as praise for leadership and, by contrast, a rejection of management. As if management must, by necessity, be built on command and control. But I reject that line of thinking. Management and leadership address different needs and can be complementary.

To me, the quote contrasts leadership with a particular style of management built on hierarchy and control. This is a style that is antithetical to both building ships and shipping software.

The Valve handbook covers this well.

Hierarchy is great for maintaining predictability and repeatability. It simplifies planning and makes it easier to control a large group of people from the top down, which is why military organizations rely on it so heavily.

But when you’re an entertainment company that’s spent the last decade going out of its way to recruit the most intelligent, innovative, talented people on Earth, telling them to sit at a desk and do what they’re told obliterates 99 percent of their value. We want innovators, and that means maintaining an environment where they’ll flourish.

I anticipate some commenters will point out that, in practice, Valve might not live up to this ideal. I don’t know anything about the inner workings of Valve. I do know that with any human endeavor, there will be failures and successes. And they won’t be distributed evenly, even within a single company. Perhaps they do not live up to these ideals, but that doesn’t change the value of the ideals themselves.

The Valve handbook addresses entertainment companies, but the ideas apply to any company where the work is creative and intellectual in nature. Or, put another way, they apply to any environment where you want your workers to be creative and intellectual.

Even the handbook makes the mistake of mischaracterizing the nature of the work our military does. It assumes that the military gets the best results when folks just do what they’re told.

Leaders such as David Marquet, a former nuclear submarine commander, challenge this idea. He notes that when he stopped giving orders, his crew performed better.

This is not a polemic against managers or management. Rather, this is an encouragement for a style of management that fosters intrinsic motivation.

It’s not easy. There are a lot of factors that hinder attempts at this style of leadership. All too often, companies conflate hierarchy with structure and management with leadership. It’s important to separate and understand these concepts and how to apply them, especially when you’re a small company reaching the point where you feel the need for more structure and management.

In a follow-up post, I’ll write more about some of these points. I plan to cover what I mean when I say that leadership and management are complementary. I’ll also cover what it means to conflate all these distinct concepts.

In the meantime, as you build your next ship, I encourage you to focus on the longing that leads you to build it. What is the endless immensity of the sea in your work?

29 Aug 08:03

Photos: The Aftermath of Hurricane Harvey (34 photos)

Hurricane Harvey, the first major hurricane to make landfall in the United States in more than a decade, came ashore on the Texas coast late Friday as a Category 4 storm, destroying homes, overturning vehicles and sinking boats, severing power lines, and forcing tens of thousands of residents to flee. As Harvey, now downgraded to a tropical storm, lingers over Texas, record amounts of rain are predicted, which could spawn even more destruction in the form of catastrophic flooding.

A driver works his way through a maze of fallen utility poles damaged in the wake of Hurricane Harvey, on August 26, 2017, in Taft, Texas. (Eric Gay / AP)
29 Aug 08:01

The Unprecedented Flooding in Houston, in Photos (28 photos)

After Hurricane Harvey made landfall late Friday, the winds calmed, but the rainfall kept up, dropping historic amounts of water on southeastern Texas—with even more predicted in the next few days. Rising floodwaters have forced tens of thousands to flee, overburdening emergency services and filling shelters. So far, at least five deaths have been blamed on the storm. State and local authorities, as well as countless volunteers, have been working hard all weekend to rescue stranded residents and offer assistance to those in need.

Houston Police SWAT officer Daryl Hudeck carries Connie Pham and her 13-month-old son Aiden after rescuing them from their home surrounded by floodwaters from Tropical Storm Harvey on August 27, 2017, in Houston, Texas. (David J. Phillip / AP)
23 Aug 15:03

Referencing .NET Standard Assemblies from both .NET Core and .NET Framework

by Scott Hanselman

Lots of .NET Projects sharing a .NET Standard Library

I like getting great questions in email but I LOVE getting great questions in email with a complete and clear code repro (reproduction) that's in a git somewhere. Then I can just clone, build (many many bonus points for a clean build) and check out the bug.

I got a great .NET Core question and repro here https://github.com/ScarlettCode/Example. I forked it, fixed it, and submitted a PR. Here's the question and issue and today's fix.

The project has a C# library project (an assembly) that is written to .NET Standard 2.0. You'll recall that the .NET Standard isn't a runtime or a library in itself, but rather an interface. By targeting it, they're saying that this library will work anywhere the .NET Standard is supported, like Linux, Mac, and Windows.

Here's that main .NET Standard Library called "Example.Data" written in C#.
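
(The Example.Data code itself isn't included in this excerpt. As a stand-in, a minimal netstandard2.0 data library using the EF Core in-memory provider - which, as described below, is what the repro pulls in - might look roughly like this. Widget, WidgetContext, and WidgetRepository are hypothetical names of mine, not from the repro.)

using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

namespace Example.Data
{
    public class Widget
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class WidgetContext : DbContext
    {
        public DbSet<Widget> Widgets { get; set; }

        // EF Core's in-memory provider; no real database needed for the repro.
        protected override void OnConfiguring(DbContextOptionsBuilder options)
            => options.UseInMemoryDatabase("widgets");
    }

    public class WidgetRepository
    {
        // The WinForms, console, and .NET Core apps all call into code shaped like this.
        public List<Widget> GetAll()
        {
            using (var db = new WidgetContext())
            {
                return db.Widgets.ToList();
            }
        }
    }
}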

Then he had:

  • Windows Forms (WinForms) application in VB.NET using .NET "full" Framework 4.6
  • Console Application also using .NET Framework 4.6
  • Console Application using .NET Core 2.0

Each of these apps references the Example.Data library. The Example.Data library then pulls in a database access library, Microsoft.EntityFrameworkCore.InMemory, via NuGet.

WinForms app -> Data Access library -> Some other library. A->B->C where B and C are packages from NuGet.

The .NET Core console builds and runs great. However, when the other projects are run you get this error:

Could not load file or assembly
'Microsoft.EntityFrameworkCore, Version=2.0.0.0,
Culture=neutral, PublicKeyToken=adb9793829ddae60'
or one of its dependencies. The system cannot find
the file specified.

Pretty low level error, right? First thing is to check the bin folder (the results of the compile) for a project that doesn't run. Looks like there's no Microsoft.EntityFrameworkCore there. Why not? It's assembly "C" downstream of "A" and "B". EntityFramework's assembly is referred to by the Example.Data assembly...but why didn't it get copied in?

The "full" Framework projects are using the older .csproj format and by default, they use package.config to manage dependencies. The newer projects can reference Packages as first-class references. So we need to tell ALL projects in this solution to manage and restore their packages as "PackageReferences."

I can open up the .csproj file for the Framework projects and add this line within the first <PropertyGroup> like this to change the restore style:

 <RestoreProjectStyle>PackageReference</RestoreProjectStyle>

As Oren wisely says:

"Using .NET Standard requires you to use PackageReference to eliminate the pain of “lots of packages” as well as properly handle transitive dependencies. While you may be able to use .NET Standard without PackageReference, I wouldn’t recommend it."

I can also change the default within VS's Package Management options here in this dialog.

Hope this helps.



21 Aug 16:54

Gartner: the 32 technologies with the highest strategic stakes in 2017

by Thibaut

As it does every year, the Gartner institute has unveiled its 2017 emerging technologies cycle, and we have picked out the 32 technologies of tomorrow that the analyst firm considers to carry the highest strategic stakes.

As usual, Gartner has unveiled its Hype Cycle for Emerging Technologies, which characterizes emerging technological innovations according to the expectations placed on them over time (see our articles on the 2014 cycle, the 2015 cycle, and the 2016 cycle).

The high strategic stakes

We have detailed each of the technologies that, according to Gartner, carry high strategic stakes:

Among the technologies cited in this presentation:

1- Virtual reality

Virtual reality consists of immersing a user in a computer simulation. This simulation recreates imaginary environments based on visual, audio, and sometimes even tactile elements.

Today this immersion is done through a headset (sometimes paired with accessories) that reproduces the user's real movements in the virtual environment.

2- Augmented reality

Augmented reality refers to the various methods for realistically integrating virtual objects or information into a sequence of real images.

Applications in gaming and in industrial maintenance are starting to emerge.

3- Software-defined security

Software-defined security is a management architecture model in which information security is decoupled from each physical component and managed by software.

Among other things, this software detects intrusions, segments the network, and manages access controls.

This approach fits naturally into the new software-defined infrastructures (SDDC data centers, IaaS, ITaaS, and so on).

4- Data lifecycle management

Here Gartner describes enterprise data management techniques for classifying, storing, and sharing data, and then retrieving it easily.

This requires agreeing on a classification, a vocabulary, and a data lifecycle management process shared across the company.

The emergence of powerful tools - from intuitive database search systems to data mining algorithms - enables better coordination between a company's entities as well as efficient use of data.

5- Cognitive expert advisors

Cognitive expert advisors are software systems that assist, advise, or stand in for specific professions.

They do so by providing relevant raw data or by analyzing collected data.

In radiology, for example, software can point out abnormal areas to make the radiologist's work easier. The software is configured beforehand to analyze any type of useful data, including unstructured data.

6- Drones for professional use

The range of drone applications within companies is expanding rapidly, despite the development of regulations in many countries.

Compared with consumer drones, professional drones often have a larger payload capacity, longer flight times, and more precise sensors that improve flight safety.

They are often specialized for a single use: last-mile delivery, first aid, surveillance, asset maintenance (for example aircraft or wind turbines), image capture, and so on.

7- Blockchain

Blockchain is a technology for storing and transmitting information that is transparent, secure, and operates without a central controlling body. The technology uses a database that contains the history of all the exchanges made since its creation.

This database is secure and distributed (shared by its various users), with no intermediary (which lets anyone verify the validity of the chain).

Blockchain is already used for cryptocurrency, but also for exchanging all sorts of goods and services.

8- Cognitive computing

Cognitive computing consists of simulating human thought processes in a computerized model. It relies on self-learning programs capable of acquiring data, processing information, learning, and passing on knowledge.

The ability to understand humans (language, tone, body language) improves the system's perception and the relevance of its input data. The learning methods combine statistical approaches (Machine Learning) with approaches that reproduce human behavior (based on probabilities rather than preconfigured rules).

As a result, the system does not answer with "true or false" but with a probabilistic answer, refined according to the context and the interactions between the machine and its user.

9- Electronic nanotubes

Electronic nanotubes (or nanoelectronics more generally) refer to the use of tubes smaller than 100 nanometers in the design of electronic components such as transistors. Often made of carbon, silicon, or other semiconductor materials, these nanotubes open the way to new circuits with novel electrical or mechanical characteristics thanks to their nanoscopic size. They are particularly attractive for their miniaturization and low energy consumption.

10- Autonomous vehicles

Autonomous vehicles are vehicles that can drive without intervention from a human driver thanks to self-guidance technology. This technology relies in particular on a multitude of sensors, powerful algorithms, and servo controls (which carry out the actions commanded by the software), notably on the steering wheel, accelerator, and brakes.

11- Machine Learning

Machine Learning technologies refer to algorithms that allow a system to adapt its analyses and its behavior based on the analysis of empirical data coming from a database or from sensors. These algorithms use statistical methods to learn operating rules from training data, and then apply those rules to new data.

12- Deep Learning

Deep Learning is a subset of Machine Learning in which the system itself works out which features of the data to analyze and optimizes the analysis to make accurate predictions. To do this, Deep Learning works in a way similar to our neurons: a neural network with an algorithm that learns by adjusting the connections between neurons according to the results of its predictions. Deep Learning is powerful but very hungry for input data and computing power. It is therefore often used for visual recognition, translation, and customer behavior analysis, but it is hard to generalize to every problem.

13- The connected home

The connected home refers to all the connected objects in a house (thermostats, cameras, environmental sensors, and so on) that provide comfort, security, entertainment, and energy savings, and make the household easier to run.

14- Virtual assistants

A virtual assistant is a piece of software capable of performing tasks or services for an individual based on data about the requester (personal characteristics, geographic location, and so on) as well as on a variety of online sources (weather, traffic conditions, stock prices, opening hours, retail prices, and so on).

The assistant, on a smartphone or on a dedicated physical device (a smart speaker, for example), takes the form of a character that can interact with the user by voice or by text.

15- Internet of Things platforms

Internet of Things platforms are online spaces for storing and processing the data collected by connected objects.

They handle both "hot" data (managing the fleet of connected objects, real-time rules governing how the objects operate) and "cold" data (learning, algorithms).

16- Intelligent robots

Intelligent robots are robots capable of improvising and making decisions that are not necessarily predictable.

These decisions are driven by reasoning logic combined with a random element, simulating human reasoning.

17- Edge computing

Edge computing consists of moving from a Cloud-centered computing architecture to a distributed computing architecture, located as close as possible to the equipment at the network edge (sensors, connected objects).

For example, it involves putting intelligence into embedded devices.

Combined with Cloud Computing, this approach provides gains in computing speed, availability, and storage, as well as the ability to operate "locally," without an internet connection.

18- Augmented analytics

Augmented analytics refers to software solutions that let users discover correlations and cause-and-effect relationships, or intelligently analyze data sets.

It lets every user apply advanced analytical techniques without needing deep IT expertise. The solutions take care of grouping, preparing, integrating, analyzing, and displaying the data.

Thanks to this, users can easily discover relevant correlations and apply them to strategic or operational activities.

19- The connected (and smart) workplace

Thanks in particular to the Internet of Things (IoT), workplaces can take advantage of the data collected (location, context, time of day, ongoing projects, and so on) to improve the efficiency of employees and organizations. These spaces can become more modular and adaptable, and can simplify today's procedures for information sharing, authentication, communication, security, and reporting.

20- Chatbots

Conversational digital systems (chatbots) are algorithms that respond to requests formulated by the user, sometimes drawing on external data sources via APIs.

These requests are expressed in natural human language and no longer rely solely on keywords, as search engines do.

21- Direct neural interfaces

Direct neural interfaces (brain-computer interfaces) are communication interfaces between a brain and an external device (a computer, an electronic system, and so on) designed to assist, enhance, or repair failing human cognitive or motor functions.

22- Volumetric displays

A volumetric display is a technology for digitally displaying a virtual object in three-dimensional physical space so that it is visible to the naked eye. The electromagnetic properties of certain materials, such as graphene, are showing promising advances in this field.

23- Quantum computers

Quantum computers are machines whose calculations rely on the quantum properties of matter, for example the superposition and entanglement of quantum states. Unlike a classical computer, which works on binary data (0s and 1s), a quantum computer works on qubits whose state can hold several values at once. This can improve the efficiency (calculation speed) of the system.

24- Digital Twin

A Digital Twin consists of modeling a real physical asset (for example an aircraft engine, a factory, or a simple connected object) and pairing it with a digital replica. Sensors on the asset make it possible to reproduce its physical evolution on its virtual twin.

Digital twins thus make it possible to track the asset's evolution remotely and to simulate its reactions to its environment, in order to plan maintenance operations as well as possible and anticipate failures.

Finally, digital twins let several operators interact with the asset at once (like a Google Doc that several people can edit simultaneously).

25- Serverless PaaS

A serverless infrastructure is one in which the developer does not physically manage any servers. The developer still controls a certain amount of server-side logic and defines the scale at which it runs (as PaaS).

The application code runs in "ephemeral containers" provided by third-party services (mainly Cloud players: Amazon, Google, and so on). The developer pays per use, according to the computing time needed, which improves the match between the power required and the resources allocated.

Delegating server management thus helps optimize development and feature costs, reduce operational complexity (no servers to manage), and better handle elasticity of resources.

26- 5G

5G (the fifth generation of mobile telephony standards) will offer faster speeds (on the order of Gb/s), shorter response times, a minimum quality of service while the user is on the move, and network characteristics that can adapt to connect objects with different needs (performance, throughput, energy consumption, and so on).

With this last capability, 5G is very clearly aimed at connected objects, and promises to connect 7 trillion of them.

27- Human augmentation technologies

Human augmentation technologies (the "augmented human") refer to technologies that make it possible to overcome the current limits of the human body by natural or artificial means.

28- Neuromorphic components

Neuromorphic components are electronic chips that aim to reproduce a network of artificial neurons in silicon.

Thanks to a low-power architecture specifically suited to Deep Learning algorithms, they can perform certain tasks (such as recognizing the contents of an image or recognizing speech) that classical processors struggle with but that our brains handle very well.

29- Neural reinforcement learning

Reinforcement learning describes a type of algorithmic learning whose inputs are constraints, a set of possible actions, and an objective to reach. The algorithm trains itself by exploring different attempts across its possible actions.

It learns by iterating on how far each possible action takes it from the objective.

When combined with Deep Learning, this kind of learning lets the algorithm recognize its environment on its own and reach its objectives. It was by using deep reinforcement learning that AlphaGo beat the best Go player.

30- Artificial general intelligence

Artificial general intelligence refers to a system capable of improvising and making decisions in a domain for which it has not been specifically trained.

The decisions it makes are not always predictable; they are driven by reasoning logic combined with a random element, simulating human reasoning.

31- 4D printing

4D printing refers to a technology for 3D-printing objects whose shape can change over time in response to external stimuli.

Using a programmable anisotropic material, this process, still under development, could be used to create smart fabrics, flexible electronics, biomedical equipment, and more.

32- Smart dust

Smart dust is a wireless network of tiny microelectromechanical systems that can measure quantities such as light, temperature, or vibration.

These systems are scattered in a given environment (air, water, and so on) to take measurements there and send the information back over radio frequency.

16 Aug 09:44

Exploring refit, an automatic type-safe REST library for .NET Standard

by Scott Hanselman

I dig everything that Paul Betts does. He's a lovely person and a prolific coder. One of his recent joints is called Refit. It's a REST library for .NET that is inspired by Square's Retrofit library. It turns your REST API into a live interface:

public interface IGitHubApi
{
    [Get("/users/{user}")]
    Task<User> GetUser(string user);
}

That's an interface that describes a REST API that's elsewhere. Then later you just make a RestService.For<YourInterface> and you go to town.

var gitHubApi = RestService.For<IGitHubApi>("https://api.github.com");


var octocat = await gitHubApi.GetUser("octocat");

That's lovely! It is a .NET Standard 1.4 library which means you can use it darn near everywhere. Remember that .NET Standard isn't a runtime, it's a version interface - a list of methods you can use under many different ".NETs." You can use Refit on UWP, Xamarin.*, .NET "full" Framework, and .NET Core, which runs basically everywhere.

Sure, you can make your own HttpClient calls, but that's a little low level and somewhat irritating. Sure, you can look for a .NET SDK for your favorite REST interface, but what if it doesn't have one? Refit strikes a nice balance between the low-level and the high-level.
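
For comparison, here's roughly what that same GitHub call looks like the "low level" way with raw HttpClient and Newtonsoft.Json. This is my sketch, not code from the post, and the User class is a minimal stand-in with just a couple of fields:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class User
{
    public string login { get; set; }
    public string name { get; set; }
}

public static class GitHubByHand
{
    public static async Task<User> GetUserAsync(string userName)
    {
        using (var http = new HttpClient())
        {
            // GitHub's API requires a User-Agent header on every request.
            http.BaseAddress = new Uri("https://api.github.com");
            http.DefaultRequestHeaders.UserAgent.ParseAdd("refit-comparison-sample");

            // The URL building, the GET, and the deserialization that Refit
            // generates for you are all hand-rolled here.
            var json = await http.GetStringAsync($"/users/{userName}");
            return JsonConvert.DeserializeObject<User>(json);
        }
    }
}

With Refit, all of that plumbing collapses into the RestService.For<T>() call and a one-line interface method.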

I'll give an example and use it as a tiny exercise for Refit. I have a service that hosts a realtime feed of my blood sugar, as I'm a Type 1 Diabetic. Since I have a Continuous Glucose Meter that is attached to me and sending my sugar details to a web service called Nightscout running in Azure, I figured it'd be cool to use Refit to pull my sugar info back down with .NET.

The REST API for Nightscout is simple, but does have a lot of options, query strings, and multiple endpoints. I can start by making a simple interface for the little bits I want now, and perhaps expand the interface later to get more.

For example, if I want my sugars, I would go

https://MYWEBSITE/api/v1/entries.json?count=10

And get back some JSON data like this:

[
  {
    _id: "5993c4aa8d60c09b63ba1c",
    sgv: 162,
    date: 1502856279000,
    dateString: "2017-08-16T04:04:39.000Z",
    trend: 4,
    direction: "Flat",
    device: "share2",
    type: "sgv"
  },
  {
    _id: "5993c37d8d60c09b93ba0b",
    sgv: 162,
    date: 1502855979000,
    dateString: "2017-08-16T03:59:39.000Z",
    trend: 4,
    direction: "Flat",
    device: "share2",
    type: "sgv"
  }
]

Where "sgv" is serum glucose value, or blood sugar.

Starting with .NET Core 2.0 and the SDK that I installed from http://dot.net, I'll first make a console app from the command line and add refit like this:

C:\users\scott\desktop\refitsugars> dotnet new console

C:\users\scott\desktop\refitsugars> dotnet add package refit

Here's my little bit of code.

  • I made an object shaped like each record. Added aliases for weirdly named stuff like "sgv"
  • COOL SIDE NOTE: I added <LangVersion>7.1</LangVersion> to my project so I could have my public static Main entry point be async. That's new as many folks have wanted to have a "public static async void Main()" equivalent.

After that it's REALLY lovely and super easy to make a quick strongly-typed REST Client in C# for pretty much anything. I could see myself easily extending this to include the whole NightScout diabetes management API without a lot of effort.

using Newtonsoft.Json;
using Refit;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace refitsugars
{
    public interface INightScoutApi
    {
        [Get("/api/v1/entries.json?count={count}")]
        Task<List<Sugar>> GetSugars(int count);
    }

    public class Sugar
    {
        [JsonProperty(PropertyName = "_id")]
        public string id { get; set; }

        [JsonProperty(PropertyName = "sgv")]
        public int glucose { get; set; }

        [JsonProperty(PropertyName = "dateString")]
        public DateTime itemDate { get; set; }

        public int trend { get; set; }
    }

    class Program
    {
        public static async Task Main(string[] args)
        {
            var nsAPI = RestService.For<INightScoutApi>("https://MYURL.azurewebsites.net");
            var sugars = await nsAPI.GetSugars(3);
            sugars.ForEach(x => { Console.WriteLine($"{x.itemDate.ToLocalTime()} {x.glucose} mg/dl"); });
        }
    }
}

And here's the result of the run.

PS C:\Users\scott\Desktop\refitsugars> dotnet run

8/15/2017 10:29:39 PM 110 mg/dl
8/15/2017 10:24:39 PM 108 mg/dl
8/15/2017 10:19:40 PM 109 mg/dl

You should definitely check out Refit. It's very easy and quite fun. The fact that it targets .NET Standard 1.4 means you can use it in nearly all your .NET projects, and it already has creative people thinking of cool ideas.


13 Aug 05:49

.NET and WebAssembly - Is this the future of the front-end?

by Scott Hanselman

Six years ago, Erik Meijer and I were talking about how JavaScript is/was an assembly language. It turned into an interesting discussion/argument (some people really didn't buy it), but the idea stuck around. Today the WebAssembly world is marching forward: it's supported in Chrome and Firefox, and in development in Edge, Opera, and Safari.

"The avalanche has begun, it's too late for the pebbles to vote." - Ambassador Kosh

Today in 2017, WebAssembly is absolutely a thing and you can learn about it at http://webassembly.org. I even did a podcast on WebAssembly with Mozilla Fellow David Bryant (you really should check out my podcast, I'm very proud of it. It's good.)

The classic JavaScript TODO app, written with C# and .NET and Blazor

The image above is from Steve Sanderson's NDC presentation. He's writing the classic client-side JavaScript ToDo application...except he's writing the code in C#.

What is WebAssembly?

"WebAssembly or wasm is a low-level bytecode format for in-browser client-side scripting, evolved from JavaScript." You can easily compile to WebAssembly from C and C++ today...and more languages are jumping in to include WebAssembly as a target every day.

Since I work in open source .NET and since .NET Core 2.0 is cross-platform with an imminent release, it's worth exploring where WebAssembly fits into a .NET world.

Here are some projects I have identified that help bridge the .NET world and the WebAssembly world. I think that this is going to be THE hot space in the next 18 months.

WebAssembly for .NET

Despite its overarching name, this OSS project is meant to consume WASM binary files and execute them from within .NET assemblies. To be clear, this isn't compiling .NET languages (C#, VB.NET, F#) into WebAssembly; this is for using WebAssembly as if it's any other piece of reusable compiled code. Got an existing WASM file you REALLY want to call from .NET? This is for that.

Interestingly, this project doesn't spin up a V8 or Chakra JavaScript engine to run WASM. Instead, it reads in the bytecode and converts it to .NET via System.Reflection.Emit. Interesting stuff!

Mono and WebAssembly

One of the great things happening in the larger .NET Ecosystem is that there is more than one ".NET" today. In the past, .NET was a thing that you installed on Windows and generally feared. Today, there's .NET 4.x+ on basically every Windows machine out there, there's .NET Core that runs in Docker, on Mac, Windows, and a dozen Linuxes...even Raspberry Pi, and Mono is another instance of .NET that allows you to run code in dozens of other platforms. There's multiple "instances of .NET" out there in active development.

The Mono Project has two prototypes using Mono and WebAssembly.

The first one uses the traditional full static compilation mode of Mono; it compiles both the Mono C runtime and the Mono class libraries, along with the user code, into WebAssembly code. It produces one large statically compiled application. You can try this fully statically compiled Hello World here. The full static compilation currently lives here.

So that's a totally statically compiled Hello World...it's all of Mono and your app compiled into Web Assembly. They have another prototype with a different perspective:

The second prototype compiles the Mono C runtime into web assembly, and then uses Mono’s IL interpreter to run managed code. This one is a smaller download, but comes at the expense of performance. The mixed mode execution prototype currently lives here.

Here they've got much of Mono running in Web Assembly, but your IL code is interpreted. One of the wonderful things about Computer Science - There is more than one way to do something, and they are often each awesome in their own way!

"Blazor" - Experimental UI Framework running .NET in the browser

With a similar idea as the Mono Project's second prototype, Steve Sanderson took yet another "instance of .NET," the six-year-old open source DotNetAnywhere (DNA) project, and compiled it into Web Assembly. DNA was an interpreted .NET runtime written in portable C. It takes standard IL or CIL (Common Intermediate Language) and runs it "on resource-constrained devices where it is not possible to run a full .NET runtime (e.g. Mono)." Clever, huh? What "resource-constrained device" do we have here six years later? Why, it's the little virtual machine that could - the JavaScript VM that your browser already has, now powered by a standard bytecode format called WebAssembly.

To prove the concept, Steve compiles DotNetAnywhere to WASM but then takes it further. He's combined standard programming models that we see on the web with things like Angular, Knockoutjs, or Ember, except rather than writing your web applications' UI in JavaScript, you write in C# - a .NET language.

Here in the middle of some Razor (basically HTML with C# inline) pages, he does what looks like a call to a backend. This is C# code, but it'll run as WASM on the client side within a Blazor app.

@functions {
    WeatherForecast[] forecasts;

    override protected async Task InitAsync()
    {
        using (var client = new HttpClient())
        {
            var json = await client.GetStringAsync(AbsoluteUrl("/api/SampleData/WeatherForecasts"));
            forecasts = JsonUtil.Deserialize<WeatherForecast[]>(json);
        }
    }
}

This would allow a .NET programmer to use the same data models on the client and the server - much like well-factored JavaScript should today - as well as using other .NET libraries they might be familiar or comfortable with.
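
As a tiny illustration of that "same data models" point, the WeatherForecast type used in the snippet above could just be an ordinary C# class compiled into both the server project and the Blazor client. The exact shape below is a guess of mine, not taken from Steve's sample:

using System;

// Lives in a shared class library referenced by both the ASP.NET server
// and the client-side Blazor project, so the JSON contract can't drift.
public class WeatherForecast
{
    public DateTime Date { get; set; }
    public int TemperatureC { get; set; }
    public string Summary { get; set; }
}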

Why do this insane thing? "To see how well such a framework might work, and how much anyone would care." How far could/should this go? David Fowler already has debugging working (again this is ALL prototypes) in Visual Studio Code. Don't take my word for it, watch the video as Steve presents the concept at the NDC Conference.

Blazor as a prototype has a number of people excited, and there was a Blazor Hackathon recently that produced some interesting samples including a full-blown app.

Other possibilities?

There are lots of other projects that are compiling or transpiling things to JavaScript. Could they be modified to support WebAssembly? You can take F# and compile it to JavaScript with F#'s Fable project, and some folks have asked about WebAssembly.

At this point it's clear that everyone is prototyping and hacking and enjoying themselves.

What do YOU think about WebAssembly?


11 Aug 14:32

Auditing ASP.NET MVC Actions

by Phil Haack

Phil Haack is writing a blog post about ASP.NET MVC? What is this, 2011?

No, do not adjust your calendars. I am indeed writing about ASP.NET MVC in 2017.

It’s been a long time since I’ve had to write C# to put food on the table. My day job these days consists of asking people to put cover sheets on TPS reports. And only one of my teams even uses C# anymore, the rest moving to JavaScript and Electron. On top of that, I’m currently on an eight week leave (more on that another day).

But I’m not completely disconnected from ASP.NET MVC and C#. Every year I spend a little time on a side project I built for a friend. He uses the site to manage and run a yearly soccer tournament.

Every year, it’s the same rigmarole. It starts with updating all of the NuGet packages. Then fixing all the breaking changes from the update. Only then do I actually add any new features. At the moment, the project is on ASP.NET MVC 5.2.3.

I’m not ready to share the full code for that project, but I plan to share some interesting pieces of it. The first piece is a little something I wrote to help make sure I secure controller actions.

The Problem

You care about your users. If not, at least pretend to do so. With that in mind, you want to protect them from potential Cross Site Request Forgery attacks. ASP.NET MVC includes helpers for this purpose, but it’s up to you to apply them.

By way of review, there are two steps to this. The first step is to update the view and add the anti-forgery hidden input to your HTML form via the Html.AntiForgeryToken() method. The second step is to validate that token in the action that receives the form post. Do this by decorating that action method with the [ValidateAntiForgeryToken] attribute.

You also care about your data. If you have actions that modify that data, you may want to ensure that the user is authorized to make that change via the [Authorize] attribute.
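
To make those two steps concrete, here’s a minimal sketch (mine, not code from the post) of how the attributes line up on a controller. TournamentController and TournamentEditModel are hypothetical names:

using System.Web.Mvc;

public class TournamentEditModel
{
    public string Name { get; set; }
}

[Authorize]
public class TournamentController : Controller
{
    // GET actions render the form; they should never modify data.
    // The corresponding view emits the hidden token via @Html.AntiForgeryToken().
    [HttpGet]
    public ActionResult Edit(int id)
    {
        return View();
    }

    // Step two: the action receiving the form post validates that hidden token.
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Edit(int id, TournamentEditModel model)
    {
        // ... save the changes ...
        return RedirectToAction("Index");
    }
}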

This is a lot to track. Especially if you’re in a hurry to build out a site. On this project, I noticed I had forgotten to apply some of these attributes in places where they belonged. When I fixed the few places I happened to notice, I wondered: what places did I miss?

It would be tedious to check every action by hand. So I automated it. I wrote a simple controller action that reflects over every controller action. It then displays all the actions that might need one of these attributes.

Here’s a screenshot of it in action.

Screenshot of Site Checker in action

There are a few important things to note.

Which actions are checked?

The checker looks for all actions that might modify an HTTP resource. In other words, any action that responds to the following HTTP verbs: POST, PUT, PATCH, DELETE. In code, these correspond to action methods decorated with the following attributes: [HttpPost], [HttpPut], [HttpPatch], [HttpDelete], respectively. The presence of these attributes is a good indicator that the action method might modify data. Action methods that respond to GET requests should never modify data.

Do all these need to be secured?

No.

For example, it wouldn’t make sense to decorate your LogOn action with [Authorize] as that violates causality. You don’t want to require users to be already authenticated before they log in to your site. That’s just silly sauce.

There’s no way for the checker to understand the semantics of your action method code to determine whether an action should be authorized or not. So it just lists everything it finds. It’s up to you to figure out if there’s any action (no pun intended) required on your part.

How do I deploy it?

All you have to do is copy and paste this SystemController.cs file into your ASP.NET MVC project. It just makes it easier to compile this into the same assembly where your controller actions exist.

Next, make sure there’s a route that’ll hit the Index action of the SystemController. If you have the default route that ASP.NET MVC project templates include present, you would visit this at /system/index.
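
For reference, the “default route” in question is the one the MVC 5 project template generates in RouteConfig.cs; its {controller}/{action} pattern is what makes /system/index resolve to SystemController.Index(). Shown here only as a reminder, not as new code to add:

using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // "/system/index" matches {controller}/{action} and hits SystemController.Index().
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}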

Be aware that if you accidentally deploy SystemController, it will only respond to local requests (requests from the hosting server itself) and not to public requests. You really don’t want to expose this information to the public. That would be an open invitation to be hacked. You may like being Haacked, but it’s no fun to be hacked.

And that’s it.

How’s it work?

I kept all the code in a single file, so it’s a bit ugly, but should be easy to follow.

The key part of the code is how I obtain all the controllers.

var assembly = Assembly.GetExecutingAssembly();

var controllers = assembly.GetTypes()
    .Where(type => typeof(Controller).IsAssignableFrom(type)) //filter controllers
    .Select(type => new ReflectedControllerDescriptor(type));

The first part looks for all types in the currently executing assembly. But notice that I wrap each type with a ReflectedControllerDescriptor. That type contains the useful GetCanonicalActions() method to retrieve all the actions.

It would have been possible for me to get all the action methods without using GetCanonicalActions by calling type.GetMethods(...) and filtering the methods myself. But GetCanonicalActions is a much better approach since it encapsulates the same logic ASP.NET MVC uses to locate actions.

As such, it handles cases such as when an action method is named differently from the underlying class method via the [ActionName("SomeOtherMethod")] attribute.
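
To give a feel for how the rest of the checker hangs together (this is a simplified sketch of the idea, not the actual SystemController code), here’s roughly how you could walk those descriptors and flag modifying actions that are missing the attributes, continuing from the controllers variable above:

var findings =
    from controller in controllers
    from action in controller.GetCanonicalActions()
    where action.GetCustomAttributes(typeof(HttpPostAttribute), true).Any()
       || action.GetCustomAttributes(typeof(HttpPutAttribute), true).Any()
       || action.GetCustomAttributes(typeof(HttpPatchAttribute), true).Any()
       || action.GetCustomAttributes(typeof(HttpDeleteAttribute), true).Any()
    select new
    {
        Controller = controller.ControllerName,
        Action = action.ActionName,
        // Missing [ValidateAntiForgeryToken] on the action itself.
        MissingAntiForgery = !action.GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), true).Any(),
        // Missing [Authorize] on both the action and its controller.
        MissingAuthorize = !action.GetCustomAttributes(typeof(AuthorizeAttribute), true).Any()
                        && !controller.GetCustomAttributes(typeof(AuthorizeAttribute), true).Any()
    };

From there it’s just a matter of keeping the rows where either flag is true and rendering them in the view.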

What’s Next?

There are so many improvements we could make to this (notice how I’m using “we” in a bald attempt to pull you into this?). For example, the code only looks at the HTTP* attributes. To be completely correct, it should also check the [AcceptVerbs] attribute. I didn’t bother because I never use that attribute, but maybe you have some legacy code that does.

Also, there might be other things you want to check. For example, what about mass assignment attacks? I didn’t bother because I tend to use input models for my action methods. But if you use the [Bind] attribute, you might want this checker to look for issues there.

Well that’s great. I don’t plan to spend a lot of time on this, but I’d be happy to accept your contributions! The source is on GitHub.

Let me know if this is useful to you or if you use something better.

29 Jul 07:36

dotnet sdk list and dotnet sdk latest

by Scott Hanselman

dotnet sdk list

Can someone make .NET Core better with a simple global command? Fanie Reynders did and he did it in a simple and elegant way. I'm envious, in fact, because I spec'ed this exact thing out in a meeting a few months ago but I could have just done it like he did and I would have used fewer keystrokes!

Last year when .NET Core was just getting started, there was a "DNVM" helper command that you could use to simplify dealing with multiple versions of the .NET SDK on one machine. Later, rather than 'switching global SDK versions,' switching was simplified to be handled on a folder by folder basis. That meant that if you had a project in a folder with no global.json that pinned the SDK version, your project would use the latest installed version. If you liked, you could create a global.json file and pin your project's folder to a specific version. Great, but I would constantly have to google to remember the format for the global.json file, and I'd constantly go into c:\Program Files\dotnet in order to get a list of the currently installed SDKs. I proposed that Microsoft make a "dotnet sdk list" command and the ability to pin down versions like "dotnet sdk 1.0.4" and even maybe install new ones with "dotnet sdk install 2.1.0" or something.

Fanie did all this for us except the installation part, and his implementation is clean and simple. It's so simple that I just dropped his commands into my Dropbox's Utils folder that I have in my PATH on all my machines. Now every machine I dev on has this extension.

UPDATE: There is both a Windows version and a bash version here.

Note that if I type "dotnet foo" the dotnet.exe driver will look in the path for an executable command called dotnet-foo.* and run it.

C:\Users\scott\Desktop>dotnet foo

No executable found matching command "dotnet-foo"

C:\Users\scott\Desktop>dotnet sdk
No executable found matching command "dotnet-sdk"

He created a dotnet-sdk.cmd you can get on his GitHub. Download his repo and put his command somewhere in your path. Now I can do this:

C:\Users\scott\Desktop>dotnet sdk list

The installed .NET Core SDKs are:
1.0.0
1.0.0-preview2-003131
1.0.0-rc3-004530
1.0.2
1.0.4

Which is lovely, but the real use case is this:

C:\Users\scott\Desktop\fancypants>dotnet --version

1.0.4

C:\Users\scott\Desktop\fancypants>dotnet sdk 1.0.0
Switching .NET Core SDK version to 1.0.0

C:\Users\scott\Desktop\fancypants>dotnet --version
1.0.0

C:\Users\scott\Desktop\fancypants>dir
Volume in drive C is Windows
Directory of C:\Users\scott\Desktop\fancypants

07/26/2017 04:53 PM 47 global.json
1 File(s) 47 bytes

Then if I go "dotnet sdk latest" it just deletes the global.json. Perhaps in a perfect world it should just remove the sdk JSON node in case global.json has been modified, but for now it's great. Without the global.json the dotnet.exe will just use your latest installed SDK.

This works with .NET Core 2.0 as well. This should be built-in, but for now it's a very nice example of a clean extension to dotnet.exe.

Oh, and by the way, he also made a ".net.cmd" so you can do this with all your dotnet.exe commands.

.NET run

Give these commands a try!


29 Jul 07:33

Peachpie - Open Source PHP Compiler to .NET and WordPress under ASP.NET Core

by Scott Hanselman

The Peachpie PHP compiler project joined the .NET Foundation this week and I'm trying to get my head around it. PHP in .NET? PHP on .NET? Under .NET? What compiles to what? Why would I want this? How does it work? Does it feel awesome or does it feel gross?

Just drink this in.

C:\Users\scott\Desktop\peachcon> type program.php

<?php

function main()
{
echo "Hello .NET World!";
}

main();

C:\Users\scott\Desktop\peachcon> dotnet run
Hello .NET World!

Just like that. Starting from a .NET SDK (They say 1.1, although I used a 2.0 preview) you just add their templates

dotnet new -i Peachpie.Templates::*

Then dotnet new now shows a bunch of php options.

C:\Users\scott\Desktop\peachcon> dotnet new | find /i "php"

Peachpie console application peachpie-console PHP Console
Peachpie Class library peachpie-classlibrary PHP Library
Peachpie web application peachpie-web PHP Web/Empty

dotnet new peachpie-console for example, then dotnet restore and dotnet run. Boom.

NOTE: I did have to comment out his one line "<Import Project="$(CSharpDesignTimeTargetsPath)" />" in their project file that doesn't work at the command line. It's some hack they did to make things work in Visual Studio but I'm using VS Code. I'm sure it's an alpha-point-in-time thing.

It's really compiling PHP into .NET Intermediate Language!

PHP to .NET

You can see my string here:

Hello .NET World inside a PHP app inside the CLR

But...why? Here's what they say, and much of it makes sense to me.

  1. Performance: compiled code is fast and also optimized by the .NET Just-in-Time Compiler for your actual system. Additionally, the .NET performance profiler may be used to resolve bottlenecks.
  2. C# Extensibility: plugin functionality can be implemented in a separate C# project and/or PHP plugins may use .NET libraries.
  3. Sourceless distribution: after the compilation, most of the source files are not needed.
  4. Power of .NET: Peachpie allows the compiled WordPress clone to run in a .NET JIT'ted, secure and manageable environment, updated through windows update.
  5. No need to install PHP: Peachpie is a modern compiler platform and runtime distributed as a dependency to your .NET project. It is downloaded automatically on demand as a NuGet package or it can be even deployed standalone together with the compiled application as its library dependency.

PHP does have other VMs/Runtimes that are used (beyond just PHP.exe) but the idea that I could reuse code between PHP and C# is attractive, not to mention the "PHP as dependency" part. Imagine if I have an existing .NET shop or project and now I want to integrate something like WordPress?

PHP under ASP.NET Core

Their Web Sample is even MORE interesting, as they've implemented PHP as ASP.NET Middleware. Check this out. See where they pass in the PHP app as an assembly they compiled?

using Peachpie.Web;

namespace peachweb.Server
{
    class Program
    {
        static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseUrls("http://*:5004/")
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }

    class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Adds a default in-memory implementation of IDistributedCache.
            services.AddDistributedMemoryCache();

            services.AddSession(options =>
            {
                options.IdleTimeout = TimeSpan.FromMinutes(30);
                options.CookieHttpOnly = true;
            });
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseSession();

            app.UsePhp(new PhpRequestOptions(scriptAssemblyName: "peachweb"));
            app.UseDefaultFiles();
            app.UseStaticFiles();
        }
    }
}

Interesting, but it's still Hello World. Let's run WordPress under PeachPie (and hence, under .NET). I'll run MySQL in a local Docker container for simplicity:

docker run -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=wordpress -p 3306:3306 -d mysql

I downloaded WordPress from here (note they have the "app" bootstrapper that hosts .NET and then runs WordPress), then restored and ran it.

WordPress under .NET Core

It's early and it's alpha - so set your expectations appropriately - but it's surprisingly useful and appears to be under active development.

What do you think?

Be sure to explore their resources at http://www.peachpie.io/resources and watch their video of WordPress running on .NET. It's all Open Source, in the .NET Foundation, and the code is up at https://github.com/iolevel/ and you can get started here: http://www.peachpie.io/getstarted


21 Jul 06:54

Monospaced Programming Fonts with Ligatures

by Scott Hanselman

Animation of how ligature fonts change as you type

Typographic ligatures are when multiple characters appear to combine into a single character. Simplistically, when you type two or more characters and they magically attach to each other, you're using ligatures that were supported by your OS, your app, and your font.

I did a blog post in 2011 on using OpenType Ligatures and Stylistic Sets to make nice looking wedding invitations. Most English laypeople aren't familiar with ligatures as such and are impressed by them! However, if your language uses ligatures as a fundamental building block, this kind of stuff is old hat. Ligatures are fundamental to Arabic script and when you're typing it up you'll see your characters/font change and ligatures be added as you type. For example here is ل ا with a space between them, but this is لا the same two characters with no space. Ligatures kicked in.

OK, let's talk programming. Picking a programming font is like picking a religion. No matter what you pick someone will say you're wrong. Most people will agree at least that monospaced fonts are ideal for reading code and that both of you who use proportionally spaced fonts are destined for hell, or at the very least, purgatory.

Beyond that, there are some really interesting programming fonts that have ligature support built in. It's important that you - as programmers - understand and remember that ligatures are just a view on the bytes that are your code. If you custom make a font that makes the = equals sign a poop emoji, that's between you and your font. The same thing applies to ligatures. Your code is the same.

Three of the most interesting and thoughtful monospaced programming fonts with ligatures are Fira Code, Monoid, and Hasklig. I say "thoughtful" but that's what I really mean - these folks have designed these fonts with programming in mind, considering spacing, feel, density, pleasantness, glance-ability, and a dozen other things that I'm not clever enough to think of.

I'll be doing screenshots (and coding) in the free cross-platform Visual Studio Code. Go to your User Settings (Ctrl-,) or File | Preferences, and add your font name and turn on ligatures if you want to follow along. Example:

// Place your settings in this file to overwrite the default settings
{
    "editor.fontSize": 20,
    "editor.fontLigatures": true,
    "editor.fontFamily": "Fira Code"
}

Most of these fonts have dozens and dozens of ligature combinations and there is no agreement for "make this a single glyph" or "use ligatures for -> but not ==>" so you'll need to try them out with YOUR code and make a decision for yourself. My sample code example can't be complete and how it looks and feels to you on your screen is all that matters.

Here's my little sample. Note the differences.

// FIRA CODE

object o;
if (o is int i || (o is string s &&
int.TryParse(s, out i)) { /* use i */ }
var x = 0xABCDEF;
-> --> ==> != === !== && ||<=<
</><tag> http://www.hanselman.com
<=><!-- HTML Comment -->
i++; #### ***

Fira Code

Fira Code

There's so much here. Look at how "www" turned into an interesting glyph. Things like != and ==> turn into arrows. HTML Comments are awesome. Double ampersands join together.

I was especially impressed by the redefined hex "x". See how it's higher up and smaller than var x?

Monoid

Monoid

Monoid prides itself on being crisp and readable on retina displays as well as at 9pt on low-res displays. I frankly can't understand how tiny font people can function. It gives me a headache to even consider programming at anything less than 14 to 16pt and I am usually around 20pt. And my vision is fine. ;)


Monoid's goal is to be sleek and precise and the designer has gone out of their way to make sure there's no confusion between any two characters.

Hasklig

Hasklig takes the Source Code Pro font and adds ligatures. As you can tell by the name, it's great in Haskell, as for a while a number of Haskell people were taking to using single character (tiny) Unicode glyphs like ⇒ for things like =>. Clearly this was a problem best solved by ligatures.

Hasklig

Do any of you use programming fonts with ligatures? I'm impressed with Fira Code, myself, and I'm giving it a try this month.


Sponsor: Thanks to Redgate! A third of teams don’t version control their database. Connect your database to your version control system with SQL Source Control and find out who made changes, what they did, and why. Learn more


© 2017 Scott Hanselman. All rights reserved.
     
18 Jul 08:30

13 hours debugging a segmentation fault in .NET Core on Raspberry Pi and the solution was...

by Scott Hanselman

Debugging is a satisfying and special kind of hell. You really have to live it to understand it. When you're deep into it you never know when it'll be done. When you do finally escape it's almost always a DOH! moment.

I spent an entire day debugging an issue and the solution ended up being a checkbox.

NOTE: If you get a third of the way through this blog post and already figured it out, well, poop on you. Where were you after lunch WHEN I NEEDED YOU?

I wanted to use a Raspberry Pi in a tech talk I'm doing tomorrow at a conference. I was going to show .NET Core 2.0 and ASP.NET running on a Raspberry Pi so I figured I'd start with Hello World. How hard could it be?

You'll write and build a .NET app on Windows or Mac, then publish it to the Raspberry Pi. I'm using a preview build of the .NET Core 2.0 command line and SDK (CLI) I got from here.

C:\raspberrypi> dotnet new console

C:\raspberrypi> dotnet run
Hello World!
C:\raspberrypi> dotnet publish -r linux-arm
Microsoft Build Engine version for .NET Core

raspberrypi1 -> C:\raspberrypi\bin\Debug\netcoreapp2.0\linux-arm\raspberrypi.dll
raspberrypi1 -> C:\raspberrypi\bin\Debug\netcoreapp2.0\linux-arm\publish\

Notice the simplified publish. You'll get a folder for linux-arm in this example, but could also publish osx-x64, etc. You'll want to take the files from the publish folder (not the folder above it) and move them to the Raspberry Pi. This is a self-contained application that targets ARM on Linux so after the prerequisites that's all you need.

I grabbed a mini-SD card, headed over to https://www.raspberrypi.org/downloads/ and downloaded the latest Raspbian image. I used etcher.io - a lovely image burner for Windows, Mac, or Linux - and wrote the image to the SD Card. I booted up and got ready to install some prereqs. I'm only 15 min in at this point. Setting up a Raspberry Pi 2 or Raspberry Pi 3 is VERY smooth these days.

Here's the prereqs for .NET Core 2 on Ubuntu or Debian/Raspbian. Install them from the terminal, natch.

sudo apt-get install libc6 libcurl3 libgcc1 libgssapi-krb5-2 libicu-dev liblttng-ust0 libssl-dev libstdc++6 libunwind8 libuuid1 zlib1g

I also added an FTP server and ran vncserver, so I'd have a few ways to talk to the Raspberry Pi. Yes, I could also SSH in but I have a spare monitor, and with that monitor plus VNC I didn't see a need.

sudo apt-get install pure-ftpd

vncserver

Then I fire up Filezilla - my preferred FTP client - and FTP the publish output folder from my dotnet publish above. I put the files in a folder off my ~/Desktop.

FTPing files

Then from a terminal I

pi@raspberrypi:~/Desktop/helloworld $ chmod +x raspberrypi

(or whatever the name of your published "exe" is - it'll be the name of your source folder/project with no extension). As this is a self-contained published app, again, all the .NET Core runtime stuff is in the same folder with the app.

pi@raspberrypi:~/Desktop/helloworld $ ./raspberrypi 

Segmentation fault

The crash was instant...not a pause and a crash, but it showed up as soon as I pressed enter. Shoot.

I ran "strace ./raspberrypi" and got this output. I figured maybe I missed one of the prerequisite libraries, and I just needed to see which one and apt-get it. I can see the ld.so.nohwcap error, but that's a historical Debian-ism and more of a warning than a fatal.

strace on a bad exe in Linux

I used to be able to read straces 20 years ago but much like my Spanish, my skills are only good at Chipotle. I can see it just getting started loading libraries, seeking around in them, checking file status, mapping files to memory, setting memory protection, then it all falls apart. Perhaps we tried to do something inappropriate with some memory that just got protected? Are we dereferencing a null pointer?

Maybe you can read this and you already know what is going to happen! I did not.

I run it under gdb:

pi@raspberrypi:~/Desktop/WTFISTHISCRAP $ gdb ./raspberrypi 

GNU gdb (Raspbian 7.7.1+dfsg-5+rpi1) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
This GDB was configured as "arm-linux-gnueabihf".
"/home/pi/Desktop/helloworldWRONG/./raspberrypi1": not in executable format: File truncated
(gdb)

Ok, sick files?

I called Peter Marcu from the .NET team and we chatted about how he got it working and compared notes.

I was using a Raspberry Pi 2, he a Pi 3. Ok, I'll try a 3. 30 minutes later, new SD card, new burn, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

Weird.

Maybe corruption? Here's a thread about Corrupted Files on Raspbian Jessie 2017-07-05! That's the version I have. OK, I'll try the build of Raspbian from a week before.

30 minutes later, burn another SD card, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

BUT IT WORKS ON PETER'S MACHINE.

Weird.

Maybe a bad nuget.config? No.

Bad daily .NET build? No.

BUT IT WORKS ON PETER'S MACHINE.

Ok, I'll try Ubuntu Mate for Raspberry Pi. TOTALLY different OS.

30 minutes later, burn another SD card, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

What's the common thread here? Ok, I'll try from another Windows machine.

SAME RESULT - segfault.

I call Peter back and we figure it's gotta be prereqs...but the strace doesn't show we're even trying to load any interesting libraries. We fail FAST.

Ok, let's get serious.

We both have Raspberry Pi 3s. Check.

What kind of SD card does he have? Sandisk? Ok,  I'll use Sandisk. But disk corruption makes no sense at that level...because the OS booted!

What did he burn with? He used Win32diskimager and I used Etcher. Fine, I'll bite.

30 minutes later, burn another SD card, new boot, pre-reqs, build, FTP, run, SAME RESULT - segfault.

He sends me HIS build of a HelloWorld and I FTP it over to the Pi. SAME RESULT - segfault.

Peter is freaking out. I'm deeply unhappy and considering quitting my job. My kids are going to sleep because it's late.

I ask him what he's FTPing with, and he says WinSCP. I use FileZilla, ok, I'll try WinSCP.

WinSCP's New Session dialog starts here:

SFTP is Default

I say, WAIT. Are you using SFTP or FTP? Peter says he's using SFTP so I turn on SSH on the Raspberry Pi and SFTP into it with WinSCP and copy over my Hello World.

IT FREAKING WORKS. IMMEDIATELY.

Hello World on a Raspberry Pi

BUT WHY.

I make a folder called Good and a folder called BAD. I copy with FileZilla to BAD and with WinSCP to GOOD. Then I run a compare. Maybe some part of .NET Core got corrupted? Maybe a supporting native library?

pi@raspberrypi:~/Desktop $ diff --brief -r helloworld/ helloworldWRONG/

Files helloworld/raspberrypi1 and helloworldWRONG/raspberrypi1 differ

Wait, WHAT? The executables are different? One is 67,684 bytes and the bad one is 69,632 bytes.

Time for a visual compare.

All the 0Ds are gone

At this point I saw it IMMEDIATELY.

0D is CR (13) and 0A is LF (10). I know this because I'm old and I've written printer drivers for printers that had both carriages and lines to feed. Why do YOU know this? Likely because you've transferred files between Unix and Windows once or thrice, perhaps with FTP or Git.

All the CRs are gone. From my binary file.

Why?

I went straight to settings in FileZilla:

Treat files without extensions as ASCII files

See it?

Treat files without extensions as ASCII files

That's the default in FileZilla: treat files that are just chilling, minding their own business, as ASCII, and then just randomly strip out their carriage returns. What could go wrong? And it doesn't even look for CR LF pairs! No, it just looks for CRs and strips them. Classy.
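Next time, rather than eyeballing a hex compare, a few lines of C# could confirm this kind of mangling. This is just a sketch of my own (not something from the original debugging session); it counts the CR (0x0D) and LF (0x0A) bytes in whatever files you pass it:

// Count CR (0x0D) and LF (0x0A) bytes in each file passed on the command line.
// A "binary" that lost all of its CRs was mangled by an ASCII-mode transfer.
using System;
using System.IO;
using System.Linq;

class CrLfCheck
{
    static void Main(string[] args)
    {
        foreach (var path in args)
        {
            byte[] bytes = File.ReadAllBytes(path);
            int cr = bytes.Count(b => b == 0x0D);
            int lf = bytes.Count(b => b == 0x0A);
            Console.WriteLine($"{path}: {bytes.Length} bytes, {cr} CR, {lf} LF");
        }
    }
}

Run it against the GOOD and BAD copies and the ASCII-mode transfer gives itself away immediately: the bad copy reports zero CR bytes.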

In retrospect I should have known this, but it wasn't even the switch to SFTP, it was the switch to an FTP program with different defaults.

This bug/issue whatever burned my whole Monday. But, it'll never burn another Monday, Dear Reader, because I've seen it before now.

FAIL FAST FAIL OFTEN my friends!

Why does experience matter? It means I've failed a lot in the past and it's super useful if I remember those bugs because then next time this happens it'll only burn a few minutes rather than a day.

Go forth and fail a lot, my loves.

Oh, and FTP sucks.


Sponsor: Thanks to Redgate! A third of teams don’t version control their database. Connect your database to your version control system with SQL Source Control and find out who made changes, what they did, and why. Learn more



© 2017 Scott Hanselman. All rights reserved.
     
17 Jul 16:12

Stanley Robotics is revolutionizing airport parking

by Geoffray

The Paris-based startup Stanley Robotics raised 3.6 million euros at the end of May 2017. Its product, the Stan robot, is an autonomous carrier designed to automate the parking of personal vehicles in car parks. The payoff: significant space and cost savings for the companies operating these facilities, particularly at airports.

I had never had the chance to write about Stanley Robotics before. Not that I was unaware of its existence (far from it), but we still knew too little about how the system works to justify all the good I think of this innovation.

It's no secret that I'm a big fan of B2B service robotics (to each their own passions, right?), and Stanley Robotics embodies every reason you should be just as keen on it as I am.

Stanley Robotics, the autonomous parking robot

Much has been said about the autonomous shuttles (Navya's Arma, Eazymile's EZ 10) for tours of closed sites, about the Dispatch Robotics robot that can ride on bike paths, and about the drones of the Amazon Prime Air program, all meant to drive down the cost of last-mile delivery.

Stanley Robotics' Stan robot has a bit of all of that, with an extra cool factor: its concrete applications have materialized far faster than 90% of other B2B2C robots so far, and the customer benefit is obvious.

The Stanley Robotics robot

The Stan robot developed by the company is the central element of the autonomous valet service currently being tested at Roissy Charles de Gaulle airport.

The funds recently raised by Stanley Robotics [from Elaia Partners, BPI France's Ville de demain fund, and Idinvest Partners] will help deploy more of these robots at other airports, where air traffic is expected to grow enormously in the coming years.

Imagine that soon you will drive to the airport, leave your car in a designated spot, and keep your keys with you before calmly heading off to catch your flight.

Meanwhile, the Stan robot will take the car away to store it securely, then bring it back at the agreed time:

"The driver simply parks their car in a dedicated box at the entrance of the car park, then checks in at a nearby kiosk. Once the vehicle is isolated from the public, the robot moves into position, locates the tires, adjusts to the length of the car, lifts it, and transfers it," the startup explains to L'Usine Nouvelle.

How it works and the benefits

Obviously, this autonomous valet service makes arrivals and departures easier for travelers, who can park as close as possible to their destination.


They will find their vehicle in the same spot when they return, provided they have entered their return flight details.

I love the concept, just as much as the idea that most customers will probably never suspect that a robot parked their vehicle somewhere else while they were away 🙂

But the most tangible economic benefit is on the side of the companies that operate the car parks. With no public coming and going, no stairwells, no lighting, and so on, Stanley Robotics' autonomous service optimizes long-term parking and can store up to 50% more vehicles! The robot is 100% electric and recharges itself.

According to the startup, its solution would make it possible to completely rethink the design of new car parks, where public access would be prohibited because the space is reserved for the parking robots.

The regulatory constraints on these facilities would then be thoroughly revised and most likely relaxed, halving the cost of a parking space and thereby multiplying the profitability of these sites.

Stanley Robotics, the autonomous parking robot

Le Parisien's journalists tried the service from the traveler's side and seem won over. Of course, the largest share of this market is abroad, and global air traffic is expected to surge.

A few indicators make the point:

  • Global air traffic grew by 6.3% in 2016 – read here
  • ADP raised its traffic forecasts for 2017 – read here
  • Air France-KLM recorded an 8.5% increase in passenger traffic in April 2017 – read here

All of which suggests we'll be hearing about Stanley Robotics again in the coming months 😉

Photos © Stanley Robotics

17 Jul 11:40

Pragmatic Functional Programming

The move to functional programming began, in earnest, about a decade ago. We saw languages like Scala, Clojure, and F# start to attract attention. This move was more than just the normal “Oh cool, a new language!” enthusiasm. There was something real driving it – or so we thought.

Moore’s law told us that the speed of computers would double every 18 months. This law held true from the 1960s until 2000. And then it stopped. Cold. Clock rates reached 3GHz, and then plateaued. The speed of light limit had been reached. Signals could not propagate across the surface of the chip fast enough to allow higher speeds.

So the hardware designers changed their strategy. In order to get more throughput, they added more processors (cores). In order to make room for those cores they removed much of the cacheing and pipelining hardware from the chips. Thus, the processors were a bit slower than before; but there were more of them. Throughput increased.

I got my first dual core machine 8 years ago. Two years later I got a four core machine. And so the proliferation of the cores had begun. And we all understood that this would impact software development in ways that we couldn’t imagine.

One of our responses was to learn Functional Programming (FP). FP strongly discourages changing the state of a variable once initialized. This has a profound effect upon concurrency. If you can’t change the state of a variable, you can’t have a race condition. If you can’t update the value of a variable, you can’t have a concurrent update problem.

This, of course, was thought to be the solution to the multi-core problem. As cores proliferated, concurrency, NAY!, simultaneity would become a significant issue. FP ought to provide the programming style that would mitigate the problems of dealing with 1024 cores in a single processor.

So everyone started learning Clojure, or Scala, or F#, or Haskell; because they knew the freight train was on the tracks heading for them, and they wanted to be prepared when it arrived.

But the freight train never came. Six years ago I got a four core laptop. I’ve had two more since then. The next laptop I get looks like it will be a four core laptop too. Are we seeing another plateau?

As an aside, I watched a movie from 2007 last night. The heroine was using a laptop, viewing pages on a fancy browser, using google, and getting text messages on her flip phone. It was all too familiar. Oh, it was dated – I could see that the laptop was an older model, that the browser was an older version, and the flip phone was a far cry from the smart phones of today. Still – the change wasn’t as dramatic as the change from 2000 to 2011 would have been. And not nearly as dramatic as the change from 1990 - 2000 would have been. Are we seeing a plateau in the rate of computer and software technology?

So, perhaps, FP is not as critical a skill as we once thought. Maybe we aren’t going to be inundated with cores. Maybe we don’t have to worry about chips with 32,768 cores on them. Maybe we can all relax and go back to updating our variables again.

I think that would be a mistake. A big one. I think it would be as big a mistake as rampant use of goto. I think it would be as dangerous as abandoning dynamic dispatch.

Why? We can start with the reason we got interested in the first place. FP makes concurrency much safer. If you are building a system with lots of threads, or processes, then using FP will strongly reduce the issues you might have with race conditions and concurrent updates.

Why else? Well, FP is easier to write, easier to read, easier to test, and easier to understand. Now I imagine that a few of you are waving your hands and shouting at the screen. You’ve tried FP and you have found it anything but easy. All those maps and reduces and all the recursion – especially the tail recursion – are anything but easy. Sure. I get it. But that’s just a problem with familiarity. Once you are familiar with those concepts – and it doesn’t take long to develop that familiarity – programming gets a lot easier.

Why does it get easier? Because you don’t have to keep track of the state of the system. The state of variables can’t change; so the state of the system remains unaltered. And it’s not just the system that you don’t have to keep track of. You don’t need to keep track of the state of a list, or the state of a set, or the state of a stack, or a queue; because these data structures cannot be changed. When you push an element onto a stack in an FP language, you get a new stack, you don’t change the old one. This means that the programmer has to juggle fewer balls in the air at the same time. There’s less to remember. Less to keep track of. And therefore the code is much simpler to write, read, understand, and test.
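The examples later in this post are Clojure, but the idea isn't language-specific. Here's a minimal C# sketch of the same point (my own illustration, using the System.Collections.Immutable package): pushing onto an immutable stack hands you a new stack and leaves the old one untouched.

// Pushing onto an immutable stack returns a NEW stack; nothing is ever mutated.
using System;
using System.Collections.Immutable;

class ImmutabilityDemo
{
    static void Main()
    {
        ImmutableStack<int> empty = ImmutableStack<int>.Empty;
        ImmutableStack<int> one = empty.Push(1);   // a new stack: [1]
        ImmutableStack<int> two = one.Push(2);     // another new stack: [2, 1]

        Console.WriteLine(two.Peek());    // 2
        Console.WriteLine(one.Peek());    // still 1 - 'one' never changed
        Console.WriteLine(empty.IsEmpty); // True   - 'empty' never changed
    }
}

Because nothing is mutated, you can hand any of these stacks to another thread without worrying about who updates it first, which is exactly the concurrency argument above.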

So what FP language should you use? My favorite is Clojure. The reason is that Clojure is absurdly simple. It’s a dialect of Lisp, which is a beautifully simple language. Here, let me show you.

Here’s a function in Java: f(x);

Now, to turn this into a function in Lisp, you simply move the first parenthesis to the left: (f x).

Now you know 95% of Lisp, and you know 90% of Clojure. That silly little parentheses syntax really is just about all the syntax there is to these languages. They are absurdly simple.

Now I know, maybe you’ve seen Lisp programs before and you don’t like all those parentheses. And maybe you don’t like CAR and CDR and CADR, etc. Don’t worry. Clojure has a bit more punctuation than Lisp, so there are fewer parentheses. Clojure also replaced CAR and CDR and CADR with first and rest and second. What’s more, Clojure is built on the JVM, and allows complete access to the full Java library, and any other Java framework or library you care to use. The interoperability is quick and easy. And, better still, Clojure allows full access to the OO features of the JVM.

“But wait!” I hear you say. “FP and OO are mutually incompatible!” Who told you that? That’s nonsense! Oh, it’s true that in FP you cannot change the state of an object; but so what? Just as pushing an integer onto a stack gives you a new stack, when you call a method that adjusts the value of an object, you get a new object instead of changing the old one. This is very easy to deal with, once you get used to it.

But back to OO. One of the features of OO that I find most useful, at the level of software architecture, is dynamic polymorphism. And Clojure provides complete access to the dynamic polymorphism of Java. Perhaps an example would explain this best.

(defprotocol Gateway
  (get-internal-episodes [this])
  (get-public-episodes [this]))

The code above defines a polymorphic interface for the JVM. In Java this interface would look like this:

public interface Gateway {
	List<Episode> getInternalEpisodes();
	List<Episode> getPublicEpisodes();
}

At the JVM level the byte-code produced is identical. Indeed, a program written in Java could implement this interface just as if the interface had been declared in Java. By the same token a Clojure program can implement a Java interface. In Clojure that looks like this:

(deftype Gateway-imp [db]
  Gateway
  (get-internal-episodes [this]
    (internal-episodes db))

  (get-public-episodes [this]
    (public-episodes db)))

Note the constructor argument db, and how all the methods can access it. In this case the implementations of the interface simply delegate to some local functions, passing the db along.

Best of all, perhaps, is the fact that Lisp, and therefore Clojure, is (wait for it) Homoiconic, which means that the code is data that the program can manipulate. This is easy to see. The following code: (1 2 3) represents a list of three integers. If the first element of a list happens to be a function, as in: (f 2 3) then it becomes a function call. Thus, all function calls in Clojure are lists; and lists can be directly manipulated by the code. Thus, a program can construct and execute other programs.

The bottom line is this. Functional programming is important. You should learn it. And if you are wondering what language to use to learn it, I suggest Clojure.

08 Jul 07:36

URLs are UI

by Scott Hanselman

What a great title. "URLs are UI." Pithy, clear, crisp. Very true. I've been saying it for years. Someone on Twitter said "this is the professional quote of 2017" because they agreed with it.

Except Jakob Nielsen said it in 1999. And Tim Berners-Lee said "Cool URIs don't change" in 1998.

So many folks spend time on their CSS and their UX/UI but still come up with URLs that are at best, comically long, and at worst, user hostile.

Search Results that aren't GETs - Make it easy to share

Even your non-technical parent or partner thinks URLs are UI. How do I know? How many times has a relative emailed you something like this:

"Check out this house we found!
https://www.somerealestatesite.com/
homes/for_sale/
search_results.asp"

That's not meant to tease your non-technical relative! It's not their fault! The URL is the UI for them. It's totally reasonable for them to copy-paste from the box that represents where they are and give it to you so you can go there too!

Make it a priority that your website supports shareable URLs.

URLs that are easy to shorten - Can you easily shorten a URL?

I love Stack Overflow's URLs. Here's an example: https://stackoverflow.com/users/6380/scott-hanselman 

The only thing that matters there is the 6380. Try it https://stackoverflow.com/users/6380 or https://stackoverflow.com/users/6380/fancy-pants also works. SO will even support this! http://stackoverflow.com/u/6380.

Genius. Why? Because they decided it matters.

Here's another: https://stackoverflow.com/questions/701030/whats-the-significance-of-oct-12-1999 - again, the text after the ID doesn't matter. https://stackoverflow.com/questions/701030/ works just as well.

This is a great model for URLs where you want to use a unique ID but the text/title in the URL may change. I use this for my podcasts so https://hanselminutes.com/587/brandon-bouier-on-the-defense-digital-service-and-deploying-code-in-a-war-zone is the same as https://hanselminutes.com/587.
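If you happen to be on ASP.NET Core, this id-plus-optional-slug pattern is only a few lines of routing. This is just an illustrative sketch of my own - not how Stack Overflow or hanselminutes.com is actually implemented - and the controller and names are made up:

// The numeric id is the real key; the slug is optional and ignored for lookup,
// so /587 and /587/any-title-here both resolve to the same episode.
using Microsoft.AspNetCore.Mvc;

public class EpisodesController : Controller
{
    [HttpGet("/{id:int}/{slug?}")]
    public IActionResult Episode(int id, string slug = null)
    {
        // Look up the episode by id only; optionally redirect to the canonical
        // slug if you want one true URL per episode.
        return Content($"Episode {id}");
    }
}

The application only ever dereferences the id; the slug exists for humans and search engines, so it can change without breaking old links.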

Unnecessarily long or unintuitive URLs - Human Readable and Human Guessable

Sometimes if you want context to be carried in the URL you have to, well, carry it along. There was a little debate  on Twitter recently about URLs like this https://fabrikam.visualstudio.com/_projects. What's wrong with it? The _ is not intuitive at all. Why not https://fabrikam.visualstudio.com/projects? Because obscure technical reason. In fact, all the top level menu items for doing stuff in VSTS start with _. Not /menu/ or /action or whatever. My code is https://fabrikam.visualstudio.com/_git/FabrikamVSO and I clone from here https://fabrikam.visualstudio.com/DefaultCollection/_git/FabrikamVSO. That's weird. Where did Default Collection come from? Why can't I just add a ".git" extension to my project's URL and clone that? Well, maybe they want the paths to be nice in the URL.

Nope. https://fabrikam.visualstudio.com/_git/FabrikamVSO?path=%2Fsrc%2Fsetup%2Fcleanup.local.ps1&version=GBmaster&_a=contents is a file. Compare that to https://github.com/shanselman/TinyOS/blob/master/readme.md at GitHub. Again, I am sure there is a good, and perhaps very valid, technical reason. But the frank reason is that URLs weren't a UX priority.

Same with OneDrive https://onedrive.live.com/?id=CD0633A7367371152C%21172&cid=CD06A73371152C vs. DropBox https://www.dropbox.com/home/Games

As a programmer, I am sympathetic. As a user, I have zero sympathy. Now I have to remember that there is a _ and it's a thing.

I propose this: URLs are rarely a tech problem. They are an organizational willpower problem. You care a lot about the evocative 2meg jpg hero image on your website. You change fonts, move CSS around ad infinitum, and agonize over single pixels. You should also care about your URLs.

SIDE NOTE: Yes, I am fully aware of my own hypocrisy with this issue. My blog software was written by a bunch of us in 2002 and our URLs are close to OK, but their age is showing. I need to find a balance between "Cool URLs don't change" and "should I change totally uncool URLs." Ideally I'd change my blog's URLs to be all lowercase, use hyphens for spaces instead of CamelCase, and I'd hide the technology. No need (other than 17 year old historical technical ones) to have .aspx or .php at the end of your URL. It's on my list.

What is your advice, Dear Reader for good URLs?


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2017 Scott Hanselman. All rights reserved.
     
16 Jun 16:30

How to reference a .NET Core library in WinForms - Or, .NET Standard Explained

by Scott Hanselman

I got an interesting email today. The author said "I have a problem consuming a .net core class library in a winforms project and can't seem to find a solution." This was interesting for a few reasons. First, it's solvable, second, it's common, and third, it's a good opportunity to clear a few things up with a good example.

To start, I emailed back with "precision questioning." I needed to assert my assumptions and get a few very specific details to make sure this was, in fact, possible. I said. "What library are you trying to use? What versions of each side (core and winforms)? What VS version?"

The answer was "I am working with VS2017. The class library is on NETCoreApp 1.1 and the app is a Winforms project on .NET Framework 4.6.2."

Cool! Let's solve it.

Referencing a .NET Core library from WinForms (running .NET Full Framework)

Before we parse this question. Let's level-set.

.NET is this big name. It's the name for the whole ecosystem, but it's overloaded in such a way that someone can say "I'm using .NET" and you only have a general idea of what that means. Are you using it on mobile? in docker? on windows?

Let's consider that ".NET" as a name is overloaded and note that there are a few "instances of .NET"

  • .NET (full) Framework - Ships with Windows. Runs ASP.NET, WPF, WinForms, and a TON of apps on Windows. Lots of businesses depend on it and have for a decade. Super powerful. A non-technical parent might download it to run paint.net or a game.
  • .NET Core - Small, fast, open source, and cross-platform. Runs not only on Windows but also Mac and a dozen flavors of Linux.
  • Xamarin/Mono/Unity - The .NET that makes it possible to write apps in C# or F# and run them on everything from an iPad to a cheap Android phone to a Nintendo Switch.

All of these runtimes are .NET. If you learn C# or F# or VB, you're a .NET Programmer. If you do a little research and google around you can write code for Windows, Mac, Linux, Xbox, Playstation, Raspberry Pi, Android, iOS, and on and on. You can run apps on Azure, GCP, AWS - anywhere.

What's .NET Standard?

.NET Standard isn't a runtime. It's not something you can install. It's not an "instance of .NET."  .NET Standard is an interface - a versioned list of APIs that you can call. Each newer version of .NET Standard adds more APIs but leaves older platforms/operating systems behind.

The runtimes then implement this standard. If someone comes out with a new .NET that runs on a device I've never heard of, BUT it "implements .NET Standard" then I just learned I can write code for it. I can even use my existing .NET Standard libraries. You can see the full spread of .NET Standard versions to supported runtimes in this table.

Now, you could target a runtime - a specific .NET - or you can be more flexible and target .NET Standard. Why lock yourself down to a single operating system or specific version of .NET? Why not target a list of APIs that are supported on a ton of platforms?

The person who emailed me wanted to "run a .NET Core Library on WinForms." Tease apart that statement. What they really want is to reuse code - a dll/library specifically.

When you make a new library in Visual Studio 2017 you get these choices. If you're making a brand new library that you might want to use in more than one place, you'll almost always want to choose .NET Standard.

.NET Standard isn't a runtime or a platform. It's not an operating system choice. .NET Standard is a bunch of APIs.

Pick .NET Standard

Next, check properties and decide what version of .NET Standard you need.

What version of .NET Standard?

The .NET Core docs are really quite good, and the API browser is awesome. You can find them at https://docs.microsoft.com/dotnet/ 

The API browser has all the .NET Standard APIs versioned. You can put the version in the URL if you like, or use this nice interface. https://docs.microsoft.com/en-us/dotnet/api/?view=netstandard-2.0

API Browser

You can check out .NET Standard 1.6, for example, and see all the namespaces and methods it supports. It works on Windows 10, .NET Framework 4.6.1 and more. If you need to make a library that works on Windows 8 or an older .NET Framework like 4.5, you'll need to choose a lower .NET Standard version. The table of supported platforms is here.

From the docs - When choosing a .NET Standard version, you should consider this trade-off:

  • The higher the version, the more APIs are available to you.
  • The lower the version, the more platforms implement it.

In general, we recommend you to target the lowest version of .NET Standard possible. The goal here is reuse. You can also check out the Portability Analyzer and run it on your existing libraries to see if the APIs you need are available.

.NET Portability Analyzer

.NET Standard is what you target for your libraries, and the apps that USE your library target a platform.

Diagram showing .NET Framework, Core, and Mono sitting on top the base of .NET Standard

I emailed them back briefly, "Try making the library netstandard instead."

They emailed back just a short email, "Yes! That did the trick!"


Sponsor: Big thanks to Raygun! Don't rely on your users to report the problems they experience. Automatically detect, diagnose and understand the root cause of errors, crashes and performance issues in your web and mobile apps. Learn more.


© 2017 Scott Hanselman. All rights reserved.
     
14 Jun 11:10

Months of Deadly Anti-Government Protests in Venezuela (30 photos)

Beginning on April 1, anti-government demonstrators have staged daily protests across Venezuela that continue to devolve into violent clashes with riot police, leaving thousands arrested, hundreds injured, and 66 dead. Opposition activists are protesting against the government of President Nicolas Maduro, blaming him for a crippling economic crisis that has caused widespread food shortages for years. The head of the Venezuelan military has warned troops not to commit "atrocities" against protesters, while Maduro’s government continues to work toward rewriting the constitution, defying those accusing him of clinging to power.

An opposition demonstrator wears a gas mask in a clash with police during the "Towards Victory" protest against the government of President Nicolas Maduro in Caracas on June 10, 2017. (Federico Parra / AFP / Getty)
14 Jun 11:08

Event Hubs Auto-Inflate, take control of your scale

by Shubha Vijayasarathy

Azure Event Hubs is a hyper-scalable telemetry ingestion service that can ingest millions of events per second. It provides a distributed streaming platform with low latency and configurable time retention, which enables you to ingest massive amounts of telemetry into the cloud and read the data from multiple applications using publish-subscribe semantics.

Event Hubs lets you scale with Throughput Units (TUs). TUs are pre-purchased units of reserved capacity: a single TU entitles you to 1MB/second or 1000 events/second of ingress and 2MB/second or 2000 events/second of egress. This capacity has to be reserved/purchased when you create an Event Hubs namespace.

This reservation works well when you have steady and predictable usage that is not likely to change. But many Event Hubs customers increase their usage after onboarding to the service, and to handle the greater data transfer you had to increase your predetermined TUs manually. Well, not anymore!

Event Hubs is launching the new Auto-Inflate feature, which enables you to scale up your TUs automatically to meet your usage needs. This simple opt-in feature gives you control to prevent throttling when your data ingress or egress rates exceed your pre-determined TUs.

By enabling Auto-Inflate on your namespace, you can cap the number of TUs the namespace will scale up to. This simple configuration lets you start small on your TUs and scale up as your data in Event Hubs grows. With no changes to your existing setup, this cost-effective feature gives you more control based on your usage needs.

This feature is now available in all Azure regions, and you can enable it on your existing Event Hubs namespaces. The article Enable auto-inflate on your namespace describes the auto-inflate (or scale-up) feature in detail.

Next Steps?

Learn how you can enable this feature on your namespace - Enable auto-inflate on your namespace

Use ARM to enable the scale-up feature

Onboard to Azure Event Hubs

Start enjoying this feature, available today.

If you have any questions or suggestions, leave us a comment below.

14 Jun 05:36

Deployment Slots Preview for Azure Functions

by Daria Grigoriu

The Azure Functions Consumption Plan now includes a preview of deployment slots. Deployment slots are a valuable component of a cloud ready application development lifecycle. The feature enables development and production environment separation to isolate critical production workloads. At the same time deployment slots create a natural bridge between development and production where the next version of a Function App staged in a deployment slot can become the production version with a simple platform managed swap action. For more information on the deployment slots concept as used in the context of the broader App Service platform please see this documentation article.

You can explore the deployment slots preview via the Azure Portal. Each Function App includes a view of deployment slots. The preview requires a one-time opt-in for each Function App, available under the Function App’s Settings tab. There is no opt-out; simply delete the deployment slots if they are no longer necessary.

After preview opt-in the Function App secrets will be updated. Please copy the new secrets under the Manage node for each function. You can add a deployment slot under the Slots view. For the Consumption Plan you can include 1 other slot in addition to production.

Each deployment slot can be treated as a standalone Function App with its own URL, its own content, and differentiated configuration. That means even input and output bindings can be different and the non-production version can be evolved independently without production dependencies if this is a requirement for your specific workload. You can designate configuration elements such as App Settings or Connection Strings as slot specific to make sure they will continue to be associated with the same slot after swap: e.g. a production slot will continue to point to the same production specific storage account.

To swap a non-production deployment slot to production use the Swap action under the Overview tab for your Function App.

You can select the swap type as direct swap or a swap with preview where destination slot configuration is applied to the source deployment slot to allow validation before the swap is finalized. You can also see a configuration diff to make sure you are aware and can react to how configuration elements are impacted by the swap action.

The deployment slots preview will continue to evolve in the journey to general availability. There are some current limitations such as a single instance scale for non-production deployment slots. If your production Function App is running at large scale this limitation may result in a timeframe where throughput is decreased as the platform re-adjusts the scale after swap. For any questions or issues to share with the engineering team regarding the deployment slots preview please use the Azure Functions MSDN forum.

Follow us on Twitter for product updates and community news @AzureFunctions.

04 Jun 18:51

Scenes From the Moscow Metro (24 photos)

Moscow’s underground transit system is now more than 80 years old, and carries up to 9 million passengers through more than 200 stations every day. Most of the architecture and decor was built decades ago, meant to be a showcase for Soviet artists, ideals, and icons. The system is now modernizing, in part, preparing for the 2018 World Cup, which will be hosted in Russia. Several Reuters photographers have captured images of the varied and unique Moscow Metro stations, as well as the workers and passengers underground, over the past year.

People gather at Ploschad Revolyutsii (Revolution Square) metro station in Moscow, Russia, on March 6, 2016. (Maxim Zmeyev / Reuters)
04 Jun 18:01

Desperate Migrants Risk Everything in Deadly Mediterranean Crossings (26 photos)

Getty Images photographer Chris McGrath recently spent about two weeks with crew members aboard the Migrant Offshore Aid Station (MOAS) Phoenix vessel as they patrolled the Mediterranean between Italy and Libya. During several missions, the crew rescued hundreds of migrants, some from capsized vessels—and recovered dozens of bodies from the sea. The United Nations estimates that more than 65,000 migrants have arrived in Europe this year so far from Africa and the Middle East, about one-third as many as arrived in 2016 by the same date. A large part of that decrease is due to tighter restrictions nearly cutting off migrants entering through Greece. As migrants move to other, more dangerous routes, the death toll has climbed to more than 1,500 so far this year—more than in 2016 by the same date, despite the significant decrease in overall numbers.

Refugees and migrants swim towards a rescue craft as a rescue crew member of the Migrant Offshore Aid Station (MOAS) Phoenix vessel pulls a man on board after a wooden boat bound for Italy carrying more than 500 people capsized on May 24, 2017, off Lampedusa, Italy. (Chris McGrath / Getty)
03 Jun 07:13

Hacker, Hack Thyself

by Jeff Atwood

We've read so many sad stories about communities that were fatally compromised or destroyed due to security exploits. We took that lesson to heart when we founded the Discourse project; we endeavor to build open source software that is secure and safe for communities by default, even if there are thousands, or millions, of them out there.

However, we also value portability, the ability to get your data into and out of Discourse at will. This is why Discourse, unlike other forum software, defaults to a Creative Commons license. As a basic user on any Discourse you can easily export and download all your posts right from your user page.

Discourse Download All Posts

As a site owner, you can easily back up and restore your entire site database from the admin panel, right in your web browser. Automated weekly backups are set up for you out of the box, too. I'm not the world's foremost expert on backups for nothing, man!

Discourse database backup download

Over the years, we've learned that balancing security and data portability can be tricky. You bet your sweet ASCII a full database download is what hackers start working toward the minute they gain any kind of foothold in your system. It's the ultimate prize.

To mitigate this threat, we've slowly tightened restrictions around Discourse backups in various ways:

  • Administrators have a minimum password length of 15 characters.

  • Both backup creation and backup download administrator actions are formally logged.

  • Backup download tokens are single use and emailed to the address of the administrator, to confirm that user has full control over the email address.

The name of the security game is defense in depth, so all these hardening steps help … but we still need to assume that Internet Bad Guys will somehow get a copy of your database. And then what? Well, what's in the database?

  • Identity cookies

    Cookies are, of course, how the browser can tell who you are. Cookies are usually stored as hashes, rather than the actual cookie value, so having the hash doesn't let you impersonate the target user. Furthermore, most modern web frameworks rapidly cycle cookies, so they are only valid for a brief 10 to 15 minute window anyway.

  • Email addresses

    Although users have reason to be concerned about their emails being exposed, very few people treat their email address as anything particularly precious these days.

  • All posts and topic content

    Let's assume for the sake of argument that this is a fully public site and nobody was posting anything particularly sensitive there. So we're not worried, at least for now, about trade secrets or other privileged information being revealed, since they were all public posts anyway. If we were, that's a whole other blog post I can write at a later date.

  • Password hashes

    What's left is the password hashes. And that's … a serious problem indeed.

Now that the attacker has your database, they can crack your password hashes with large scale offline attacks, using the full resources of any cloud they can afford. And once they've cracked a particular password hash, they can log in as that user … forever. Or at least until that user changes their password.

⚠️ That's why, if you know (or even suspect!) your database was exposed, the very first thing you should do is reset everyone's password.

Discourse database password hashes

But what if you don't know? Should you preemptively reset everyone's password every 30 days, like the world's worst bigco IT departments? That's downright user hostile, and leads to serious pathologies of its own. The reality is that you probably won't know when your database has been exposed, at least not until it's too late to do anything about it. So it's crucial to slow the attackers down, to give yourself time to deal with it and respond.

Thus, the only real protection you can offer your users is just how resistant to attack your stored password hashes are. There are two factors that go into password hash strength:

  1. The hashing algorithm. As slow as possible, and ideally designed to be especially slow on GPUs for reasons that will become painfully obvious about 5 paragraphs from now.

  2. The work factor or number of iterations. Set this as high as possible, without opening yourself up to a possible denial of service attack.

I've seen guidance that said you should set the overall work factor high enough that hashing a password takes at least 8ms on the target platform. It turns out Sam Saffron, one of my Discourse co-founders, made a good call back in 2013 when he selected the NIST recommendation of PBKDF2-HMAC-SHA256 and 64k iterations. We measured, and that indeed takes roughly 8ms using our existing Ruby login code on our current (fairly high end, Skylake 4.0 Ghz) servers.
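Discourse's login path is Ruby, but you can get a feel for that work factor in any language. Here's a minimal C# sketch of my own (it needs the Rfc2898DeriveBytes overload that takes a HashAlgorithmName, so .NET Framework 4.7.2 or .NET Core 2.0 and later) that times a single PBKDF2-HMAC-SHA256 derivation at 64,000 iterations:

// Time one PBKDF2-HMAC-SHA256 derivation with a 64,000-iteration work factor,
// roughly the scheme described above. The password and salt here are made up.
using System;
using System.Diagnostics;
using System.Security.Cryptography;
using System.Text;

class HashTiming
{
    static void Main()
    {
        byte[] salt = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        var sw = Stopwatch.StartNew();
        byte[] hash;
        using (var kdf = new Rfc2898DeriveBytes(
            Encoding.UTF8.GetBytes("correct horse battery staple"),
            salt, 64000, HashAlgorithmName.SHA256))
        {
            hash = kdf.GetBytes(32); // 256-bit derived key
        }
        sw.Stop();

        Console.WriteLine($"{Convert.ToBase64String(hash)} took {sw.ElapsedMilliseconds} ms");
    }
}

If that prints low single-digit milliseconds on your hardware, your work factor is arguably too low; raise the iteration count until a login costs something, without opening yourself up to a denial of service.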

But that was 4 years ago. Exactly how secure are our password hashes in the database today? Or 4 years from now, or 10 years from now? We're building open source software for the long haul, and we need to be sure we are making reasonable decisions that protect everyone. So in the spirit of designing for evil, it's time to put on our Darth Helmet and play the bad guy – let's crack our own hashes!

We're gonna use the biggest, baddest single GPU out there at the moment, the GTX 1080 Ti. As a point of reference, for PBKDF2-HMAC-SHA256 the 1080 achieves 1180 kH/s, whereas the 1080 Ti achieves 1640 kH/s. In a single video card generation the attack hash rate has increased nearly 40 percent. Ponder that.

First, a tiny hello world test to see if things are working. I downloaded hashcat. I logged into our demo at try.discourse.org and created a new account with the password 0234567890; I checked the database, and this generated the following values in the hash and salt database columns for that new user:

hash: 93LlpbKZKficWfV9jjQNOSp39MT0pDPtYx7/gBLl5jw=
salt: ZWVhZWQ4YjZmODU4Mzc0M2E2ZDRlNjBkNjY3YzE2ODA=

Hashcat requires the following input file format: one line per hash, with the hash type, number of iterations, salt and hash (base64 encoded) separated by colons:

type   iter  salt                                         hash  
sha256:64000:ZWVhZWQ4YjZmODU4Mzc0M2E2ZDRlNjBkNjY3YzE2ODA=:93LlpbKZKficWfV9jjQNOSp39MT0pDPtYx7/gBLl5jw=  
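Gluing the database columns into that format is just string concatenation. Here's a throwaway C# sketch of my own, using the demo account's salt and hash from above:

// Build the "type:iterations:salt:hash" line that hashcat expects, one per user.
using System;

class HashcatLine
{
    static void Main()
    {
        string salt = "ZWVhZWQ4YjZmODU4Mzc0M2E2ZDRlNjBkNjY3YzE2ODA=";
        string hash = "93LlpbKZKficWfV9jjQNOSp39MT0pDPtYx7/gBLl5jw=";
        int iterations = 64000;

        Console.WriteLine($"sha256:{iterations}:{salt}:{hash}");
    }
}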

Let's hashcat it up and see if it works:

./h64 -a 3 -m 10900 .\one-hash.txt 0234567?d?d?d

Note that this is an intentionally tiny amount of work, it's only guessing three digits. And sure enough, we cracked it fast! See the password there on the end? We got it.

sha256:64000:ZWVhZWQ4YjZmODU4Mzc0M2E2ZDRlNjBkNjY3YzE2ODA=:93LlpbKZKficWfV9jjQNOSp39MT0pDPtYx7/gBLl5jw=:0234567890

Now that we know it works, let's get down to business. But we'll start easy. How long does it take to brute force attack the easiest possible Discourse password, 8 numbers – that's "only" 10⁸ combinations, a little over one hundred million.

Hash.Type........: PBKDF2-HMAC-SHA256  
Time.Estimated...: Fri Jun 02 00:15:37 2017 (1 hour, 0 mins)  
Guess.Mask.......: ?d?d?d?d?d?d?d?d [8]  

Even with a top of the line GPU that's … OK, I guess. Remember this is just one hash we're testing against, so you'd need one hour per row (user) in the table. And I have more bad news for you: Discourse hasn't allowed 8 character passwords for quite some time now. How long does it take if we try longer numeric passwords?

?d?d?d?d?d?d?d?d?d [9]
Fri Jun 02 10:34:42 2017 (11 hours, 18 mins)

?d?d?d?d?d?d?d?d?d?d [10]
Tue Jun 06 17:25:19 2017 (4 days, 18 hours)

?d?d?d?d?d?d?d?d?d?d?d [11]
Mon Jul 17 23:26:06 2017 (46 days, 0 hours)

?d?d?d?d?d?d?d?d?d?d?d?d [12]
Tue Jul 31 23:58:30 2018 (1 year, 60 days)  

But all digit passwords are easy mode, for babies! How about some real passwords that use at least lowercase letters, or lowercase + uppercase + digits?

Guess.Mask.......: ?l?l?l?l?l?l?l?l [8]  
Time.Estimated...: Mon Sep 04 10:06:00 2017 (94 days, 10 hours)

Guess.Mask.......: ?1?1?1?1?1?1?1?1 [8] (-1 = ?l?u?d)  
Time.Estimated...: Sun Aug 02 09:29:48 2020 (3 years, 61 days)  

A brute force try-every-single-letter-and-number attack is not looking so hot for us at this point, even with a high end GPU. But what if we divided the number by eight by putting eight video cards in a single machine? That's well within the reach of a small business budget or a wealthy individual. Unfortunately, dividing 38 months by 8 isn't such a dramatic reduction in the time to attack. Instead, let's talk about nation state attacks where they have the budget to throw thousands of these GPUs at the problem (1.1 days), maybe even tens of thousands (2.7 hours), then … yes. Even allowing for 10 character password minimums, you are in serious trouble at that point.

If we want Discourse to be nation state attack resistant, clearly we'll need to do better. Hashcat has a handy benchmark mode, and here's a sorted list of the strongest (slowest) hashes that Hashcat knows about benchmarked on a rig with 8 Nvidia GTX 1080 GPUs. Of the things I recognize on that list, bcrypt, scrypt and PBKDF2-HMAC-SHA512 stand out.

My quick hashcat results gave me some confidence that we weren't doing anything terribly wrong with the Discourse password hashes stored in the database. But I wanted to be completely sure, so I hired someone with a background in security and penetration testing to, under a signed NDA, try cracking the password hashes of two live and very popular Discourse sites we currently host.

I was provided two sets of password hashes from two different Discourse communities, containing 5,909 and 6,088 hashes respectively. Both used the PBKDF2-HMAC-SHA256 algorithm with a work factor of 64k. Using hashcat, my Nvidia GTX 1080 Ti GPU generated these hashes at a rate of ~27,000/sec.

Common to all Discourse communities are various password requirements:

  • All users must have a minimum password length of 10 characters.
  • All administrators must have a minimum password length of 15 characters.
  • Users cannot use any password matching a blacklist of the 10,000 most commonly used passwords.
  • Users can choose to create a username and password or use various third party authentication mechanisms (Google, Facebook, Twitter, etc). If this option is selected, a secure random 32 character password is autogenerated. It is not possible to know whether any given password is human entered, or autogenerated.

Using common password lists and masks, I cracked 39 of the 11,997 hashes in about three weeks, 25 from the ████████ community and 14 from the ████████ community.

This is a security researcher who commonly runs these kinds of audits, so all of the attacks used wordlists, along with known effective patterns and masks derived from the researcher's previous password cracking experience, instead of raw brute force. That recovered the following passwords (and one duplicate):

007007bond
123password
1qaz2wsx3e
A3eilm2s2y
Alexander12
alexander18
belladonna2
Charlie123
Chocolate1
christopher8
Elizabeth1
Enterprise01
Freedom123
greengrass123
hellothere01
I123456789
Iamawesome
khristopher
l1ghthouse
l3tm3innow
Neversaynever
password1235
pittsburgh1
Playstation2
Playstation3
Qwerty1234
Qwertyuiop1
qwertyuiop1234567890
Spartan117
springfield0
Starcraft2
strawberry1
Summertime
Testing123
testing1234
thecakeisalie02
Thirteen13
Welcome123

If we multiply this effort by 8, and double the amount of time allowed, it's conceivable that a very motivated attacker, or one with a sophisticated set of wordlists and masks, could eventually recover 39 × 16 = 624 passwords, or about five percent of the total users. That's reasonable, but higher than I would like. We absolutely plan to add a hash type table in future versions of Discourse, so we can switch to an even more secure (read: much slower) password hashing scheme in the next year or two.

bcrypt $2*$, Blowfish (Unix)  
  20273 H/s

scrypt  
  886.5 kH/s

PBKDF2-HMAC-SHA512  
  542.6 kH/s 

PBKDF2-HMAC-SHA256  
 1646.7 kH/s 

After this exercise, I now have a much deeper understanding of our worst case security scenario, a database compromise combined with a professional offline password hashing attack. I can also more confidently recommend and stand behind our engineering work in making Discourse secure for everyone. So if, like me, you're not entirely sure you are doing things securely, it's time to put those assumptions to the test. Don't wait around for hackers to attack you — hacker, hack thyself!

[advertisement] At Stack Overflow, we put developers first. We already help you find answers to your tough coding questions; now let us help you find your next job.
03 Jun 06:59

Visual Studio and IIS Error: Specified argument was out of the range of valid values. Parameter name: site

by Scott Hanselman

I got a very obscure and obtuse error running an ASP.NET application under Visual Studio and IIS Express recently. I'm running a Windows 10 Insiders (Fast Ring) build, so it's likely an issue with that, but since I was able to resolve the issue simply, I figured I'd blog it for google posterity.

I would run the ASP.NET app from within Visual Studio and get this totally useless error. It was happening VERY early in the bootstrapping process and NOT in my application. It pretty clearly is happening somewhere in the depths of IIS Express, perhaps in a configurator in HttpRuntime.

Specified argument was out of the range of valid values.
Parameter name: site

I fixed it by going to Windows Features and installing "IIS Hostable Web Core," part of Internet Information Services. I did this in an attempt to "fix whatever's wrong with IIS Express."

Turn Windows Features on or off

That seems to "repair" IIS Express. I'll update this post if I learn more, but hopefully if you got to this post, this fixed it for you also.


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2017 Scott Hanselman. All rights reserved.
     
22 May 11:32

Exploring the preconfigured browser-based Linux Cloud Shell built into the Azure Portal

by Scott Hanselman

At BUILD a few weeks ago I did a demo of the Azure Cloud Shell, now in preview. It's pretty fab and it's built into the Azure Portal and lives in your browser. You don't have to do anything, it's just there whenever you need it. I'm trying to convince them to enable "Quake Mode" so it would pop-up when you click ~ but they never listen to me. ;)

Animated Gif of the Azure Cloud Shell

Click the >_ shell icon in the top toolbar at http://portal.azure.com. The very first time you launch the Azure Cloud Shell it will ask you where you want your $home directory files to be persisted. They will live in your own Storage Account. Don't worry about cost, remember that Azure Storage is like pennies a gig, so assuming you're storing script files, figure it's thousandths of pennies - a non-issue.

Where do you want your account files persisted to?

It's pretty genius how it works, actually. Since you can set up an Azure Storage Account as a regular File Share (sharing to Mac, Linux, or Windows) it will just make a file share and mount it. The data you save in ~/clouddrive is persistent between sessions; the sessions themselves disappear if you don't use them.

Now my Azure Cloud Shell Files are available anywhere

Today it's got bash inside a real container. Here's what lsb_release -a says:

scott@Azure:~/clouddrive$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.1 LTS
Release:        16.04
Codename:       xenial

Looks like Ubuntu xenial inside a container, all managed by an orchestrator within Azure Container Services. The shell is using xterm.js to make it all possible inside the browser. That means you can run vim, top, or whatever makes you happy. Cloud shells include vim, emacs, npm, make, maven, pip, as well as docker, kubectl, sqlcmd, postgres, mysql, IPython, and even .NET Core's command line SDK.

NOTE: Ctrl-V and Ctrl-C do not function as copy/paste on Windows machines [in the Portal using xterm.js]; please use Ctrl-Insert and Shift-Insert to copy/paste. Right-click copy/paste options are also available; however, this is subject to browser-specific clipboard access.

When you're in there, of course the best part is that you can ssh into your Linux VMs. They say PowerShell is coming soon to the Cloud Shell, so you'll be able to remote PowerShell into Windows boxes, I assume.

The Cloud Shell has the Azure CLI (command line interface) built in and pre-configured and logged in. So I can hit the shell then (for example) get a list of my web apps, and restart one. Here I'm getting the names of my sites and their resource groups, then restarting my son's hamster blog.

scott@Azure:~/clouddrive$ az webapp list -o table
ResourceGroup               Location          State    DefaultHostName                             AppServicePlan     Name
--------------------------  ----------------  -------  ------------------------------------------  -----------------  ------------------------
Default-Web-WestUS          West US           Running  thisdeveloperslife.azurewebsites.net        DefaultServerFarm  thisdeveloperslife
Default-Web-WestUS          West US           Running  hanselmanlyncrelay.azurewebsites.net        DefaultServerFarm  hanselmanlyncrelay
Default-Web-WestUS          West US           Running  myhamsterblog.azurewebsites.net             DefaultServerFarm  myhamsterblog


scott@Azure:~/clouddrive$ az webapp restart -n myhamsterblog -g "Default-Web-WestUS"

Pretty cool. I'm going to keep exploring. I like the direction the Azure Portal is going from a GUI and DevOps dashboard perspective, and it's also nice to have a CLI preconfigured whenever I need it.


Sponsor: Did you know VSTS can integrate closely with Octopus Deploy? Watch Damian Brady and Brian A. Randell as they show you how to automate deployments from VSTS to Octopus Deploy, and demo the new VSTS Octopus Deploy dashboard widget. Watch now!


© 2017 Scott Hanselman. All rights reserved.
     
17 May 05:06

How I accidentally stopped a global Wanna Decryptor ransomware attack

by Ars Staff

I’ve finally found enough time between e-mails and Skype calls to write up the crazy events that occurred over Friday, which was supposed to be part of my week off. You’ve probably read about the Wanna Decryptor (aka WannaCrypt or WCry) fiasco on several news sites, but I figured I’d tell my story.

I woke up at around 10am and checked in on the UK cyber threat sharing platform, where I had been following the spread of the Emotet banking malware, something that seemed incredibly significant until today. There were a few of the usual posts about various organisations being hit with ransomware, but nothing significant... yet. I ended up going out to lunch with a friend; meanwhile, the WannaCrypt ransomware campaign had entered full swing.

When I returned home at about 2:30, the threat sharing platform was flooded with posts about various NHS systems all across the country being hit, which was what tipped me off to the fact this was something big. Although ransomware on a public sector system isn't even newsworthy, systems being hit simultaneously across the country is. (Contrary to popular belief, most NHS employees don't open phishing e-mails, which suggested that for something to be this widespread it would have to be propagated using another method.)

Read 29 remaining paragraphs | Comments

16 May 15:48

How to use Azure Functions with IoT Hub message routing

by Nicole Berdy

I get a lot of asks for new routing endpoints in Azure IoT Hub, and one of the more common asks is to be able to route directly to Azure Functions. Having the power of serverless compute at your fingertips allows you to do all sorts of amazing things with your IoT data.
 
(Quick refresher: back in December 2016 we released message routing in IoT Hub. Message routing allows customers to set up automatic routing of events to different systems, and we take care of all of the difficult implementation architecture for you. Today you can configure your IoT Hub to route messages to your backend processing services via Service Bus queues, topics, and Event Hubs as custom endpoints for routing rules.)

One quick note: if you want to trigger an Azure Function on every message sent to IoT Hub, you can do that already! Just use the Event Hubs trigger and specify IoT Hub's built-in Event Hub-compatible endpoint as the trigger in the function. You can get the IoT Hub built-in endpoint information in the portal under Endpoints –> Events:

Endpoints

Here’s where you enter that information when setting up your Function:

Name function
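
The function body itself is then just an ordinary Event Hub-triggered function. Here's a minimal sketch in C# script form (run.csx); the parameter name is a placeholder, and the binding to the Event Hub-compatible endpoint lives in the trigger configuration you entered above:

using System;

public static void Run(string iotHubMessage, TraceWriter log)
{
    // iotHubMessage is the raw device-to-cloud message delivered through
    // IoT Hub's built-in Event Hub-compatible endpoint.
    log.Info($"IoT Hub message received: {iotHubMessage}");
}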

If you’re looking to do more than that, read on.

I have good news and I have bad news. The bad news first: this blog post is not announcing support for Functions as a custom endpoint in IoT Hub (it's on the backlog). The good news is that it's really easy to use an intermediate service to get your Azure Function to fire!

Let's take the scenario described in the walkthrough, Process IoT Hub device-to-cloud messages using routes. In the article, a device occasionally sends a critical alert message that requires different processing from the telemetry messages, which make up the bulk of the traffic through IoT Hub. The article routes messages to a Service Bus queue added to the IoT hub as a custom endpoint. When I demo the message routing feature to customers, I use a Logic App to read from the queue and further process messages, but we can just as easily use an Azure Function to run some custom code. I'm going to assume you've already run through the walkthrough and have created a route to a Service Bus queue, but if you want a quick refresher on how to do that you can jump straight to the documentation here. This post will be waiting when you get back!

First, create a Function App in the Azure Portal:

Create function

Next, create a Function to read data off your queue. From the quickstart page, click on “Create your own custom function” and select the template “ServiceBusQueueTrigger-CSharp”:

Functions templates

Follow the steps to add your Service Bus queue connection information to the function, and you’re done setting up the trigger. Now you can use the power of Azure Functions to trigger your custom message processing code whenever there's a new message on the queue.
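
The run.csx the template gives you is a fine starting point. Roughly, it looks like the sketch below, with the queue message bound to a string; everything inside Run is yours to replace with the alert handling you need (the temperature check is just an invented example):

using System;

public static void Run(string myQueueItem, TraceWriter log)
{
    // myQueueItem is the critical alert that IoT Hub routed to the Service Bus queue.
    log.Info($"Critical alert received: {myQueueItem}");

    // From here, run whatever custom processing the alert needs.
    // This check is purely illustrative.
    if (myQueueItem.Contains("temperature"))
    {
        log.Warning("Temperature alert received - escalating.");
    }
}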
 
Service Bus is billed per million operations and it doesn't add an appreciable amount to the cost of your IoT solution. For example, if I send all messages from my 1 unit S1 SKU IoT hub (400k messages/day) through to a function in this manner, I pay less than $0.05 USD for the intermediate queue if I use a Basic SKU queue. I’m not breaking the bank there.
 
That should tide you over until we have first-class support for routing to Azure Functions in IoT Hub. In the meantime, you can read more about message routes in the developer guide. As always, please continue to submit your suggestions through the Azure IoT User Voice forum or join the Azure IoT Advisors Yammer group.

15 May 08:43

BUILD 2017 Conference Rollup for .NET Developers

by Scott Hanselman

The BUILD Conference was lovely this last week, as was OSCON. I was fortunate to be at both. You can watch all the interviews and training sessions from BUILD 2017 on Channel 9.

Here are a few sessions that you might be interested in.

Scott Hunter, Kasey Uhlenhuth, and I had a session on .NET Standard 2.0 and how it fit into a world of .NET Core, .NET (Full) Framework, and Mono/Xamarin.

One of the best demos in this talk, IMHO, was taking an older .NET 4.x WinForms app, updating it to .NET 4.7, and automatically getting HiDPI support. Then we moved its DataSet-driven XML database layer into a shared class library that targeted .NET Standard. Then we made a new ASP.NET Core 2.0 application that shared that new .NET Standard 2.0 library with the existing WinForms app. It's a very clear example of the goal of .NET Standard.
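
To make that concrete, here's a rough sketch of the kind of class that moves into the shared library (the type and file names are invented for illustration). Because DataSet and its XML support are part of .NET Standard 2.0, the same class library can be referenced from both the .NET 4.7 WinForms app and the new ASP.NET Core 2.0 app:

using System.Data;

// Lives in a class library targeting netstandard2.0, referenced by both
// the WinForms app and the ASP.NET Core 2.0 app.
public class XmlCustomerStore
{
    private readonly string _path;

    public XmlCustomerStore(string path)
    {
        _path = path;
    }

    public DataSet Load()
    {
        // DataSet.ReadXml is available in .NET Standard 2.0, so this code
        // compiles once and runs in both applications.
        var data = new DataSet();
        data.ReadXml(_path);
        return data;
    }
}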

.NET Core 2.0 Video

Then, Daniel Roth and I talked about ASP.NET Core 2.0

ASP.NET Core 2.0 Video

Maria Naggaga talked about Support for ASP.NET Core. What's "LTS"? How do you balance purchased software that's supported and open source software that's supported?

Support for ASP.NET and .NET - What's an LTS?

Mads Torgersen and Dustin Campbell teamed up to talk about the Future of C#!

The Future of C#

David Fowler and Damian Edwards introduced ASP.NET Core SignalR!

SignalR for .NET Core

There's also a TON of great 10-15 min short BUILD videos like:

As for announcements, check these out:

And best of all... all .NET Core 2.0 and .NET Standard 2.0 APIs are now on http://docs.microsoft.com at https://docs.microsoft.com/en-us/dotnet

Enjoy!


Sponsor: Test your application against full-sized database copies. SQL Clone allows you to create database copies in seconds using MB of storage. Create clones instantly and test your application as you develop.


© 2017 Scott Hanselman. All rights reserved.
     
11 May 11:14

Visual Studio 2017 Tools for Azure Functions

by Andrew B Hall - MSFT

Visual Studio 2017 Tools for Azure Functions are now available as part of the Azure development workload starting in the Visual Studio 2017 15.3 release. These tools:

  • Enable creating precompiled C# functions, which bring better cold-start performance than script-based functions and open up the entire ecosystem of Visual Studio tools for class libraries, including code analysis, unit testing, complete IntelliSense, 3rd-party extensions, etc.
  • Use WebJobs attributes to declare function bindings directly in the C# code rather than in a separate function.json file.


Getting Started

To get started:

To create a new project, choose File -> New Project and select the Azure Functions project type


This will create an empty project which contains the following files:

  • host.json, which enables configuring the function host
  • local.settings.json, which stores settings such as connection strings used when running the function on the development machine. Note: for all trigger types except HTTP, you need to set the value of AzureWebJobsStorage to a valid Azure Storage account connection string.

To add a function to the application, right-click the project and choose "Add Item", then choose the "Azure Function" item template. This will launch the Azure Function dialog, which lets you choose the type of function you want and enter any relevant binding information. For example, in the dialog below, the queue trigger asks you for the name of the connection string to the storage queue and the name of the queue (path).


This generates a new class that has the following elements:

  • A static Run method decorated with the [FunctionName] attribute. The [FunctionName] attribute marks the method as the entry point for an Azure Function.
  • A first parameter with a QueueTrigger attribute, which is what makes this a queue trigger function; the binding information (in this case the name of the queue and the connection string's setting name) is passed as parameters to the attribute. A sketch of the generated class is shown below.
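
In code, that shape looks roughly like this; the function name, queue name ("myqueue-items"), and connection setting name ("StorageConnectionAppSetting") are placeholders for whatever you entered in the dialog:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class QueueTriggerCSharp
{
    // [FunctionName] marks Run as the entry point for this Azure Function.
    [FunctionName("QueueTriggerCSharp")]
    public static void Run(
        // The QueueTrigger attribute carries the binding information:
        // the queue name and the connection string's setting name.
        [QueueTrigger("myqueue-items", Connection = "StorageConnectionAppSetting")] string myQueueItem,
        TraceWriter log)
    {
        log.Info($"C# queue trigger function processed: {myQueueItem}");
    }
}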

Once you have a function, local development works like you would expect. You can run and debug it locally, add NuGet packages, create unit tests, and anything else you would do for a class library.


To publish a Function project to Azure directly from Visual Studio, right-click the project and choose "Publish". On the publish page, you can either create a new Function App in Azure or publish to an existing one. Note: even though the Folder option currently appears, it's not intended for use with Azure Functions at this time.


It’s also possible to configure continuous delivery using Visual Studio Team Services.

Question and Answer

I installed Visual Studio 2017 15.3 and the Azure development workload, but I don't see the Azure Functions project type or am receiving an error trying to build or run a function app: While they are pulled in automatically by the Azure development workload, the Azure Functions tools are distributed via the Visual Studio gallery, which gives us the flexibility to update them as needed to react to changes on the Azure side that don't always happen on the Visual Studio schedule. If for some reason the tools don't get automatically updated from the gallery, go to Tools | Extensions and Updates in Visual Studio and look at the "Updates" tab. If it shows an update available for "Azure Functions and Web Job Tools", manually update them by clicking the "Update" button.

How can I file issues or provide feedback on these tools? You can file issues or provide feedback on the Azure Functions GitHub repo by prefixing them with [Visual Studio]

Are these targeting .NET Standard 2.0 as outlined in the roadmap post? At this time the Functions runtime does not yet support .NET Standard libraries. So these are .NET Standard class library projects, but the build target is set to .NET 4.6.1. In the future, when the Functions runtime supports .NET Standard, you will simply need to change the target framework.

I have existing functions written as .csx scripts, how do I port those to the new precompiled project type? To convert a .csx file into a new function, you will need to move the Run method into a class, remove #load directives, and replace #r directives with assembly or project-to-project references (see the complete steps).

What about support for F# functions? It will be possible to create Azure Functions in Visual Studio using F# in a future update, but support is not included in this release.

What is the plan for the Visual Studio 2015 tools? The Visual Studio 2015 tooling was an initial preview that got us a lot of great feedback, and we learned a lot from it. Given our pivot to pre-compiled functions with the intent to focus on .NET Standard 2.0, we have dependencies that only exist in Visual Studio 2017 Update 3 and beyond, so there are no plans to release any future updates for Visual Studio 2015. Once the Functions runtime supports .NET Core, it will be possible to work with Azure Functions in Visual Studio Code as well as Visual Studio 2017, if you prefer Code or are unable to upgrade to Visual Studio 2017.

Conclusion

We’re very happy to be releasing our first version of support tools for Azure Function development in Visual Studio, so please let us know how they work for you. You can do that below, in the Azure Functions GitHub repo, or via twitter at @AndrewBrianHall and @AzureFunctions.

11 May 09:54

Introducing Azure Functions Runtime preview

by Andrew Westgarth

Customers have embraced Azure Functions because it allows them to focus on application innovation rather than infrastructure management. The simplicity of the Functions programming model that underpins the service has been key to enabling this. This model, which allows developers to build event-driven solutions and easily bind their code to other services while using their favorite developer tools, has good utility even outside the cloud.

Today we are excited to announce the preview of the Azure Functions Runtime, which brings the simplicity and power of Azure Functions on-premises.

Azure Functions Runtime overview

This runtime provides a new way for customers to take advantage of the Functions programming model on-premises. Built on the same open source roots as the Azure Functions service, the Azure Functions Runtime can be deployed on-premises and provides a development experience very similar to that of the cloud service.

  • Harness unused compute power: It provides a cheap way for customers to perform certain tasks, such as using the compute power of on-premises PCs to run batch processes overnight, leveraging devices on the floor to conditionally send data to the cloud, and so on.
  • Future-proof your code assets: Customers who want to experience Functions-as-a-Service before committing to the cloud will also find this runtime very useful. The code assets they build on-premises can easily be carried over to the cloud when they eventually move.

The runtime essentially consists of two pieces: the Management Role and the Worker Role. As the names suggest, these are for managing and executing functions code, respectively. You can scale out your Functions by installing the Worker Role on multiple machines and take advantage of spare computing power.

Azure Functions Runtime

Management Role

The Azure Functions Runtime Management Role provides a host for the management of your Functions on-premises.

  • It hosts the Azure Functions Runtime Portal in which you can develop your functions in the same way as in Azure. 
  • It is responsible for distributing functions across multiple Functions workers. 
  • It provides an endpoint that allows you to publish your functions from Microsoft Visual Studio, Team Foundation Server, or Visual Studio Team Services.

Management Role

Worker Role

The Azure Functions Runtime Worker Role is where the functions code executes. You can deploy multiple Worker Roles throughout your organization, and this is a key way in which customers can make use of spare compute power.

Requirements

The Azure Functions Runtime Worker Role is deployed in a Windows Container. As such, it requires that the host machine be running Windows Server 2016 or the Windows 10 Creators Update.

How do I get started?

Please download the Azure Functions Runtime installer.

For details, please see the Azure Functions Runtime documentation.

We would love to hear your feedback, questions, and comments about this runtime through our regular channels, including the Forums, Stack Overflow, or UserVoice.