Shared posts

20 Nov 18:28

Why I prefer spaces to tabs

Coding convention discussions are always fun. I just had one at the weekly Orchard meeting, where I joked that spaces are objectively superior to tabs, by which I meant that there are objective arguments in favor of spaces that I find subjectively compelling.

The first argument for me is that a good programming font has to be a monospace font. Tabs are weird in that context because they are the only useful characters that are a different width than everything else. If, for whatever reason, you have to replace a space with another character, nothing else will move. If you replace a tab with anything else, the rest of the line will move. Additionally, if you change the tab width in your editor settings, the code will look different to you than to its author.

The second argument is that what you align is not always line beginnings. There are legitimate cases where the beginning of a line is aligned with some boundary inside the previous line. A typical example is a list of arguments for a method that is too long to fit on one line:
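For instance, a sketch of the kind of alignment in question (the method and argument names are made up for illustration):

    // With spaces, the continuation lines line up under the opening parenthesis
    // for every reader; with tabs, the alignment depends on the reader's tab width.
    var total = CalculateTotal(order,
                               taxRate,
                               applyDiscount: true);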

It is a legitimate preference to want the additional lines of parameters to align with the opening parenthesis. Tabs can’t do that consistently.

That’s my opinion. Pick your own. As always, what’s important is to set the rules for the project and stick to them…

26 Jul 09:41

Feature Toggles

Feature toggles are a powerful technique, but like most powerful techniques there is much to learn in order to use them well. And like so much in the software world, there's precious little documentation on how to work with them. This frustrated my colleague Pete Hodgson, and he decided to scratch that itch. The result is an article that I think will become the definitive article on feature toggles.

We're going to release this article in installments over the next few weeks. The first installment sketches out an overview story of how a team uses feature toggles in a project, and introduces various terms that explain how these toggles work: toggle points, toggle router, toggle context, and toggle configuration.

10 Jul 06:18

The Programmer's Oath

In order to defend and preserve the honor of the profession of computer programmers,

I Promise that, to the best of my ability and judgement:

  1. I will not produce harmful code.

  2. The code that I produce will always be my best work. I will not knowingly allow code that is defective either in behavior or structure to accumulate.

  3. I will produce, with each release, a quick, sure, and repeatable proof that every element of the code works as it should.

  4. I will make frequent, small, releases so that I do not impede the progress of others.

  5. I will fearlessly and relentlessly improve my creations at every opportunity. I will never degrade them.

  6. I will do all that I can to keep the productivity of myself, and others, as high as possible. I will do nothing that decreases that productivity.

  7. I will continuously ensure that others can cover for me, and that I can cover for them.

  8. I will produce estimates that are honest both in magnitude and precision. I will not make promises without certainty.

  9. I will never stop learning and improving my craft.

06 Jun 15:04

Functional style follow-up

by ericlippert

Thanks to everyone who came out to my beginner talk on using functional style in C# on Wednesday. I had a great time and we had a capacity crowd. The video will be posted in a couple of weeks; I’ll put up a link when I have it.

A number of people asked questions that we did not have time to get to:

What is the advantage of using C# over a functional language like F#? How would you choose one or the other for a new project?

This is a bit of a “which is better, a gorilla or a shark?” question, but I’ll give it a shot.

C# is primarily an OO language which supports programming in a functional style. F# is primarily a functional language which supports programming in an OO style. They’re both great languages; which I would choose for a given task would depend on factors like how familiar the team is with the given language, how much code we already have in one language or the other, and so on. Both are good general-purpose programming languages. F# tends to be used in domains that have a lot of scientific or financial computations.

Can you recommend some good books related to C# functional programming?

“Real World Functional Programming” by Tomas Petricek and Jon Skeet is quite good. You can find some of the tutorial material from this book online here. It is very pragmatic.

A very advanced book I like on functional programming is Chris Okasaki’s book “Purely Functional Data Structures”. You should have a basic grasp of the ML language before you try to read this book. It is very academic.

How does functional programming work when interacting with databases?

When querying a database, functional programming works extraordinarily well. LINQ is entirely based on the idea of constructing immutable query objects that can be turned into efficient database queries. (I only showed “in memory” LINQ examples, but it works against databases as well.)
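For illustration, a minimal sketch of that idea using LINQ to Objects; the Customer type and the customers sequence are assumed, and the same composition works against a database provider:

    // Composing the query mutates nothing; it builds an immutable description of
    // the computation. Nothing executes until the result is enumerated.
    IEnumerable<string> londonNames = customers
        .Where(c => c.City == "London")
        .OrderBy(c => c.Name)
        .Select(c => c.Name);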

Things get a little more difficult when you consider database updates. Updating a database is a side effect on a massive amount of external state, and so very much not functional in style. The key to doing database updates in functional style is to take a more relaxed view of functional languages. Rather than saying “I’m going to eliminate all mutations and all statefulness”, say “I’m going to use functional style to make mutations and statefulness apparent and obvious so that I can understand them.”

What are some issues involving casting, interfaces and mutability?

A common technique for making a return value seem immutable is to have a method that returns IEnumerable<int> but actually returns an int[]. The developer reasons that the caller cannot mutate the array because IEnumerable<int> exposes no mutation methods, and therefore the same array can be handed out over and over. However, nothing stops the caller from casting the sequence back to an array. Now, you can easily say that if it hurts when you do that, don’t do that. But I would rather hand out a ReadOnlyCollection<int> or the like, which wraps the mutable collection, than rely on the caller never casting.
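A minimal sketch of the hole being described (the names are made up):

    static readonly int[] s_values = { 1, 2, 3 };

    // Looks immutable to callers because the static type exposes no mutators...
    static IEnumerable<int> GetValues() => s_values;

    static void MischievousCaller()
    {
        // ...but nothing prevents this cast, which mutates the shared array.
        ((int[])GetValues())[0] = 42;
    }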

Now, read-only collections have problems of their own. The underlying collection is still mutable, and if you mutate it then all the code using the read-only wrapper will see the change!

My preferred solution is to use immutable collections and builders. Use the builder when you need a mutable collection, and return the immutable collection. Immutable collections truly are immutable; they are not wrappers around something mutable.
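A minimal sketch of that pattern, assuming the System.Collections.Immutable package is available:

    using System.Collections.Immutable;

    static ImmutableList<int> BuildScores()
    {
        var builder = ImmutableList.CreateBuilder<int>(); // mutable while building
        builder.Add(10);
        builder.Add(20);
        return builder.ToImmutable(); // truly immutable, not a wrapper
    }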

Can you explain async and await in parallel programming?

That’s a huge topic. Start here for some gentle introductions.

Are functional programming and OOP here to stay? Or will one win over the other?

These are both very successful programming styles with a long history and bright futures. I see them moving together; many languages now have both OOP and functional features. C#, F#, OCaml, Scala, JavaScript and many more.


02 Jun 11:53

Monitor madness, part two

by ericlippert

In the previous exciting episode I ended on a cliffhanger; why did I put a loop around each wait? In the consumer, for example, I said:

    while (myQueue.IsEmpty)
      Monitor.Wait(myLock); 
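For context, here is roughly how that wait sits inside the consumer. This is my sketch; the queue operations and the placement of the pulse are assumed, not quoted from the earlier post:

    object item;
    lock (myLock)                      // Monitor.Enter / Monitor.Exit
    {
        while (myQueue.IsEmpty)
            Monitor.Wait(myLock);      // releases myLock while waiting, reacquires before returning
        item = myQueue.Dequeue();
        Monitor.PulseAll(myLock);      // wake a producer that may be waiting on "queue full"
    }
    // consume the item outside the lock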

It seems like I could replace that “while” with an “if”. Let’s consider some scenarios. I’ll consider just the scenario for the loop in the consumer, but of course similar scenarios apply mutatis mutandis for the producer.

Scenario one: Everything is awesome, everything is cool when you’re part of a team. Suppose the consumer is moved from its wait state to the ready state because the producer has put something on the queue. Now the queue is definitely no longer empty, and we are ready to enter the monitor again. Suppose we fail to do so right away due to a race with the producer. The producer might enter the monitor again and put more stuff on the queue, but eventually the queue will fill up, the producer will put itself into the wait state, and the consumer is then the only thread left attempting to get into the monitor. Success is guaranteed, and there seems to be no need to check whether the queue is empty; if we managed to re-enter the monitor, it was because something was put on the queue. The loop is unnecessary.

Scenario two: Some other thread got ahold of myLock and for reasons of its own decided to pulse the monitor. That thread is not the producer, so it did not ensure that the queue was non-empty. The consumer must be defensive and say “re-entering the monitor is not a guarantee that my desired condition was met, therefore I must check again.” If it is by design that a third thread can pulse the monitor then there needs to be a loop; if it is not by design then the existence of such a third thread is a bug in the program. If we can assume that there is no such third thread then we don’t need a loop.

Scenario three: The producer genuinely did put something on the queue, and at some time after that, the consumer re-entered the monitor. But between those two events, a third thread won the race and correctly removed the item from the queue for reasons of its own. Again, if that’s a by-design scenario then the consumer has to be willing to check the condition again. If it’s not a by-design scenario then there’s no need for a loop.

So let’s suppose there are only two threads, guaranteed, producer and consumer, that access this lock object and party on this queue. Our second and third scenarios do not apply, so the loop is unnecessary, right? Unfortunately there is a fourth scenario:

Scenario four: Everything is terrible! One time in a hundred billion runs a waiting thread wakes up and goes to the ready state even if it was never pulsed. Suppose we have no loop, and this rare event happens. A possible ordering of events is:

  • The consumer enters the monitor, checks the queue, it is empty, it puts itself to bed.
  • While the producer is running around looking for work, not touching the queue, the consumer thread spuriously wakes up, re-enters the monitor, and without looping, continues running, assuming the queue is non-empty. The queue code produces an unhandled exception and the consumer thread dies a horrible death.

In a world where spurious wakeups are a possibility, you have to always check your conditions in a loop. See, the loop mitigates the terrible scenario; if a thread wakes up spuriously then it checks its condition again, and goes back to sleep if it is not met.

Are spurious wakeups a possibility in C#? This is a surprisingly hard question. Let me list some facts.

Fact one: Spurious wakeups are known to be a rare but observable possibility when using condition variables (a locking mechanism very similar to what we’ve been discussing in this series) on operating systems that use POSIX threads. In particular, on Linux there is a race condition when a process is signaled. The choices faced by the designers of Linux were, I gather, (1) allow the race to cause spurious wakeups, (2) allow the race to cause some wakeups to be lost (clearly unacceptable: the consumer would never come back and eventually the queue would fill up), or (3) create an implementation with unacceptably high performance costs.

Fact two: Spurious wakeups are similarly documented as being a problem with Windows condition variables. “Condition variables are subject to spurious wakeups […] you should recheck a predicate (typically in a while loop) after a sleep operation returns.”

Fact three: The Java documentation states

“A thread can also wake up without being notified, interrupted, or timing out, a so-called spurious wakeup. While this will rarely occur in practice, applications must guard against it […] waits should always occur in loops.”

Apparently the designers of Java explicitly endorse the theory that spurious wakeups are a real thing.

Fact four: Joe Duffy notes in “Concurrent Programming on Windows” that the claim that Windows suffers from spurious wakeups is somewhat histrionic:

“[…] threads must be resilient to something called spurious wake-ups […] This is not because the implementation will actually do such things […] but rather due to the fact that there is no guarantee around when a thread that has been awakened will become scheduled. Condition variables are not fair. It’s possible – and even likely – that another thread will acquire the associated lock and make the condition false again before the awakened thread has a chance to reacquire the lock and return to the critical region.”

Basically, Joe is saying here that in many situations our “scenario three” is likely.

Fact five: The documentation for Monitor.Wait() says nothing about spurious wakeups or always waiting in a loop.

Fact six: Apparently the CLR does not actually use condition variables as its mechanism for implementing monitors, and therefore reasoning from the shortcomings of condition variables to the shortcomings of C# locks is poor reasoning. We really ought to examine the mechanisms the CLR actually uses if we want to know if they are subject to this problem. No, I’m not going to; see Stephen Cleary’s comment below for some links.

Fact seven: Many expert C# programmers like Jon Skeet (UPDATE: see comments!) and Joseph Albahari recommend always waiting in a loop. And some static analyzers look for missing loops around waits and flag them as a bad code smell; using a loop is a cheap and safe way to make such analyzers stop complaining.

Spurious wakeups in C# seem to be somewhat mythical beasts; people are afraid of them without ever having encountered one in the wild.

So what would I do here?

Well, the first thing I would do is of course not write programs that shared memory across threads! It’s a terrible thing to do! Look at me; I’m a pretty smart guy and I cannot tell you whether to write if or while without writing a seven-item list of pros and cons that thoroughly contradicts itself, makes false analogies and rests upon appeals to authority and the absence of warnings in documentation! This would be a pretty weak foundation upon which to base a coding decision that has real consequences.

If I had to write a program that shared memory across threads then I would use the highest level tool in my toolbox. I would use a thread safe collection written by experts in this case. (Of course that simply begs the question; the expert must know how to do so safely using lower-level mechanisms! I presume they know better than I do.)  If for some reason that was unavailable then I would use a higher-level construct for signaling, like an auto reset event, or a reader-writer lock, or whatever.

Were I forced to write code like this that uses monitors at a low level then I would grit my teeth, embrace cargo-cultism, put a banana in my ear, and write the loop even without being able to give a solid justification for why doing so keeps the alligators away.


01 Jun 11:40

Exploring the new .NET "dotnet" Command Line Interface (CLI)

by Scott Hanselman

I've never much liked the whole "dnvm" and "dnu" and "dnx" command line stuff in the new ASP.NET 5 beta bits. There are reasons for each to exist, and they have been important steps, both organizationally and as aids to the learning process.

My thinking has always been that when a new person sits down to learn node, python, ruby, golang, whatever, for the most part their experience is something like this. It should be just as easy - or easier - to use .NET.

This is just pseudocode. Don't sweat it too much.

apt-get install mylang #where mylang is some language/runtime

#write or generate a foo.fb hello world program
mylang foo #compiles and runs foo

I think folks using and learning .NET should have the same experience as with Go or Ruby.

  • Easy To Get - Getting .NET should be super easy on every platform.
    • We are starting to do this with http://get.asp.net and we'll have the same for .NET Core alone, I'm sure.
  • Easy Hello World - It should be easy to create a basic app and build from there.
    • You can "dotnet new" and get hello world. Perhaps more someday?
  • Easy Compile and Run
    • Just "dotnet run" and it compiles AND executes
  • Real .NET
    • Fast, scalable, native speed when possible, reliable

I've been exploring the (very early but promising) work at https://github.com/dotnet/cli that will ship next year sometime.

IMPORTANT NOTE: This toolchain is [today] independent from the DNX-based .NET Core + ASP.NET 5 RC bits. If you are looking for .NET Core + ASP.NET 5 RC bits, you can find instructions at http://get.asp.net/.

Once I installed the "dotnet" cli, I can do this:

>dotnet new

>dotnet restore
>dotnet run

Imagine with me, when you combine this with the free Visual Studio Code editor which runs on Mac, Windows, and Linux, you've got a pretty interesting story. Open Source .NET that runs everywhere, easily.

Here is a longer command line prompt that includes me just typing "dotnet" at the top to get a sense of what's available.

C:\Users\Scott\Desktop\fabulous>dotnet

.NET Command Line Interface
Usage: dotnet [common-options] [command] [arguments]

Arguments:
[command] The command to execute
[arguments] Arguments to pass to the command

Common Options (passed before the command):
-v|--verbose Enable verbose output

Common Commands:
new Initialize a basic .NET project
restore Restore dependencies specified in the .NET project
compile Compiles a .NET project
publish Publishes a .NET project for deployment (including the runtime)
run Compiles and immediately executes a .NET project
repl Launch an interactive session (read, eval, print, loop)
pack Creates a NuGet package

C:\Users\Scott\Desktop\fabulous>dotnet new
Created new project in C:\Users\Scott\Desktop\fabulous.

C:\Users\Scott\Desktop\fabulous>dotnet restore
Microsoft .NET Development Utility CoreClr-x64-1.0.0-rc1-16231

CACHE https://www.myget.org/F/dotnet-core/api/v3/index.json
CACHE https://api.nuget.org/v3/index.json
Restoring packages for C:\Users\Scott\Desktop\fabulous\project.json
Writing lock file C:\Users\Scott\Desktop\fabulous\project.lock.json
Restore complete, 947ms elapsed

NuGet Config files used:
C:\Users\Scott\AppData\Roaming\NuGet\nuget.config
C:\Users\Scott\Desktop\nuget.config
C:\Users\Scott\Desktop\fabulous\nuget.config

Feeds used:
https://www.myget.org/F/dotnet-core/api/v3/flatcontainer/
https://api.nuget.org/v3-flatcontainer/

C:\Users\Scott\Desktop\fabulous>dotnet run
Hello World!

Note that I ran dotnet restore once before on another project, so the output was not very noisy this time.

Native Compilation of .NET applications

This is cool, but things get REALLY compelling when we consider native compilation. That literally means our EXE becomes a native executable for the platform, one that doesn't require any external dependencies. No .NET. It just runs, and it runs fast.

It's early days, and right now, per the repo, it's just hello world and a few samples. Essentially, when you do "dotnet compile" you get the EXE shown below, but it requires the .NET Core runtime and all the supporting libraries; it JITs when it runs, like the .NET you know and love.

.NET Core Compiled EXE

But if you "dotnet compile --native" you run it through the .NET Native chain and a larger EXE pops out. But that EXE is singular and native and just runs.

Native compiled .NET Core EXE

Again, early days, but hugely exciting. Here's the high-level engineering plan on GitHub that you can explore.

Related Projects

There are many .NET related projects on GitHub.


Sponsor: Big thanks to Redgate for sponsoring the feed this week! Have you got SQL fingers? Try SQL Prompt and you’ll be able to write, refactor, and reformat SQL effortlessly in SSMS and Visual Studio. Find out more with a 28 day free trial!



© 2016 Scott Hanselman. All rights reserved.
     
26 May 13:12

Using Redis as a Service in Azure to speed up ASP.NET applications

by Scott Hanselman

Microsoft Azure has a Redis Cache as a Service. There are a few tiers: Basic is a single cache node, and Standard is a complete replicated cache (two nodes, with automatic failover). Microsoft manages automatic replication between the two nodes and offers a high-availability SLA. The Premium tier can use up to half a terabyte of RAM and tens of thousands of client connections, and can be clustered and scaled out to even bigger units. Sure, I could manage my own Redis in my own VM if I wanted to, but this is SaaS (Software as a Service) that I don't have to think about - I just use it and the rest is handled.

I blogged about Redis on Azure last year but wanted to try it in a new scenario now, using it as a cache for ASP.NET web apps. There's also an interesting open source Redis Desktop Manager I wanted to try out. Another great GUI for Redis is Redsmin.

For small apps and sites I can make a Basic Redis Cache and get 250 megs. I made a Redis instance in Azure. It takes a minute or two to create. It's SSL by default. I can talk to it programmatically with something like StackExchange.Redis or ServiceStack.Redis or any of a LOT of other great client libraries.
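For instance, a minimal sketch using StackExchange.Redis; the host name echoes the config example later in this post, while the key is a placeholder and 6380 is the SSL port:

using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect(
    "hanselcache.redis.cache.windows.net:6380,password=THEKEY,ssl=True,abortConnect=False");
IDatabase db = redis.GetDatabase();
db.StringSet("greeting", "Hello from Azure Redis");   // simple key/value write
Console.WriteLine(db.StringGet("greeting"));           // and read it back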

However, there's now great support for caching and Redis in ASP.NET. There's a library called Microsoft.Web.RedisSessionStateProvider that I can get from NuGet:

Install-Package Microsoft.Web.RedisSessionStateProvider 

It uses the StackExchange library under the covers, but it enables ASP.NET to use the Session object and store the results in Redis, rather than in memory on the web server. Add this to your web.config:

<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
    <add name="MySessionStateStore"
         type="Microsoft.Web.Redis.RedisSessionStateProvider"
         host="hanselcache.redis.cache.windows.net"
         accessKey="THEKEY"
         ssl="true"
         port="1234" />
  </providers>
</sessionState>

Here's a string from ASP.NET Session stored in Redis as viewed in the Redis Desktop Manager. It's nice to use the provider as you don't need to change ANY code.

ASP.NET Session stored in a Redis Cache
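Existing Session code keeps working unchanged; only the backing store moves to Redis. A tiny illustration (the key name is made up):

// Somewhere in a controller or page:
Session["LastVisited"] = DateTime.UtcNow.ToString("o");   // written through to Redis by the provider
var lastVisited = (string)Session["LastVisited"];          // read back, possibly on a later request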

You can turn off SSL and connect to Azure Redis Cache over the open internet but you really should use SSL. There's instructions for using Redis Desktop Manager with SSL and Azure Redis. Note the part where you need a .pem file which is the Azure Redis Cache SSL public key. You can get that SSL key here as of this writing.

Not only can you use Redis for Session State, but you can also use it for a lightning fast Output Cache. That means caching full HTTP responses. Setting it up in ASP.NET 4.x is very similar to the Session State Provider:

Install-Package Microsoft.Web.RedisOutputCacheProvider 

Now when you use [OutputCache] attributes in MVC Controllers or OutputCache directives in Web Forms like <%@ OutputCache Duration="60" VaryByParam="*" %> the responses will be handled by Redis. With a little thought about how your query strings and URLs work, you can quickly take an app like a Product Catalog, for example, and make it 4x or 10x faster with caching. It's LOW effort and HIGH upside. I am consistently surprised, even in 2015, how often I see folks going to the database on EVERY HTTP request when the app's data freshness needs just don't require the perf hit.
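For example, a hypothetical MVC controller action using the attribute (the controller and action names are made up):

public class ProductsController : Controller
{
    // The rendered response is cached for 60 seconds, varying on all parameters,
    // and served by the Redis output cache provider configured above.
    [OutputCache(Duration = 60, VaryByParam = "*")]
    public ActionResult Catalog(string category)
    {
        ViewBag.RenderedAt = DateTime.UtcNow; // changes only when the cached entry expires
        return View();
    }
}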

You can work with Redis directly in code, of course. There's docs for .NET, Node.js, Java and Python on Azure. It's a pretty amazing project and having it be fully managed as a service is nice. From the Azure Redis site:

Perhaps you're interested in Redis but you don't want to run it on Azure, or perhaps even on Linux. You can run Redis via MSOpenTech's Redis on Windows fork. You can install it from NuGet, Chocolatey or download it directly from the project github repository. If you do get Redis for Windows (super easy with Chocolatey), you can use the redis-cli.exe at the command line to talk to the Azure Redis Cache as well (of course!).

It's easy to run a local Redis server with redis-server.exe, test it out in development, then change your app's Redis connection string when you deploy to Azure. Check it out. Within 30 min you may be able to configure your app to use a cache (Redis or otherwise) and see some really significant speed-up.


Sponsor: Big thanks to my friends at Octopus Deploy for sponsoring the feed this week. Build servers are great at compiling code and running tests, but not so great at deployment. When you find yourself knee-deep in custom scripts trying to make your build server do something it wasn't meant to, give Octopus Deploy a try.


© 2015 Scott Hanselman. All rights reserved.
     
29 Jan 07:01

WallabyJS is a slick and powerful test runner for JavaScript in your IDE or Editor

by Scott Hanselman

I was reminded by a friend to explore WallabyJS this week. I had looked at WallabyJS a while back when it was less mature but I hadn't installed a more recent version. WOW. It's coming along nicely and is super-powerful. You should check it out if you write JavaScript. It's also super fast, for these reasons:

Wallaby.js is insanely fast, because it only executes tests affected by your code changes and runs your tests in parallel.

WallabyJS has plugins for the IntelliJ platform, Visual Studio, Atom, and more recently, there's preview support for Visual Studio Code and Sublime Text support is coming soon.

It supports TypeScript, CoffeeScript, and ES7. Wallaby supports Jasmine for running tests, but you can plug in your own testing framework and assertion library as you like.

Installing WallabyJS for Visual Studio Code is very easy now that Code supports extensions.

Installing WallabyJS on Visual Studio Code

Once you've installed the extension it will download what's needed and bootstrap WallabyJS. I did have a small issue installing, but an uninstall/reinstall fixed it, so it may have been just a blip.

Visual Studio Code running WallabyJS

If you want to see it in action quickly without much setup, just clone their Calculator sample at

git clone https://github.com/wallabyjs/calculator-sample.git

Do note that it's not totally obvious once you've installed WallabyJS that you have to "start" its server manually...for now.

Starting WallabyJS

Once it has started, it's mostly automatic and runs tests as you type and save. You can access all WallabyJS's commands with hotkeys or from the Visual Studio Code command palette.

WallabyJS Commands in VS Code

It's great to see a powerful tool like this working in Visual Studio Code. Remember you can get VSCode (now open source!) for any platform at code.visualstudio.com, and you can get WallabyJS at their main site.


Sponsor: Big thanks to my friends at Redgate for sponsoring the feed this week.Check out their amazing FREE eBook! Discover 52 tips to improve your .NET performance: Our new eBook features dozens of tips and tricks to boost .NET performance. With contributions from .NET experts around the world, you’ll have a faster app in no time. Download your free copy.



© 2016 Scott Hanselman. All rights reserved.
     
28 Jan 06:53

Review: littleBits Gadgets and Gizmos electronics kits for STEM kids

by Scott Hanselman

I love posting about STEM (Science, Technology, Engineering, and Mathematics) and some of the great resources, products, and software that we can use to better prepare the next generation of little techies.

Here's some previous posts I've done on the topics of STEM, kids, programming, and learning with young people:

The 8 year old (recently 7, now barely 8) has been playing with littleBits lately and having a blast. He loved SnapCircuits so littleBits seemed like a reasonable, if slightly higher-level, option.

SnapCircuits boldly has kids as young as three or four creating circuitry from a simple light and switch all the way up to a solar-powered radio or a burglar/door alarm. It doesn't hide the complexities of volts and amps and includes low-level components like resistors. Frankly, I wish my first EE (Electrical Engineering) class in college was taught with SnapCircuits.

LittleBits (usually a lowercase L) jumps up a layer of abstraction and includes motors, motion detectors, LED arrays, and lots more. There are also specific kits for specific interests like a littleBits Musical Electronics Synth Kit and a littleBits Smart Home Kit that include specific littleBits that extend the base kit.

littleBits1

The key to littleBits is their magic magnet that makes it basically impossible to do something wrong or hurt yourself. The genius here is that the magnet only goes one way (because: magnets) and the underlying connector transmits both power and data.

You start with a power bit, then add an "if" statement like a switch, then move to a "do" statement like a motor or light or whatever. In just about 20 minutes my 8 year old was able to take a LEGO custom Star Wars Blaster and add totally new functionality like lights and sounds.

The 8 year old wanted to show his Star Wars Blaster/Fan combo made with @littlebits #video



One of the aspects of littleBits that I think is powerful but that wasn't immediately obvious to me is that you shouldn't be afraid to use glue or more permanent attachments with your projects. I initially tried to attach littleBits with rubber bands and strings but realized that they'd smartly included "glue dots" and Velcro as well as 3M adhesive pads. Once we stopped being "afraid" to use these stickers and adhesives, suddenly little projects became semi-permanent technical art installations.

We got the "Gizmos & Gadgets" kit, which is a little spendy, but it includes 15 bits that enable you to do basically anything. The instructions are great, and we had a remote-controlled robot driving around the room within an hour. It's a great setup, a fun kit, and something that kids 8-14 will use all the time.

Here are some fantastic examples of other Star Wars related littleBits projects for you to explore:

*Amazon links are referral links on my blog. Click them and share them to support the blog and the work I do, writing this blog on my own time. Thanks!


Sponsor: Big thanks to Wiwet for sponsoring the feed this week. Build responsive ASP.NET web apps quickly and easily using C# or VB for any device in 1 minute. Wiwet ASP.Net templates are integrated into Visual Studio for ease of use. Get them now at Wiwet.com.



© 2016 Scott Hanselman. All rights reserved.
     
28 Jan 06:50

ASP.NET 5 is dead - Introducing ASP.NET Core 1.0 and .NET Core 1.0

by Scott Hanselman

Naming is hard.

There are only two hard things in Computer Science: cache invalidation and naming things. - Phil Karlton

It's very easy to armchair quarterback and say that "they should have named it Foo and it would be easy," but very often there are many players involved in naming things. ASP.NET is a good 'brand' that's been around for 15 years or so. ASP.NET 4.6 is a supported and released product that you can get and use now from http://get.asp.net.

UPDATE NOTE: This blog post is announcing this change. It's not done or released yet. As of the date/time of this writing, this work is just starting. It will be ongoing over the next few months.

However, naming the new, completely written from scratch ASP.NET framework "ASP.NET 5" was a bad idea for one major reason: 5 > 4.6 makes it seem like ASP.NET 5 is bigger, better, and replaces ASP.NET 4.6. Not so.

So we're changing the name and picking a better version number.

Reintroducing ASP.NET Core 1.0 and .NET Core 1.0

  • ASP.NET 5 is now ASP.NET Core 1.0.
  • .NET Core 5 is now .NET Core 1.0.
  • Entity Framework 7 is now Entity Framework Core 1.0 or EF Core 1.0 colloquially.

Why 1.0? Because these are new. The whole .NET Core concept is new. The .NET Core 1.0 CLI is very new. Not only that, but .NET Core isn't as complete as the full .NET Framework 4.6. We're still exploring server-side graphics libraries. We're still exploring gaps between ASP.NET 4.6 and ASP.NET Core 1.0.

ASP.NET Core 1.0

Which to choose?

To be clear, ASP.NET 4.6 is the more mature platform. It's battle-tested and released and available today. ASP.NET Core 1.0 is a 1.0 release that includes Web API and MVC but doesn't yet have SignalR or Web Pages. It doesn't yet support VB or F#. It will have these subsystems some day but not today.

We don't want anyone to think that ASP.NET Core 1.0 is the finish line. It's a new beginning and a fork in the road, but ASP.NET 4.6 continues on, released and fully supported. There's lots of great stuff coming, stay tuned!


Sponsor: Big thanks to Wiwet for sponsoring the feed this week. Build responsive ASP.NET web apps quickly and easily using C# or VB for any device in 1 minute. Wiwet ASP.Net templates are integrated into Visual Studio for ease of use. Get them now at Wiwet.com.



© 2016 Scott Hanselman. All rights reserved.
     
15 Jan 09:55

Automatic code formatter released to GitHub

by Immo Landwerth [MSFT]

Code formatting can be a huge pain point. In this post, Jared Parsons, software developer on the Roslyn team, showcases a tool that takes care of it.

We have just released the code formatting tool we use to automatically format our code to the prescribed coding guidelines. This tool is based on Roslyn and will work on any C# project.

We strongly believe that having a consistent style greatly increases the readability and maintainability of a code base. The individual style decisions themselves are subjective; the key is the consistency of the decisions. As such, the .NET team has a set of style guidelines for all C# code written by the team.

Unfortunately these style decisions weren't defined from the beginning of .NET and as a result a good number of legacy code bases didn't follow them. A coding style loses its benefits if it's not followed throughout the project and many of these non-conformant code bases represented some of our most core BCL projects.

This was a long standing issue on the team that just never quite met the bar for fixing. Why waste a developer's time editing thousands and thousands of C# files when there are all of these real bugs to fix?

This changed when the decision was made to open source the code. The internal team was used to the inconsistencies but did we want to force it on our developer community? What would our contribution guidelines look like in a world where some of our most core libraries had a different style than the one we wanted to recommend? That didn't seem like a great story for customers and we decided to fix it with tooling.

Luckily the Roslyn project presented us with the tools to fix this problem. Its core features include a full fidelity parse tree and a set of APIs to safely manipulate it syntactically or semantically. This is exactly what we needed to migrate all of that legacy source into a single unified style.

The tool itself is very simple to use; point it at a project or solution and it will systematically convert all of the code involved into the prescribed coding style. The process is very fast, taking only a few seconds for most projects and up to a couple of minutes for very large ones. It can even be run repeatedly on them to ensure that a consistent style is maintained over the course of time.

Take for example how easy it was to fix up the Regex code base which has been around since 1.0:

codeformatter.exe System.Text.RegularExpressions.csproj

That took code with old standards and documentation like the following from Regex.cs:

/// <devdoc>
/// Returns
/// the GroupNameCollection for the regular expression. This collection contains the
/// set of strings used to name capturing groups in the expression.
/// </devdoc>
public String[] GetGroupNames() {
    String[] result;

    if (capslist == null) {
        int max = capsize;
        result = new String[max];

        for (int i = 0; i < max; i++) {
            result[i] = Convert.ToString(i, CultureInfo.InvariantCulture);
        }
    }
    else {
        result = new String[capslist.Length];

        System.Array.Copy(capslist, 0, result, 0, capslist.Length);
    }

    return result;
}

And converted it to the following:

/// <summary>
/// Returns the GroupNameCollection for the regular expression. This collection contains the
/// set of strings used to name capturing groups in the expression.
/// </summary>
public String[] GetGroupNames()
{
    String[] result;

    if (_capslist == null)
    {
        int max = _capsize;
        result = new String[max];

        for (int i = 0; i < max; i++)
        {
            result[i] = Convert.ToString(i, CultureInfo.InvariantCulture);
        }
    }
    else
    {
        result = new String[_capslist.Length];

        System.Array.Copy(_capslist, 0, result, 0, _capslist.Length);
    }

    return result;
}

Right now the tool is a library with a simple command line wrapper. Going forward we will be looking to support a number of other scenarios including working inside of Visual Studio. Why should developers ever think about mindless style edits when the tool can just fix up the style differences as you code?

Note: This tool has the potential to cause a lot of churn on a code base. Best practice would be to communicate the desire to update the style with the project owner before sending them a large style PR.

Summary

This tool is now on GitHub in the codeformatter repo. You can download precompiled binaries as well. We appreciate any feedback from the community or even better some contributions!

08 Jan 10:17

Best practices for private config data and connection strings in configuration in ASP.NET and Azure

by Scott Hanselman

A reader emailed asking how to avoid accidentally checking in passwords and other sensitive data into GitHub or source control in general. I think it's fair to say that we've all done this once or twice - it's a rite of passage for developers old and new.

The simplest way to avoid checking in passwords and/or connection strings into source control is to (no joke) keep passwords and connection strings out of your source.

Sounds condescending or funny, but it's not, it's true. You can't check in what doesn't exist on disk.

That said, sometimes you just need to mark a file as "ignored," meaning it's not under source control. For some systems that involves externalizing configuration values that may be in shared config files with a bunch of non-sensitive config data.

ASP.NET 4.6 secrets and connection strings

Just to be clear, how "secret" something is is up to you. If it's truly cryptographically secret or something like a private key, you should be looking at data protection systems or a Key Vault like Azure Key Vault. Here we are talking about medium business impact web apps with API keys for 3rd party web APIs and connection strings that can live in memory for short periods. Be smart.

ASP.NET 4.6 has web.config XML files like this with name/value pairs.

<appSettings>
  <add key="name" value="someValue" />
  <add key="name" value="someSECRETValue" />
</appSettings>

We don't want secrets in there! Instead, move them out like this:

<appSettings file="Web.SECRETS.config">
  <add key="name" value="someValue" />
</appSettings>

Then you just put another appSettings section in that web.secrets.config file and it gets merged at runtime.
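The externalized file is then just another appSettings section; for illustration, a hypothetical Web.SECRETS.config (the key name is made up):

<appSettings>
  <add key="twitterApiKey" value="someSECRETValue" />
</appSettings>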

NOTE: It's worth pointing out that the AppSettings technique also works for Console apps with an app.config.

Finally, be sure to add Web.SECRETS.config to your source control ignore file (or, even better, make it *.secrets and use a unique extension to identify your sensitive config files) so that it never gets checked in.

This externalizing of config also works with the <connectionStrings> section, except you use the configSource attribute like this:

<connectionStrings configSource="secretConnectionStrings.config">
</connectionStrings>

Connection Strings/App Secrets in Azure

When you're deploying a web app to Azure (as often these apps are deployed from source/GitHub, etc) you should NEVER put your connection strings or appSettings in web.config or hard code them.

Instead, always use the Application Settings configuration section of Web Apps in Azure.

Application Settings and Secrets in Azure

These connection strings and name/value pairs will automatically be made available transparently to your website, so you don't need to change any ASP.NET code. Consider them to have a narrower scope than what's in web.config; the system will merge the sets automatically.

Additionally, they are made available as environment variables, so you can call Environment.GetEnvironmentVariable("APPSETTING_yourkey") as well. This works in any web framework, not just ASP.NET, so in PHP you just call getenv('APPSETTING_yourkey').

The full list of database connection string types and the prepended string used for environment variables is below:

  • If you select “Sql Databases”, the prepended string is “SQLAZURECONNSTR_”
  • If you select “SQL Server” the prepended string is “SQLCONNSTR_”
  • If you select “MySQL” the prepended string is “MYSQLCONNSTR_”
  • If you select “Custom” the prepended string is “CUSTOMCONNSTR_”
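Putting the prefixes above to use, here is a hypothetical read of a "Custom" connection string named MyDb configured in the portal:

// Works from any framework running in the Web App; the portal injects the variable.
string connectionString = Environment.GetEnvironmentVariable("CUSTOMCONNSTR_MyDb");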

ASP.NET 5

ASP.NET 5 has the concept of User Secrets or User-Level Secrets where the key/value pair does exist in a file BUT that file isn't in your project folder, it's stored in your OS user profile folder. That way there's no chance it'll get checked into source control. There's a secret manager (it's all beta so expect it to change) where you can set name/value pairs.

ASP.NET also has very flexible scoping rules in code. You can have an appSettings, then an environment-specific (dev, test, staging, prod) appSettings, then User Secrets, and then environment variables. All of this is done via code configuration and is, as I mentioned, deeply flexible. If you don't like it, you can change it.

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

if (env.IsDevelopment())
{
    // For more details on using the user secret store see http://go.microsoft.com/fwlink/?LinkID=532709
    builder.AddUserSecrets();
}

builder.AddEnvironmentVariables();
Configuration = builder.Build();

So, in conclusion:

  • Don't put private stuff in code.
    • Seems obvious, but...
  • Avoid putting private stuff in common config files
    • Externalize them AND ignore the externalized file so they don't get checked in
  • Consider using Environment Variables or User-level config options.
    • Keep sensitive config out of your project folder at development time

I'm sure I missed something. What are YOUR tips, Dear Reader?

Resources

Image Copyright Shea Parikh - used under license from http://getcolorstock.com


Sponsor: Big thanks to Infragistics for sponsoring the blog this week! Responsive web design on any browser, any platform and any device with Infragistics jQuery/HTML5 Controls.  Get super-charged performance with the world’s fastest HTML5 Grid - Download for free now!



© 2016 Scott Hanselman. All rights reserved.
     
07 Jan 08:01

Audi strengthens its partnership with French start-up Global Bioenergies

by Elisabeth Studer
Audi announced earlier this week that it has strengthened its biofuels partnership with Global Bioenergies. A new agreement has been signed around the bio-sourced gasoline production technology developed by the young start-up. The two companies have been working together since 2014 on the development of a fuel, isooctane, produced by Global Bioenergies […]
03 Jan 09:53

Zopfli Optimization: Literally Free Bandwidth

by Jeff Atwood

In 2007 I wrote about using PNGout to produce amazingly small PNG images. I still refer to this topic frequently, as seven years later, the average PNG I encounter on the Internet is very unlikely to be optimized.

For example, consider this recent Perry Bible Fellowship cartoon.

Saved directly from the PBF website, this comic is an 800 × 1412, 32-bit color PNG image of 671,012 bytes. Let's save it in a few different formats to get an idea of how much space this image could take up:

BMP 24-bit 3,388,854
BMP 8-bit 1,130,678
GIF 8-bit, no dither 147,290
GIF 8-bit, max dither 283,162
PNG 32-bit 671,012

PNG is a win because like GIF, it has built-in compression, but unlike GIF, you aren't limited to cruddy 8-bit, 256 color images. Now what happens when we apply PNGout to this image?

Default PNG 671,012
PNGout 623,859 7%

Take any random PNG of unknown provenance, apply PNGout, and you're likely to see around a 10% file size savings, possibly a lot more. Remember, this is lossless compression. The output is identical. It's a smaller file to send over the wire, and the smaller the file, the faster the decompression. This is free bandwidth, people! It doesn't get much better than this!

Except when it does.

In 2013 Google introduced a new, fully backwards compatible method of compression they call Zopfli.

The output generated by Zopfli is typically 3–8% smaller compared to zlib at maximum compression, and we believe that Zopfli represents the state of the art in Deflate-compatible compression. Zopfli is written in C for portability. It is a compression-only library; existing software can decompress the data. Zopfli is bit-stream compatible with compression used in gzip, Zip, PNG, HTTP requests, and others.

I apologize for being super late to this party, but let's test this bold claim. What happens to our PBF comic?

Default PNG 671,012
PNGout 623,859 7%
ZopfliPNG 585,117 13%

Looking good. But that's just one image. We're big fans of Emoji at Discourse, let's try it on the original first release of the Emoji One emoji set – that's a complete set of 842 64×64 PNG files in 32-bit color:

Default PNG 2,328,243
PNGout 1,969,973 15%
ZopfliPNG 1,698,322 27%

Wow. Sign me up for some of that.

In my testing, Zopfli reliably produces 3 to 8 percent smaller PNG images than even the mighty PNGout, which is an incredible feat. Furthermore, any standard gzip compressed resource can benefit from Zopfli's improved deflate, such as jQuery:

Or the standard compression corpus tests:

             gzip -9    kzip     Zopfli
Alexa 10k    128mb      125mb    124mb
Calgary      1017kb     979kb    975kb
Canterbury   731kb      674kb    670kb
enwik8       36mb       35mb     35mb

(Oddly enough, I had not heard of kzip – turns out that's our old friend Ken Silverman popping up again, probably using the same compression bag of tricks from his PNGout utility.)

But there is a catch, because there's always a catch – it's also 80 times slower. No, that's not a typo. Yes, you read that right.

gzip -9                    5.6s
7-zip -mm=Deflate -mx=9    128s
kzip                       336s
Zopfli                     454s

Gzip compression is faster than it looks in the above comparison, because level 9 is a bit slow for what it does:

Time Size
gzip -1 11.5s 40.6%
gzip -2 12.0s 39.9%
gzip -3 13.7s 39.3%
gzip -4 15.1s 38.2%
gzip -5 18.4s 37.5%
gzip -6 24.5s 37.2%
gzip -7 29.4s 37.1%
gzip -8 45.5s 37.1%
gzip -9 66.9s 37.0%

You decide if that whopping 0.1% compression ratio difference between gzip -7 and gzip -9 is worth the doubling in CPU time. In related news, this is why pretty much every compression tool's so-called "Ultra" compression level or mode is generally a bad idea. You fall off an algorithmic cliff pretty fast, so stick with the middle or the optimal part of the curve, which tends to be the default compression level. They do pick those defaults for a reason.

PNGout was not exactly fast to begin with, so imagining something that's 80 times slower (at best!) to compress an image or a file is definite cause for concern. You may not notice on small images, but try running either on a larger PNG and it's basically time to go get a sandwich. Or if you have a multi-core CPU, 4 to 16 sandwiches. This is why applying Zopfli to user-uploaded images might not be the greatest idea, because the first server to try Zopfli-ing a 10k × 10k PNG image is in for a hell of a surprise.

However, remember that decompression is still the same speed, and totally safe. This means you probably only want to use Zopfli on pre-compiled resources, which are designed to be compressed once and downloaded millions of times – rather than a bunch of PNG images your users uploaded which may only be viewed a few hundred or thousand times at best, regardless of how optimized the images happen to be.

For example, at Discourse we have a default avatar renderer which produces nice looking PNG avatars for users based on the first letter of their username, plus a color scheme selected via the hash of their username. Oh yes, and the very nice Roboto open source font from Google.

We spent a lot of time optimizing the output avatar images, because these avatars can be served millions of times, and pre-rendering the whole lot of them, given the constraints of …

  • 10 numbers
  • 26 letters
  • ~250 color schemes
  • ~5 sizes

… isn't unreasonable at around 45,000 unique files. We also have a centralized https CDN we set up to serve avatars (if desired) across all Discourse instances, to further reduce load and increase cache hits.

Because these images stick to shades of one color, I reduced the color palette to 8-bit (actually 128 colors) to save space, and of course we run PNGout on the resulting files. They're about as tiny as you can get. When I ran Zopfli on the above avatars, I was super excited to see my expected 3 to 8 percent free file size reduction, and after the console commands ran, I saw that it saved … 1 byte, 5 bytes, and 2 bytes respectively. Cue sad trombone.

(Yes, it is technically possible to produce strange "lossy" PNG images, but I think that's counter to the spirit of PNG which is designed for lossless images. If you want lossy images, go with JPG or another lossy format.)

The great thing about Zopfli is that, assuming you are OK with the extreme up front CPU demands, it is a "set it and forget it" optimization step that can apply anywhere and will never hurt you. Well, other than possibly burning a lot of spare CPU cycles.

If you work on a project that serves compressed assets, take a close look at Zopfli. It's not a silver bullet – as with all advice, run the tests on your files and see – but it's about as close as it gets to literally free bandwidth in our line of work.

[advertisement] Find a better job the Stack Overflow way - what you need when you need it, no spam, and no scams.
28 Dec 13:41

Do autonomous cars respect the rules of the road too much?

by Pierre-Laurent Ribault
As the kilometers pile up for autonomous vehicle prototypes, experience is accumulating too, and an unexpected trend has appeared, to the great dismay of the engineers who write the software that drives these driverless cars: the number of accidents they are involved in is higher than for other vehicles, but they are never […]
24 Dec 07:47

The first website celebrates its 25th anniversary

by webmaster@futura-sciences.com (Futura-Sciences)
A quarter of a century ago, Tim Berners-Lee, the "father" of the World Wide Web, put the very first website online. Twenty-five years later, we are approaching one billion websites, and more than three billion Internet users have been counted.
21 Dec 06:52

Your Star Wars spoiler zone: Ars fully discusses The [REDACTED] Awakens

by Ars Staff

In some galaxies far, far away, it'd be a bad idea for a reputable news outlet to dedicate an entire article to spoiling and excavating the secrets of a four-day-old movie. But not this one. Star Wars: The Force Awakens will likely cross a record-smashing $245 million threshold for opening weekend numbers—meaning, many of you have likely seen the film. (Heck, you might already be quoting it.)

As of today, most of Ars' staff has seen the film in our respective cities, as well—catching up with our very lucky Episode VII critic Tiffany Kelly—and we have lots of thoughts to offer on the other side of the veritable awakening. We're going full spoiler on this one; the first blurb, which you can see below on an average computer monitor, is kinda-sorta spoiler-free, in case you clicked on this like a real masochist, but this page has been organized from "least spoiled" to "most spoiled," so the lower you scroll, the deeper you'll get.

We're not kidding. Lotsa spoilers below. You've been so warned.


19 Dec 09:29

C# 6 Feature Review: Expression-Bodied Function Members

by Jimmy Bogard

In the last post, I looked at auto-property enhancements, with several comments pointing out some nicer usages. I recently went through the HtmlTags codebase, C# 6-ifying “all the things”, and auto-properties and expression-bodied function members were used pretty much everywhere. This is largely a result of the codebase being quite tightly defined, with small objects and methods doing one thing well.

Expression-bodied function members can work for both methods and properties. If you have a method with one statement, and that statement can be represented as an expression:

public string GetFullName() {
  return FirstName + " " + LastName;
}

Or a getter-only property/indexer with a body that can be represented as an expression:

public string FormalName {
  get { return FirstName[0] + ". " + LastName; }
}

You can collapse those down into something that looks like a cross between a member declaration and a lambda expression:

public string GetFullName() => FirstName + " " + LastName;

public string FormalName => FirstName[0] + ". " + LastName;

This seems to work really well when the expression can fit neatly on one line. In my refactorings, I did have places where it didn’t look so hot:

public Accessor Prepend(PropertyInfo property)
{
  return
    new PropertyChain(new IValueGetter[] {
     new PropertyValueGetter(property), 
     new PropertyValueGetter(InnerProperty)});
}

After:

public Accessor Prepend(PropertyInfo property) => 
    new PropertyChain(new IValueGetter[] {
     new PropertyValueGetter(property), 
     new PropertyValueGetter(InnerProperty)});

If the original expression looked better formatted out on multiple lines, the new expression-bodied way will look a bit weirder, as you don’t have that initial curly brace. Refactoring tools try to put everything on one line, making it pretty difficult to understand what’s going on. However, some code refactored down very nicely:

public class ArrayIndexer : Accessor
{
  private readonly IndexerValueGetter _getter;

  public ArrayIndexer(IndexerValueGetter getter)
  {
    _getter = getter;
  }

  public string FieldName => _getter.Name;
  public Type PropertyType => _getter.ValueType;
  public PropertyInfo InnerProperty => null;
  public Type DeclaringType => _getter.DeclaringType;
  public string Name => _getter.Name;
  public Type OwnerType => DeclaringType;

  public void SetValue(object target, object propertyValue) => _getter.SetValue(target, propertyValue);

  public object GetValue(object target) => _getter.GetValue(target);
}

It was a bit of work to go through the entire codebase, so I wouldn’t recommend that approach for actual production apps. However, it’s worth it if you’re already looking at some code and want to clean it up.

Verdict: Useful often; refactor code as needed going forward


19 Dec 09:24

C# 6 Feature Review: Auto-Property Enhancements

by Jimmy Bogard

With the release of Visual Studio 2015 came the (final) release of the Roslyn C# compiler and C# 6. This latest version of C#’s feature list seems to be…less than exciting, but it’s important to keep in mind that before Roslyn, none of these new features would have ever made it into a release. It was simply too hard to add a feature in C#, so higher impact/value features made it in while minor annoyances/enhancements would be deferred, indefinitely.

I’m less concerned about the new features themselves than how these features would actually enhance my code. With ReSharper, it’s pretty easy to see what the impact of these new features would be.

In this series, I’ll be looking at a long-lived codebase, AutoMapper, and walk through features of C#, their potential utility/impact, and when you should use them. First up are the auto-property enhancements.

Initializers for auto-properties

The auto-property enhancements mainly center around making fields and properties on an even playing field. In a lot of my AutoMapper code, I have field initializers:

public class TypeMap {
  private readonly IList<Action<object, object>> _afterMapActions
    = new List<Action<object, object>>();
  private readonly IList<Action<object, object>> _beforeMapActions
    = new List<Action<object, object>>();
  private readonly ThreadSafeList<PropertyMap> _propertyMaps
    = new ThreadSafeList<PropertyMap>();

Property initializers would instead look like this:

public class TypeMap {
  private IList<Action<object, object>> AfterMapActions { get; }
    = new List<Action<object, object>>();
  private IList<Action<object, object>> BeforeMapActions { get; }
    = new List<Action<object, object>>();
  private ThreadSafeList<PropertyMap> PropertyMaps { get; }
    = new ThreadSafeList<PropertyMap>();

It looks…rather strange to me. The only places in my code where I could find a use for auto-property initializers were places where I already had private fields in place. And I'm not terribly keen on converting private fields into private properties. Properties are my window into exposing state to others; otherwise I use private fields to encapsulate state. And no, I'm not one of those weirdos that leaves off access modifiers, waaaaaay too easy to screw up.

Verdict: Use sparingly

Getter-only automatic properties

One pattern I found all over the place in AutoMapper are immutable properties. In C# 5 and earlier, this involved creating a private readonly field and a get-only property. I have this all over the place:

public TypePair(Type sourceType, Type destinationType)
{
  _sourceType = sourceType;
  _destinationType = destinationType;
}

private readonly Type _sourceType;
private readonly Type _destinationType;

public Type SourceType
{
  get { return _sourceType; }
}

public Type DestinationType
{
  get { return _destinationType; }
}

Not *horrible* but it would be nice to encapsulate the idiom of an immutable property in the language. This is exactly what getter-only automatic properties do:

public TypePair(Type sourceType, Type destinationType)
{
  SourceType = sourceType;
  DestinationType = destinationType;
}

public Type SourceType { get; }
public Type DestinationType { get; }

I’m only allowed to set the property in a constructor or initializer, and cannot set it anywhere else inside or outside my class. It has the same effect as a readonly field, just encapsulated. I use this pattern now all over the place.

Verdict: Adopt immediately and replace obsolete pattern


15 Dec 20:31

Announcing TypeScript 1.7

by Gaurav Seth [MSFT]

Today, we are thrilled to announce the release of TypeScript 1.7 along with the availability of Visual Studio 2015 Update 1. This release enables async/await by default for ECMAScript 6 (ES6) targets. It also adds support for polymorphic 'this' typing, proposed ECMAScript 2016 exponentiation syntax, and ES6 module targeting. For a complete change list, check out our roadmap on GitHub.

As always, you can get your hands on TypeScript 1.7 for Visual Studio 2015 Update 1, Visual Studio 2013, on npm, or straight from the source.

Async/Await for ES6 targets

With the 1.7 release, TypeScript now supports async functions for targets that have ES6 generator support enabled (e.g. node.js v4 and above). Functions can now be prefixed with the async keyword, designating them as asynchronous functions. The await keyword can then be used to suspend execution until an async function's promise is fulfilled. Following is a simple example:

"use strict";
// printDelayed is a 'Promise<void>'
async function printDelayed(elements: string[]) {
    for (const element of elements) {
        await delay(200);
        console.log(element);
    }
}
 
async function delay(milliseconds: number) {
    return new Promise<void>(resolve => {
        setTimeout(resolve, milliseconds);
    });
}
 
printDelayed(["Hello", "beautiful", "asynchronous", "world"]).then(() => {
    console.log();
    console.log("Printed every element!");
});

We are working on bringing async/await support in TypeScript for other targets, including a breadth of browsers, which might not have ES6 generators support. For more information on current implementation of async/await and how to use it, see our previous blog post.

Polymorphic this Typing

After much community discussion and feedback, TypeScript 1.7 adds a new polymorphic this type. A this type can be used in classes and interfaces to represent some type that is a subtype of the containing type (rather than the containing type itself). This feature makes patterns such as hierarchical fluent APIs much easier to express.

interface Model {
    setupBase(): this;
}
 
interface AdvancedModel extends Model {
    setupAdvanced(): this;
}
 
declare function createModel(): AdvancedModel;

let newModel = createModel();
newModel = newModel.setupBase().setupAdvanced(); // fluent style works

For a deep dive on this typing, check out the TypeScript Wiki.

As part of supporting the feature, TypeScript 1.7 has changed how the type of this is inferred. In a class, the value this is now inferred to have the polymorphic this type, so subsequent assignments to it from values of the original class type can fail. As a workaround, you can add an explicit type annotation. A code sample with the recommended workaround, along with a list of other potentially breaking changes, is available on GitHub.
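To make the change concrete, here is a minimal sketch (a hypothetical class, not taken from the TypeScript documentation) of the kind of code this affects, together with the suggested annotation workaround:

class Counter {
    count = 0;

    merge(other: Counter) {
        // 'best' is now inferred as the polymorphic 'this' type, so assigning
        // a plain Counter to it fails after this change:
        let best = this;
        // best = other;              // error: 'Counter' is not assignable to 'this'

        // Workaround: annotate the variable with the class type explicitly.
        let bestAnnotated: Counter = this;
        bestAnnotated = other;        // OK
        return bestAnnotated;
    }
}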

ES6 Module Emitting

TypeScript 1.7 adds es6 to the list of options available for the --module flag and allows you to specify the module output when targeting ES6. This provides more flexibility to target exactly the features you want in specific runtimes. For example, it is now a breeze to target Node.js v4 and beyond, which doesn't support ES6 modules (but does support several other ES6 features).

//tsconfig.json targeting node.js v4 and beyond
{
    "compilerOptions": {
        "module": "commonjs",
        "target": "es6"
    }
}
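And when you do want ES6 module output for a runtime or bundler that understands it, here is a small sketch (file names are hypothetical) of code whose import/export syntax --module es6 preserves in the emitted JavaScript instead of rewriting it to CommonJS:

// math.ts (compiled with --module es6 --target es6)
export function square(x: number) {
    return x * x;
}

// main.ts: the import statement below survives into the emitted output.
import { square } from "./math";
console.log(square(3));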

ES7 Exponentiation

Finally, a little syntactic sugar! The ECMAScript committee recently moved the Exponentiation Operator proposal to stage 3. So we decided it was ready for TypeScript to adopt, and added support for it in TypeScript 1.7.

let squared = 2 ** 2;  // same as: 2 * 2
let cubed = 2 ** 3;  // same as: 2 * 2 * 2
let num = 2;
num **= 2; // same as: num = num * num;

Say goodbye to Math.pow()!

What's Next?

We are excited to announce all the new improvements we've made in this release, and as always, we would love to hear your feedback. Everything we do is easily viewable on our Github. If you're interested in weighing in on the future of TypeScript, we encourage you to check out our existing issues, throw us a pull request, or just come hang out with the team on gitter.

15 Dec 20:20

What about Async/Await?

by Polita

We’ve heard your feedback that you’re excited about async/await in TypeScript. Async/await allows developers to write asynchronous code flows as if they were synchronous, removing the need for registering event handlers or writing separate callback functions. You may have seen similar patterns in C#. TypeScript’s async/await pattern makes use of Promises, much like C#’s async/await pattern leverages Tasks. Promises are objects that represent ongoing asynchronous actions and are a built-in feature of ECMAScript 6 (ES6). TypeScript’s async/await is implemented as proposed for ES2016 (aka ES7).

We’re happy to announce that you can already use async/await today if you’re targeting Node.js v4 or later! In this post, we’ll show you how and give you an update on async/await’s progress.

How async/await works

JavaScript is single-threaded and sequential: once your function starts running, it can’t be interrupted before it runs to completion. For most tasks, this is exactly what the developer expects and wants. However, when an asynchronous task (such as a call to a web service) is running, it’s more efficient to allow the rest of the JavaScript to continue running while you wait for the task to return. Async/await allows you to call asynchronous methods much the same way you’d call a synchronous method, but without blocking for the asynchronous operations to complete.

For example, in the code below, main awaits on the result of the asynchronous function ping. Because main awaits, it’s declared as an async function. The ping function awaits on the delay function in a loop, so it’s declared as async as well. The delay function uses setTimeout to return a Promise that resolves after a certain amount of time. When that promise resolves, you’ll see ‘ping’ on the console.

async function main() {
 await ping();
}

async function ping() {
 for (var i = 0; i < 10; i++) {
  await delay(300);
  console.log("ping");
 }
}

function delay(ms: number) {
 return new Promise(resolve => setTimeout(resolve, ms));
}

main();

 

TypeScript uses ES6 generators to implement the ability to re-enter a function at a given point when an asynchronous call returns. Generators use the yield keyword to tell the JavaScript runtime when control is being given up while a function waits for something to happen. When that something happens, the JavaScript runtime then continues the function execution from where control was yielded. For the sample above, the TypeScript compiler emits the below ES6 JavaScript for the ping function.

function ping() {
 return __awaiter(this, void 0, Promise, function* () {
  for (var i = 0; i < 10; i++) {
   yield delay(300);
   console.log("ping");
  }
 });
}

The __awaiter function wraps the function body, including the yield statement, in a Promise that executes the function as a generator.

Trying it out with Node.js

Starting with the nightly builds, TypeScript 1.7 now supports async/await for ES6 targets. You can install the latest nightly build of TypeScript using npm install typescript@next and try it with Node.js v4 or beyond, which has support for ES6 generators. Here’s what your tsconfig.json would look like:

"compilerOptions": {
  "target": "ES6",
  "module": "commonjs"
}

The compiled JavaScript output can then run in Node.js.

If you’re targeting Node.js v4 or later, try out async/await today. We’ve created a more complex sample using the GitHub API to asynchronously retrieve history on a repo. You can find the source on the TypeScriptSamples repo and run it in Node.js. We’d love to hear your feedback and if you find issues, please let us know. One other thing to keep in mind: Node.js doesn’t yet support ES6 modules, so choose CommonJS module output when compiling your TypeScript, as indicated in the tsconfig.json above.

Next steps

We know that many TypeScript developers want to write async/await code for browsers as well as for Node.js, and understand that a temporary solution based on an additional transpilation layer is not optimal, either from a developer workflow perspective or due to the performance impact of the extra compilation overhead. To target the breadth of browsers, we need to rewrite ES6 generator functions into ES5-executable JavaScript using a state machine. It’s a big challenge that requires significant changes across the compiler, but we’re working on it. Stay tuned; we’ll keep you up to date on our progress!
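To make the idea concrete, here is a purely illustrative, hand-written sketch (not actual compiler output) of how the ping loop from the sample above could be expressed as an ES5-friendly state machine driven by promise callbacks; it reuses the delay function defined earlier:

// Illustrative only: a hand-rolled state machine equivalent of the ping loop.
// Each call to step() resumes the logic where the previous "await" left off.
function pingStateMachine(): Promise<void> {
    var i = 0;
    var state = 0;
    return new Promise<void>(function (resolve) {
        function step() {
            switch (state) {
                case 0: // top of the loop: check the condition, then "await"
                    if (i >= 10) { resolve(); return; }
                    state = 1;
                    delay(300).then(step);
                    return;
                case 1: // the code that runs after the await completes
                    console.log("ping");
                    i++;
                    state = 0;
                    step();
                    return;
            }
        }
        step();
    });
}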

 

15 Dec 19:20

Announcing TypeScript 1.6

by Jonathan Turner [MS]

Today, we're happy to announce the release of TypeScript 1.6.  This release adds support for React/JSX, class expressions, and a rich set of new capabilities in the type system. It also provides stricter type checking for object literals.

You can download TypeScript 1.6 for Visual Studio 2015, Visual Studio 2013, on npm, or as source.

React/JSX

Designed with feedback from React experts and the React team, we've built full-featured support for React typing and for the JSX syntax React uses.  Below, you can see TypeScript code happily coexisting with JSX syntax within a single file with the new .tsx extension. This allows React developers to intermingle HTML-like syntax with TypeScript code.
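The original post illustrated this with a screenshot; as a rough sketch (the component and prop names are made up, and it assumes the React typings are installed and the --jsx option is enabled), a .tsx file might look like this:

import * as React from "react";

interface GreetingProps {
    name: string;
}

// A typed component: the compiler checks both the TypeScript and the JSX parts.
class Greeting extends React.Component<GreetingProps, {}> {
    render() {
        return <div>Hello, {this.props.name}!</div>;
    }
}

// JSX attributes are checked against GreetingProps, so a misspelled or
// missing 'name' attribute is a compile-time error.
const element = <Greeting name="TypeScript" />;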

Our goal was to make it feel natural to work with React/JSX and to have all the type-checking and autocomplete capabilities of TypeScript.  This gives you a rich editing experience for working with React and JSX in VS, VS Code, and Sublime.

Class expressions

This release also makes it possible to write class expressions, as we continue to round out the ES6 support in TypeScript.  Similar to class declarations, class expressions allow you to create new classes.  Unlike class declarations, you can use class expressions wherever you use an expression.  For example, you can now create a class and use it in your extends clause.

class StateHandler extends class { reset() { return true; } } {
   constructor() {
     super();
   }
}

var g = new StateHandler();
g.reset();

The class can be anonymous and still have all the same capabilities as a class declaration.

User defined type guards

In earlier versions of TypeScript, you could use if statements to narrow the type. For example, you could use:

if (typeof x === "number") { … }

This helped type information flow into common ways of working with types at runtime (inspired by some of the other projects doing typechecking of JS). While this approach is powerful, we wanted to push it a bit further.  In 1.6, you can now create your own type guard functions:

interface Animal {name: string; }
interface Cat extends Animal { meow(); }

function isCat(a: Animal): a is Cat {
  return a.name === 'kitty';
}

var x: Animal;

if(isCat(x)) {
  x.meow(); // OK, x is Cat in this block
}

This allows you to work not only with typeof and instanceof checks, which require a type that JavaScript understands at runtime, but also with interfaces and custom analysis.  Guard functions are denoted by their “a is X” return type: they return a boolean and tell the compiler what type the argument should be narrowed to when the function returns true.

Intersection types

Common patterns in JS that haven’t been easy to express in TypeScript are mixins and extending existing classes with new methods.  To help with this, we’re adding a new type operator ‘&’ that will combine two types together.  While it was possible to do this before by creating a new interface that inherited from two other types, this tended to be clunky and couldn’t be used in the case you wanted to use it most: combining generic types.

This new & operator, called intersection, creates anonymous combinations of types.

function extend<T, U>(first: T, second: U): T & U {
  let result = <T & U> {};
  for (let id in first) {
    result[id] = first[id];
  }

  for (let id in second) {
    if (!result.hasOwnProperty(id)) {
      result[id] = second[id];
    }
  }
  return result;
}

var x = extend({ a: "hello" }, { b: 42 });
x.a; // works
x.b; // works 

Abstract classes

A long-standing feature request for TypeScript has been supporting abstract classes.  Similar in some ways to interfaces, abstract classes give you a way of creating a base class, complete with default implementations, that you can build from with the intention of it never being used directly outside of the class hierarchy. 

abstract class A {
  foo(): number { return this.bar(); }
  abstract bar(): number;
}

var a = new A();  // error, Cannot create an instance of the abstract class 'A'

class B extends A {
  bar() { return 1; }
}

var b = new B();  // success, all abstracts are defined

Generic type aliases

Leading up to TypeScript 1.6, type aliases were restricted to being simple aliases that shortened long type names.  Unfortunately, without being able to make these generic, they had limited use.  We now allow type aliases to be generic, giving them full expressive capability.

type switcharoo<T, U> = (u: U, t: T) => T;
var f: switcharoo<number, string>;
f("bob", 4);

Potentially-breaking changes

Along with all the new features, we also spent some time tightening and fixing up a few areas of the system to generally work better, help you catch more errors, and be closer to how other tools work.  In this release, object literals are now checked more strictly against the type they are assigned to.  We’ve also updated the module resolution logic to work more closely to what you would expect for the kind of module output you selected.  You can read the full list of potential breaking changes with their mitigations.

Object literal strictness

We’ve made object compatibility stricter to help catch a class of common bugs.  You can learn more about this in the 1.6 beta post.
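As a quick sketch (with hypothetical types) of the kind of error now caught:

interface Point { x: number; y: number; }

// 'z' is not part of Point, so assigning this fresh object literal is now an error:
var origin: Point = { x: 0, y: 0, z: 0 };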

Improvements to module resolution

We’ve changed module resolution when doing CommonJS output to work more closely to how Node does module resolution.  If a module name is non-relative, we now follow these steps to find the associated typings:

  1. Check in node_modules for <module name>.d.ts
  2. Search node_modules\<module name>\package.json for a typings field
  3. Look for node_modules\<module name>\index.d.ts
  4. Then we go one level higher and repeat the process

Please note: when we search through node_modules, we assume these are packaged node modules which have type information and a corresponding .js file.  As such, we resolve only .d.ts files (not .ts files) for non-relative names.

Previously, we treated all module names as relative paths, and therefore we would never properly look in node_modules.  If you prefer the previous behavior, you can use the compiler flag “--moduleResolution classic”.  We will continue to improve module resolution, including improvements to AMD, in upcoming releases.

Looking ahead

We’re excited to reveal all the new improvements to TypeScript and to hear your feedback.  There’s lots to come, with more ES6 and ES7 features ahead.  And we’re always open if you want to jump in.

15 Dec 19:11

Announcing TypeScript 1.6 Beta: React/JSX, better error checking, and more

by Jonathan Turner [MS]

Today, we’re making a beta of the upcoming TypeScript 1.6 available.  There are a bunch of new features coming in the 1.6 release, and we wanted to give you a preview of these features and time to give us feedback.  

You can get this for Visual Studio 2015, Visual Studio 2013, NPM, and source.

React/JSX support

One of the key philosophies of TypeScript is to let you write TypeScript anywhere you can develop using JavaScript.  While we’ve worked with teams such as Dojo, Aurelia, and Angular to ensure using TypeScript is as easy as using JavaScript, there was still an important library that presented a difficulty for TypeScript developers: React.  This was due to the lack of support for JSX, a popular way of writing DOM and native components in JS.  React heavily leverages JSX in everyday code.  Unfortunately, the syntax for JSX conflicted with the cast syntax that TypeScript already used.

Refactoring JSX members in TypeScript (animation in the original post)

In 1.6, we’ve introduced a new .tsx file extension.  This extension does two things: it enables JSX inside of TypeScript files, and it makes the new ‘as’ operator the default way to cast.  With this, we’ve added full support for working with React/JSX in TypeScript in a type-safe way. 

Catching more errors

Starting with 1.6, we’re tightening up some of our object checking rules.  In the beta, you’ll see that objects need to match more closely when they are assigned.  For example:

var x: { foo: number };
x = { foo: 1, baz: 2 };  // Error, excess property 'baz', but not caught before 1.6

var y: { foo: number, bar?: number };
y = { foo: 1, baz: 2 };  // Error, excess or misspelled property 'baz', also not caught before 1.6

When fields are optional, it’s easy to accidentally pass mistyped fields or miss when a refactoring has left excess fields.  This change has already helped to find dozens (if not hundreds) of real-world bugs in early adopter code.  As part of this change, we found bugs in DefinitelyTyped (including the tests that are used to validate the hand-written .d.ts files):

Examples of the errors caught with the new rules

While this change may show where bugs have been hiding previously, you may not be ready to tackle a new set of compiler errors in your existing code right away, or you may not want to change the way your code behaves.  There’s a good list of ways to work around this assignment check, depending on the capability you want your code to have.

EDIT: You can also suppress this warning by passing the --suppressExcessPropertyErrors compiler option.
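As a rough sketch (not an exhaustive list), a couple of those workarounds look like this:

var x: { foo: number };

// 1. A type assertion says "I meant to do that":
x = { foo: 1, baz: 2 } as { foo: number };

// 2. Excess property checking only applies to fresh object literals, so
//    routing the value through an intermediate variable also passes:
var temp = { foo: 1, baz: 2 };
x = temp;

// 3. Or widen the target type to say that extra members are expected:
var y: { foo: number, [key: string]: any };
y = { foo: 1, baz: 2 };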

Improved module resolution

Previously, TypeScript used a hybrid approach for resolving both CommonJS and RequireJS.  Unfortunately, this meant that module resolution didn’t feel at home in either style, and instead made you structure projects in a way that didn’t feel natural for that particular module loader.

We’re working on a set of improvements to make module resolution more natural.  In the 1.6 beta, you’ll see the start of that work.  We invite you to read more on what we’re working on and send us your feedback.

And more

We’ve continued to improve ES6 support with added support for class expressions and the start of support for generators.  You can see the full list of new features on our roadmap.  We’d love to hear your feedback.

12 Dec 21:00

Microsoft to give back some of the free OneDrive storage it’s taking away

by Peter Bright

Microsoft will be giving back some, but not all, of the OneDrive storage that it was planning to take away from users of its cloud storage service.

In early November, the company made a surprising announcement to OneDrive users in two parts. First, the unlimited storage that came with Office 365 subscriptions was being cut back to 1TB. Second, the free storage tier was cut from 15GB to 5GB, and the 15GB bonus that comes from syncing your camera roll with OneDrive was also removed.

The change was unsurprisingly unpopular. The OneDrive UserVoice site, used by Microsoft to solicit feature requests and feedback, quickly recorded a new top complaint: more than 70,000 votes for the storage to be reinstated, dwarfing every other suggestion on the site.


09 Dec 19:50

Microsoft open-sources Live Writer, beloved but abandoned blogging tool

by Peter Bright

Another day, another "Microsoft open-sources something" story. At the weekend it was the Chakra JavaScript engine. This time, it's Live Writer, the blogging tool that provides offline, WYSIWYG editing of blog posts, and can publish directly to WordPress, Blogger, and other blogging platforms.

Live Writer hasn't been significantly updated since 2012 but still retains a loyal fan base. For writers who don't trust authoring directly within their content management system, the combination of familiar word processor-like interface and seamlessly integrated publishing is a compelling one.

The lack of maintenance, however, threatened to render the tool useless. The most pressing concern is Blogger. Google is switching Blogger from an old authentication system to OAuth 2. Live Writer only supports the old system and will never include OAuth 2 support. Although Google has extended the availability of the old method to ensure that Live Writer continues to work, it will not do so indefinitely, posing a problem for users of the app.


07 Dec 21:38

The 2015 Christmas List of Best STEM Toys for your little nerds and nerdettes

by Scott Hanselman

My 8 year old (recently 7, they grow so fast) asked recently, "are we nerds yet?" Being a nerd doesn't have the negative stigma it once did. A nerd is a fan, and everyone should be enthusiastic about something. You might be a gardening nerd or a woodworking nerd. In this house, we are Maker Nerds. We've been doing some 3D Printing lately, and are trying to expand into all kinds of makings.

NOTE: We're gearing up for another year of March Is For Makers coming soon in March of 2016. Now is a great time for you to catch up on March 2015's content!

Here's a Christmas List of things that I've either personally purchased, tried for a time, or borrowed from a friend. These are great toys and products for kids of all genders and people of all ages.

Snap Circuits

Snap Circuits

I love Snap Circuits and have talked about them before on my blog. We quickly outgrew the 30 parts in the Snap Circuits Jr. Even though it has 100 projects, I recommend you get the Snap Circuits SC-300 that has 60 parts and 300 projects, or do what we did and just get the Snap Circuits Extreme SC-750 that has 80+ parts and 750 projects. I like this one because it includes a computer interface (via your microphone jack, so any old computer will work!) as well as a Solar Panel.

Dremel 3D Printer

We still use our Dremel 3D Printer at least two or three times a week. We're printing a quadcopter, making Minecraft Chess sets, and creating gifts for the family.

Minecraft 3D Printed Chess Set

Here's some of my 3D Printing posts so far:

It's been extremely reliable. Some folks complain that the Dremel system and software is proprietary, but it's very easy to use. Additionally, if you really don't like their custom software, companies like Simplify3D have Dremel support built right in. You can also use third party filament like Proto-pasta with great success. We even extended the Dremel with a custom 3D printed spool adapter for Proto-pasta and upgraded nozzle and build plate. It's been fantastically reliable and I recommend the Dremel highly.

littleBits Electronics Gizmos and Gadgets

littleBits are more expensive than Snap Circuits, but they operate at a higher level of abstraction. While Snap Circuits will teach you about resistors and current and voltage, littleBits is more oriented towards Systems Thinking. The littleBits Electronics Gizmos & Gadgets kit is massive and has kept my kids entertained for the last few weeks. It includes motors, wheels, lights, switches, servos, buzzers, even a remote control. In fact, the remote control lets you remote any signal and make any gadget you come up with a wireless one.

littleBits

LittleBits also has a LEGO compatibility system which, while a little persnickety, has allowed the kids to create remote controlled LEGO cars in minutes. They are very expandable and everything is modular. You can build more with additional kits, or you can get just one sensor or that one motor that you need.

The HP Stream 11.6 Laptop

First, let's be serious. The HP Stream is a $199 laptop with an 11.6" screen. Surprisingly, you can get a 13.3" screen for just $210. But on the real, it's not for office workers. It's not even for you. It's for the kids in your life. It's a good, solid, beginner laptop for kids. With 2 gigs of RAM, a very modest 1.6 GHz processor, and just a 1366x768 screen, it runs Windows 10 pretty well, in fact, and even includes Office 365 Personal for a year (that's Word, Excel, etc.).

HP Stream 11.6" Laptop

I've even heard a parent call the HP Stream the "Minecraft Laptop." My sons took a week-long summer school Minecraft class in a room filled with these little laptops and they did just fine. It has just a 32gig SSD for a hard drive, but for <$20 you can drop in a 64gig SD Card and tell Windows 10 to put downloaded apps onto the SD Card directly.

This is a great machine for <$200 that you can feel comfortable giving to an 8 year old or tween and teach them how to code.

Raspberry Pi (any kind!)

Little boys on the Raspberry Pi

Every STEM house should have a Raspberry Pi or six! We've got 4? Or 5? They end up living inside robots, or taped to the garage door, or running SCUMMVM Game Emulators, or powering DIY GameBoys.

I recommend a complete Raspberry Pi Kit when you're just getting started as it guarantees you'll be up and running in minutes. They include the SD Card (acts as a hard drive), a power supply, a case, etc. All you need to provide is a USB Keyboard and Mouse. I ended up getting a cheap Mini USB wired keyboard and cheap USB wired mouse for simplicity.

Raspberry Pis will give you back as much as you can put into them. While you can treat it as a very low-powered browser or basic machine, you should really explore the breadth of projects you can make with a Raspberry Pi. Sure, the kids can learn Scratch or Python, but they can also build Raspberry Pi Robots or run a version of Windows 10 and play with C#. They can add their own electronics, lights, sounds, make radios, and more.

If you want to save money, get just a Raspberry Pi alone for <$40 and use a micro-USB Cell Phone Power Supply, and whatever electronics you have around the house. Once I took a local kid to Goodwill (a thrift store) and we found the power supply, mouse, keyboard, AND LCD Monitor all in the electronics junk pile of the store for $25 total.

OWI Robotic Arm Edge

The OWI Robotic Arm Edge isn't a kit but it's a reasonably priced robotic arm to get kids thinking in terms of command and control and multiple dimensions. OWI also has a cool 3in1 robot RC kit if you prefer driving robots around and more "rebuildability."

OWI Robotic Arm Edge

What educational toys do YOU recommend this holiday season?

FYI: These Amazon links are referral links. When you use them I get a tiny percentage. It adds up to taco money for me and the kids! I appreciate you - and you appreciate me - when you use these links to buy stuff.


Sponsor: Big thanks to Infragistics for sponsoring the feed this week. Responsive web design on any browser, any platform and any device with Infragistics jQuery/HTML5 Controls.  Get super-charged performance with the world’s fastest HTML5 Grid - Download for free now!



© 2015 Scott Hanselman. All rights reserved.
     
06 Dec 19:18

Microsoft to open source Chakra, the JavaScript heart of its Edge browser

by Peter Bright

Block diagram of Chakra's design. (credit: Microsoft)

At JSConf in Florida today, Microsoft announced that it is open sourcing Chakra, the JavaScript engine used in its Edge and Internet Explorer browsers. The code will be published to the company's GitHub page next month.

Microsoft is calling the version it's open sourcing ChakraCore. This is the complete JavaScript engine: the parser, the interpreter, the just-in-time compiler, and the garbage collector, along with the API used to embed the engine into applications (as used in Edge). It will have the same performance and capabilities as the version found in Microsoft's Windows 10 browser, including asm.js and SIMD support, as well as cutting-edge support for new ECMAScript 2015 language features.

There are some small differences, however, between ChakraCore and Chakra as it ships in Windows 10. The full Chakra includes the glue between the JavaScript engine and the browser's HTML engine, and similarly, glue between the JavaScript engine and the Universal Windows Platform. Neither of these is part of ChakraCore. Chakra also has diagnostic APIs that use COM and hence are Windows-specific. These won't be in ChakraCore either. Instead, a new set of diagnostic APIs will be developed and eventually integrated into the full Chakra.


13 Nov 19:17

Microsoft building data centers in Germany that US government can’t touch

by Glyn Moody


Microsoft has launched a new kind of cloud service in Germany where user data is controlled by a "data trustee" operating under German law. Microsoft is unable to access user data without the permission of the data trustee or the customer, even if it is instructed to do so by the US government. If permission is granted by the data trustee, Microsoft will still only do so under its supervision. The idea behind the new data trustee-based cloud services is presumably to address European concerns that the NSA and other US agencies could demand access to any user data stored using Microsoft's current cloud services.

According to Microsoft's press release, the data trustee for the new German cloud offerings is T-Systems, a subsidiary of the giant telecom company Deutsche Telekom. Timotheus Höttges, Deutsche Telekom's CEO, is quoted as saying: "Microsoft is pioneering a new, unique, solution for customers in Germany and Europe. Now, customers who want local control of their data combined with Microsoft’s cloud services have a new option, and I anticipate it will be rapidly adopted."

Two new data centres are being built: one in Frankfurt am Main, the other in Magdeburg. Both will offer Azure, Office 365, and the Dynamics CRM Online cloud services from the second half of 2016. The two locations will be connected by a private network, separate from the Internet, in order to ensure that data never leaves Germany as it moves between them—for example, to provide automatic backups. Microsoft says the new offering is aimed particularly at European companies and organisations working with sensitive data, such as those in the finance and health sectors.


05 Nov 21:20

Steve Wozniak thinks driving will be banned within 20 years

by Flavien Robert
Steve Wozniak is one of the co-founders of Apple. At the Gartner Symposium/ITxpo in Gold Coast, Australia, he shared his vision of the automotive landscape over the next 20 years. Carmakers will soon have to reckon with a newcomer: the company with the apple logo is preparing its move onto four wheels. For Apple, […]
03 Nov 13:06

When would you use & on a bool?

by ericlippert

UPDATE: A commenter points out that today is the 200th anniversary of the birth of George Boole; I had no idea when I scheduled this article that it would be so apropos. Happy birthday George Boole!


Here’s a little-known and seldom-used fact about C# operators: you can apply the & and | operators to bools, not just to integers. The & and | operators on bools differ from && and || in only one way: both operators always “eagerly” evaluate both operands. This is in marked contrast to the “lazily” computed evaluation of the && and || operators, which only evaluate their right hand argument if needed. Why on earth would you ever want to evaluate the right hand side if you didn’t need to? Why have this operation at all on bools?

A few reasons come to mind. First, sometimes you want to do two operations, and know whether both of them succeeded:

bool totalSuccess = First() & Second();

If you want both operations to happen regardless of whether the first succeeded then using && would be wrong. (And similarly if you want to know if either succeeded, you’d use | instead of ||.)

Though this code is correct, I don’t like it. I don’t like expressions that are useful for their side effects like this; I’d prefer to see one effect per statement:

bool firstSucceeded = First();
bool secondSucceeded = Second();
bool totalSuccess = firstSucceeded & secondSucceeded;

(Also, the original code seems harder to debug; I might want to know when debugging or testing which of the operations succeeded. And of course I am not a super big fan of the “success code” pattern to begin with, but that’s another story.)

But still here we have the & operator instead of the && operator. What’s the compelling benefit of using & here instead of &&?

Think about it this way. Suppose you wish to write this code:

bool totalSuccess = firstSucceeded && secondSucceeded;
...

but you don’t get the && operator. In fact, all you get is:

  • if statements of the form if(bool) where the body is a goto
  • non-conditional goto statements
  • assignment of literals to variables and variables to variables.

Well, that’s pretty straightforward:

bool totalSuccess;
if (firstSucceeded) goto CONSEQUENCE;
totalSuccess = false;
goto DONE;
CONSEQUENCE: totalSuccess = secondSucceeded;
DONE: ...

But this is the situation that C# is actually in; the C# code must be translated into IL, and IL has no && instruction. It has conditional branches, unconditional branches, and assignments, so C# generates the IL equivalent of that code every time you use &&. (And similarly for ||.)

That’s a lot of code! But there is an IL instruction for & and |, so the code generation there is very straightforward and very small.

What are the consequences of the much larger code generation? First of all, the executable is a few bytes larger. Larger code means that less code fits into the processor cache, which means more cache misses at jit time.

The jitter has an optimizer of course, and many optimizers work by analyzing the “basic blocks” of a method. A “basic block” is a section of IL where control flow always enters at the top and always leaves at the bottom; by knowing where all the basic blocks are, the optimizer can analyze the control flow of the method. The & and | operators introduce no additional basic blocks into a method, but the && operator as you can see above introduces two new basic blocks that were not there before, labeled CONSEQUENCE and DONE. Now the jitter has more work to do.

And remember, the jitter has to work fast; it is jitting code in real time here. As method complexity increases, the number of optimizations that can be successfully performed at runtime at reasonable cost decreases. The jitter is entirely within its rights to say "this method is too long or has too many basic blocks; I'm never going to inline it", for example. So perhaps the machine code generated is a little worse than it otherwise could have been.

And finally, think about the generated machine code. Again, the code generated from the && version will be larger, which means less program logic fits in the small processor cache, which means more cache evictions. Also, the more branches that are in the code, the more branch prediction the CPU must do, which means more opportunities to predict wrong.


UPDATE: A commenter asks if the C# compiler or jitter can decide to change lazy operators into eager operators if doing so is provably correct and likely faster. Yes, a compiler is allowed to do so; whether the C# or JIT compilers actually do so, I don’t know. I’ll check!

ANOTHER UPDATE: It does! I was unaware of this optimization, and probably should have checked to see if it existed before I wrote this article. :-) In C# 6, if the right hand side of an && operation is a local variable then the IL is generated as though it was &. I do not recall having seen this optimization before; perhaps it is new, or perhaps I simply never took a sufficiently close look at the IL generator. (I was aware that if either side of the operator is a compile-time constant true or false then optimizations are performed, but optimizations when operands are known at compile time is a good subject for another day.)


Now, I hasten to point out that these considerations are the very definition of nano-optimizations. No commercial program ever owed its widespread acceptance and profitability in the marketplace to a few &s used judiciously instead of &&. The road to performance still demands good engineering discipline rather than random applications of tips and tricks. Still, I think it is useful to realize that avoiding the evaluation of the right hand side might, in some cases, be more expensive than simply doing the evaluation. When generating code to lower nullable arithmetic, for example, the C# compiler will generate eager operations instead of lazy operations.