Shared posts

10 Jul 05:57

Agile is not now, nor was it ever, Waterfall.

I read Agile is the new Waterfall at first with disgust, then with horror, and then finally with a meager amount of very qualified approval.

The author makes a reasonable point towards the end; but in getting to that point he states a number of falsehoods and ends up discrediting a philosophy and discipline that do not deserve it. He comes close to throwing the baby out with the bathwater.

He begins by claiming that no sane person advocates waterfall. I don’t know what universe the author lives in; but in this universe there are quite a few people who advocate waterfall. Are they sane? By any legal standard they are. Anyone who thinks that the battle against waterfall is over simply hasn’t been fighting in the right trenches.

If you want to get a feel for just how wrong the author is about this, just google “Waterfall Software Teams” and count the number of articles that talk about striking a balance, or mixing the two processes, etc. People are not anxious to give up on the past.

Next he quotes Macbeth's lament over the futility of life and equates it with "the promise of Agile":

“…full of sound and fury, signifying nothing.”

I take rather profound exception to the idea that the events leading up to the Agile Manifesto, and the Manifesto itself, are an example of futility and meaningless noise. The author wasn't there. The author doesn't know. While I agree that in certain circles there is more heat than light, to claim that the entire movement is insignificant is to ignore a vast swath of software history.

I’m not going to critique the author point by point. Suffice it to say that he knows very little of the history, and what little he does know he’s gotten wrong. In so doing he has cast a pall of disrespect over a large number of people who have made huge contributions to our field. A pall that discredits an ideology that has had a profoundly positive effect.

He rails against the Agile consultancies who try to help organizations make the shift to Agile. Some of his complaints are justified; but most are not. Changing an organization is hard! Those companies that try to change, and hire help to make that change, are courageous.

Are there Agile consultancies that are better than others? Yes. Certainly. Caveat Emptor! But denigrating the entire effort is simply ignorant.

The One Point.

The author is wrong about Agile in virtually every regard. But he does make one good point. Unfortunately the context in which he makes that point is so wrong that the point is almost lost in the cacophony of blather that surrounds it. That point is:

“Bring in the bare minimum amount of process.”

Yes! Of course!

Does every software team need the entire suite of agile practices? Of course not. But let’s look at them:

  1. The Planning Game. Over the years it has become very clear that there are many ways to shave this Yak. Some teams need more process around this than others. For some, a simple list of features will do. For others, a Kanban board will be sufficient. Still others will need the full suite of stories, and tasks, and releases, and story points, and… Well, you know. Choose wisely!

  2. Customer Tests. Lots of customers don't want to be bothered with these tests. That's a shame, since they are demonstrably the best way to specify requirements. For those teams that have customers engaged enough to specify the requirements in terms of Cucumber or FitNesse tests, there is no better alternative. Teams that are not so fortunate are not likely to benefit from this practice. My personal rule is: if the customers neither read nor write the tests, then high-level unit tests written in code suffice.

  3. Small Releases. It's hard to imagine a team that would not benefit from this practice. Keep the releases small. The longer you wait between exposing customers to the system, the more can go wrong.

  4. Whole Team. Again, it's hard to imagine a team that would not benefit from a close relationship between the business people and the developers. Not all teams are so fortunate, of course.

  5. Collective Ownership. As far as I’m concerned any team that has individual code ownership is deeply dysfunctional. If the owner of some part of the code decides to leave, the whole team is left in crisis mode. There are many ways to achieve collective ownership, but the bottom line is very simple. No single individual should be able to hold the team hostage. Every part of the code should be known by more than one person – the more the better.

  6. Coding Standard. This simply goes along with Collective Ownership. The code should look like the team wrote it, not like one of the individuals wrote it. The members of the team should agree on the way that their code will appear. This isn’t rocket science.

  7. Sustainable Pace. This is a real simple idea. Software projects are marathons, not sprints. You dare not run at a rate that you cannot sustain for the long term. Murphy tells us that any team that violates this practice is doomed to flame out at the worst possible moment.

  8. Continuous Integration. Certainly there are teams whose projects are so small that setting up a CI server is redundant. However, for most teams this is such a positive win that neglecting it would be immoral, if not insane.

  9. Pair Programming. Some teams benefit greatly by using this practice. Others do not. For the latter, some form of code review is likely necessary. In any case, it is a very good idea for every line of code to have been seen by more than one pair of eyes.

  10. Simple Design. If we learned anything in the ’90s it is that over-design is suicide. The level of design is team dependent, of course; but the simpler the better is simply a good rule of thumb.

  11. Refactoring. Does anybody really want to argue that programmers should not keep their code as clean as possible? Does anyone want to argue that code should not be improved with time? Teams may choose different degrees of refactoring; but zero is probably not acceptable.

  12. Test Driven Development. This is certainly the most controversial of all the Agile practices. But the controversy is not about the word Test. Virtually everyone agrees that writing unit tests is important. Some of us think that the order in which they are written is important too. Different teams will choose different strategies. But teams that ignore testing are not destined for rapid success.

Conclusion

Does every team need every one of these practices? Certainly not. Do most teams need at least some of them? Of course they do! Again, choose wisely!

I believe that the author of the original article was exposed to teams who were doing Flaccid Scrum and made the mistake of assuming that's all there was to Agile. He is correct that there have been some uninformed consultancies who have taught this poor variant of the Agile practices. In that sense his diatribe is understandable. Still, ignorance is no excuse. If you are going to impugn the character of good people and good ideas, you'd better do your damned homework.

09 Jul 16:19

A Little Structure

What is Structured Programming?

Ummm?… Wasn’t it some ancient history having to do with GOTO?

Ancient. Hmmm. Yes, I guess some might consider 1968 to be ancient. But can you tell me what Structured Programming is?

It was a rule that said not to use GOTO statements.

Why do you keep using the past tense?

Because nobody cares about Structured Programming anymore.

They don’t?

No, I mean, hardly anybody knows what it is; except that it’s got something to do with not using GOTO.

Do you use GOTO?

Of course not! I mean, well… Hardly ever.

Why not?

Well, mostly because the languages I use don’t have GOTO.

Why do you suppose that is?

Because you don’t really need it.

How do you know you don’t need GOTO?

Well… I haven’t had to use it … much.

Have you ever heard of Corrado Bohm or Giuseppe Jacopini?

Who?

Corrado Bohm and Giuseppe Jacopini. In 1966 they wrote a paper that mathematically proved that GOTO was not necessary.

Huh. That’s cool… I guess.

Actually, yes, it’s very cool. Because, you see, in 1966 the GOTO statement was the primary means by which programmers connected their programs together.

Really?

Yes. For example, here’s an if statement in FORTRAN:

IF (A-10) 22,33,44

That looks primitive. What does it mean?

It means, if the value of the variable A minus 10 is negative, GOTO statement 22. If zero, GOTO statement 33. Otherwise GOTO statement 44.

Wow! That’s kinda gnarly. So, like, how did you use that?

So in Java I might say:

if (a>10)
  b++;
else
  b--;

In FORTRAN that would be:

	IF (A-10) 20,20,30
20	B = B - 1
	GOTO 40
30	B = B + 1
40	...

Yuk! Yuk! That’s awful.

That’s what we were used to. We’d never even thought it could be different.

And so then those two guys, Bohm and Jacowhatsit…

Bohm and Jacopini.

Yeah, they wrote their paper and everybody stopped using GOTO.

No, not quite. In fact, not at all. You see their paper was a pretty technical mathematical proof, so hardly anybody read it.

Heh heh, yeah, I get that. But somebody must have…

Oh yes. Several. But most notably a man named Edsger Dijkstra.

Dije… DIYGE..

You pronounce his last name: DIKEstruh. In March of 1968 he wrote a letter to the ACM.

A letter? To who?

Yes, a very short note. It was written to the editors of a magazine called The Communications of the ACM. He titled it Go To Statement Considered Harmful.

What did the letter say? Did it convince everybody?

No, it really didn’t. Oh, some people saw the logic right away. Others were – um – skeptical – for a long time.

So what did the letter say?

Well, you should read it. It’s pretty short. But I’ll give you the gist.

He made the case that you could restrict your program to three different control structures: Sequence, Selection, and Iteration.

OK, so – Huh?

Sequence is when two statements follow each other in sequence like this:

doStepOne();
doStepTwo();

Those statements might be simple assignments, or procedure calls, or any other kind of valid statement. They are executed in sequence. Right?

OK, Sure. So then… what’s the next one?

Selection. One of two statements will be executed based on some boolean value. Like this:

if (someBooleanValue())
	doThisStep();
else
	doOtherStep();

Yeah, OK. So then… that last one…

Iteration. A statement can be repeated until a boolean value becomes false. Like this:

while(someBooleanValue())
 	doThisStep();

Yeah, so, sure. That’s how we write code nowadays. But you said people didn’t buy into this right away?

No, they didn’t. Dijkstra argued that if you restricted yourself to those three structures then…

Oh! Structures. Structured Programming. I get it!

Um. Yes. That’s right. So, if you restrict yourself to those three, um, structures, then you can easily reason about your code. But if you use unrestricted GOTO then you can’t.

Wait. What? Whaddya mean, reason?

Well, Dijkstra's argument was that a structured program can be easily analyzed because the state of the system at any line of code depends only on the boolean values being tested by selection and iteration, and the list of calling procedures on the stack.

Um. sure. Whatever.

(Sigh.) Look, just read his paper, he makes it pretty clear.

OK, well, so then what happened? I mean, how did people become convinced?

Well, in 1972, Dijkstra wrote a book with O. J. Dahl, and C. A. R. Hoare. It was called Structured Programming.

Oh! So that’s what convinced everybody.

Well, no. Though it did – uh – elevate the controversy.

You mean like you guys were having flame wars over this?

No, we didn’t have Facebook. We didn’t even have the internet. But we could write letters to the editors of the various trade journals. And, let me tell you, some of those letters were scathing.

Ha ha. Sort of like snail mail flames.

Indeed. The more things change, the more they stay the same.

Anyway, the good thing was that the book got lots of people talking, and trying things out, and even convinced some people.

But not everyone.

No, not everyone. Many people continued to hold on to their GOTO statements; and would not give them up.

So then when did that end?

It ended when people stopped making and using languages that had GOTO statements, and started using languages that didn’t.

You mean like Java?

Yes. Like Java. Nowadays the majority of programmers use a language that has no GOTO. And an even larger majority avoid using GOTO even if their language has one. So, for the most part, Dijkstra's war has been won. Structured Programming is the norm today.

Wow! So, Hurray for Dijkstra for giving us this new technology… back in the olden days…

New Technology? No, no, you misunderstand.

Why? I mean, this structured programming thingie was like his invention, right?

Oh, no. He didn’t invent anything. What he did was to identify something we shouldn’t do. That’s not a technology. That’s a discipline.

Huh? I thought Structured Programming made things better.

Oh, it did. But not by giving us some new tools or technologies. It made things better by taking away a damaging tool.

Hmmm. OK. Yeah, I guess that’s right. He took GOTO away from us.

It might be better to say that Structured Programming imposes discipline upon direct transfer of control.

That sounds like gobbledygook.

Yes, I suppose it does.

31 May 13:43

Introducing ASP.NET WebHooks Receivers - WebHooks made easy.

by Scott Hanselman

ASP.NET Web Hooks Receivers general architecture

There's been a lot of enthusiasm lately about the direction that ASP.NET is going, and rightfully so.

However, while ASP.NET 5 is cool and exciting, it's also not yet released (at the time of this writing, Beta 8 is being worked on). There are very cool things happening around ASP.NET 4.6 which is released and ready to go live today. Something else that's VERY cool that I want to explore today is ASP.NET WebHooks, which just came out as a preview and is being actively worked on.

Just as there's Web Forms, MVC, SignalR, Web API and they are all components within ASP.NET, you can think of Web Hooks as another member of the ASP.NET family of technologies. When you want it, it's there to plug in. If you don't use it, it costs you nothing in weight or runtime.

What are WebHooks?

Let's start with the What Are They part of the conversation. WebHooks are a convention. They are HTTP callbacks. Moreover, they are "user-defined HTTP callbacks." You and/or your app signs up for notification when something happens, and your URL endpoint will get an HTTP POST when that thing happens. WebHooks can and should be RESTful as well. That means if you have a nice RESTful Web API, then adding WebHooks to your application should not only be easy, it should be a natural and clean extension.

So what? Why do we need a library for this?

Technically you don't, of course. You could theoretically implement the WebHooks pattern with an HttpHandler if you felt you had something to prove. You could more reasonably do it with ASP.NET Web API, but the general thinking is that if there's a clear and common pattern for doing something then it should be made easier and codified for correctness.

Even more, since WebHooks is such a common pattern and it's being used by folks like Dropbox, GitHub, MailChimp, PayPal, Pusher, Salesforce, Slack, Stripe, Trello, and WordPress then it'd be nice if ASP.NET came with support for these right out of the box. And it does. Support for receiving all of these and more is included.

There is also a need for easy ways for your applications to send events as WebHooks. In order to do that you need to manage and store subscriptions and, when the time comes, correctly make the callbacks to the right set of subscribers.

ASP.NET WebHooks

ASP.NET WebHooks is open source, is being actively developed on GitHub, and targets ASP.NET Web API 2 and ASP.NET MVC 5 today. It handles the administrivia involved in dealing with WebHooks. It was announced on the Microsoft WebDev blog (you should subscribe) a few weeks back.

There's some great docs already being written but the most interesting bits are in the many examples.

When you install ASP.NET WebHooks you get a WebHook Handler that is the receiver that accepts WebHook requests from services. Using a GitHub WebHook as an example, you can easily make a new project and then publish it to Azure WebSites. GitHub WebHooks are required to use SSL for their transport, which could be a barrier, but Azure WebSites using the *.azurewebsites.net domain get SSL for free. This will make building and testing your first WebHook easier.

A good starter WebHook to try creating is one that gets called when an event happens on GitHub. For example, you might want a notification when someone comments on a GitHub issue as a first step in creating a GitHub bot.

The default routing structure is https://<host>/api/webhooks/incoming/<receiver> which you'll put in your GitHub repository's settings, along with a SHA256 hash or some other big secret. The secret is then put in a config setting called MS_WebHookReceiverSecret_GitHub in this example.

public class GitHubHandler : WebHookHandler
{
    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        string action = context.Actions.First();
        JObject data = context.GetDataOrDefault<JObject>();

        return Task.FromResult(true);
    }
}

In this tiny example, the "action" string will contain "issues" if someone comments on an issue (meaning it's an event coming from the "issues" source).
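If you want to act on that, a handler can branch on the action and dig into the JObject payload. Here's a minimal sketch of what that might look like; the handler name and the "issue.title" path are my own assumptions about the GitHub payload, not something the library prescribes.

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.WebHooks;
using Newtonsoft.Json.Linq;

public class GitHubIssueHandler : WebHookHandler
{
    public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
    {
        string action = context.Actions.First();

        if (string.Equals(action, "issues", StringComparison.OrdinalIgnoreCase))
        {
            JObject data = context.GetDataOrDefault<JObject>();

            // "issue.title" is an assumption about the payload shape;
            // inspect the JSON your repository actually sends before relying on it.
            string title = (string)data.SelectToken("issue.title");

            // Log it, enqueue work for a bot, respond, etc.
        }

        return Task.FromResult(true);
    }
}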

Once you've been triggered by a WebHook callback, you can decide what to do about it. You might want to simply respond, log something, or start a more sophisticated process. There's even a way to trigger an Azure WebJob with this new extension.

More WebHooks

Sending WebHooks is similarly simple and there's already a great post on how to get started here. Finally there's even some skunkworks tooling by Brady Gaster that plugs into Visual Studio 2015 and makes things even easier.

What a lovely dialog box for making ASP.NET WebHooks even easier!

Go check out ASP.NET Web Hooks and give your feedback in the GitHub issues or directly to Henrik or Brady on Twitter!


Sponsor: Thanks to my friends at Infragistics for sponsoring the feed this week. Responsive web design on any browser, any platform and any device with Infragistics jQuery/HTML5 Controls.  Get super-charged performance with the world’s fastest HTML5 Grid - Download for free now!



© 2015 Scott Hanselman. All rights reserved.
     
26 May 13:18

How to simulate a low bandwidth connection for testing web sites and applications

by Scott Hanselman

Facebook just announced an internal initiative called "2G Tuesdays" and I think it's brilliant. It's a clear and concrete way to remind folks with fast internet (who likely have always had fast internet) that not everyone has unlimited bandwidth or a fast and reliable pipe. Did you know Facebook even has a tiny app called "Facebook Lite" that is just 1MB and works well on the slower networks common in developing countries?

You should always test your websites and applications on a low bandwidth connection, but few people take the time. Many people don't know how to simulate low bandwidth or think it's hard to set up.

Simulating low bandwidth with Google Chrome

If you're using Google Chrome, you can go to the Network Tab in F12 Tools and select a bandwidth level to simulate:

Selecting lower bandwidth in Google Chrome F12 Tools

Even better, you can also add a Custom Profile to specify not only throughput but also custom latency:

Custom Profiles for Google Chrome that control throughput and latency

Once you've set this up, you can also click "disable cache" and simulate a complete cold start for your site on a slow connection. 20 seconds is a long time to wait.

Google Chrome timeline showing my site on a 2G connection

Simulating a slow connection with a Proxy Server like Fiddler

If you aren't using Chrome or you want to simulate a slow connection for your apps or other browsers, you can slow it down from a Proxy Server like Fiddler or Charles.

Fiddler has a "simulate modem" option under Rules | Performance, and you can change the values from Rules | Customize Rules:


You can put in delays in milliseconds per KB in the script under m_SimulateModem:

if (m_SimulateModem) {
    // Delay sends by 300ms per KB uploaded.
    oSession["request-trickle-delay"] = "300";
    // Delay receives by 150ms per KB downloaded.
    oSession["response-trickle-delay"] = "150";
}

There are a number of proxy servers you can get to slow down traffic across your system. If you have Java, you can also try out one called "Sloppy." What's your favorite tool for slowing traffic down?

Conclusion

There is SO MUCH you can do to make the experience of loading your site better, not just for low-bandwidth folks, but for everyone. Squish your images! Don't use PNGs when a JPEG would do. Minify! Use CDNs!


However, step 0 is actually using your website on a slow connection. Go do that now.

Related Links


Sponsor: Big thanks to Infragistics for sponsoring the feed this week. Quickly & effortlessly create advanced, stylish, & high performing UIs for ASP.NET MVC with Ignite UI. Leverage the full power of Infragistics’ JavaScript-based jQuery/HTML5 control suite today.


© 2015 Scott Hanselman. All rights reserved.
     
26 May 12:20

NuGet Package of the Week: Microphone registers and discovers Web APIs and REST services with Consul

by Scott Hanselman

I'm sitting on a plane on the way back from a lovely time in Europe. I attended and spoke at some great conferences and met some cool people - some of which you'll hear on the podcast soon. Anyway, one of the things that I heard mentioned by attendees more than once was the issue of (micro) service discovery for RESTful APIs. Now if you lived through the WS*.* years you'll perhaps feel a lot of this is familiar or repeated territory, but the new stuff definitely fits together more effortlessly than in the past.

Consul is a system that does service discovery, configuration management, and health checking for your services. You can write Web APIs in lots of things: Rails, Python, and ASP.NET with WebAPI or NancyFX.

Microphone is a library by Roger Johansson that plugs into both WebAPI and Nancy and very simply and easily registers your services with Consul. It's recently been expanded to support CoreOs-ETCD as well, so it's really a general purpose framework.

I made a little .NET 4.6 console app that self hosts a WebAPI like this.

namespace ConsulSelfHostedWebAPIService
{
    class Program
    {
        static void Main(string[] args)
        {
            Cluster.Bootstrap(new WebApiProvider(), new ConsulProvider(), "HanselWebApiService", "v1");
            Console.ReadLine();
        }
    }

    public class DefaultController : ApiController
    {
        public string Get()
        {
            return "Hey it's my personal WebApi Service";
        }
    }
}

Now my Web API is registered with Consul, and Consul itself is a RESTful Web API where I can hit http://localhost:8500/v1/agent/services and get a list of registered services. It's the Discovery Service.

Consul reporting my WebAPI

Then later, in a client or perhaps another Web API, I can ask for a service by name (here, one registered as "Heycool") and I'll get back the address and port that it's on, then call it.

var instance = await Cluster.FindServiceInstanceAsync("Heycool");
return String.Format("Look there's a service at {0}:{1}", instance.Address, instance.Port);

Here's an active debug session showing the address and port in the instance:

Using Microphone.WebAPI and Consul for Service Discovery
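That lookup only returns something if a service has registered itself with Consul under the name "Heycool". That registration isn't shown above, but a minimal, hypothetical sketch reusing the same Bootstrap call as the console app would look like this:

// Hypothetical second self-hosted Web API, registered with Consul under
// the name "Heycool" via the same Microphone bootstrap pattern as before.
Cluster.Bootstrap(new WebApiProvider(), new ConsulProvider(), "Heycool", "v1");
Console.ReadLine();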

It will be interesting to see what will happen with Consul and systems like it if the Azure Service Fabric gains traction. Service Fabric offers a lot more, but I wonder if there is a use case for both, with Service Fabric managing lifecycles and Consul doing discovery.

This is all early days, but it's interesting. What do you think about these new discovery services for Web APIs?


Sponsor: Big thanks to Infragistics for sponsoring the feed this week. Quickly & effortlessly create advanced, stylish, & high performing UIs for ASP.NET MVC with Ignite UI. Leverage the full power of Infragistics’ JavaScript-based jQuery/HTML5 control suite today.



© 2015 Scott Hanselman. All rights reserved.
     
25 May 18:58

A/B Testing and Testing In Production with Azure Web Apps

by Scott Hanselman

I've got a lot of production web sites running in Azure right now. Some are for small side projects and some are larger like the sites for the Hanselminutes Podcast and This Developer's Life. I like Web Apps/Sites (which is Platform as a Service) rather than Virtual Machines (Infrastructure as a Service) because I don't like thinking about the underlying operating system if I can avoid it. I like to be able to scale the site up (faster, bigger) or out (more machines in the farm) with a slider bar.

In fact, there's some other more advanced and useful features that Azure Web Apps have that keep me using Web Apps almost exclusively.

I'll use a little site I made called KeysLeft.com that tells you how many keystrokes are left in your hands before you die. Think of it as a productivity awareness tool.

First, I'll add a Deployment Slot to my existing Git-deployed Web App. The source for KeysLeft lives in GitHub here. When I check-in a change it's automatically deployed. But what if I wanted to have a staging branch and automatically deploy to a staging.keysleft.com first? If it works out, then move it to production by swapping sites. That'd be sweet.

Staging Slots for Azure Web Apps

You can see here my main KeysLeft web app has a Staging "side car" app that is totally separate but logically related/adjacent to production. Notice the "swap" button in the toolbar. Love it.

Adding Deployment Slots to an Azure Web App

This Web App has its configuration copied from the main one, and I can setup Continuous Deployment to pull from a different branch, like "staging" for example. The name of the deployment slot becomes a suffix, so keysleft-staging.azurewebsites.net unless you set up a custom CNAME like staging.keysleft.com. You can have up to 4 deployment slots in addition to production (so dev, test, staging, whatever, production) on Standard Web Apps.

A/B Testing for Azure Web Apps

Once I've got a slot or two set up and running a version of my app, I can do A/B testing if I'd like. I can use a feature that was originally called "Testing in Production" and is now called "Traffic Routing" to tell Azure what percentage of traffic goes to production and what goes to staging. Of course, you have to write your application such that authentication and session state are managed appropriately, especially if you'd like the user to have a seamless experience.

Here I've got 10% of the traffic going to staging, seamlessly, and the other 90% is going to production. I can make a small change (background color for example) and then hit the main site over and over and see the occasional (10% of course) request being routed to the staging slot. You can configure this static routing however you'd like.

10% Traffic to Staging

Then I could hook up Application Insights or New Relic or some other event/diagnostics system and measure the difference in user reaction between features that changed.

Advanced Testing in Production

Made it this far? Then you're in for a treat. Static routing is cool, to be clear, but scripting a more dynamic experience is even more interesting. Galin Iliev, one of the developers of this feature, gave me this PowerShell script to show off more powerful stuff.

First, you can use PowerShell to manage this stuff. You can change routing values and ramp up or ramp down. For example, here we start at 10% and change it by 5 every 10 minutes.

# Select-AzureSubscription YOURSGOESHERE


$siteName = "keysleft"
$rule1 = New-Object Microsoft.WindowsAzure.Commands.Utilities.Websites.Services.WebEntities.RampUpRule
$rule1.ActionHostName = "keysleft-staging.azurewebsites.net"
$rule1.ReroutePercentage = 10;
$rule1.Name = "staging"

$rule1.ChangeIntervalInMinutes = 10;
$rule1.ChangeStep = 5;
$rule1.MinReroutePercentage = 1;
$rule1.MaxReroutePercentage = 80;

Set-AzureWebsite $siteName -Slot Production -RoutingRules $rule1

But! What if you could write code to actually make the decision to continue or fall back dynamically? You can add a callback URL and a Site Extension called the "TiP Callback Extension."

$rule1.ChangeDecisionCallbackUrl = "https://keysleft.scm.azurewebsites.net/TipCallback/api/routing"

The Site Extension (and all Site Extensions for that matter) is just a little sidecar Web API. This callback gets a small POST when it's time to make a decision, and you decide what to do based on HTTP-related context that was passed in and then return a ChangeDirectionResult object as JSON. You can adjust traffic dynamically, you can adjust traffic when doing a deployment, do a slow, measured roll out, or back off if you detect issues.

NOTE: The ChangeDecisionCallbackUrl and the code below are totally optional (so don't stress), but they're super powerful. You can just do static routing, you can do basic scripted dynamic traffic routing, or you can add a decision callback URL. The choice is yours.

You can check out the code by visiting yoursite.scm.azurewebsites.net after installing the TiP callback site extension and look at the Site Extensions folder. That said, here is the general idea.

using System.Web.Http;
using TipCallback.Models;

namespace TipCallback.Controllers
{
    public class RoutingController : ApiController
    {
        [HttpPost]
        public ChangeDirectionResult GetRoutingDirection([FromBody] RerouteChangeRequest metrics)
        {
            // Use either Step or RoutingPercentage. If both returned RoutingPercentage takes precedence
            return new ChangeDirectionResult
            {
                Step = (int)metrics.Metrics["self"].Requests,
                RoutingPercentage = 10
            };
        }
    }
}

Here's the object you return. It's just a class with two ints, but this is super-annotated.

/// <summary>
/// Returns information on how to change the TiP ramp-up percentage.
/// Use either Step or RoutingPercentage. If both are returned, RoutingPercentage takes precedence.
/// Either way, the MinRoutingPercentage and MaxRoutingPercentage set in the API rule are in force.
/// </summary>
[DataContract]
public class ChangeDirectionResult
{
    /// <summary>
    /// Step by which to change the routing percentage. A positive number will increase the routing.
    /// A negative number will decrease it.
    /// </summary>
    [DataMember(Name = "step")]
    public int? Step { get; set; }

    /// <summary>
    /// Hard routing percentage to set regardless of step.
    /// </summary>
    [DataMember(Name = "routingPercentage")]
    public int? RoutingPercentage { get; set; }
}
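To sketch the "back off if you detect issues" idea mentioned above: this is purely illustrative, HealthMonitor is a hypothetical stand-in for whatever health signal you trust, and the step sizes are arbitrary. A callback action like the one in RoutingController could return a negative step to ramp traffic back down:

[HttpPost]
public ChangeDirectionResult GetRoutingDirection([FromBody] RerouteChangeRequest metrics)
{
    // HealthMonitor is hypothetical; substitute whatever signal you trust
    // (error rates from your monitoring system, a smoke test, etc.).
    bool stagingLooksHealthy = HealthMonitor.StagingIsHealthy();

    return new ChangeDirectionResult
    {
        // A positive step ramps traffic up, a negative one ramps it down.
        // The rule's min/max reroute percentages still apply.
        Step = stagingLooksHealthy ? 5 : -5
    };
}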

All this stuff is included in Standard Azure Web Apps so if you're using Standard apps (I have 19 websites running in my one Standard plan) then you already have this feature and it's included in the price. Pretty cool.

Related Links


Sponsor: Big thanks to Infragistics for sponsoring the feed this week. Responsive web design on any browser, any platform and any device with Infragistics jQuery/HTML5 Controls.  Get super-charged performance with the world’s fastest HTML5 Grid - Download for free now!



© 2015 Scott Hanselman. All rights reserved.
     
25 May 18:29

Dealing with Software Religious Arguments and Architectural Zealotry

by Scott Hanselman

Warning: Excessive use of Capitals for Emphasis ahead.

A friend of mine left his job to start a medical startup and has been in the middle of a Fight Over The Tech Stack. The current challenge is very bifurcated...very polarized. It's old vs. new, enterprise vs. startup, closed vs. open source, reliable vs. untested. There doesn't seem to be any middle ground.

Sometimes fights like these start with a Zealot.

Zealot: a person who is fanatical and uncompromising in pursuit of their religious, political, or other ideals.

Not all, don't get mad yet, but sometimes. Sometimes a Technical Religious Zealot is on your team - or runs your team - and they can't make objective decisions about a particular piece of technology.

"Don't use Microsoft, it killed my Pappy! Rails? Please, that won't scale. Node? Maybe if you're 17 that'll work! The only real way to write right software is with Technology X."

The language may not be this overt, but the essence is that Software can only be built This Way.

Here's the thing. Lean in. There's lots of ways to build software. Lots of successful ways. In fact, Success is a great metric.

But there's a lot of crappy Java apps, there's a lot of crappy C# apps, and there's a lot of crappy Technology X apps.

Enthusiasm for a technology is understandable, especially if you've had previous success. I've worked in C++, Pascal, node.js, Java, and C#, myself. I've had great success with all of them, but I'm currently most excited about .NET and C#. I'm an enthusiast, to be clear. I've also told people who have hired me for projects that .NET wasn't the right tech for their problem.

Be excited about your technical religion, but don't merely respect others' technical religions: celebrate their successes and learn from them, as they may inform your own architectures. Every religion can learn from the others, and the same is true in software.

Beware the Zealots. Software is a place for measurement, for experience, for research, and for thoughtful and enthusiastic discussion. You or the Zealot may ultimately disagree with the team decision but you should disagree and commit. A good Chief Architect can pull all these diverse architectural conversations and business requirements into a reasonable (and likely hybrid) stack that will serve the company for years to come.

Dear Reader, how do you deal with Technology Decisions that turn into Religious Arguments? Sound off in the comments.

SOCIAL: Hey folks, please do follow me on Facebook https://fb.me/scott.hanselman or Twitter! https://twitter.com/shanselman

* Photo "Enthusiasm Rainbow Gel" by Raquel Baranow used under CC BY 2.0


Sponsor: Big thanks to Infragistics for sponsoring the feed this week! Responsive web design on any browser, any platform and any device with Infragistics jQuery/HTML5 Controls.  Get super-charged performance with the world’s fastest HTML5 Grid - Download for free now!



© 2015 Scott Hanselman. All rights reserved.
     
05 Apr 15:08

The 'FIERE d'etre dev' ('Proud to be a dev') T-shirts have arrived!

by Fier d'être développeur
Fière d'être dev

It's not just guys who write code, and it's not just guys who are proud to be developers!

So it was only natural that the women developers among our members get their T-shirt too. And now it's done.

"I received it this morning and I couldn't wait to put it on, because I'm proud to say that I'm a developer," said Bérengère Lagrange. As a reminder, Bérengère is a developer specializing in 4D development at Power in Lyon.

She had previously agreed to answer our questions; take a look back at the interview.

If you, too, are proud to be a developer, don't hesitate to join the association to let people know!

04 Apr 19:37

Doing Terrible Things To Your Code

by Jeff Atwood

In 1992, I thought I was the best programmer in the world. In my defense, I had just graduated from college, this was pre-Internet, and I lived in Boulder, Colorado working in small business jobs where I was lucky to even hear about other programmers much less meet them.

I eventually fell in with a guy named Bill O'Neil, who hired me to do contract programming. He formed a company with the regrettably generic name of Computer Research & Technologies, and we proceeded to work on various gigs together, building line of business CRUD apps in Visual Basic or FoxPro running on Windows 3.1 (and sometimes DOS, though we had a sense by then that this new-fangled GUI thing was here to stay).

Bill was the first professional programmer I had ever worked with. Heck, for that matter, he was the first programmer I ever worked with. He'd spec out some work with me, I'd build it in Visual Basic, and then I'd hand it over to him for review. He'd then calmly proceed to utterly demolish my code:

  • Tab order? Wrong.
  • Entering a number instead of a string? Crash.
  • Entering a date in the past? Crash.
  • Entering too many characters? Crash.
  • UI element alignment? Off.
  • Does it work with unusual characters in names like, say, O'Neil? Nope.

One thing that surprised me was that the code itself was rarely the problem. He occasionally had some comments about the way I wrote or structured the code, but what I clearly had no idea about was testing my code.

I dreaded handing my work over to him for inspection. I slowly, painfully learned that the truly difficult part of coding is dealing with the thousands of ways things can go wrong with your application at any given time – most of them user related.

That was my first experience with the buddy system, and thanks to Bill, I came out of that relationship with a deep respect for software craftsmanship. I have no idea what Bill is up to these days, but I tip my hat to him, wherever he is. I didn't always enjoy it, but learning to develop discipline around testing (and breaking) my own stuff unquestionably made me a better programmer.

It's tempting to lay all this responsibility at the feet of the mythical QA engineer.

If you are ever lucky enough to work with one, you should have a very, very healthy fear of professional testers. They are terrifying. Just scan this "Did I remember to test" list and you'll be having the worst kind of flashbacks in no time. And that's the abbreviated version of his list.

I believe a key turning point in every professional programmer's working life is when you realize you are your own worst enemy, and the only way to mitigate that threat is to embrace it. Act like your own worst enemy. Break your UI. Break your code. Do terrible things to your software.

This means programmers need a good working knowledge of at least the common mistakes, the frequent cases that average programmers tend to miss, to work against. You are tester zero. This is your responsibility.

Let's start with Patrick McKenzie's classic Falsehoods Programmers Believe about Names:

  1. People have exactly one canonical full name.
  2. People have exactly one full name which they go by.
  3. People have, at this point in time, exactly one canonical full name.
  4. People have, at this point in time, one full name which they go by.
  5. People have exactly N names, for any value of N.
  6. People’s names fit within a certain defined amount of space.
  7. People’s names do not change.
  8. People’s names change, but only at a certain enumerated set of events.
  9. People’s names are written in ASCII.
  10. People’s names are written in any single character set.

That's just the first 10. There are thirty more. Plus a lot in the comments if you're in the mood for extra credit. Or, how does Falsehoods Programmers Believe About Time grab you?

  1. There are always 24 hours in a day.
  2. Months have either 30 or 31 days.
  3. Years have 365 days.
  4. February is always 28 days long.
  5. Any 24-hour period will always begin and end in the same day (or week, or month).
  6. A week always begins and ends in the same month.
  7. A week (or a month) always begins and ends in the same year.
  8. The machine that a program runs on will always be in the GMT time zone.
  9. Ok, that’s not true. But at least the time zone in which a program has to run will never change.
  10. Well, surely there will never be a change to the time zone in which a program has to run in production.
  11. The system clock will always be set to the correct local time.
  12. The system clock will always be set to a time that is not wildly different from the correct local time.
  13. If the system clock is incorrect, it will at least always be off by a consistent number of seconds.
  14. The server clock and the client clock will always be set to the same time.
  15. The server clock and the client clock will always be set to around the same time.

Are there more? Of course there are! There's even a whole additional list of stuff he forgot when he put that giant list together.

Catastrophic Error - User attempted to use program in the manner program was meant to be used

I think you can see where this is going. This is programming. We do this stuff for fun, remember?

But in true made-for-TV fashion, wait, there's more! Seriously, guys, where are you going? Get back here. We have more awesome failure states to learn about:

At this point I wouldn't blame you if you decided to quit programming altogether. But I think it's better if we learn to do for each other what Bill did for me, twenty years ago — teach less experienced developers that a good programmer knows they have to do terrible things to their code. Do it because if you don't, I guarantee you other people will, and when they do, they will either walk away or create a support ticket. I'm not sure which is worse.
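To make that concrete, here is a tiny sketch of what "doing terrible things" can look like in practice: a parameterized test battering a routine with the kind of input Bill would have tried. It's written in C# with xUnit purely for illustration, and FormatGreeting and the test data are hypothetical.

using System;
using Xunit;

public class DoingTerribleThingsTests
{
    // Hypothetical routine under test: formats a greeting for a user's name.
    static string FormatGreeting(string name)
    {
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("A name is required.", "name");
        return string.Format("Hello, {0}!", name.Trim());
    }

    [Theory]
    [InlineData("O'Neil")]                    // apostrophes (hi, Bill)
    [InlineData("José García")]               // non-ASCII characters
    [InlineData("  Robert'); DROP TABLE--")]  // hostile punctuation and stray whitespace
    public void Survives_awkward_names(string name)
    {
        Assert.Contains(name.Trim(), FormatGreeting(name));
    }

    [Theory]
    [InlineData(null)]
    [InlineData("")]
    [InlineData("   ")]
    public void Rejects_missing_names(string name)
    {
        Assert.Throws<ArgumentException>(() => { FormatGreeting(name); });
    }
}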

[advertisement] Find a better job the Stack Overflow way - what you need when you need it, no spam, and no scams.
04 Apr 12:56

Welcome to The Internet of Compromised Things

by Jeff Atwood

This post is a bit of a public service announcement, so I'll get right to the point:

Every time you use WiFi, ask yourself: could I be connecting to the Internet through a compromised router with malware?

It's becoming more and more common to see malware installed not at the server, desktop, laptop, or smartphone level, but at the router level. Routers have become quite capable, powerful little computers in their own right over the last 5 years, and that means they can, unfortunately, be harnessed to work against you.

I write about this because it recently happened to two people I know.

In both cases, they eventually determined the source of the problem was that the router they were connecting to the Internet through had been compromised.

This is way more evil genius than infecting a mere computer. If you can manage to systematically infect common home and business routers, you can potentially compromise every computer connected to them.

Hilarious meme images I am contractually obligated to add to each blog post aside, this is scary stuff and you should be scared.

Router malware is the ultimate man-in-the-middle attack. For all meaningful traffic sent through a compromised router that isn't HTTPS encrypted, it is 100% game over. The attacker will certainly be sending all that traffic somewhere they can sniff it for anything important: logins, passwords, credit card info, other personal or financial information. And they can direct you to phishing websites at will – if you think you're on the "real" login page for the banking site you use, think again.

Heck, even if you completely trust the person whose router you are using, they could technically be doing this to you. But they probably aren't.

Probably.

In John's case, the attackers inserted annoying ads in all unencrypted web traffic, which is an obvious tell to a sophisticated user. But how exactly would the average user figure out where this junk is coming from (or worse, assume the regular web is just full of ad junk all the time), when even a technical guy like John – founder of the open source Ghost blogging software used on this very blog – was flummoxed?

But that's OK, we're smart users who would only access public WiFi using HTTPS websites, right? Sadly, even if the traffic is HTTPS encrypted, it can still be subverted! There's an extremely technical blow-by-blow analysis at Cryptostorm, but the TL;DR is this:

Compromised router answers DNS req for *.google.com to 3rd party with faked HTTPS cert, you download malware Chrome. Game over.

HTTPS certificate shenanigans. DNS and BGP manipulation. Very hairy stuff.

How is this possible? Let's start with the weakest link, your router. Or more specifically, the programmers responsible for coding the admin interface to your router.

They must be terribly incompetent coders to let your router get compromised over the Internet, since one of the major selling points of a router is to act as a basic firewall layer between the Internet and you… right?

In their defense, that part of a router generally works as advertised. More commonly, you aren't being attacked from the hardened outside. You're being attacked from the soft, creamy inside.

That's right, the calls are coming from inside your house!

By that I mean you'll visit a malicious website that scripts your own browser to access the web-based admin pages of your router, and reset (or use the default) admin passwords to reconfigure it.

Nasty, isn't it? They attack from the inside using your own browser. But that's not the only way.

  • Maybe you accidentally turned on remote administration, so your router can be modified from the outside.

  • Maybe you left your router's admin passwords at default.

  • Maybe there is a legitimate external exploit for your router and you're running a very old version of firmware.

  • Maybe your ISP provided your router and made a security error in the configuration of the device.

In addition to being kind of terrifying, this does not bode well for the Internet of Things.

Internet of Compromised Things, more like.

OK, so what can we do about this? There's no perfect answer; I think it has to be a defense in depth strategy.

Inside Your Home

Buy a new, quality router. You don't want a router that's years old and hasn't been updated. But on the other hand you also don't want something too new that hasn't been vetted for firmware and/or security issues in the real world.

Also, any router your ISP provides is going to be about as crappy and "recent" as the awful stereo system you get in a new car. So I say stick with well known consumer brands. There are some hardcore folks who think all consumer routers are trash, so YMMV.

I can recommend the Asus RT-AC87U – it did very well in the SmallNetBuilder tests, Asus is a respectable brand, it's been out a year, and for most people, this is probably an upgrade over what you currently have without being totally bleeding edge overkill. I know it is an upgrade for me.

(I am also eagerly awaiting Eero as a domestic best of breed device with amazing custom firmware, and have one pre-ordered, but it hasn't shipped yet.)

Download and install the latest firmware. Ideally, do this before connecting the device to the Internet. But if you connect and then immediately use the firmware auto-update feature, who am I to judge you.

Change the default admin passwords. Don't leave it at the documented defaults, because then it could be potentially scripted and accessed.

Turn off WPS. Turns out the Wi-Fi Protected Setup feature intended to make it "easy" to connect to a router by pressing a button or entering a PIN made it … a bit too easy. This is always on by default, so be sure to disable it.

Turn off uPNP. Since we're talking about attacks that come from "inside your house", uPNP offers zero protection as it has no method of authentication. If you need it for specific apps, you'll find out, and you can forward those ports manually as needed.

Make sure remote administration is turned off. I've never owned a router that had this on by default, but check just to be double plus sure.

For Wifi, turn on WPA2+AES and use a long, strong password. Again, I feel most modern routers get the defaults right these days, but just check. The password is your responsibility, and password strength matters tremendously for wireless security, so be sure to make it a long one – at least 20 characters with all the variability you can muster.

Pick a unique SSID. Default SSIDs just scream hack me, for I have all defaults and a clueless owner. And no, don't bother "hiding" your SSID, it's a waste of time.

Optional: use less congested channels for WiFi. The default is "auto", but you can sometimes get better performance by picking less used frequencies at the ends of the spectrum. As summarized by official ASUS support reps:

  • Set 2.4 GHz channel bandwidth to 40 MHz, and change the control channel to 1, 6 or 11.

  • Set 5 GHz channel bandwidth to 80 MHz, and change the control channel to 165 or 161.

Experts only: install an open source firmware. I discussed this a fair bit in Everyone Needs a Router, but you have to be very careful which router model you buy, and you'll probably need to stick with older models. There are several which are specifically sold to be friendly to open source firmware.

Outside Your Home

Well, this one is simple. Assume everything you do outside your home, on a remote network or over WiFi is being monitored by IBGs: Internet Bad Guys.

I know, kind of an oppressive way to voyage out into the world, but it's better to start out with a defensive mindset, because you could be connecting to anyone's compromised router or network out there.

But, good news. There are only two key things you need to remember once you're outside, facing down that fiery ball of hell in the sky and armies of IBGs.

  1. Never access anything but HTTPS websites.

    If it isn't available over HTTPS, don't go there!

    You might be OK with HTTP if you are not logging in to the website, just browsing it, but even then IBGs could inject malware in the page and potentially compromise your device. And never, ever enter anything over HTTP you aren't 100% comfortable with bad guys seeing and using against you somehow.

    We've made tremendous progress in HTTPS Everywhere over the last 5 years, and these days most major websites offer (or even better, force) HTTPS access. So if you just want to quickly check your GMail or Facebook or Twitter, you will be fine, because those services all force HTTPS.

  2. If you must access non-HTTPS websites, or you are not sure, always use a VPN.

    A VPN encrypts all your traffic, so you no longer have to worry about using HTTPS. You do have to worry about whether or not you trust your VPN provider, but that's a much longer discussion than I want to get into right now.

    It's a good idea to pick a go-to VPN provider so you have one ready and get used to how it works over time. Initially it will feel like a bunch of extra work, and it kinda is, but if you care about your security an encrypt-everything VPN is bedrock. And if you don't care about your security, well, why are you even reading this?

If it feels like these are both variants of the same rule, always strongly encrypt everything, you aren't wrong. That's the way things are headed. The math is as sound as it ever was – but unfortunately the people and devices, less so.

Be Safe Out There

Until I heard Damien's story and John's story, I had no idea router hardware could be such a huge point of compromise. I didn't realize that you could be innocently visiting a friend's house, and because he happens to be the parent of three teenage boys and the owner of an old, unsecured router that you connect to via WiFi … your life will suddenly get a lot more complicated.

As the amount of stuff we connect to the Internet grows, we have to understand that the Internet of Things is a bunch of tiny, powerful computers, too – and they need the same strong attention to security that our smartphones, laptops, and servers already enjoy.

[advertisement] At Stack Overflow, we help developers learn, share, and grow. Whether you’re looking for your next dream job or looking to build out your team, we've got your back.
07 Mar 21:10

Download all your NuGet Package Licenses

The other day I was discussing the open source dependencies we had in a project with a lawyer. Forgetting my IANAL (I am not a lawyer) status, I made some bold statement regarding our legal obligations, or lack thereof, with respect to the licenses.

I can just see her rolling her eyes and thinking to herself, “ORLY?” She patiently and kindly asked if I could produce a list of all the licenses in the project.

Groan! This means I need to look at every package in the solution and then either open the package and look for the license URL in the metadata, or I need to search for each package and find the license on NuGet.org.

If only the original creators of NuGet exposed the package metadata in a structured manner. If only they had the foresight to provide that information in a scriptable fashion.

Then it dawned on me. Hey! I’m one of those people! And that’s exactly what we did! I bet I could programmatically access this information. So I immediately opened up the Package Manager Console in Visual Studio and cranked out a PowerShell script…HA HA HA! Just kidding. I, being the lazy ass I am, turned to Google and hoped someone else figured it out before me.

I didn’t find an exact solution, but I found a really good start. This StackOverflow answer by Matt Ward shows how to download every license for a single package. I then found this post by Ed Courtenay to list every package in a solution. I combined the two together and tweaked them a bit (such as filtering out null project names) and ended up with this one liner you can paste into your Package Manager Console. Note that you’ll want to change the path to something that makes sense on your machine.

I posted this as a gist as well.

@( Get-Project -All | ? { $_.ProjectName } | % { Get-Package -ProjectName $_.ProjectName } ) | Sort -Unique | % { $pkg = $_ ; Try { (New-Object System.Net.WebClient).DownloadFile($pkg.LicenseUrl, 'c:\dev\licenses\' + $pkg.Id + ".txt") } Catch [system.exception] { Write-Host "Could not download license for $pkg" } }

UPDATE: My first attempt had a bug in the catch clause that would prevent it from showing the package when an exception occurred. Thanks to Graham Clark for noticing it, Stephen Yeadon for suggesting a fix, and Gabriel for providing a PR for the fix.

Be sure to double check that the list is correct by comparing it to the list of package folders in your packages directory. This isn’t the complete list for my project because we also reference submodules, but it’s a really great start!

I have high hopes that some PowerShell guru will come along and improve it even more. But it works on my machine!

02 Mar 20:03

Git Alias To Migrate Commits To A Branch

Show of hands if this ever happens to you. After a long day of fighting fires at work, you settle into your favorite chair to unwind and write code. Your fingers fly over the keyboard punctuating your code with semi-colons or parentheses or whatever is appropriate.

But after a few commits, it dawns on you that you’re in the wrong branch. Yeah? Me too. This happens to me all the time because I lack impulse control. You can put your hands down now.

GitHub Flow

As you may know, a key component of the GitHub Flow lightweight workflow is to do all new feature work in a branch. Fixing a bug? Create a branch! Adding a new feature? Create a branch! Need to climb a tree? Well, you get the picture.

So what happens when you run into the situation I just described? Are you stuck? Heavens no! The thing about Git is that its very design supports fixing up mistakes after the fact. It’s very forgiving in this regard. For example, a recent blog post on the GitHub blog highlights all the different ways you can undo mistakes in Git.

The Easy Case - Fixing master

This is the simple case. I made commits on master that were intended for a branch off of master. Let’s walk through this scenario step by step with some visual aids.

The following diagram shows the state of my repository before I got all itchy trigger finger on it.

Initial state

As you can see, I have two commits to the master branch. HEAD points to the tip of my current branch. You can also see a remote tracking branch named origin/master (this is a special branch that tracks the master branch on the remote server). So at this point, my local master matches the master on the server.

This is the state of my repository when I am struck by inspiration and I start to code.

First

I make one commit. Then two.

Second Commit - fixing time

Each time I make a commit, the local master branch is updated to the new commit. Uh oh! As in the scenario in the opening paragraph, I meant to create these two commits on a new branch creatively named new-branch. I better fix this up.

The first step is to create the new branch. We can create it and check it out all in one step.

git checkout -b new-branch

checkout a new branch

At this point, both the new-branch and master point to the same commit. Now I can force the master branch back to its original position.

git branch --force master origin/master

force branch master

Here’s the set of commands that I ran all together.

git checkout -b new-branch
git branch --force master origin/master

Fixing up a non-master branch

The wrong branch

This case is a bit more complicated. Here I have a branch named wrong-branch that is my current branch. But I thought I was working in the master branch. I make two commits in this branch by mistake, which causes this fine mess.

A fine mess

What I want here is to migrate commits E and F to a new branch off of master. Let’s walk through the steps one by one.

Not to worry. As before, I create a new branch.

git checkout -b new-branch

Always a new branch

Again, just like before, I force wrong-branch back to its state on the server.

git branch --force wrong-branch origin/wrong-branch

force branch

But now, I need to move the commits from the branch new-branch onto master.

git rebase --onto master wrong-branch

Note that git rebase --onto works on the current branch (HEAD). So git rebase --onto master wrong-branch is saying migrate the commits between wrong-branch and HEAD onto master.

Final result

The git rebase command is a great way to move commits (well, you actually replay them, but that’s a story for another day) onto other branches. The handy --onto flag makes it possible to specify a range of commits to move elsewhere. Pivotal Labs has a helpful post that describes this option in more detail.

So in this case, I moved commits E and F because they are the ones since wrong-branch on the current branch, new-branch.

Here’s the set of commands I ran all together.

git checkout -b new-branch
git branch --force wrong-branch origin/wrong-branch
git rebase --onto master wrong-branch

Migrate commit ranges - great for local only branches

The assumption I made in the past two examples is that I’m working with branches that I’ve pushed to a remote. When you push a branch to a remote using the -u option, you get a local “remote tracking branch” that tracks the state of the branch on the remote server.

For example, when I pushed the wrong-branch, I ran the command git push -u origin wrong-branch which not only pushes the branch to the remote (named origin), but creates the branch named origin/wrong-branch which corresponds to the state of wrong-branch on the server.

I can use a remote tracking branch as a convenient “Save Point” that I can reset to if I accidentally make commits on the corresponding local branch. It makes it easy to find the range of commits that are only on my machine and move just those.
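
For example, to see the commits that exist only on my machine (assuming the remote is named origin), I can compare the branch against its remote tracking branch:

git log --oneline origin/wrong-branch..wrong-branch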

But I could be in the situation where I don’t have a remote branch. Or maybe the branch I started muddying up already had a local commit that I don’t want to move.

That’s fine, I can just specify a commit range. For example, if I only wanted to move the last commit on wrong-branch into a new branch, I might do this.

git checkout -b new-branch
git branch --force wrong-branch HEAD~1
git rebase --onto master wrong-branch

Alias was a fine TV show, but a better Git technique

When you see the set of commands I ran, I hope you’re thinking “Hey, that looks like a rote series of steps and you should automate that!” This is why I like you. You’re very clever and very correct!

Automating a series of git commands sounds like a job for a Git Alias! Aliases are a powerful way of automating or extending Git with your own Git commands.

In a blog post I wrote last year, GitHub Flow Like a Pro with these 13 Git aliases, I wrote about some aliases I use to support my workflow.

Well now I have one more to add to this list. I decided to call this alias migrate. Here’s the definition for the alias. Notice that it uses git rebase --onto which we used for the second scenario I described. It turns out that this happens to work for the first scenario too.

    migrate = "!f(){ CURRENT=$(git symbolic-ref --short HEAD); git checkout -b $1 && git branch --force $CURRENT ${3-'$CURRENT@{u}'} && git rebase --onto ${2-master} $CURRENT; }; f"

There’s a lot going on here and I could probably write a whole blog post unpacking it, but for now I’ll try and focus on the usage pattern.

This alias has one required parameter, the new branch name, and two optional parameters.

| parameter | type | description |
|---|---|---|
| branch-name | required | Name of the new branch. |
| target-branch | optional | The branch that the new branch is created off of. Defaults to “master”. |
| commit-range | optional | The commits to migrate. Defaults to the current remote tracking branch. |

This command always migrates the current branch.

If I’m on a branch and want to migrate the local only commits over to master, I can just run git migrate new-branch-name. This works whether I’m on master or some other wrong branch.

I can also migrate the commits to a branch created off of something other than master using this command: git migrate new-branch other-branch

And finally, if I want to just migrate the last commit to a new branch created off of master, I can do this.

git migrate new-branch master HEAD~1
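
If you’d rather not edit your .gitconfig by hand, you should be able to install the alias from a POSIX shell with a one-liner like this (quoting will differ on Windows cmd):

git config --global alias.migrate '!f(){ CURRENT=$(git symbolic-ref --short HEAD); git checkout -b $1 && git branch --force $CURRENT ${3-$CURRENT@{u}} && git rebase --onto ${2-master} $CURRENT; }; f'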

And there you go. A nice alias that automates a set of steps to fix a common mistake. Let me know if you find it useful!

Also, I want to give a special thanks to @mhagger for his help with this post. The original draft pull request had the grace of a two-year-old neurosurgeon with a mallet. The straightforward Git commands I proposed would rewrite the working tree twice. With his proposed changes, this alias never rewrites the working tree. Like math, there’s often a more elegant solution with Git once you understand the available tools.

02 Mar 19:49

The Meaning of Work

The TED Radio Hour podcast has an amazing episode entitled “The Meaning of Work”. It consists of four segments that cover various aspects of finding meaning and motivation at work. You should definitely listen to it, but I’ll provide a brief summary here of some points I found interesting.

The first segment features Margaret Heffernan who gave this TED talk about what makes high functioning teams.

At the beginning of the talk, she recounts a study by the biologist William Muir, emphasis mine.

Chickens live in groups, so first of all, he selected just an average flock, and he let it alone for six generations. But then he created a second group of the individually most productive chickens – you could call them superchickens – and he put them together in a superflock, and each generation, he selected only the most productive for breeding.

After six generations had passed, what did he find? Well, the first group, the average group, was doing just fine. They were all plump and fully feathered and egg production had increased dramatically. What about the second group? Well, all but three were dead. They’d pecked the rest to death. The individually productive chickens had only achieved their success by suppressing the productivity of the rest.

Sound familiar? We’re often taught that great results come from solitary geniuses who bunker down to work hard and emerge some time later with some great work of genius to bestow upon the world, alongside a luxurious beard perhaps.

Jim Carrey in some movie

But it’s a myth. Great results come from deep collaborations among teams of people who trust each other. Heffernan goes on to cite an MIT study that noted what led to high functioning teams.

Nor were the most successful groups the ones that had the highest aggregate I.Q. Instead, they had three characteristics, the really successful teams. First of all, they showed high degrees of social sensitivity to each other. This is measured by something called the Reading the Mind in the Eyes Test. It’s broadly considered a test for empathy, and the groups that scored highly on this did better. Secondly, the successful groups gave roughly equal time to each other, so that no one voice dominated, but neither were there any passengers. And thirdly, the more successful groups had more women in them.

People who are attuned to each other, who can talk to each other with trust, empathy, and candor, create an environment where ideas can really flow. It’s wonderful to work in such an environment.

Another thing that struck me in the Radio Hour interview is this exchange she describes often having with businesses.

What’s the driving goal here? And they answer, $60 billion in revenue. And I’ll say, “you have got to be joking! What on earth makes you think that everybody is really going to give it their all to hit a revenue target. You know you have to talk to something much deeper inside people than that. You have to talk to people about something that makes a difference to them everyday if you want them to bring their best and do their best and feel that you’ve given them the opportunity to do the best work they’ve ever done.”

This resonates with me. Revenue and profit targets don’t put a spark in my step in the morning.

Rather, it’s the story of Anna and how my own kids reflect that story that get me motivated. I’m excited to work on a platform that the future makers of the world will use to build their next great ideas.

I don’t mind getting paid well, but it doesn’t produce a deep connection to my work. As Heffernan points out,

For decades, we’ve tried to motivate people with money, even though we’ve got a vast amount of research that shows that money erodes social connectedness.

That lesson is reiterated in Drive: the surprising truth about what motivates us which I’ve linked to many times in the past.

What gets me up in the morning with a spring in my step is working on a platform that houses the code that helps send people into space or coordinates humanitarian efforts here on earth.

What motivates you?

Oh, by the way, many teams at GitHub including mine are hiring. Come do meaningful work with us!

15 Dec 19:08

Announcing TypeScript 1.5

by Jonathan Turner [MS]

Today we’re happy to announce the release of TypeScript 1.5.  This release took an alpha, a beta, and your help to get here.  It’s a big one, so let’s get started!

TypeScript 1.5 is part of the newly released Visual Studio 2015.  You can also get a separate download for Visual Studio 2013, npm, and straight from GitHub.

ES6 support


TypeScript 1.5 - closing the gap on Kangax ES6 support

TypeScript 1.5 adds a number of new ES6 features including modules, destructuring, spread, for..of, symbols, computed properties, let/const, and tagged string templates.  There is quite a bit of information available in the links above, including samples of how to use these features.  With these features, TypeScript takes a big step in completing its goal of becoming a superset of ES6 and offering type-checking for all of ES6’s major features.

In the above table, you can see our progress on the Kangax ES6 support table.  This table, originally for JS engines, also shows coverage of the features transpilers and polyfills support for ES5 output.  With TypeScript 1.5, we doubled the number of passing tests and will continue to improve over the next few releases.

Modules

There has been quite a bit of work on how modules work in the 1.5 release.  With this release, we’ve begun supporting the official ES6 modules, we’re simplifying how modules work, and we’re adding support for more kinds of modules as output.

ES6 modules

TypeScript 1.5 supports the new module syntax from ES6.  The ES6 module syntax offers a rich way of working with modules.  Similar to external modules in TypeScript, ES6 modules can import modules and export each piece of your public API.  Additionally, ES6 modules allow you to selectively import the parts of that public API you want to use.

import * as Math from "my/math";
import { add, subtract } from "my/math";

You can also work with the module itself using a ‘default’ export.  The default export gives you a handle on the module’s main content.  This gives you even more precise control over the API you make available.

// math.ts

export function add(x, y) { return x + y }
export function subtract(x, y) { return x - y }
export default function multiply(x, y) { return x * y }

// myFile.ts

import {add, subtract} from "math";
import times from "math";
var result = times(add(2, 3), subtract(5, 3));

If you look closely, you can see 'export default' used on the last line of math.ts.  This line defines a 'default' export, which is what is exported when you don't import specific exports with curly braces ({ }) but instead use a bare name, like the second line of myFile.ts.

Simplifying modules

One of the common points of feedback we’ve heard as new users pick up TypeScript for the first time is that the modules are a bit confusing.  Before ES6, there were internal and external modules.  With support for ES6 modules, there is now yet another module concept to learn about.  We’re simplifying this with the 1.5 release.

Going forward, internal modules will be called ‘namespace’.  We chose to use this term because of the closeness between how this form works and namespaces in other languages, and how the pattern in JS, sometimes called IIFE, is used in practice. We’ve already updated the handbook to reflect this change.  We’re encouraging teams to use the new terminology and corresponding syntax.
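
As a rough sketch (the names here are invented, not from the release), the keyword change looks like this:

// Before 1.5: an internal module
module Validation {
    export function isEmail(s: string) { return s.indexOf("@") !== -1; }
}

// 1.5 and later: the same construct, now written as a namespace
namespace Validation {
    export function isUrl(s: string) { return s.indexOf("://") !== -1; }
}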

Likewise, external modules just become ‘modules’, with a strong emphasis on the standard ES6 module syntax.  With these changes, there is now just one ‘module’, and it works like the corresponding concept in JavaScript.  

New module output

TypeScript has supported multiple module loaders since the early days.  Because JavaScript is used in both the browser and on the server, TypeScript has supported compiling modules for either AMD or CommonJS.  

We’re adding two new module output formats to help continue support more JavaScript practices: SystemJS and UMD.  SystemJS will allow you to use ES6 modules closer to their native semantics without requiring an ES6-compatible browser engine.  UMD gives you a way to output a single module that works in both AMD and CommonJS.
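
Assuming the new formats follow the existing pattern of the --module compiler flag, compiling for each of them looks roughly like this (the file name is just a placeholder):

tsc --module umd myFile.ts
tsc --module system myFile.ts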

Lightweight, portable projects

One of the tricky things with TypeScript projects is that it’s not often easy to move from a single file to working with a growing project of files.  You generally have two options: adding ///<reference> statements to tie your project together, or manually handling everything on the commandline.  Neither approach is particularly clean, and either easily becomes a mess as the project grows.  Additionally, only the ///<reference> approach works well with editors, so you inevitably have a number of them in addition to your build.

TypeScript 1.5 introduces a new feature to make getting started with TypeScript easier.  The compiler now supports ‘tsconfig.json’, a new file which allows you to specify the files in your project and the compiler settings to use.  This lets you create a lightweight project that can be used both on the command-line and within the editor.  In fact, VS Code, Sublime, Atom, and others already support using tsconfig.json files.
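
A minimal tsconfig.json might look something like this (the file names and settings here are only an example, not a requirement):

{
    "compilerOptions": {
        "target": "es5",
        "module": "commonjs",
        "sourceMap": true
    },
    "files": [
        "app.ts",
        "utils.ts"
    ]
}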

Decorators

The 1.5 release also adds support for the proposed Decorator feature of ES7, which is currently being developed in collaboration with the Angular, Ember, and Aurelia teams.  Since Decorators are being defined in ES7, which hasn’t stabilized yet, the feature is considered ‘experimental’, but it is already showing how powerful it is when working with rich libraries and applications. 

import {Component, View, NgFor, bootstrap} from "angular2/angular2";
import {loadFile} from "audioFile";
import {displayAudioFile} from "displayAudio";

@Component({selector: 'file-list'})
@View({template: `
  <select id="fileSelect" size="5">
    <option *ng-for="#item of items; #i = index"
      [selected]="selected === item"(click)="updateSelection()">{{ item }}</option>
  </select>`,
  directives: [NgFor]
})

class MyDisplay {
  items: string[];
  constructor() {
    this.items = ["item1", "item2"];
  }

  updateSelection() { … }
}
Using decorators in Angular 2

Decorators allow you to attach metadata to classes and members, as well as update the functionality of what is being decorated.  As you can see above, Angular 2 uses Decorators to define the HTML selector and template on a class directly. We’re excited to see what else developers do with this feature.

Note: to use decorators in your projects, you'll need to pass the --experimentalDecorators flag to the compiler.
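
To get a feel for the shape of a decorator itself, here is a minimal, made-up class decorator (purely illustrative, not part of the release):

// A class decorator receives the constructor and can modify or annotate it.
function sealed(constructor: Function) {
    // Prevent further properties from being added to the class or its prototype.
    Object.seal(constructor);
    Object.seal(constructor.prototype);
}

@sealed
class Greeter {
    greeting = "world";
    greet() { return "Hello, " + this.greeting; }
}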

What’s next

The 1.5 release has significant new language features to use in your projects.  It’s been a fairly long release, and we’re excited to hear your feedback on TypeScript 1.5 as well as jump in and finish TypeScript 1.6.

We’ve also heard your feedback that you’d like smaller releases, more often.  We’re currently working on ways to make that happen, so you get great features, high quality, and don’t have to wait long to get it.  Stay tuned…

Thanks!

This release was possible because of all the contributors that helped make it happen.  Special thanks to everyone in this list, who all contributed code to the 1.5 release:
  • Ahmad Farid 
  • Anders Hejlsberg
  • Arnav Singh
  • Basarat Ali Syed 
  • Bill Ticehurst 
  • Bryan Forbes 
  • Caitlin Potter 
  • Chris Bubernak
  • Colin Snover
  • Cyrus Najmabadi
  • Dan Quirk 
  • Daniel Rosenwasser
  • Dick van den Brink 
  • Dirk Bäumer 
  • Frank Wallis 
  • Guillaume Salles 
  • Ivo Gabe de Wolff 
  • James Whitney 
  • Jason Freeman
  • Jason Ramsay 
  • Johannes Rieken 
  • Jonathan Bond-Caron
  • Kagami Sascha Rosylight
  • Keith Mashinter
  • Lorant Pinter 
  • Masahiro Wakame
  • Mohamed Hegazy 
  • Oleg Mihailik
  • Paul van Brenk 
  • Pedro Maltez 
  • Ron Buckton 
  • Ryan Cavanaugh 
  • Sheetal Nandi
  • Shengping Zhong
  • Stan Thomas
  • Steve Lucco
  • Tingan Ho
  • Tomas Grubliauskas
  • TruongSinh Tran-Nguyen 
  • Vladimir Matveev
  • Yui Tanglertsampan
  • Zev Spitz 
  • Zhengbo Li
28 Oct 10:16

Renault presents the Coupé Corbusier concept

by Antoine Dufeu
An exhibition entitled Des voitures à habiter : automobile et modernisme XXe-XXIe siècles opens today at the Villa Savoye in Poissy, designed by the architect Le Corbusier. Renault is taking the opportunity to unveil a rather astonishing Coupé Corbusier concept. The new Renault Coupé Corbusier concept surprises: it breaks free of the aesthetic codes the French brand has been developing […]
26 Oct 12:32

Fairphone 2 hands-on: Modular phones are finally here

by Ars Staff

(credit: Andrii Degeler)

The Fairphone 2 is launching only in select European countries. The company says it plans to bring the device to other countries in 2016.

AMSTERDAM—With more and more similarly priced and specced Android smartphones arriving on the market, unique selling points are becoming increasingly rare. There's nothing bad about selling a decent phone with an attractive price tag, but it's always more interesting to take a look at something that stands out.

You don't have to add a plethora of unnecessary features or keep pumping the display resolution up, though. You can also stand out by changing the way a device is manufactured and sold. That's what Dutch startup Fairphone has been doing for a while now.


26 Oct 07:39

This 11-year-old is selling cryptographically secure passwords for $2 each

by Cyrus Farivar

Watch out, NSA. Mira Modi is helping everyone use better passwords. (credit: Julia Angwin)

We now live in a world where a New York City sixth grader is making money selling strong passwords. Earlier this month, Mira Modi, 11, began a small business at dicewarepasswords.com, where she generates six-word Diceware passphrases by hand.

Diceware is a well-known decades-old system for coming up with passwords. It involves rolling actual six-sided dice as a way to generate truly random numbers that are matched to a long list of English words. Those words are then combined into a non-sensical string ("ample banal bias delta gist latex") that exhibits true randomness and is therefore difficult to crack. The trick, though, is that these passphrases prove relatively easy for humans to memorize.

"This whole concept of making your own passwords and being super secure and stuff, I don’t think my friends understand that, but I think it’s cool," Modi told Ars by phone.


23 Oct 06:05

It's happening - OpenSSH for Windows...from Microsoft

by Scott Hanselman
OpenSSH for Windows

Back in June the folks over at the Microsoft PowerShell blog indicated they were going to support SSH in Windows soon. I read the post a few times and I must admit I read deeply between the lines and enjoyed the post very much. For example, this passage, with emphasis mine.

Finally, I'd like to share some background on today’s announcement, because this is the 3rd time the PowerShell team has attempted to support SSH.  The first attempts were during PowerShell V1 and V2 and were rejected.  Given our changes in leadership and culture, we decided to give it another try and this time, because we are able to show the clear and compelling customer value, the company is very supportive.  So I want to take a minute and thank all of you in the community who have been clearly and articulately making the case for why and how we should support SSH! 

Fast forward a few months and they've just released a VERY early version. It's not quite useful enough for a daily driver but it's heartening that it's happening. Sure, it's late, and ya, it should have happened years ago, but it's happening and it'll be built in. SSH will be one less thing to worry about.

Note as they said:

With this initial milestone complete, we are now making the code publicly available and open for public contributions. Please note that this code is still very early and should be treated as a developer preview and is not supported for use in production.

The repository is over at https://github.com/PowerShell/Win32-OpenSSH and the first release is here https://github.com/PowerShell/Win32-OpenSSH/releases. I just unblocked the zip and unzipped it into my c:\utils folder so it was in my path.

I SSH'ed into an Ubuntu machine I have running in Azure like this:

>ssh scott@foofoo.cloudapp.net -p 12345

I did have an issue immediately with an error and some formatting, which I filed and also discussed here. I was able to mostly work around it with "export TERM=xterm" but I'm sure they'll fix it, as again, it's super early.

As an alternative SSH client, try the Bitvise SSH Client. It has a command line app called "stermc" that acts like SSH. I made an ssh.bat file that contains just "stermc %1" and this lets me shush around nicely.


Sponsor: Many thanks to Atalasoft for sponsoring the feed this week. If your project requires image viewing, format freedom, scanning, or other document-centric workflows, Atalasoft’s document imaging experts can help. Evaluate their developer tools for 30 days with remarkable human support.


14 Oct 07:55

What is Trolling?

by Jeff Atwood

If you engage in discussion on the Internet long enough, you're bound to encounter it: someone calling someone else a troll.

The common interpretation of Troll is the Grimms' Fairy Tales, Lord of the Rings, "hangs out under a bridge" type of troll.

Thus, a troll is someone who exists to hurt people, cause harm, and break a bunch of stuff because that's something brutish trolls just … do, isn't it?

In that sense, calling someone a Troll is not so different from the pre-Internet tactic of calling someone a monster – implying that they lack all the self-control and self-awareness a normal human being would have.

Pretty harsh.

That might be what the term is evolving to mean, but it's not the original intent.

The original definition of troll was not a beast, but a fisherman:

Troll

verb \ˈtrōl\

  1. to fish with a hook and line that you pull through the water

  2. to search for or try to get (something)

  3. to search through (something)

If you're curious why the fishing metaphor is so apt, check out this interview:

There's so much fishing going on here someone should have probably applied for a permit first.

  • He engages in the interview just enough to get the other person to argue. From there, he fishes for anything that can nudge the argument into some kind of car wreck that everyone can gawk at, generating lots of views and publicity.

  • He isn't interested in learning anything about the movie, or getting any insight, however fleeting, into this celebrity and how they approached acting or directing. Those are perfunctory concerns, quickly discarded on the way to their true goal: generating controversy, the more the better.

I almost feel sorry for Quentin Tarantino, who is so obviously passionate about what he does, because this guy is a classic troll.

  1. He came to generate argument.
  2. He doesn't truly care about the topic.

Some trolls can seem to care about a topic, because they hold extreme views on it, and will hold forth at great length on said topic, in excruciating detail, to anyone who will listen. For days. Weeks. Months. But this is an illusion.

The most striking characteristic of the worst trolls is that their position on a given topic is absolutely written in stone, immutable, and they will defend said position to the death in the face of any criticism, evidence, or reason.

Look. I'm not new to the Internet. I know nobody has ever convinced anybody to change their mind about anything through mere online discussion before. It's unpossible.

But I love discussion. And in any discussion that has a purpose other than gladiatorial opinion bloodsport, the most telling question you can ask of anyone is this:

Why are you here?

Did you join this discussion to learn? To listen? To understand other perspectives? Or are you here to berate us and recite your talking points over and over? Are you more interested in fighting over who is right than actually communicating?

If you really care about a topic, you should want to learn as much as you can about it, to understand its boundaries, and the endless perspectives and details that make up any interesting topic. Heck, I don't even want anyone to change your mind. But you do have to demonstrate to us that you are at least somewhat willing to entertain other people's perspectives, and potentially evolve your position on the topic to a more nuanced, complex one over time.

In other words, are you here in good faith?

People whose actions demonstrate that they are participating in bad faith – whether they are on the "right" side of the debate or not – need to be shown the door.

So now you know how to identify a troll, at least by the classic definition. But how do you handle a troll?

You walk away.

I'm afraid I don't have anything uniquely insightful to offer over that old chestnut, "Don't feed the trolls." Responding to a troll just gives them evidence of their success for others to enjoy, and powerful incentive to try it again to get a rise out of the next sucker and satiate their perverse desire for opinion bloodsport. Someone has to break the chain.

I'm all for giving people the benefit of the doubt. Just because someone has a controversial opinion, or seems kind of argumentative (guilty, by the way), doesn't automatically make them a troll. But their actions over time might.

(I also recognize that in matters of social justice, there is sometimes value in speaking out and speaking up, versus walking away.)

So the next time you encounter someone who can't stop arguing, who seems unable to generate anything other than heat and friction, whose actions amply demonstrate that they are no longer participating in the conversation in good faith … just walk away. Don't take the bait.

Even if sometimes, that troll is you.

[advertisement] How are you showing off your awesome? Create a Stack Overflow Careers profile and show off all of your hard work from Stack Overflow, Github, and virtually every other coding site. Who knows, you might even get recruited for a great new position!
07 Oct 09:41

Hands-on: The Surface Book is a laptop. But it’s also a tablet.

by Peter Bright

Video shot and edited by Jennifer Hahn. (video link)

Almost as soon as Microsoft announced the Surface RT and Surface Pro in 2012, there was an immediate reaction: "OK, that's sort of nice, I guess, but when will there be a Surface laptop?"

There's never been any doubt about the Surface line's build quality, attention to detail, and aesthetics. But for many of us, a tablet with a kickstand and separate keyboard lacks an essential quality: lapability. The Surface Pro 3's variable-position kickstand and more secure magnetic keyboard attachment meant that the thing could be used on your lap at a pinch, but it never had the stability or convenience of a true laptop with a stiff hinge, and no matter how much Microsoft claimed it to be a laptop replacement, it wasn't.


06 Oct 15:50

Microsoft introduces Surface Book, a convertible for Surface fans

by Andrew Cunningham

The Microsoft Surface Book


NEW YORK—In addition to the Surface Pro 4, Microsoft has unveiled another brand-new PC today. The Microsoft Surface Book is Microsoft's first-ever convertible laptop, and it looks like it's aimed at people who are intrigued by the Surface Pro 4 but aren't interested in a tablet.

The laptop has a 13.5-inch, 3000×2000 PixelSense touchscreen with 6 million pixels, and it includes an Intel Skylake processor and a dedicated Nvidia GeForce GPU with GDDR5 memory. It also has PCI Express-connected solid-state storage. Microsoft claims it's the "fastest 13-inch laptop anywhere on any planet," a statement that we don't have the means to confirm, and that it's "two times more powerful" than the 13-inch MacBook Pro.


06 Oct 11:23

Penny Pinching in the Cloud: Your web app doesn't need 64-bit

by Scott Hanselman

Often times I hear folks say that they need (or want) 64-bit support when they deploy to the cloud. They'll deploy their modest application to Azure, for example, as a Web Application, then immediately go to the settings and set it to 64-bit. So many years later, and "do I need 64-bit?" is still confusing to a lot of people.

Change your Azure bitness settings here

I made a basic Hello World ASP.NET app and deployed it. Now, I go to that Web Apps "blade" in the Azure Portal, click Tools, then Process Explorer (after exercising the app a little). I'm running 32-bit here. The K is the Kudu "sidecar" deployment site (for things like Git deploy and diagnostics), and the other icon is the production site.

30 meg working set for IIS in 32 bit mode

Now, I'll swap it to 64-bit and exercise the web app again. Remember, this app is just a super basic app.

102 meg working set in IIS in 64-bit mode

See how the working set (memory) jumped? It's a little extreme in a hello world example, but it's always going to be bigger than 32-bit. Always. 64-bit'll do that. Does your site need to address more than 4 gigabytes of memory from any single process? No? Then your web app probably doesn't need to be 64-bit. Don't believe me? Test it for yourself.

I'll go even further. Most web apps don't need 64-bit, but here's the real reason. If you stay 32-bit when putting your Web Application in the cloud you can fit more applications into a limited space. Maybe your Medium App Service Plan can actually be a Small and save you money.

Until 64-bit only is the default in things like Nano Server, today you can fit more Web Apps into limited memory if you stick with 32-bit.

I personally have 18 web apps in a Standard Small App Service in my personal Microsoft Azure account. They are sites like my podcast Hanselminutes and they get decent traffic. But most never get over 300-600 megs of memory and there's literally no reason for them to be 64-bit today. As such, I can fit more in the Small App Service Plan I've chosen.

18 web apps in a single app service plan

Remember that the Azure Pricing Calculator isn't totally obvious when it comes to Web Applications. It's not ~$55 per Basic Web Site. There's a Virtual Machine under there, they call the whole thing an "App Service Plan" and your Web Apps sit on top of that plan/VM. It's really $55 for a plan that supports as many web applications as you can comfortably fit in there.

The cloud is a great deal when you're smart about the resources you've been given. If you're using Azure and you're not using most of the resources in your service plan, you're possibly wasting money.

What Penny Pinching in the Cloud tips do you have? Disagree with this advice? Sound off in the comments.

Related Links


Sponsor: Thanks to my friends at Accusoft for sponsoring the feed this week. Just a few lines of code lets you add HTML5 document viewing, redaction, annotation, and more to your apps and websites. Download a free trial now!



02 Oct 12:19

Our Brave New World of 4K Displays

by Jeff Atwood

It's been three years since I last upgraded monitors. Those inexpensive Korean 27" IPS panels, with a resolution of 2560×1440 – also known as 1440p – have served me well. You have no idea how many people I've witnessed being Wrong On The Internet on these babies.

I recently got the upgrade itch real bad:

  • 4K monitors have stabilized as a category, moving from super bleeding edge "I'm probably going to regret buying this" early adopter stuff to something beginning to approach mainstream maturity.

  • Windows 10, with its promise of better high DPI handling, was released. I know, I know, we've been promised reasonable DPI handling in Windows for the last five years, but hope springs eternal. This time will be different!™

  • I needed a reason to buy a new high end video card, which I was also itching to upgrade, and simplify from a dual card config back to a (very powerful) single card config.

  • I wanted to rid myself of the monitor power bricks and USB powered DVI to DisplayPort converters that those Korean monitors required. I covet simple, modern DisplayPort connectors. I was beginning to feel like a bad person because I had never even owned a display that had a DisplayPort connector. First world problems, man.

  • 1440p at 27" is decent, but it's also … sort of an awkward no-man's land. Nowhere near high enough resolution to be retina, but it is high enough that you probably want to scale things a bit. After living with this for a few years, I think it's better to just suck it up and deal with giant pixels (34" at 1440p, say), or go with something much more high resolution and trust that everyone is getting their collective act together by now on software support for high DPI.

Given my great experiences with modern high DPI smartphone and tablet displays (are there any other kind these days?), I want those same beautiful high resolution displays on my desktop, too. I'm good enough, I'm smart enough, and doggone it, people like me.

I was excited, then, to discover some strong recommendations for the Asus PB279Q.

The Asus PB279Q is a 27" panel, same size as my previous cheap Korean IPS monitors, but it is more premium in every regard:

  • 3840×2160
  • "professional grade" color reproduction
  • thinner bezel
  • lighter weight
  • semi-matte (not super glossy)
  • integrated power (no external power brick)
  • DisplayPort 1.2 and HDMI 1.4 support built in

It is also a more premium monitor in price, at around $700, whereas I got my super-cheap no-frills Korean IPS 1440p monitors for roughly half that price. But when I say no-frills, I mean it – these Korean monitors didn't even have on-screen controls!

4K is a surprisingly big bump in resolution over 1440p — we go from 3.7 to 8.3 megapixels.

But, is it … retina?

It depends how you define that term, and from what distance you're viewing the screen. Per Is This Retina:

27" 3840×2160 'retina' at a viewing distance of 21"
27" 2560×1440 'retina' at a viewing distance of 32"

With proper computer desk ergonomics you should be sitting with the top of your monitor at eye level, at about an arm's length in front of you. I just measured my arm and, fully extended, it's about 26". Sitting at my desk, I'm probably about that distance from my monitor or a bit closer, but certainly beyond the 21" necessary to call this monitor 'retina' despite being 163 PPI. It definitely looks that way to my eye.

I have more words to write here, but let's cut to the chase for the impatient and the TL;DR crowd. This 4K monitor is totally amazing and you should buy one. It feels exactly like going from the non-retina iPad 2 to the retina iPad 3 did, except on the desktop. It makes all the text on your screen look beautiful. There is almost no downside.

There are a few caveats, though:

  • You will need a beefy video card to drive a 4K monitor. I personally went all out for the GeForce 980 Ti, because I might want to actually game at this native resolution, and the 980 Ti is the undisputed fastest single video card in the world at the moment. If you're not a gamer, any midrange video card should do fine.

  • Display scaling is definitely still a problem at times with a 4K monitor. You will run into apps that don't respect DPI settings and end up magnifying-glass tiny. Scott Hanselman provided many examples in January 2014, and although stuff has improved since then with Windows 10, it's far from perfect.

    Browsers scale great, and the OS does too, but if you use any desktop apps built by careless developers, you'll run into this. The only good long term solution is to spread the gospel of 4K and shame them into submission with me. Preach it, brothers and sisters!

  • Enable DisplayPort 1.2 in the monitor settings so you can turn on 60Hz. Trust me, you do not want to experience a 30Hz LCD display. It is unspeakably bad, enough to put one off computer screens forever. For people who tell you they can't see the difference between 30fps and 60fps, just switch their monitors to 30hz and watch them squirm in pain.

    Viewing those comparison videos, I begin to understand why gamers want 90Hz, 120Hz or even 144Hz monitors. 60fps / 60 Hz should be the absolute minimum, no matter what resolution you're running. Luckily DisplayPort 1.2 enables 60 Hz at 4K, but only just. You'll need DisplayPort 1.3+ to do better than that.

  • Disable the crappy built in monitor speakers. Headphones or bust, baby!

  • Turn down the brightness from the standard factory default of retina scorching 100% to something saner like 50%. Why do manufacturers do this? Is it because they hate eyeballs? While you're there, you might mess around with some basic display calibration, too.

This Asus PB279Q 4K monitor is the best thing I've upgraded on my computer in years. Well, actually, thing(s) I've upgraded, because I am not f**ing around over here.

Flo monitor arms, front view, triple monitors

I'm a long time proponent of the triple monitor lifestyle, and the only thing better than a 4K display is three 4K displays! That's 11,520×2,160 pixels to you, or 6,480×3,840 if rotated.

(Good luck attempting to game on this configuration with all three monitors active, though. You're gonna need it. Some newer games are too demanding to run on "High" settings on a single 4K monitor, even with the mighty Nvidia 980 Ti.)

I've also been experimenting with better LCD monitor arms that properly support my preferred triple monitor configurations. Here's a picture from the back, where all the action is:

Flo monitor arms, triple monitors, rear view

These are the Flo Monitor Supports, and they free up a ton of desk space in a triple monitor configuration while also looking quite snazzy. I'm fond of putting my keyboard just under the center monitor, which isn't possible with any monitor stand.

Flo monitor arm suggested multi-monitor setups

With these Flo arms you can "scale up" your configuration from dual to triple or even quad (!) monitor later.

4K monitors are here, they're not that expensive, the desktop operating systems and video hardware are in place to properly support them, and in the appropriate size (27") we can finally have an amazing retina display experience at typical desktop viewing distances. Choose the Asus PB279Q 4K monitor, or whatever 4K monitor you prefer, but take the plunge.

In 2007, I asked Where Are The High Resolution Displays, and now, 8 years later, they've finally, finally arrived on my desktop. Praise the lord and pass the pixels!

Oh, and gird your loins for 8K one day. It, too, is coming.

[advertisement] Building out your tech team? Stack Overflow Careers helps you hire from the largest community for programmers on the planet. We built our site with developers like you in mind.
02 Oct 10:55

Report: Twitter mulling posts with more than 140 characters

by Sam Machkovech

(credit: Shawn Campbell)

Twitter and some of its most enterprising users have found simple ways to get around the service's major limit of 140 characters per post, whether by enabling full-Tweet embeds, offering username tags within photos, or making it easier to read images loaded with text. But according to a Tuesday report by Re/code, the 140-character wall itself may soon crumble.

Citing "multiple people familiar with the company's plans," Re/code hinted at "a new product" that would allow Twitter users to exceed the default post length. However, the report didn't clarify whether that would be in the form of a brand new app or some other option, and it was anchored with a warning that "the long-form feature may never make it to consumers."

That may be because the company is also internally mulling ways to retain the 140-character limit while removing other text bottlenecks. According to Re/code, elements such as links and usernames might no longer count toward post lengths.


28 Sep 11:28

Windows 10 will soon be more environmentally friendly with updated dialog box

by Peter Bright

Gone, but not forgotten.

For the longest time, one of the things that people liked to poke fun at in Windows was a dialog box used to add fonts to the system. The rarely used dialog used Windows 3.1-era icons and fonts, even in Windows Vista, making it a weird anachronism. Microsoft tidied up that bit of Windows legacy in Windows 7 by removing the box entirely, but other relics remain.

One of the most annoying is the environment variables dialog. This box hasn't been updated for what feels like millennia, and it's cramped and awkward to use as a result. Environment variables can be lengthy, and they almost never fit in the current dialog. This is particularly acute for one of the most important variables, PATH. The PATH variable stores the names of all the directories that the system should search when hunting for executables, and many applications and development tools like to add their directories to the PATH. It quickly gets unwieldy.

The current annoying dialog.

And unlike the add font dialog, which people only ever looked at just to point and laugh—it was rarely used to actually install fonts—the environment variables box is actually useful, as it's the easiest and best way of changing Windows environment variables.


15 Sep 15:54

Who will drive for Renault in 2016?

by Thibault Larue
With Renault's future currently being decided, who could the drivers be for a return as early as 2016? Leads and food for thought.
15 Sep 10:32

A better 404 page and redirects with GitHub Pages

A while back I migrated my blog to Jekyll and GitHub Pages. I worked hard to preserve my existing URLs.

But the process wasn’t perfect. My old blog engine was a bit forgiving about URLs. As long as the URL “slug” was correct, the URL could have any date in it. So there happened to be quite a few non-canonical URLs out in the wild.

So what I did was create a 404 page that had a link to log an issue against my blog. GitHub Pages will serve up this page for any file not found errors. Here’s an example of the rendered 404 page.

And the 404 issues started to roll in. Great! So what do I do with those issues now? How do I fix them?

GitHub Pages fortunately supports the Jekyll Redirect From plugin. For a guide on how to set it up on your GitHub Pages site, check out this GitHub Pages help documentation.
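
Enabling the plugin comes down to listing it in your site's _config.yml; at the time, the setup looked roughly like this (check the help documentation above for the current form):

# _config.yml
gems:
  - jekyll-redirect-from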

Here’s an example of my first attempt at front-matter for a blog post on my blog that contains a redirect.

---
layout: post
title: "Localizing ASP.NET MVC Validation"
date: 2009-12-07 -0800
comments: true
disqus_identifier: 18664
redirect_from: "/archive/2009/12/12/localizing-aspnetmvc-validation.aspx"
categories: [aspnetmvc localization validation]
---

As you can see, my old blog was an ASP.NET application so all the file extensions end with .aspx. Unfortunately, this caused a problem. GitHub currently serves unknown extensions like this using the application/octet-stream content type. So when someone visits the old URL using Google Chrome, instead of a redirect, they end up downloading the HTML for the redirect. It happens to work on Internet Explorer which I suspect does a bit of content sniffing.

It turns out, there’s an easy solution as suggested by @charliesome. If you add the .html extension to a Jekyll URL, GitHub Pages will handle the omission of the extension just fine.

Thus, I fixed the redirect like so:

redirect_from: "/archive/2009/12/12/localizing-aspnetmvc-validation.aspx.html"

By doing so, a request for http://haacked.com/archive/2009/12/12/localizing-aspnetmvc-validation.aspx is properly redirected. This is especially useful to know for those of you migrating from old blog engines that appended a file extension other than `.html` to all post URLs.

Also, if you need to redirect multiple URLs, you can use a Jekyll array like so:

redirect_from:
  - "/archive/2012/04/15/The-Real-Pain-Of-Software-Development-2.aspx.aspx.html/"
  - "/archive/2012/04/15/The-Real-Pain-Of-Software-Development-2.aspx.html/"

Note that this isn’t just useful for blogs. If you have a documentation site and re-organize the content, use the redirect_from plug-in to preserve the old URLs. Hope to see your content on GitHub Pages soon!

15 Sep 05:59

A new Windows 10 Mobile build is out, now with fewer hamburgers

by Peter Bright

It's been about a month since Microsoft last updated the Windows 10 Mobile Insider Preview. A new build is now available on the Fast ring, bringing it to number 10536.1004.

As usual, the build offers a bunch of functional improvements (such as new voice input languages and one-handed mode support for all phones regardless of size), fixes for things broken in older builds (such as mobile hotspot functionality), and new bugs of its own (especially for the Lumia 1020; read Microsoft's blog for all the details).

The new Windows 10 Photos app was updated a couple of weeks ago, and an even newer version is included with the new operating system build. This updated app marks a gratifying about turn by Microsoft. Earlier builds of the app sported hamburger menus for accessing most of the application's features. This drew widespread criticism from the community for a whole host of reasons.


14 Sep 05:44

Moneyball of Hiring

The shaman squatted next to the entrails on the ground and stared intently at the pattern formed by the splatter. There was something there, but confirmation was needed. Turning away from the decomposing remains, the shaman consulted the dregs of a cup of tea, searching the shifting patterns of the swirling tea leaves for corroboration. There it was. A decision could be made. “Yes, this person will be successful here. We should hire this person.”

Spring Pouchong tea - CC BY SA 3.0 by Richard Corner

Such is the state of hiring developers today.

Our approach to hiring is misguided

The approach to hiring developers and managing their performance afterwards at many if not most tech companies is based on age old ritual and outdated ideas of what predicts how an employee will perform. Most of it ends up being very poor at predicting success and rife with bias.

For example, remember when questions like “How would you move Mt. Fuji?” were all the rage at companies like Microsoft and Google? The hope was that in answering such questions, the interviewee would demonstrate clever problem solving skills and intelligence. Surely this would help hire the best and brightest?

Nope.

According to Google’s Senior VP of People Operations Laszlo Bock, Google long ago realized these questions were complete wastes of time.

Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship.

We’re not the first to face this

Our industry is at a stage where we are rife with superstition about what factors predict putting together a great team. This sounds eerily familiar.

The central premise of Moneyball is that the collected wisdom of baseball insiders (including players, managers, coaches, scouts, and the front office) over the past century is subjective and often flawed. Statistics such as stolen bases, runs batted in, and batting average, typically used to gauge players, are relics of a 19th-century view of the game and the statistics available at that time.

Moneyball, a book by Michael Lewis, documents how the Oakland Athletics baseball team decided to ignore tradition and use an evidence-based statistical approach to figuring out what makes a strong baseball team. This practice of applying empirical statistical analysis to baseball became known as sabermetrics.

Prior to this approach, conventional wisdom looked at stolen bases, runs batted in, and batting average as indicators of success. Home run hitters were held in especially high esteem, even those with low batting averages. It was not unlike our industry’s fascination with hiring Rock Stars. But the sabermetrics approach found otherwise.

Rigorous statistical analysis had demonstrated that on-base percentage and slugging percentage are better indicators of offensive success.

Did it work?

By re-evaluating the strategies that produce wins on the field, the 2002 Athletics, with approximately US$44 million in salary, were competitive with larger market teams such as the New York Yankees, who spent over US$125 million in payroll that same season…This approach brought the A’s to the playoffs in 2002 and 2003.

Moneyball of Hiring

It makes me wonder, where is the Moneyball of software hiring and performance management?

Companies like Google, as evidenced by the previously mentioned study, are applying a lot of data to the analysis of hiring and performance management. I bet that analysis is a competitive advantage in their ability to hire the best and form great teams. It gives them the ability to hire people overlooked by other companies still stuck in the superstition that making candidates code on white boards or reverse linked lists will find the best people.

Even so, this area is ripe to apply more science to it and study it on a grander scale. I would love to see multiple large companies collect and share this data for the greater good of the industry and society at large. Studies like this often are a force in reducing unconscious bias and increasing diversity.

Having this data in the open might remove this one competitive advantage in hiring, but companies can still compete by offering interesting work, great compensation, and benefits.

The good news is, there are a lot of people and companies thinking about this. This article, What’s Wrong with Job Interviews, and How to Fix Them is a great example.

We’ll never get it right

Even with all this data, we’ll never perfect hiring. Studying human behavior is a tricky affair. If we could predict it well, the stock market would be completely predictable.

Companies should embrace the fact that they will often be wrong. They will make mistakes in hiring. As much time as a company spends attempting to make their hiring process rock solid, they should also spend a similar amount of time building humane systems for correcting hiring mistakes. This is a theme I’ve touched upon before - the inevitability of failure.

13 Sep 16:07

Optimize for Tiny Victories

by Scott Hanselman
image

I was talking with Dawn C. Hayes, a maker and occasional adjunct professor in NYC, earlier this week. We were talking about things like motivation and biting off more than we can chew when it comes to large projects, as well as estimating how long something will take. She mentioned that it's important to optimize for quick early successes, like getting a student to have an "I got the LED to light up" moment. With today's short attention span internet, you can see that's totally true. Every programming language has a "5 min quick start" dedicated to giving you some sense of accomplishment quickly. But she also pointed out that after the LED Moment students (and everyone ever, says me) always underestimate how long stuff will take. It's easy to describe a project in a few sentences but it might take months or a year to make it a reality.

This is my challenge as well, perhaps it's yours, too. As we talked, I realized that I developed a technique for managing this without realizing it.

I optimize my workflow for lots of tiny victories.

For example, my son and I are working on 3D printing a quadcopter drone. I have no idea what I'm doing, I have no drone experience, and I'm mediocre with electronics. Not to mention I'm dealing with a 7 year old who wants to know why it hasn't taken off yet, forgetting that we just had the idea a minute ago.

I'm mentally breaking it up into work sprints, little dependencies, but in order to stay motivated we're making sure each sprint - whether it's a day or an hour - is a victory as well as a sprint. What can we do to not just move the ball forward but also achieve something? Something small, to be clear. But something we can be excited about, something we can tell mommy about, something we can feel good about.

We're attempting to make a freaking quadcopter and it's very possible we won't succeed. But we soldered two wires together today, and the multimeter needle moved, so we're pretty excited about that tiny victory and that's how we're telling the story. It will keep us going until tomorrow's sprint.

Do you do this too? Tell us in the comments.


Sponsor: Big thanks to my friends at Raygun for sponsoring the feed this week. Only 16% of people will try a failing app more than twice. Raygun offers real-time error and crash reporting for your web and mobile apps that you can set up in minutes. Find out more and get started for free here.

