Shared posts

07 Jun 13:01

The guy behind the pogo-stick startup Cangoroo also runs an ad agency with a curious history of media stunts

by Jeff Beer

Is it actually real? Maybe! But it also joins a long list of once media-hyped, now very very quiet ventures from the same Swedish ad agency.

Over the past few weeks, the global media has taken turns issuing this-is-crazy-is-it-real? hot takes on a new mobility startup called Cangoroo. What makes this newcomer more newsworthy than the swarm of e-scooters or bike shares popping up in urban centers around the world is that its first product is . . . a freakin’ pogo stick.


23 May 17:37

Linda Hamilton is back and buff as ever in Terminator: Dark Fate trailer

by Jennifer Ouellette

Linda Hamilton reprises her role as the original Sarah Connor in Paramount Pictures’ Terminator: Dark Fate.

Linda Hamilton is back as Sarah Connor, as tough and distrustful of time-traveling sentient machines as ever, in the first trailer for Terminator: Dark Fate, the sixth installment in the hugely influential franchise.

(Mild spoilers for the original Terminator and Terminator 2: Judgment Day below.)

The entire franchise is premised on the notion that sentient killing machines from the future can be sent back in time to take out key human figures destined to lead the resistance against the self-aware AI network known as Skynet, thereby preventing a nuclear holocaust that wipes out the human race. In the original Terminator film, the target was a young and innocent Sarah Connor, future mother to resistance leader John Connor. Then came Terminator 2: Judgment Day, or as I like to call it, The Best Damn Sequel of All Time. A second Terminator is sent to take out a teenaged John—with the twist that Schwarzenegger's original Terminator has been reprogrammed as his protector against a newer model known as the T-1000.


03 May 06:42

The Curious Case of the JSON BOM

Recently, I was testing some interop with Azure Service Bus, which has a rather useful feature when used with Azure Functions in that you can directly bind JSON to a custom type, to do something like:

[FunctionName("SaySomething")]
public static void Run([ServiceBusTrigger("Endpoints.SaySomething", Connection = "SbConnection")]SaySomething command, ILogger log)  
{
    log.LogInformation($"Incoming message: {command.Message}");
}

As long as we have valid JSON, everything should "just work". However, when I sent a JSON message to receive, I got a...rather unhelpful message:

[4/4/2019 2:50:35 PM] System.Private.CoreLib: Exception while executing function: SaySomething. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'command'. Microsoft.Azure.WebJobs.ServiceBus: Binding parameters to complex objects (such as 'SaySomething') uses Json.NET serialization.
1. Bind the parameter type as 'string' instead of 'SaySomething' to get the raw values and avoid JSON deserialization, or  
2. Change the queue payload to be valid json. The JSON parser failed: Unexpected character encountered while parsing value: ?. Path '', line 0, position 0.  

Not too great! So what's going on here? Where did that parsed value come from? Why do the path, line, and position point to the very beginning of the stream? The answer leads us down a path of encoding, RFC standards, and the .NET source code.

Diagnosing the problem

Looking at the error message, it appears we can try to do exactly what it asks and bind to a string and just deserialize manually:

[FunctionName("SaySomething")]
public static void Run([ServiceBusTrigger("Endpoints.SaySomething", Connection = "SbConnection")]string message, ILogger log)  
{
    var command = JsonConvert.DeserializeObject<SaySomething>(message);
    log.LogInformation($"Incoming message: {command.Message}");
}

Deserializing the message like this actually works; everything runs with no problems! But it's still strange: why did it work with a string, but not when automatically binding from the wire message?

On the wire, the message payload is simply a byte array. So something could be happening when reading the bytes into a string - a process that can be "lossy" depending on the encoding used. To fully understand what's going on, we need to know how the text was encoded to see how it should be decoded. And though manually decoding the string "fixes" the problem, I don't see it as a viable solution.

To dig further, let's drop our message binding down to its lowest form to get the raw bytes:

[FunctionName("SaySomething")]
public static void Run([ServiceBusTrigger("Endpoints.SaySomething", Connection = "SbConnection")]byte[] message, ILogger log)  
{
    var value = Encoding.UTF8.GetString(message);
    var command = JsonConvert.DeserializeObject<SaySomething>(value);
    log.LogInformation($"Incoming message: {command.Message}");
}

Going this route, we get our original exception:

[4/4/2019 3:58:38 PM] System.Private.CoreLib: Exception while executing function: SaySomething. Newtonsoft.Json: Unexpected character encountered while parsing value: ?. Path '', line 0, position 0.

Something is clearly different between getting the string value through the Azure Functions/ServiceBus trigger binding, and going through Encoding.UTF8. To see what's different, let's look at that value:

{"Message":"Hello World"}

That looks fine! However, let's grab the raw bytes from the stream:

EFBBBF7B224D657373616765223A2248656C6C6F20576F726C64227D  

And put that in a decoder:

ï»¿{"Message":"Hello World"}

Well there's your problem! A bunch of junk characters at the beginning of the string. Where did those come from? A quick search of those characters reveals the culprit: our wire format included the UTF8 Byte Order Mark of 0xEF,0xBB,0xBF. Whoops!
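The failure is easy to reproduce outside of .NET. Here's a minimal Python sketch (Python standing in for the C# here, since the byte-level behavior is language-agnostic) using the exact wire bytes above:

```python
import json

# The same payload shown on the wire: a UTF-8 BOM followed by the JSON text.
payload = bytes.fromhex("EFBBBF7B224D657373616765223A2248656C6C6F20576F726C64227D")

# A plain UTF-8 decode keeps the BOM as U+FEFF, and a strict JSON parser
# rejects it at line 1, column 1 -- just like the Azure Functions error.
text = payload.decode("utf-8")
assert text[0] == "\ufeff"
try:
    json.loads(text)
except json.JSONDecodeError as e:
    print(e)

# The 'utf-8-sig' codec strips the BOM, so parsing succeeds.
assert json.loads(payload.decode("utf-8-sig")) == {"Message": "Hello World"}
```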

JSON BOM'd

Having the UTF-8 BOM in our wire message messed some things up for us, but why should that matter? It turns out the JSON RFC (RFC 8259) forbids having the BOM in our text (emphasis mine):

JSON text exchanged between systems that are not part of a closed ecosystem MUST be encoded using UTF-8 [RFC3629].

Previous specifications of JSON have not required the use of UTF-8 when transmitting JSON text. However, the vast majority of JSON-based software implementations have chosen to use the UTF-8 encoding, to the extent that it is the only encoding that achieves interoperability.

Implementations MUST NOT add a byte order mark (U+FEFF) to the beginning of a networked-transmitted JSON text. In the interests of interoperability, implementations that parse JSON texts MAY ignore the presence of a byte order mark rather than treating it as an error.

Now that we've identified our culprit, why did our code sometimes succeed and sometimes fail? It turns out that we really do need to care about the encoding of our messages, and even when we think we've picked sensible defaults, that may not be the case.

Looking at the documentation for the Encoding.UTF8 property, we see that the Encoding objects have two important toggles:

  • Should it emit the UTF-8 BOM identifier?
  • Should it throw for invalid bytes?

We'll get to that second one here in a second, but something we can see from the documentation and the code is that Encoding.UTF8 says "yes" for the first question and "no" for the second. However, if you use Encoding.Default, it's different! It will be "no" for the first question and "no" for the second.

Herein lies our problem - the JSON spec says that the encoded bytes must not include the BOM, but that parsers may ignore one. Between "does" and "does not" ignore it, our implementation landed on the "does not" side of that "may".

We can't really affect the decoding of bytes to string or bytes to object in Azure Functions (or rather, it's annoying to), but perhaps we can fix the problem at its source - the JSON was originally encoded with a BOM.

When debugging, I noticed that Encoding.UTF8.GetBytes() did not return any BOM, but clearly I'm getting one here. So what's going on? It gets even muddier when we start to introduce streams.

Crossing streams

Typically, when dealing with I/O, you're dealing with a Stream. And typically again, if you're writing a stream, you're dealing with a StreamWriter, whose default behavior is UTF-8 encoding without a BOM. The comments in the source are interesting here:

// The high level goal is to be tolerant of encoding errors when we read and very strict 
// when we write. Hence, default StreamWriter encoding will throw on encoding error.  

So StreamWriter is "no-BOM, throw on error" but StreamReader is Encoding.UTF8, which is "yes for BOM, no for throwing error". Each option is opposite the other!
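The write side can be sketched the same way. In this Python analogue, the 'utf-8-sig' codec plays the role of .NET's BOM-emitting Encoding.UTF8, and plain 'utf-8' plays the role of new UTF8Encoding(false):

```python
import io
import json

def serialize(obj, encoding: str) -> bytes:
    # Rough analogue of writing JSON through a StreamWriter with a chosen
    # encoding: 'utf-8-sig' behaves like .NET's Encoding.UTF8 (emits a BOM),
    # plain 'utf-8' like new UTF8Encoding(false) (no BOM).
    buf = io.BytesIO()
    with io.TextIOWrapper(buf, encoding=encoding) as writer:
        json.dump(obj, writer)
        writer.flush()
        return buf.getvalue()

print(serialize({"Message": "Hello World"}, "utf-8-sig")[:3])  # the BOM prefix
```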

If we're using a vanilla StreamWriter, we still shouldn't have a BOM. Ah, but we aren't! I was using NServiceBus to generate the message (I'm lazy that way) and its Newtonsoft.Json serializer to generate the message bytes. Looking underneath the covers, we see the default reader and writer explicitly pass in Encoding.UTF8 for both reading and writing. This is very likely not what we want for writing, since the default behavior of Encoding.UTF8 is to include a BOM.

The quick fix is to swap out the encoding with something that's a better default here in our NServiceBus setup configuration:

var serialization = endpointConfiguration.UseSerialization<NewtonsoftSerializer>();  
serialization.WriterCreator(s =>  
{
    var streamWriter = new StreamWriter(s, new UTF8Encoding(false));
    return new JsonTextWriter(streamWriter);
});

We have a number of options here, such as just using the default StreamWriter, but in my case I'd rather be very explicit about what options I want to use.

The longer fix is a pull request to patch this behavior so that the default writer will not emit the BOM (but will need a bit of testing since technically this changes the wire format).

So the moral of the story - if you see weird characters like ï»¿ showing up in your text, enjoy a couple of days digging into character encoding and making really bad jokes.

16 Apr 13:04

We, The Unoffended

We, The Unoffended, believe that a free society depends upon the tolerance and forbearance of its members toward each other, and each other’s ideas. We therefore strive to remain unoffended by the freedom of others to speak and act according to their identity, politics, ambitions, and desires. We are intolerant only of intentional or negligent harm done to others.

Therefore, given the above,

We, The Unoffended:

  • Are not offended by what you do, what you say, or who you are.

  • Are not offended by your race, your age, your gender, your sexual preference, your identification, your politics, your religion, or any other natural or chosen attribute.

  • Are not offended by your thoughts, your votes, your prayers, your hopes, your dreams, or your ambitions.

  • Celebrate, support, and defend your right to enjoy the personal dignity of your identity, your beliefs, and your choices.

  • Value you for who you are and for whatever skills, intelligence, perception, wisdom, and empathy you choose to share.

  • Believe that you owe us nothing; and are not offended if you choose to share nothing with us.

  • Are not offended by disagreement. Your willingness to share disagreeing thoughts and ideas is a gift that we value and cherish.

  • Believe that, in order for the best ideas to rise to the top, all ideas must be heard and evaluated on their merits. We therefore encourage the expression of any and all ideas; and resist their suppression.

  • Believe that ideas are not harmful, so long as they do not specifically incite harmful actions.

  • Are not offended by careful actions that cause inadvertent harm.

01 Mar 06:36

Amazon’s latest program to curb emissions? One delivery day per house, per week

by Megan Geuss
Amazon boxes in a warehouse.

Completed customer orders are seen in their boxes, awaiting delivery, at the Amazon Fulfillment Centre on November 14, 2018, in Hemel Hempstead, England. (credit: Leon Neal/Getty Images)

On Thursday, Amazon announced that it would be making a program widely available to Amazon Prime members that would allow them to schedule all deliveries for a single day, once a week. The so-called "Amazon Day" service will be voluntary and targets customers who are concerned about their carbon footprint.

Grouping purchase deliveries will help Amazon cut down on emissions associated with sending a delivery truck to the same house multiple times a week, and the company says holding orders for a single day during the week will also allow it to group orders within a single package, thereby reducing packaging. Customers can select their preferred day of the week to receive shipments. According to CNN, customers can add items to their Amazon Day shipment up until two days in advance of the shipment.

Customers can also choose to remove an item from "Amazon Day" delivery, having it shipped more expeditiously if necessary. Select Prime members have already had access to the program, but it was made available to all Prime members as of today.


01 Feb 06:46

Visiting The National Museum of Computing inside Bletchley Park - Can we crack Enigma with Raspberry Pis?

by Scott Hanselman

"The National Museum of Computing is a museum in the United Kingdom dedicated to collecting and restoring historic computer systems. The museum is based in rented premises at Bletchley Park in Milton Keynes, Buckinghamshire and opened in 2007" and I was able to visit it today with my buddies Damian and David. It was absolutely brilliant.

I'd encourage you to have a listen to my 2015 podcast with Dr. Sue Black who used social media to raise awareness of the state of Bletchley Park and help return the site to solvency.

The National Museum of Computing is a must-see if you are ever in the UK. It was a short 30ish minute train ride up from London. We spent the whole afternoon there.

There is a rebuild of the Colossus, the world's first electronic computer. It had a single purpose: to help decipher the Lorenz-encrypted (Tunny) messages between Hitler and his generals during World War II. The Colossus Gallery housing the rebuild of Colossus tells that remarkable story.

A working Bombe machine

The backside of the Bombe

National Computing Museum

Cipher Machine

We saw the Turing-Welchman Bombe machine, an electro-mechanical device used to break Enigma-enciphered messages about enemy military operations during the Second World War. They offer guided tours (recommended as the volunteers have encyclopedic knowledge) and we were able to encrypt a message with the German Enigma (there's a 90 second video I made, here) and decrypt it with the Bombe, which is effectively 12 Enigmas working in parallel, backwards.

Inside the top lid of a working Enigma

A working Enigma

It's worth noting - this from their website - that the first Bombe, named Victory, started code-breaking at Bletchley Park on 14 March 1940 and by the end of the war almost 1,676 female WRNS and 263 male RAF personnel were involved in the deployment of 211 Bombe machines. The museum has a working reconstructed Bombe.


I wanted to understand the computing power these systems had then, and now. Check out the website where you can learn about the OctaPi - a Raspberry Pi array of eight Pis working together to brute-force Enigma. You can make your own here!

I hope you enjoy these pics and videos and I hope you one day get to enjoy the history and technology in and around Bletchley Park.


Sponsor: Check out Seq 5 for real-time diagnostics from ASP.NET Core and Serilog, now with faster queries, support for Docker on Linux, and beautiful new dark and light themes.



© 2018 Scott Hanselman. All rights reserved.
08 Oct 11:39

There is no longer any such thing as Computer Security

by Jeff Atwood

Remember "cybersecurity"?

its-cybersecurity-yay

Mysterious hooded computer guys doing mysterious hooded computer guy .. things! Who knows what kind of naughty digital mischief they might be up to?

Unfortunately, we now live in a world where this kind of digital mischief is literally rewriting the world's history. For proof of that, you need look no further than this single email that was sent March 19th, 2016.

podesta-hack-email-text

If you don't recognize what this is, it is a phishing email.

phishing-guy

This is by now a very, very famous phishing email, arguably the most famous of all time. But let's consider how this email even got sent to its target in the first place:

  • An attacker slurped up lists of any public emails of 2008 political campaign staffers.

  • One 2008 staffer was also hired for the 2016 political campaign.

  • That particular staffer had non-public campaign emails in their address book, and one of them was a powerful key campaign member with an extensive email history.

One successful phish leads to an even wider address book attack net down the line. Once attackers gain access to a person's inbox, they use it to prepare their next attack. They'll harvest existing email addresses, subject lines, content, and attachments to construct plausible-looking boobytrapped emails and mail them to all of that person's contacts. How sophisticated and targeted to a particular person this effort is determines whether it's so-called "spear" phishing or not.

phishing-vs-spear-phishing

In this case, it was not at all targeted. This is a remarkably unsophisticated, absolutely generic routine phishing attack. There is zero focused attack effort on display here. But note the target did not immediately click the link in the email!

podesta-hack-email-link-1

Instead, he did exactly what you'd want a person to do in this scenario: he emailed IT support and asked if this email was valid. But IT made a fatal mistake in their response.

podesta-it-support-response

Do you see it? Here's the kicker:

Mr. Delavan, in an interview, said that his bad advice was a result of a typo: He knew this was a phishing attack, as the campaign was getting dozens of them. He said he had meant to type that it was an “illegitimate” email, an error that he said has plagued him ever since.

One word. He got one word wrong. But what a word to get wrong, and in the first sentence! The email did provide the proper Google address to reset your password. But the lede was already buried since the first sentence said "legitimate"; the phishing link in that email was then clicked. And the rest is literally history.

What's even funnier (well, in the way of gallows humor, I guess) is that public stats were left enabled for that bit.ly tracking link, so you can see exactly what crazy domain that "Google login page" resolved to, and that it was clicked exactly twice, on the same day it was mailed.

bitly-podesta-tracking-link

As I said, these were not exactly sophisticated attackers. So yeah, in theory an attentive user could pay attention to the browser's address bar and notice that after clicking the link, they arrived at

http://myaccount.google.com-securitysettingpage.tk/security/signinoptions/password

instead of

https://myaccount.google.com/security

Note that the phishing URL is carefully constructed so the most "correct" part is at the front, and weirdness is sandwiched in the middle. Unless you're paying very close attention and your address bar is long enough to expose the full URL, it's … tricky. See this 10 second video for a dramatic example.

(And if you think that one's good, check out this one. Don't forget all the unicode look-alike trickery you can pull, too.)
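To see mechanically why the eye is fooled: the browser treats everything up to the first slash after the scheme as the hostname, and (roughly speaking - real browsers consult the Public Suffix List) the registrable domain is the hostname's last couple of labels. A short Python sketch:

```python
from urllib.parse import urlsplit

phish = "http://myaccount.google.com-securitysettingpage.tk/security/signinoptions/password"
real = "https://myaccount.google.com/security"

# The hostname is everything between "://" and the next "/"; the registrable
# domain is (roughly) its last two labels. "myaccount.google" is just a decoy
# prefix that the attacker controls as a subdomain of the .tk registration.
phish_domain = ".".join(urlsplit(phish).hostname.split(".")[-2:])
real_domain = ".".join(urlsplit(real).hostname.split(".")[-2:])
print(phish_domain)  # com-securitysettingpage.tk
print(real_domain)   # google.com
```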

I originally wrote this post as a presentation for the Berkeley Computer Science Club back in March, and at that time I gathered a list of public phishing pages I found on the web.

nightlifesofl.com
ehizaza-limited.com
tcgoogle.com
appsgoogie.com
security-facabook.com

Of those five examples from 6 months ago, one is completely gone, one loads just fine, and three present an appropriately scary red interstitial warning page that strongly advises you not to visit the page you're trying to visit, courtesy of Google's safe browsing API. But of course this kind of shared blacklist domain name protection will be completely useless on any fresh phishing site. (Don't even get me started on how blacklists have never really worked anyway.)

google-login-phishing-page

It doesn't exactly require a PhD in computer science to phish someone:

  • Buy a crazy long, realistic looking domain name.
  • Point it to a cloud server somewhere.
  • Get a free HTTPS certificate courtesy of our friends at Let's Encrypt.
  • Build a realistic copy of a login page that silently transmits everything you type in those login fields to you – perhaps even in real time, as the target types.
  • Harvest email addresses and mass mail a plausible looking phishing email with your URL.

I want to emphasize that although clearly mistakes were made in this specific situation, none of the people involved here were amateurs. They had training and experience. They were working with IT and security professionals. Furthermore, they knew digital attacks were incoming.

The … campaign was no easy target; several former employees said the organization put particular stress on digital safety.

Work emails were protected by two-factor authentication, a technique that uses a second passcode to keep accounts secure. Most messages were deleted after 30 days and staff went through phishing drills. Security awareness even followed the campaigners into the bathroom, where someone put a picture of a toothbrush under the words: “You shouldn’t share your passwords either.”

The campaign itself used two factor auth extensively, which is why personal gmail accounts were targeted, because they were less protected.

The key takeaway here is that it's basically impossible, statistically speaking, to prevent your organization from being phished.

Or is it?

techsolidarity-logo

Nobody is doing better work in this space right now than Maciej Ceglowski and Tech Solidarity. Their list of basic security precautions for non-profits and journalists is pure gold and has been vetted by many industry professionals with security credentials that are actually impressive, unlike mine. Everyone should read this list very closely, point by point.

Everyone?

Computers, courtesy of smartphones, are now such a pervasive part of average life for average people that there is no longer any such thing as "computer security". There is only security. In other words, these are normal security practices everyone should be familiar with. Not just computer geeks. Not just political activists and politicians. Not just journalists and nonprofits.

Everyone.

It is a fair bit of reading, so because I know you are just as lazy as I am, and I am epically lazy, let me summarize what I view as the three important takeaways from the hard work Tech Solidarity put into these resources. These three short sentences are the 60 second summary of what you want to do, and what you want to share with others so they do, too.

1) Enable Two Factor authentication through an app, and not SMS, everywhere you can.

google-2fa-1

Logging in with only a password, no matter how long and unique you attempt to make that password, will never be enough. A password is what you know; you need to add the second factor of something you have (or something you are) to achieve significant additional security. SMS can famously be intercepted, social engineered, or sim-jacked all too easily. If it's SMS, it's not secure, period. So install an authenticator app, and use it, at least for your most important credentials such as your email account and your bank.
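For the curious, the codes those authenticator apps display aren't magic: they're TOTP (RFC 6238), an HMAC of the current 30-second interval truncated to a few digits. A minimal Python sketch, checked against the RFC's published test vector:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: counter = floor(time / step), code = dynamic truncation
    # of HMAC-SHA1(secret, counter), reduced to the requested digit count.
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```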

Have I mentioned that Discourse added two factor authentication support in version 2.0, and our just released 2.1 adds printed backup codes, too? There are two paths forward: you can talk about the solution, or you can build the solution. I'm trying to do both to the best of my ability. Look for the 2FA auth option in your user preferences on your favorite Discourse instance. It's there for you.

(This is also a company policy at Discourse; if you work here, you 2FA everything all the time. No other login option exists.)

2) Make all your passwords 11 characters or more.

It's a long story, but anything under 11 characters is basically the same as having no password at all these days. I personally recommend at least 14 characters, maybe even 16. But this won't be a problem for you, because...
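A back-of-the-envelope sketch of why 11 characters is the floor; the attacker guess rate here is my assumption (an offline attack against a fast hash), not a measured figure:

```python
# Years to exhaust every password of a given length, assuming 95
# printable-ASCII characters per position and an attacker making
# 100 billion guesses/second offline against a fast hash (assumed rate).
GUESSES_PER_SECOND = 1e11
SECONDS_PER_YEAR = 3600 * 24 * 365

def years_to_exhaust(length: int, alphabet: int = 95) -> float:
    return alphabet ** length / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for n in (8, 11, 14):
    # 8 chars falls in under a day; 11 takes centuries; 14 takes eons.
    print(f"{n} chars: {years_to_exhaust(n):.2g} years")
```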

3) Use a password manager.

If you use a password manager, you can simultaneously avoid the pernicious danger of password re-use and the difficulty of coming up with unique and random passwords all the time. It is my hope in the long run that cloud based password management gets deeply built into Android, iOS, OSX, and Windows so that people don't need to run a weird melange of third party apps to achieve this essential task. Password management is foundational and should not be the province of third parties on principle, because you never outsource a core competency.

Bonus rule! For the particularly at-risk, get and use a U2F key.

In the long term, two factor through an app isn't quite secure enough due to the very real (and growing) specter of real-time phishing. Authentication apps offer timed keys that expire after a minute or two, but if the attacker can get you to type an authentication key and relay it to the target site fast enough, they can still log in as you. If you need ultimate protection, look into U2F keys.

u2f-keys

I believe U2F support is still too immature at the moment, particularly on mobile, for this to be practical for the average person right now. But if you do happen to fall into those groups that will be under attack, you absolutely want to set up U2F keys where you can today. They're cheap, and the good news is that they literally make phishing impossible at last. Given that Google had 100% company wide success against phishing with U2F, we know this works.

In today's world, computers are now so omnipresent that there is no longer any such thing as cybersecurity, online security, or computer security – there's only security. You either have it, or you don't. If you follow and share these three rules, hopefully you too can have a modicum of security today.

27 Mar 11:25

Photos from the March for Our Lives (42 photos)

Spurred into action after the shooting at Marjory Stoneman Douglas High School in Florida last month, hundreds of thousands of Americans are taking to the streets today in hundreds of coordinated protests, calling for legislators to address school safety and gun violence. More than 800 March for Our Lives events are planned across the United States and around the world. Gathered here, images from rallies overseas and across the United States.

Looking west, people fill Pennsylvania Avenue during the "March for Our Lives" rally in support of gun control, on March 24, 2018, in Washington, D.C. (Alex Brandon / AP)
02 Mar 14:21

Running ASP.NET Core on GoDaddy's cheapest shared Linux Hosting - Don't Try This At Home

by Scott Hanselman

First, a disclaimer. Don't do this. I did this to test a theory and to prove a point. ASP.NET Core and the .NET Core that it runs on are open source and run pretty much anywhere. I wanted to see if I could run an ASP.NET Core site on GoDaddy's cheapest hosting ($3, although it scales to $8) that basically supports only PHP. It's not a full Linux VM. It's locked-down and limited. You don't have root. You are missing most tools you'd expect to have.

BUT.

I wanted to see if I could get ASP.NET Core running on it anyway. Maybe if I do, they (and other inexpensive hosts) will talk to the .NET team, learn that ASP.NET Core is open source and could easily run on their existing infrastructure.

AGAIN: Don't do this. It's hacky. It's silly. But it's hella cool. IMHO. Also, big thanks to Tomas Weinfurt for his help!

First, I went to GoDaddy and signed up for their cheap hosting. Again, not a VM, but their shared one. I also registered supercheapaspnetsite.com as well. They use a cPanel-based web management system that doesn't really let you do anything. You can turn on SSH, do some PHP stuff, and generally poke around, but it's not exactly low-level.

First I ssh (shoosh!) in and see what I'm working with. I'm shooshing with the Ubuntu on Windows 10 feature that every developer should turn on. It makes it really easy to work with Linux hosts if you're starting from Linux on Windows 10.

secretname@theirvmname [/proc]$ cat version
Linux version 2.6.32-773.26.1.lve1.4.46.el6.x86_64 (mockbuild@build.cloudlinux.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC) ) #1 SMP Tue Dec 5 18:55:41 EST 2017
secretname@theirvmname [/proc]$

OK, looks like Red Hat, so CentOS 6 should be compatible.

I'm going to use .NET Core 2.1 (which is in preview now!) and get the SDK at https://www.microsoft.com/net/download/all and install it on my Windows machine where I will develop and build the app. I don't NEED to use Windows to do this, but it's the laptop I have and it's also nice to know I can build on Windows but target CentOS/RHEL6.

Next I'll make a new ASP.NET site with

dotnet new razor

and then I'll publish a self-contained version like this:

dotnet publish -r rhel.6-x64

And those files will end up in a folder like \supercheapaspnetsite\bin\Debug\netcoreapp2.1\rhel.6-x64\publish\

NOTE: You may need to add the NuGet feed for the dailies for this .NET Core preview in order to get the RHEL6 runtime downloaded during this local publish.

Then I used WinSCP (or whatever FTP/SCP client you like, rsync, etc) to get the files over to the ~/www folder on your GoDaddy shared site. Then I

chmod +x ./supercheapaspnetsite

to make it executable. Now, from my ssh session at GoDaddy, let's try to run my app!

secretname@theirvmname [~/www]$ ./supercheapaspnetsite
Failed to load hb, error: libunwind.so.8: cannot open shared object file: No such file or directory
Failed to bind to CoreCLR at '/home/secretname/public_html/libcoreclr.so'

Of course it couldn't be that easy, right? .NET Core wants the unwind library (shared object) and it doesn't exist on this locked down system.

AND I don't have yum/apt/rpm or a way to install it right?

I could go looking for tar.gz file somewhere like this http://download.savannah.nongnu.org/releases/libunwind/ but I need to think about versions and make sure things line up. Given that I'm targeting CentOS6, I should start here https://centos.pkgs.org/6/epel-x86_64/libunwind-1.1-3.el6.x86_64.rpm.html and download libunwind-1.1-3.el6.x86_64.rpm.

I need to crack open that rpm file and get the library. RPM packages are just headers on top of a CPIO archive, so I can apt-get install rpm2cpio from my local Ubuntu instance (on Windows 10). Then from /mnt/c/users/scott/Downloads (where I downloaded the file) I will extract it.

rpm2cpio ./libunwind-1.1-3.el6.x86_64.rpm | cpio -idmv

There they are.


This part is cool. Even though I have these files, I don't have root or any way to "install" them. However I could either export/use the LD_LIBRARY_PATH environment variable to control how libraries get loaded OR I could put these files in $ORIGIN/netcoredeps. You can read more about Self Contained Linux Applications on .NET Core here.

The main executable of published .NET Core applications (which is the .NET Core host) has an RPATH property set to $ORIGIN/netcoredeps. That means that when Linux shared library loader is looking for shared libraries, it looks to this location before looking to default shared library locations. It is worth noting that the paths specified by the LD_LIBRARY_PATH environment variable or libraries specified by the LD_PRELOAD environment variable are still used before the RPATH property. So, in order to use local copies of the third-party libraries, developers need to create a directory named netcoredeps next to the main application executable and copy all the necessary dependencies into it.

At this point I've added a "netcoredeps" folder to my public folder, and then copied it (scp) over to GoDaddy. Let's run it again.

secretname@theirvmname [~/www]$ ./supercheapaspnetsite
FailFast: Couldn't find a valid ICU package installed on the system. Set the configuration flag System.Globalization.Invariant to true if you want to run with no globalization support.
   at System.Environment.FailFast(System.String)
   at System.Globalization.GlobalizationMode.GetGlobalizationInvariantMode()
   at System.Globalization.GlobalizationMode..cctor()
   at System.Globalization.CultureData.CreateCultureWithInvariantData()
   at System.Globalization.CultureData.get_Invariant()
   at System.Globalization.CultureInfo..cctor()
   at System.StringComparer..cctor()
   at System.AppDomain.InitializeCompatibilityFlags()
   at System.AppDomain.Setup(System.Object)
Aborted

Ok, now it's complaining about ICU packages. These are for globalization. That is also mentioned in the self-contained-linux apps docs and there's a precompiled binary I could download. But there are options.

If your app doesn't explicitly opt out of using globalization, you also need to add libicuuc.so.{version}, libicui18n.so.{version}, and libicudata.so.{version}

I like "opt-out" so I don't have to go dig these up (although I could). I can either set the CORECLR_GLOBAL_INVARIANT env var to 1, or add System.Globalization.Invariant = true to supercheapaspnetsite.runtimeconfig.json, which I'll do just to be obnoxious. ;)
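That flag rides along in the runtimeconfig.json that dotnet publish drops next to the executable; a minimal sketch (any other properties already in the file are elided here) looks like:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.Globalization.Invariant": true
    }
  }
}
```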

When I run it again I get another complaint, this time about libuv. Yet another shared library that isn't installed on this instance. I could go get it and put it in netcoredeps OR, since I'm using .NET Core 2.1, I could try something new. There were some improvements made in .NET Core 2.1 around sockets and HTTP performance. On the client side, these new managed libraries are written from the ground up in managed code using the new high-performance Span<T>, and on the server side I could use Kestrel's (Kestrel is the .NET Core webserver) experimental UseSockets() as they are starting to move that over.

In other words, I can bypass libuv usage entirely by changing my Program.cs to use UseSockets() like this.

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseSockets()
        .UseStartup<Startup>();

Let's run it again. I'll add the ASPNETCORE_URLS environment variable and set it to a high port like 8080. Remember, I'm not admin so I can't use any port under 1024.

secretname@theirvmname [~/www]$ export ASPNETCORE_URLS="http://*:8080"
secretname@theirvmname [~/www]$ ./supercheapaspnetsite
Hosting environment: Production
Content root path: /home/secretname/public_html
Now listening on: http://0.0.0.0:8080
Application started. Press Ctrl+C to shut down.

Holy crap it actually started.

Ok, but I can't access it from supercheapaspnetsite.com:8080 because this is GoDaddy's locked down managed shared hosting. I can't just open a port or forward a port in their control panel.

But. They use Apache, and that has the .htaccess file!

Could I use mod_proxy and try this?

ProxyPassReverse / http://127.0.0.1:8080/

Looks like no, they haven't turned this on. Likely they don't want to proxy off to external domains, but it'd be nice if they allowed localhost. Bummer. So close.

Fine, I'll proxy the traffic myself. (Not perfect, but this is all a spike)

RewriteRule ^(.*)$  "show.php" [L]
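For a bit more safety, the rewrite can skip show.php itself (avoiding a rewrite loop) and any files that really exist on disk, so static assets keep being served by Apache directly. A sketch, assuming mod_rewrite is enabled on the account:

```apache
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/show\.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ "show.php" [L]
```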

Cool, now a cheesy proxy goes in show.php.

<?php
$site = 'http://127.0.0.1:8080';
$request = $_SERVER['REQUEST_URI'];
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $site . $request);
curl_setopt($ch, CURLOPT_HEADER, TRUE);
$f = fopen("headers.txt", "a");
curl_setopt($ch, CURLOPT_VERBOSE, 0);
curl_setopt($ch, CURLOPT_STDERR, $f);
# don't output the curl response directly, I need to strip the headers.
# yes I know I can just set CURLOPT_HEADER to false and all this
# goes away, but for testing we log headers
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$hold = curl_exec($ch);
# strip headers
$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
$headers = substr($hold, 0, $header_size);
$response = substr($hold, $header_size);
$headerArray = explode("\r\n", $headers); # HTTP headers are CRLF-separated, not PHP_EOL
echo $response; # echo ourselves. Yes I know curl can do this for us.
?>

Cheesy, yes. Works for GET? Also, yes. This really is Apache's job, not ours, but kudos to Tomas for this evil idea.

An ASP.NET Core app at a host that doesn't support it

Boom. How about another page at /about? Yes.

Another page with ASP.NET Core at a host that doesn't support it

Lovely. But I had to run the app myself. I have no supervisor or process manager (again this is already handled by GoDaddy for PHP but I'm in unprivileged world.) Shooshing in and running it is a bad idea and not sustainable. (Well, this whole thing is not sustainable, but still.)

We could copy "screen" over, start it up, and detach with screen ./supercheapaspnetsite, but again, if it crashes, no one will restart it. We do have crontab, so for now we'll launch the app on a schedule occasionally to do a health check and, if needed, keep it running. I also added a few debugging tools in ~/bin:

secretname@theirvmname [~/bin]$ ll
total 304
drwxrwxr-x  2    4096 Feb 28 20:13 ./
drwx--x--x 20    4096 Mar  1 01:32 ../
-rwxr-xr-x  1  150776 Feb 28 20:10 lsof*
-rwxr-xr-x  1   21816 Feb 28 20:13 nc*
-rwxr-xr-x  1  123360 Feb 28 20:07 netstat*
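The crontab half of that plan can be sketched as a tiny keepalive script. The script name, log path, and schedule below are my own invention, not something GoDaddy provides:

```shell
#!/bin/sh
# keepalive.sh - start the app if it isn't already running.
# APP and LOG default to illustrative paths; override them via environment.
APP="${APP:-$HOME/www/supercheapaspnetsite}"
LOG="${LOG:-$HOME/keepalive.log}"

if ! pgrep -f "$APP" > /dev/null 2>&1; then
  # Not running (or crashed): relaunch it, detached from this shell.
  ASPNETCORE_URLS="http://*:8080" nohup "$APP" >> "$LOG" 2>&1 &
fi
```

Then crontab -e and add a line like `*/5 * * * * $HOME/bin/keepalive.sh` so cron does the health check every five minutes.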

All in all, not that hard. ASP.NET Core and .NET Core underneath it can run pretty much anywhere, just like PHP, Python, whatever.

If you're a host and you want to talk to someone at Microsoft about setting up ASP.NET Core shared hosting, email Sourabh.Shirhatti@microsoft.com and talk to them! If you are GoDaddy, I apologize, and you should also email. ;)


Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.



© 2018 Scott Hanselman. All rights reserved.
     
26 Feb 14:15

One Email Rule - Have a separate Inbox and an Inbox CC to reduce email stress. Guaranteed.

by Scott Hanselman

Two folders in your email client: one called "Inbox" and one called "Inbox - CC."

I've mentioned this tip before but once more for the folks in the back. This email productivity tip is a game-changer for most information workers.

We all struggled with email.

  • Some of us just declare Email Bankruptcy every few months. Ctrl-A, delete, right? They'll send it again.
  • Some of us make detailed and amazing Rube Goldbergian email rules and deliberately file things away into folders we will never open again.
  • Some of us just decide that if an email scrolls off the screen, well, it's gone.

Don't let the psychic weight of 50,000 unread emails give you headaches. Go ahead, declare email bankruptcy - you're already in debt - then try this one email rule.

One Email Rule

Email in your inbox is only for email where you are on the TO: line.

All other emails (BCC'ed or CC'ed) should go into a folder called "Inbox - CC."

That's it.
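If your mail lives somewhere that supports Sieve filtering (RFC 5228), the rule is only a few lines; you@example.com below is a placeholder for your own address:

```sieve
require ["fileinto"];
# Anything not addressed directly to me goes to the CC folder.
if not address :all :contains "to" "you@example.com" {
    fileinto "Inbox - CC";
}
```

Most desktop clients (Outlook, Gmail filters, etc.) can express the same rule in their own filter UI.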

I just got back from a week away. Look at my email there. 728 emails. Ugh. But just 8 were sent directly to me. Perhaps that's not a realistic scenario for you, sure. Maybe it'd be more like 300 and 400. Or 100 and 600.

Point is, emails you are CC'ed on are FYI (for your information) emails. They aren't Take Action Now emails. Now if they ARE, then you need to take a moment and train your team. Very simple, just reply and say, "oops, I didn't see this immediately because I was cc'ed. If you need me to see something now, please to: me." It'll just take a moment to "train" your coworkers because this is a fundamentally intuitive way to work. They'll say, "oh, makes sense. Cool."

Try this out and I guarantee it'll change your workflow. Next, do this. Check your Inbox - CC less often than your Inbox. I check CC'ed email a few times a week, while I may check Inbox a few times a day.

If you like this tip, check out my complete list of Productivity Tips!


Sponsor: Unleash a faster Python! Supercharge your application's performance on future-forward Intel® platforms with the Intel® Distribution for Python. Available for Windows, Linux, and macOS. Get the Intel® Distribution for Python* now!



© 2017 Scott Hanselman. All rights reserved.
     
12 Feb 15:46

Quick Tip – Return HTTP Status Code from ASP.NET Core Methods

by Talking Dotnet

It is advisable to return the proper HTTP status code in response to a client request. This helps the client to understand the request’s result and then take corrective measures to handle it. Proper use of the status codes will help to handle a request’s response in an appropriate way. Out of the box, ASP.NET […]

The post Quick Tip – Return HTTP Status Code from ASP.NET Core Methods appeared first on Talking Dotnet.

31 Jan 16:19

F1 - Formula 1 gives up grid girls

by Vincent Lalanne-Sicaud
Judged to be symbols of an outdated image of women, grid girls will no longer appear before Formula 1 Grands Prix.
26 Jan 16:58

Photos of a Women's March Weekend (45 photos)

Over this weekend, organizers staged approximately 200 demonstrations around the world to commemorate the one-year anniversary of the massive 2017 Women's March on Washington. Hundreds of thousands of protesters marched once again in continued opposition to the administration of President Donald Trump, to promote women's rights, health issues, equality, diversity, and inclusion, and to mobilize voters and candidates for the upcoming midterm elections in the United States. Below are images of this weekend’s marches in Washington, New York, Chicago, Boston, Los Angeles, Seattle, and from cities in England, Sweden, France, Canada, Germany, Italy, and more.

People participate in the second annual Women's March in Los Angeles, California, on January 20, 2018. (Patrick Fallon / Reuters)
26 Jan 16:56

Hopeful Images From 2017 (35 photos)

2017 has been another year of news stories that produced photos which can often be difficult or disturbing to view. I’ve made it a tradition to compose an essay of uplifting images from the past year. The following are images of personal victories, families and friends at play, expressions of love and compassion, volunteers at work, assistance being given to those in need, or simply small and pleasant moments. While composing these, I am always reminded of one of my favorite quotes from Mr. Rogers, who once said that when he was young and saw scary things in the news, “My mother would say to me, ‘look for the helpers—you will always find people who are helping.’ To this day, I remember my mother's words and I am always comforted by realizing that there are still so many helpers—so many caring people in this world.”

Atila, a trained therapeutic greyhound used to treat patients with mental-health issues and learning difficulties, falls asleep as it is caressed by three patients at Benito Menni health facility in Elizondo, Spain, on February 13, 2017. (Susana Vera / Reuters)
13 Jan 13:30

To Serve Man, with Software

by Jeff Atwood

I didn't choose to be a programmer. Somehow, it seemed, the computers chose me. For a long time, that was fine, that was enough; that was all I needed. But along the way I never felt that being a programmer was this unambiguously great-for-everyone career field with zero downsides. There are absolutely occupational hazards of being a programmer, and one of my favorite programming quotes is an allusion to one of them:

It should be noted that no ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.

Which reminds me of another joke that people were telling in 2015:

Donald Trump is basically a comment section running for president

Which is troubling because technically, technically, I run a company that builds comment sections.

Here at the tail end of 2017, from where I sit neither of these jokes seem particularly funny to me any more. Perhaps I have lost the capacity to feel joy as a human being? Haha just kidding! ... kinda.

Remember in 2011 when Marc Andreessen said that "software is eating the world"?

software is eating the world, Marc Andreessen

That used to sound all hip and cool and inspirational, like "Wow! We software developers really are making a difference in the world!" and now for the life of me I can't read it as anything other than an ominous warning that we just weren't smart enough to translate properly at the time. But maybe now we are.

to-serve-man

I've said many, many times that the key to becoming an experienced software developer is to understand that you are, at all times, your own worst enemy. I don't mean this in a negative way – you have to constantly plan for and design around your inevitable human mistakes and fallibility. It's fundamental to good software engineering because, well, we're all human. The good-slash-bad news is that you're only accidentally out to get yourself. But what happens when we're infinitely connected and software is suddenly everywhere, in everyone's pockets every moment of the day, starting to approximate a natural extension of our bodies? All of a sudden those little collective social software accidents become considerably more dangerous:

The issue is bigger than any single scandal, I told him. As headlines have exposed the troubling inner workings of company after company, startup culture no longer feels like fodder for gentle parodies about ping pong and hoodies. It feels ugly and rotten. Facebook, the greatest startup success story of this era, isn’t a merry band of hackers building cutesy tools that allow you to digitally Poke your friends. It’s a powerful and potentially sinister collector of personal data, a propaganda partner to government censors, and an enabler of discriminatory advertising.

I'm reminded of a particular Mitchell and Webb skit: "Are we the baddies?"

On the topic of unanticipated downsides to technology, there is no show more essential than Black Mirror. If you haven't watched Black Mirror yet, do not pass go, do not collect $200, go immediately to Netflix and watch it. Go on! Go ahead!

⚠ Fair warning: please DO NOT start with season 1 episode 1 of Black Mirror! Start with season 3, and go forward. If you like those, dip into season 2 and the just-released season 4, then the rest. But humor me and please at least watch the first episode of season 3.

The technology described in Black Mirror can be fanciful at times, but several episodes portray disturbingly plausible scenarios with today's science and tech, much less what we'll have 20 to 50 years from now. These are very real cautionary tales, and some of this stuff is well on its way toward being realized.

Programmers don't think of themselves as people with the power to change the world. Most programmers I know, including myself, grew up as nerds, geeks, social outcasts. Did I ever tell you about the time I wrote a self-destructing Apple // boot disk program to let a girl in middle school know that I liked her? I was (and still am) a terrible programmer, but oh man did I ever test the heck out of that code before copying on to her school floppy disc. But I digress. What do you do when you wake up one day and software has kind of eaten the world, and it is no longer clear if software is in fact an unambiguously good thing, like we thought, like everyone told us … like we wanted it to be?

Months ago I submitted a brief interview for a children's book about coding.

I recently received a complimentary copy of the book in the mail. I paged to my short interview, alongside the very cool Kiki Prottsman. I had no real recollection of the interview questions after the months of lead time it takes to print a physical book, but reading the printed page, I suddenly hit myself over the head with the very answer I had been searching my soul for these past 6 months:

Jeff Atwood quote: what do you love most about coding?

In attempting to simplify my answers for an audience of kids, I had concisely articulated the one thing that keeps me coming back to software: to serve man. Not on a platter, for bullshit monetization – but software that helps people be the best version of themselves.

And you know why I do it? I need that help, too. I get tired, angry, upset, emotional, cranky, irritable, frustrated and I need to be reminded from time to time to choose to be the better version of myself. I don't always succeed. But I want to. And I believe everyone else – for some reasonable statistical value of everyone else – fundamentally does, too.

That was the not-so-secret design philosophy behind Stack Overflow, that by helping others become better programmers, you too would become a better programmer. It's unavoidable. And, even better, if we leave enough helpful breadcrumbs behind for those that follow us, we collectively advance the whole of programming for everyone.

I apologize for not blogging much in 2017. I've certainly been busy with Discourse which is actually going great; we grew to 21 people and gave $55,000 back this year to the open source ecosystem we build on. But that's no excuse. The truth is that it's been hard to write because this has been a deeply troubling year in so many dimensions — for men, for tech, for American democracy. I'm ashamed of much that happened, and I think one of the first and most important steps we can take is to embrace explicit codes of conduct throughout our industry. I also continue to believe, if we start to think more holistically about what our software can do to serve all people, not just ourselves personally (or, even worse, the company we work for) — that software can and should be part of the solution.

I tried to amplify on these thoughts in recent podcasts:

  • Community Engineering Report with Kim Crayton
  • Developer on Fire with Dave Rael
  • Dorm Room Tycoon with William Channer

Software is easy to change, but people ... aren't. So in the new year, as software developers, let's make a resolution to focus on the part we can change, and keep asking ourselves one very important question: how can our software help people become the best version of themselves?

13 Jan 11:17

Docker and Linux Containers on Windows, with or without Hyper-V Virtual Machines

by Scott Hanselman

Containers are lovely, in case you haven't heard. They are a nice and clean way to get a reliable and guaranteed deployment, no matter the host system.

If I want to run my ASP.NET Core application, I can just type "docker run -p 5000:80 shanselman/demos" at the command line, and it'll start up! I don't have any concerns that it won't run. It'll run, and run well.

Some container naysayers say, sure, we could do the same thing with Virtual Machines, but even today a VHD (virtual hard drive) is rather an unruly thing and includes a ton of overhead that a container doesn't have. Containers are happening and you should be looking hard at them for your deployments.

docker run shanselman/demos

Historically on Windows, however, Linux Containers run inside a Hyper-V virtual machine. This can be a good thing or a bad thing, depending on what your goals are. Running Containers inside a VM gives you significant isolation with some overhead. This is nice for Servers but less so for my laptop. Docker for Windows hides the VM for the most part, but it's there. Your Container runs inside a Linux VM that runs within Hyper-V on Windows proper.

HyperV on Windows

With the latest version of Windows 10 (or 10 Server) and the beta of Docker for Windows, there's native Linux Container support on Windows. That means there's no Virtual Machine or Hyper-V involved (unless you want one), so Linux Containers run on Windows itself using Windows 10's built-in container support.

For now you have to switch "modes" between Hyper V and native Containers, and you can't (yet) run Linux and Windows Containers side by side. The word on the street is that this is just a point in time thing, and that Docker will at some point support running Linux and Windows Containers in parallel. That's pretty sweet because it opens up all kinds of cool hybrid scenarios. I could run a Windows Server container with a .NET Framework ASP.NET app that talks to a Linux Container running Redis or Postgres. I could then put them all up into Kubernetes in Azure, for example.
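As a purely speculative sketch of that hybrid scenario (it won't actually run until side-by-side support ships, and the image names are illustrative), a compose file might look roughly like:

```yaml
# Speculative: assumes future side-by-side Windows/Linux container support.
version: "3"
services:
  web:
    image: mycompany/aspnet-fullframework-app   # Windows Server container
    ports:
      - "80:80"
  redis:
    image: redis:alpine                         # Linux container
```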

Once I've turned on Linux Containers on Windows within Docker, everything just works, with one less moving part.

Linux Containers on Docker

I can easily and quickly run busybox or real Ubuntu (although Windows 10 already supports Ubuntu natively with WSL):

docker run -ti busybox sh

More useful even is to run the Azure Command Line with no install! Just "docker run -it microsoft/azure-cli" and it's running in a Linux Container.

Azure CLI in a Container

I can even run nyancat! (Thanks Thomas!)

docker run -it supertest2014/nyan

nyancat!

Speculating - I look forward to the day I can run "minikube start --vm-driver=windows" (or something) and easily set up a Kubernetes development system locally using Windows native Linux Container support rather than using Hyper-V Virtual Machines, if I choose to.


Sponsor: Why miss out on version controlling your database? It’s easier than you think because SQL Source Control connects your database to the same version control tools you use for applications. Find out how.


© 2017 Scott Hanselman. All rights reserved.
     
13 Jan 11:14

Writing smarter cross-platform .NET Core apps with the API Analyzer and Windows Compatibility Pack

by Scott Hanselman

.NET Core is Open Source and Cross Platform

There are a couple of great utilities that have come out in the last few weeks in the .NET Core world that you should be aware of. They are deeply useful when porting/writing cross-platform code.

.NET API Analyzer

First is the API Analyzer. As you know, APIs sometimes get deprecated, or you'll use a method on Windows and find it doesn't work on Linux. The API Analyzer is a Roslyn (remember Roslyn is the name of the C#/.NET compiler) analyzer that's easily added to your project as a NuGet package. All you have to do is add it and you'll immediately start getting warnings and/or squiggles calling out APIs that might be a problem.

Check out this quick example. I'll make a quick console app, then add the analyzer. Note the version is current as of the time of this post. It'll change.

C:\supercrossplatapp> dotnet new console

C:\supercrossplatapp> dotnet add package Microsoft.DotNet.Analyzers.Compatibility --version 0.1.2-alpha

Then I'll use an API that only works on Windows. However, I still want my app to run everywhere.

static void Main(string[] args)
{
    Console.WriteLine("Hello World!");

    if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
    {
        var w = Console.WindowWidth;
        Console.WriteLine($"Console Width is {w}");
    }
}

Then I'll "dotnet build" (or run, which implies build) and I get a nice warning that one API doesn't work everywhere.

C:\supercrossplatapp> dotnet build


Program.cs(14,33): warning PC001: Console.WindowWidth isn't supported on Linux, MacOSX [C:\Users\scott\Desktop\supercrossplatapp\supercrossplatapp.csproj]
supercrossplatapp -> C:\supercrossplatapp\bin\Debug\netcoreapp2.0\supercrossplatapp.dll

Build succeeded.

Olia from the .NET Team did a great YouTube video where she shows off the API Analyzer and how it works. The code for the API Analyzer is up here on GitHub. Please leave an issue if you find one!

Windows Compatibility Pack for .NET Core

Second, the Windows Compatibility Pack for .NET Core is a nice piece of tech. When .NET Core 2.0 came out and .NET Standard 2.0 was finalized, it included over 32k APIs that made it extremely compatible with existing .NET Framework code. In fact, it's so compatible, I was able to easily take a 15 year old .NET app and port it over to .NET Core 2.0 without any trouble at all.

They have more than doubled the set of available APIs from 13k in .NET Standard 1.6 to 32k in .NET Standard 2.0.

.NET Standard 2.0 is cool because it's supported on the following platforms:

  • .NET Framework 4.6.1
  • .NET Core 2.0
  • Mono 5.4
  • Xamarin.iOS 10.14
  • Xamarin.Mac 3.8
  • Xamarin.Android 7.5

When you're porting code over to .NET Core that has lots of Windows-specific dependencies, you might find yourself bumping into APIs that aren't a part of .NET Standard 2.0. So, there's a new (preview) Microsoft.Windows.Compatibility NuGet package that "provides access to APIs that were previously available only for .NET Framework."

There will be two kinds of APIs in the Compatibility Pack. APIs that were a part of Windows originally but can work cross-platform, and APIs that will always be Windows only, because they are super OS-specific. API calls to the Windows Registry will always be Windows-specific, for example. But the System.DirectoryServices or System.Drawing APIs could be written in a way that works anywhere. The Windows Compatibility Pack adds over 20,000 more APIs, on top of what's already available in .NET Core. Check out the great video that Immo shot on the compat pack.
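Pulling the pack in is one PackageReference in your csproj (the version below is illustrative; check NuGet for the current preview):

```xml
<ItemGroup>
  <!-- Illustrative version; the package was still in preview at the time. -->
  <PackageReference Include="Microsoft.Windows.Compatibility" Version="2.0.0-preview1-final" />
</ItemGroup>
```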

The point is, if the API that is blocking you from using .NET Core is now available in this compat pack, yay! But you should also know WHY you are porting to .NET Core. Work continues on both .NET Core and .NET (Full) Framework on Windows. If your app works great today, there's no need to port unless you need a .NET Core specific feature. Here's a great list of rules of thumb from the docs:

Use .NET Core for your server application when:

  • You have cross-platform needs.
  • You are targeting microservices.
  • You are using Docker containers.
  • You need high-performance and scalable systems.
  • You need side-by-side .NET versions per application.

Use .NET Framework for your server application when:

  • Your app currently uses .NET Framework (recommendation is to extend instead of migrating).
  • Your app uses third-party .NET libraries or NuGet packages not available for .NET Core.
  • Your app uses .NET technologies that aren't available for .NET Core.
  • Your app uses a platform that doesn’t support .NET Core.

Finally, it's worth pointing out a few other tools that can aid you in using the right APIs for the job.

Enjoy!


Sponsor: Get the latest JetBrains Rider preview for .NET Core 2.0 support, Value Tracking and Call Tracking, MSTest runner, new code inspections and refactorings, and the Parallel Stacks view in debugger.


© 2017 Scott Hanselman. All rights reserved.
     
13 Jan 11:10

Accelerated 3D VR, sure, but impress me with a nice ASCII progress bar or spinner

by Scott Hanselman

I'm glad you have a 1080p 60fps accelerated graphics setup, but I'm old school. Impress me with a really nice polished ASCII progress bar or spinner!

I received two tips this week about cool .NET Core ready progress bars so I thought I'd try them out.

ShellProgressBar by Martijn Laarman

This one is super cool. It even supports child progress bars for async stuff happening in parallel! It's very easy to use. I was able to get a nice looking progress bar going in minutes.

static void Main(string[] args)
{
    const int totalTicks = 100;
    var options = new ProgressBarOptions
    {
        ForegroundColor = ConsoleColor.Yellow,
        ForegroundColorDone = ConsoleColor.DarkGreen,
        BackgroundColor = ConsoleColor.DarkGray,
        BackgroundCharacter = '\u2593'
    };
    using (var pbar = new ProgressBar(totalTicks, "Initial message", options))
    {
        pbar.Tick(); // will advance pbar to 1 out of 100.
        // we can also advance and update the progressbar text
        pbar.Tick("Step 2 of 100");
        TickToCompletion(pbar, totalTicks, sleep: 50);
    }
}

Boom.

Cool ASCII Progress Bars in .NET Core

Be sure to check out the examples for ShellProgressBar, specifically ExampleBase.cs where he has some helper stuff like TickToCompletion() that isn't initially obvious.

Kurukuru by Mayuki Sawatari

Another nice progress system that is in active development for .NET Core (like super active...I can see they updated code an hour ago!) is called Kurukuru. This code is less about progress bars and more about spinners. It's smart about Unicode vs. non-Unicode as there's a lot of cool characters you could use in a Unicode-aware console that make for attractive spinners.

What a lovely ASCII Spinner in .NET Core!

Kurukuru is also super easy to use and integrated into your code. It also uses the "using" disposable pattern in a clever way. Wrap your work and if you throw an exception, it will show a failed spinner.

Spinner.Start("Processing...", () =>
{
    Thread.Sleep(1000 * 3);

    // MEMO: If you want to show as failed, throw an exception here.
    // throw new Exception("Something went wrong!");
});

Spinner.Start("Stage 1...", spinner =>
{
    Thread.Sleep(1000 * 3);
    spinner.Text = "Stage 2...";
    Thread.Sleep(1000 * 3);
    spinner.Fail("Something went wrong!");
});

TIP: If your .NET Core console app wants to use an async Main (like I did) and call Kurukuru's async methods, you'll want to indicate you want to use the latest C# 7.1 features by adding this to your project's *.csproj file:

<PropertyGroup>
    <LangVersion>latest</LangVersion>
</PropertyGroup>

This allowed me to do this:

public static async Task Main(string[] args)
{
    Console.WriteLine("Hello World!");
    await Spinner.StartAsync("Stage 1...", async spinner =>
    {
        await Task.Delay(1000 * 3);
        spinner.Text = "Stage 2...";
        await Task.Delay(1000 * 3);
        spinner.Fail("Something went wrong!");
    });
}

Did I miss some? I'm sure I did. What nice ASCII progress bars and spinners make YOU happy?

And again, as with all Open Source, I encourage you to HELP OUT! I know the authors would appreciate it.


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



© 2017 Scott Hanselman. All rights reserved.
     
13 Dec 10:23

The 2017 Christmas List of Best STEM Toys for kids

by Scott Hanselman

In 2016 and 2015 I made a list of best Christmas STEM Toys for kids! If I may say so, they are still good lists today, so do check them out. Be aware I use Amazon referral links so I get a little kickback (and you support this blog!) when you use these links. I'll be using the pocket money to...wait for it...buy STEM toys for kids! So thanks in advance!

Here's a Christmas List of things that I've either personally purchased, tried for a time, or borrowed from a friend. These are great toys and products for kids of all genders and people of all ages.

Piper Computer Kit with Minecraft Raspberry Pi edition

The Piper is a little spendy at first glance, but it's EXTREMELY complete and very thoughtfully created. Sure, you can just get a Raspberry Pi and hack on it - but the Piper is not just a Pi. It's a complete kit where your little one builds their own wooden "laptop" box (more of a luggable), and then starting with just a single button, builds up the computer. The Minecraft content isn't just vanilla Microsoft. It's custom episodic content! Custom voice overs, episodes, and challenges.

What's genius about Piper, though, is how the software world interacts with the hardware. For example, at one point you're looking for treasure on a Minecraft beach. The Piper suggests you need a treasure detector, so you learn about wiring and LEDs and wire up a treasure detector LED while it's running. Then you run your Minecraft person around while the LED blinks faster to detect treasure. It's absolute genius. Definitely a favorite in our house for the 8-12 year old set.

Piper Raspberry Pi Kit

Suspend! by Melissa and Doug

Suspend is becoming the new Jenga for my kids. The game doesn't look like much if you judge a book by its cover, but it's addictive and my kids now want to buy a second one to see if they can build even higher. An excellent addition to family game night.

Suspend! by Melissa and Doug

Engino Discovering Stem: Levers, Linkages & Structures Building Kit

I love LEGO but I'm always trying new building kits. Engino is reminiscent of LEGO Technic or some of the advanced LEGO elements, but this modestly priced kit is far more focused - even suitable for incorporating into home schooling.

Engino Discovering Stem: Levers, Linkages & Structures Building Kit

Gravity Maze

I've always wanted a 3D Chess Set. Barring that, check out Gravity Maze. It's almost like a physical version of a well-designed iPad game. It includes 60 challenges (levels) that you then add pieces to in order to solve. It gets harder than you'd think, fast! If you like this, also check out Circuit Maze.


Osmo Genius Kit (2017)

Osmo is an iPad add-on that takes the ingenious idea of an adapter that lets your iPad see the tabletop (via a mirror/lens) and then builds on that clever concept with a whole series of games, exercises, and core subject tests. It's best for the under 12 set - I'd say it's ideal for about 6-8 year olds.



Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!



06 Dec 22:28

Shall we play a game? WarGames gets interactive reboot from Her Story dev

by Nathan Mattise

Didn't expect to see this today (but glad it happened).

As various creators look toward the new year, December seems to bring with it a new trailer every day (Batman Ninja? Altered Carbon? Black Mirror just this morning before a December 29 launch). But one teaser stood out from the fray this morning—because when you invoke WarGames, we nerds pay attention.

Before some readers instantly panic, no, the 1983 Matthew Broderick hacker classic is not being rebooted verbatim for the big screen. Instead, Her Story writer/dev Sam Barlow will modernize the tale for 2017-2018 and turn it into an "experimental interactive series" (which is how Barlow describes Her Story).

“With #WarGames I was thrilled to take the questions raised by the original movie and ask them again in a world where technology has fundamentally changed our lives," Barlow said in the game's press release. “I am excited to introduce viewers to the new hacker protagonist, Kelly, who represents the breadth of modern hacker culture and its humanity. As viewers help steer her story, I hope they will fall in love with her as much as the #WarGames team did.”

Read 5 remaining paragraphs | Comments

10 Nov 09:57

For Stephen Hawking, humanity must prepare to leave Earth

by webmaster@futura-sciences.com (Futura-Sciences)
Facing the great crises that await humanity, Stephen Hawking reiterated during a talk in Beijing the urgency of preparing for interstellar travel. By his reckoning, we have half a millennium: "by 2600, the Earth will turn into a giant ball of fire."
14 Oct 13:53

Ten technologies that might change the world: A review of Soonish

by Chris Lee

Enlarge (credit: Penguin Random House)

I only read three comics regularly, and one of those is Saturday Morning Breakfast Cereal (SMBC). It is clever and funny—even educational on occasion. I think it is safe to call me a fan. So, I was excited to hear that SMBC creator Zach Weinersmith had decided to cowrite a book with his wife Kelly, a bioscience professor at Rice University in Houston. Even better, their book would be about science.

Soonish is not a single story; instead, it is 10 short stories. Pulling out the crystal ball, the Weinersmiths have chosen 10 areas that many hope will become actual, factual things that exist outside of our heads or the lab. Each chapter is split into four parts: the first part tells us what the thing—a brain/computer interface, for instance—is, how it might be put together, and why we don't already have one. The second part asks the eternal question: what could possibly go wrong? And the third prognosticates on how the world will be changed by the thing. The last part is a funny anecdote about some of the Weinersmiths' encounters while researching the chapter.

Now, you may or may not agree with the Weinersmiths' choices. Space elevators, for instance, aren't something I'd pick as ever being a thing—but their research is what brings their choices to life. They've spent a great deal of time buried under papers and talking to scientists. You can sort of feel the weight of the research pressing down on you as you read it. And not in a bad way—more like an extra blanket on a cold night.

Read 6 remaining paragraphs | Comments

09 Oct 18:47

Why do managers go bad?

by Phil Haack

In Endless Immensity of the Sea I wrote about a leadership style that encourages intrinsic motivation. Many people I talk to don’t work in such an environment. Even those who work in places that promote the ideals of autonomy and intrinsic motivation often find that over time, things change for the worse. Why does this happen?

I believe it’s the result of management entropy. Over time, if an organization doesn’t actively work to fight it, their leaders start to lose touch with what really motivates people.

Theory X and Theory Y are two theories of human motivation and management devised by Douglas McGregor that serve to explain how managers view human motivation.

Theory X is an authoritarian style where the emphasis is on “productivity, on the concept of a fair day’s work, on the evils of feather-bedding and restriction of output, on rewards for performance … [it] reflects an underlying belief that management must counteract an inherent human tendency to avoid work”

Meanwhile,

Theory Y is a participative style of management which “assumes that people will exercise self-direction and self-control in the achievement of organisational objectives to the degree that they are committed to those objectives”. It is management’s main task in such a system to maximise that commitment.

There’s also a Theory Z style of management that came later.

One of the most important pieces of this theory is that management must have a high degree of confidence in its workers in order for this type of participative management to work. This theory assumes that workers will be participating in the decisions of the company to a great degree.

It’s pretty clear that in the tech industry, most companies aspire to have a management style that encourages intrinsic motivation and personal autonomy. As Dan Pink notes, there’s a lot of evidence that it’s more motivating and effective for the type of creative work we do than Theory X.

However, I have a theory that despite all this evidence and aspirations to be Theory Y or Z, many managers in the tech industry are really closet Theory X practitioners.

In many cases, it may not even be a conscious choice. Or, perhaps they didn’t start that way, but over time they drift. One scenario that could cause such a drift is when a company encounters a series of setbacks.

A good leader looks hard at the culture and system put in place and how they contribute to the setbacks. A good leader makes it a priority to improve those things. A bad leader blames individuals. This blame feeds into the Theory X narrative and causes leaders to lose trust in their people.

In a following post, I hope to cover some typical myths and incorrect beliefs that managers have that also contribute to managers drifting to the dark side of Theory X.

05 Oct 09:21

2017 National Geographic Nature Photographer of the Year Contest (15 photos)

National Geographic Magazine has opened its annual photo contest for 2017, with the deadline for submissions coming up on November 17. The Grand Prize Winner will receive $10,000, publication in National Geographic Magazine and a feature on National Geographic’s Instagram account. The folks at National Geographic were, once more, kind enough to let me choose among the contest entries so far for display here. The captions below were written by the individual photographers, and lightly edited for style.

"Kalsoy." Kalsoy island and Kallur lighthouse in sunset light in the Faroe Islands. (© Copyright Wojciech Kruczynski / 2017 National Geographic Nature Photographer of the Year)
30 Sep 12:36

Tabs vs Spaces - A peaceful resolution with EditorConfig in Visual Studio. Plus .NET Extensions!

by Scott Hanselman

The culture wars continue. The country is divided with no end in sight. Tabs or spaces? There's even an insane (IMHO) assertion that the spaces people make more money.

I'm going with Gina Trapani on this one. I choose working code.

Teams can fight but the problem of formatting code across teams is solved by EditorConfig. I'm surprised more people don't know about it and use it, so this blog post is my small way of getting the word out. TELL THE PEOPLE.

Take a project and make a new .editorconfig file and put this in it. I'll use a dotnet new console example hello world app.

[*.cs]

indent_style = tab
indent_size = tab
tab_size = 4

I've set mine in this example to just *.cs, but you could also say [*.{cs,js}] or just [*] if you like, as well as have multiple sections.
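For instance, a hypothetical .editorconfig with multiple sections might look like this (the JSON settings here are purely illustrative, not a recommendation):

```ini
# Top-most EditorConfig file; stops the search for .editorconfig files in parent directories
root = true

# Defaults for every file in the project
[*]
indent_style = tab
tab_width = 4

# JSON files conventionally use two-space indentation
[*.json]
indent_style = space
indent_size = 2
```

More specific sections lower in the file override the more general ones above them, so you can set project-wide defaults in `[*]` and carve out exceptions per file type.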

You'll check this file in WITH your project so that everyone on the team shares the team's values.

Here in Notepad2 we can see someone has used spaces for whitespace, like a savage. Whitespace appears as pale dots in this editor.


I'll open this project in Visual Studio 2017 which supports the EditorConfig file natively. Notice the warning at the bottom where VS lets me know that this project has conventions that are different than my own.

user preferences for this file type are overridden by this project's coding conventions

VS Format Document commands will use tabs rather than spaces for this project. Here is the same doc reformatted in VS:


At this point I'm comforted that the spaces have been defeated and that cooler heads have prevailed - at least for this project.

.NET Extensions to EditorConfig

Even better, if your editor supports it, you can include "EditorConfig Extensions" for specific files or languages. This way your team can keep things consistent across projects. If you're familiar with FxCop and StyleCop, this is like those.

There's a ton of great .NET EditorConfig options you can set to ensure the team uses consistent Language Conventions, Naming Conventions, and Formatting Rules.

  • Language Conventions are rules pertaining to the C# or Visual Basic language, for example, var/explicit type, use expression-bodied member.
  • Formatting Rules are rules regarding the layout and structure of your code in order to make it easier to read, for example, Allman braces, spaces in control blocks.
  • Naming Conventions are rules respecting the way objects are named, for example, async methods must end in "Async".

You can also set the importance of these rules with things like "suggestion," or "warning," or even "error."

As an example, I'll set that my team wants predefined types for locals:

dotnet_style_predefined_type_for_locals_parameters_members = true:error

Visual Studio here puts up a lightbulb and the suggested fix because my team would rather I use "string" than the full "System.String".

Visual Studio respects EditorConfig

The excellent editorconfig for .NET docs have a LOT of great options you can use or ignore. Here's just a FEW (controversial) examples:

  • csharp_new_line_before_open_brace - Do we put open braces at the end of a line, or on their own new line?
  • csharp_new_line_before_members_in_object_initializers - Do we allow A = 3, B = 4, or insist on a new line for each?
  • csharp_indent_case_contents - Do we freakishly line up all our switch/case statements, or do we indent each case like the creator intended?
  • You can even decide on how you Want To Case Things And Oddly Do Sentence Case: pascal_case, camel_case, first_word_upper, all_upper, all_lower
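As a sketch of how one of those naming conventions gets wired together - the rule, symbol-group, and style names (async_methods_end_in_async, etc.) are arbitrary identifiers I've made up for this example:

```ini
# Flag any async method whose name doesn't end in "Async"
dotnet_naming_rule.async_methods_end_in_async.symbols  = any_async_methods
dotnet_naming_rule.async_methods_end_in_async.style    = end_in_async
dotnet_naming_rule.async_methods_end_in_async.severity = suggestion

# Which symbols the rule applies to: methods marked async
dotnet_naming_symbols.any_async_methods.applicable_kinds   = method
dotnet_naming_symbols.any_async_methods.required_modifiers = async

# What the names should look like
dotnet_naming_style.end_in_async.required_suffix = Async
dotnet_naming_style.end_in_async.capitalization  = pascal_case
```

A naming rule always comes in these three pieces - the rule ties a named group of symbols to a named style, plus a severity.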

If you're using Visual Studio 2010, 2012, 2013, or 2015, fear not. There's at least a basic EditorConfig free extension for you that enforces the basic rules. There is also an extension for Visual Studio Code to support EditorConfig files that takes just seconds to install, although I don't see a C#-specific one for now, just one for whitespace.





26 Sep 11:24

Everything you need to know about the Super NES Classic Edition

by Kyle Orland

Enlarge / It's amazing what a few decades of miniaturization (and yellowing) can do.

Doing a full review of a piece of hardware like the Super NES Classic Edition is kind of an odd concept. The $80/£80 system itself is really just a vessel to transmit a handful of well-remembered classic games from Nintendo's glorious 16-bit console past. To do that job adequately, all the system has to do is produce a more-or-less accurate emulation of the decades-old Super NES hardware (it does) with graphics that look properly scaled on an HDTV (they are) and two included controllers that have a responsive and authentic feel (they do).

Beyond that, any review of the Super NES Classic is just re-evaluating a bunch of decades-old games to see if they stand the test of time (and the newly released Star Fox 2, which we'll review in more detail later in the week). And while we'd love to see a few more obscure cult classics on the list of included titles (Zombies Ate My Neighbors, Legend of the Mystical Ninja, Tetris Attack, Ogre Battle, etc. etc.), it's hard to find fault with the selection of 20 varied and well-remembered games Nintendo has put together here.

Rather than belabor those basic and unremarkable points, we've put together the following selection of lesser-known facts and observations about the system gleaned from a weekend of nostalgic play. Consider this our attempt to provide you with everything you need to know (and some things you probably don't) about the Super NES Classic Edition before you go out hunting for the hardware that goes on sale this weekend.

Read 3 remaining paragraphs | Comments

21 Sep 14:06

Photos of the Earthquake in Mexico City (40 photos)

On September 19, 2017, a magnitude 7.1 earthquake shook Mexico City, rattling skyscrapers and sending millions into the streets. Reuters is reporting at least 200 deaths across several Mexican states. Coincidentally, Tuesday was the 32nd anniversary of the devastating 1985 Mexico City earthquake, an occasion that led to many first responders and volunteers already being gathered outside, taking part in earthquake-preparedness drills. Below, some early images of the still-unfolding disaster in Mexico City. Updated with 12 new images on September 20.

People remove debris from a collapsed building, looking for possible victims after a quake rattled Mexico City on September 19, 2017. (Omar Torres / AFP / Getty)
21 Sep 09:47

What would a cross-platform .NET UI Framework look like? Exploring Avalonia

by Scott Hanselman

Many years ago before WPF was the "Windows Presentation Foundation" and introduced XAML as a UI markup language for .NET, Windows, and more, there was a project codenamed "Avalon." Avalon was WPF's codename. XAML is everywhere now, and the XAML Standard is a vocabulary specification.

Avalonia is an open source project that clearly takes its inspiration from Avalon and has an unapologetic love for XAML. Steven Kirk (GitHubber by day) and a team of nearly 50 contributors are asking what a cross-platform .NET UI Framework would look like. WPF without the W, if you will.

Avalonia (formerly known as Perspex) is a multi-platform .NET UI framework. It can run on Windows, Linux, Mac OS X, iOS and Android.

YOU can try out the latest build of Avalonia, available for download here: https://ci.appveyor.com/project/AvaloniaUI/Avalonia/branch/master/artifacts - probably get the "ControlCatalog.Desktop" zip file at the bottom. It includes a complete running sample app that will let you explore the available controls.

Avalonia is cross-platform XAML ZOMG

It's important to note that while Avalonia may smell like WPF, it's not WPF. It's not cross-platform WPF - it's Avalonia. Make sense? Avalonia does styles differently than WPF, and actually has a lot of subtle but significant syntax improvements.

Avalonia is a multi-platform windowing toolkit - somewhat like WPF - that is intended to be multi-platform. It supports XAML, lookless controls and a flexible styling system, and runs on Windows using Direct2D and other operating systems using Gtk & Cairo.
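To give a flavor of those style differences, here's a minimal hypothetical snippet - note the CSS-like Selector attribute, one place Avalonia deliberately departs from WPF's keyed resources (the exact markup may vary between alpha builds, so treat this as a sketch rather than gospel):

```xml
<Window xmlns="https://github.com/avaloniaui">
  <Window.Styles>
    <!-- Styles match elements by selector, CSS-style, instead of WPF's x:Key lookups -->
    <Style Selector="Button.accent">
      <Setter Property="Background" Value="Orange"/>
    </Style>
  </Window.Styles>
  <!-- The Classes attribute opts this button into the .accent style -->
  <Button Classes="accent">Hello from Avalonia</Button>
</Window>
```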

It's in an alpha state but there's an active community excited about it and there's even a Visual Studio Extension (VSIX) to help you get File | New Project support and create an app fast. You can check out the source for the sample apps here https://github.com/AvaloniaUI/Avalonia/tree/master/samples.

Just in the last few weeks you can see commits as they explore what a Linux-based .NET Core UI app would look like.

You can get an idea of what can be done with a framework like this by taking a look at how someone forked the MSBuildStructuredLog utility and ported it to Avalonia - making it cross-platform - in just hours. You can see a video of the port in action on Twitter. There is also a cross-platform REST client you can use to call your HTTP Web APIs at https://github.com/x2bool/restofus written with Avalonia.

The project is active but also short on documentation. I'm SURE that they'd love to hear from you on Twitter or in the issues on GitHub. Perhaps you could start contributing to open source and help Avalonia out!

What do you think?


Sponsor: Get the latest JetBrains Rider preview for .NET Core 2.0 support, Value Tracking and Call Tracking, MSTest runner, new code inspections and refactorings, and the Parallel Stacks view in debugger.



18 Sep 19:44

Soviet air defense officer who saved the world dies at age 77

by Sean Gallagher

Enlarge / Former Soviet Colonel Stanislav Petrov sits at home on March 19, 2004 in Moscow. Petrov was in charge of Soviet nuclear early warning systems on the night of September 26, 1983, and decided not to retaliate when a false "missile attack" signal appeared to show a US nuclear launch. He is feted by nuclear activists as the man who "saved the world" by determining that the Soviet system had been spoofed by a reflection off the Earth. (credit: Scott Peterson/Getty Images)

Former Soviet Air Defense Colonel Stanislav Petrov, the man known for preventing an accidental nuclear launch by the Soviet Union at the height of Cold War tensions, has passed away. Karl Schumacher, a German political activist who first met Petrov in 1998 and helped him visit Germany a year later, published news of Petrov's death after learning from Petrov's son that he had died in May. Petrov was 77.

Petrov's story has since been recounted several times by historians, including briefly in William Taubman's recent biography of former Soviet leader Mikhail Gorbachev, Gorbachev: His Life and Times. Ars also wrote about Petrov in our 2015 feature on Exercise Able Archer. On the night of September 26, 1983, Petrov was watch officer in charge of the Soviet Union's recently completed US-KS nuclear launch warning satellite network, known as "Oko" (Russian for "eye"). To provide instant warning of an American nuclear attack, the system was supposed to catch the flare of launching missiles as they rose.

That night, just past midnight, the Oko system signaled that a single US missile had been launched. "When I first saw the alert message, I got up from my chair," Petrov told RT in a 2010 interview. "All my subordinates were confused, so I started shouting orders at them to avoid panic. I knew my decision would have a lot of consequences."

Read 5 remaining paragraphs | Comments

07 Sep 04:45

Tap water from around the world contains tiny bits of plastic, survey finds

by Beth Mole

Enlarge / Mmmmm, plastic-y. (credit: Getty | Cate Gillon)

Tiny bits of plastic commonly come rushing out of water taps around the world, according to a new survey of 159 water samples collected from more than a dozen nations.

Overall, 83 percent of the 159 samples contained some amount of microplastics. Those samples came from various places in the US, Europe, Indonesia, Uganda, Beirut, India, and Ecuador. No country was without a plastic-positive water sample. In fact, after testing a handful of samples from each place, the lowest contamination rate was 72 percent. The highest—found in the US—was a 94 percent positive rate.

The microplastic pieces found are tiny, as small as 2.5 micrometers in size. The amounts were tiny, too. When researchers looked at the average number of plastic bits per 500mL water sample in each nation, the highest average was from US water samples—with 4.8 plastic scraps per sample. A sample taken from the US Capitol had 16 plastic fragments in it, for instance. The lowest average was 1.9 microplastic shards per 500mL sample, seen in those from Indonesia and Europe.

Read 5 remaining paragraphs | Comments