Three decades ago, the long-fought Iran-Iraq war had reached a deadly stalemate, the stock markets took a huge hit on Black Monday in October, American politicians were gearing up for the 1988 presidential race, Baby Jessica was rescued from a well, broadcast live on CNN, and much more. Photographers were also busy documenting the lives of Pee-wee Herman, Menudo, Mikhail Gorbachev, Howard Stern, Princess Diana, Donald Trump, Bernie Goetz, and many others. Take a step into a visual time capsule now, for a brief look at the year 1987.
Matthieu DUFOURNEAUD
Shared posts
Visual Studio 2017 Update Preview and Windows 10 Creators Update SDK
It’s only been a week since we released Visual Studio 2017 and we’re already working on an update: Visual Studio 2017 Update Preview. This Update preview includes two main changes: improvements to the Universal Windows Platform (UWP) tools to support the Creators Update SDK and the addition of the Python tools. For full details of what’s in this preview, you can read the release notes.
A Note on Updates
There’s a notable change we’ve made with previews in Visual Studio 2017: you’re able to install previews side by side with the released version of Visual Studio. That means you can use Visual Studio 2017 for your stable production work and Visual Studio 2017 Update Preview to get a peek into what we have in the pipeline. When you install both, you’ll see two different task bar icons so you can distinguish them – one that’s solid (for the released version) and one that’s not (for the preview). Similarly, if you happen to load the Visual Studio installer and you have both the released and Update preview versions of VS installed, you’ll even see both of them in the UI.
Just like the released version of Visual Studio, these previews will update, so expect to see the familiar update flag in the title bar letting you know an update is ready for you to download and install.
One note about the “side by side” experience: while most of Visual Studio RTM and these Update previews run side by side, a few components can have only one instance on the system, such as the C runtime, the .NET Framework, and the Visual Studio installer itself.
Astute observers may also note that, starting in Visual Studio 2017, we moved to a simpler versioning system – you’ll see minor versions like 15.1, 15.2, and so on, with patch updates identified with a longer build number. This first update will be versioned 15.1, and eventually you will see patches identified (e.g. 15.1.12345) when they become available. This versioning system is a bit simpler than the “Visual Studio Update 1” terminology we used previously and does a better job of matching the speed and frequency at which we will be releasing.
Universal Windows Platform (UWP) Tools
The Universal Windows Platform Tools have been updated to add support for the upcoming Creators Update SDK. We have also made several other improvements listed below.
Windows SDK Preview Support
Starting with the Creators Update SDK, we will enable side-by-side installation of the SDK. This will allow you to create production ready packages targeting the released versions of the SDK from the same machine, even when preview SDKs are installed. We hope this feature will help you to test new APIs and functionality available in preview SDKs without having to set up a separate machine. This preview version of Visual Studio 2017 will install the 10.0.15052.0 preview version of the Windows SDK. You will get the best experience if you target this SDK with Visual Studio 2017 running on Windows Insider build 15055 or later, available in the fast ring.
Please note that targeting the Creators Update SDK is only supported in Visual Studio 2017.
PackageReference support in UWP projects
In the past, NuGet packages were managed in two different ways – packages.config and project.json – each with their own sets of advantages and limitations. With the introduction of PackageReference, we have enhanced the NuGet package management experience significantly with features such as deep MSBuild integration, improved performance for everyday tasks such as install and restore, and multi-targeting.
While we remain fully committed to maintaining compatibility of UWP projects created with Visual Studio 2015 in Visual Studio 2017, we will provide new experiences in Visual Studio 2017 to help migrate your projects to use the new PackageReference format. This release starts us down this path by ensuring that new projects targeting the Creators Update SDK use PackageReference by default for expressing NuGet dependencies. Projects that target SDKs prior to the Creators Update will still continue to use project.json to express their NuGet dependencies so they can be edited in Visual Studio 2015. In an upcoming Preview release, if you re-target older UWP projects in Visual Studio 2017 to target the Creators Update SDK, we will automatically migrate all references from project.json to the new MSBuild-based PackageReference format.
New .NET Native compiler distributed as a NuGet package
The .NET Native compiler has a number of new improvements and fixes that you can read more about on the .NET blog. In particular, the .NET Native compiler is now distributed as a NuGet package bundled with the Microsoft.NETCore.UniversalWindowsPlatform package. This will allow future updates to the compiler without requiring updates to Visual Studio.
In an upcoming preview release, new projects created in Visual Studio 2017 that target the Creators Update SDK will reference the 5.3.x Universal Windows Platform NuGet package by default, and hence take advantage of the .NET Native compiler improvements by default. Projects that target an SDK prior to Creators Update still use the 5.2.x version of the Universal Windows Platform NuGet package, and can be manually updated to the newer version. Please note that making this update means the project can no longer be built using the Visual Studio 2015 tools.
Better Visual Studio integration for XAML controls delivered as NuGet packages
We have made several improvements across Visual Studio to better support NuGet packages that contain XAML controls and libraries. The toolbox in Visual Studio now lists controls with custom icons from the NuGet package as soon as the package is added to the project. The XAML designer can also extract styles/templates for such controls, as well as read design-time metadata, to enhance your productivity when working in the designer.
You can see an example of such a NuGet package here.
Support for detecting SDK version specific code in XAML
In this release we now detect XAML types and properties that only exist in certain versions of the SDK and show them with squiggles in the XAML editor. This lets you make an informed decision on whether the XAML you specify will work on all versions of Windows that you expect your app to run on. In future releases, we will be adding more safeguards to the XAML authoring experience and will provide the ability to write conditional expressions to use these properties and types in newer versions of the platform.
Command Line Arguments in Debug Mode
If your app accepts activation arguments, you can now configure these values from the new Command Line Arguments property in the Debug Configuration page. The screenshot below shows this in action for C# and VB projects, but you will soon be able to do this with C++ projects as well. You will receive these values in the App.OnLaunched event in the LaunchActivatedEventArgs.Arguments property – thereby allowing you to dynamically change the execution patterns of your app (for example, running some test code).
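For illustration, a minimal OnLaunched override that reads the supplied arguments might look like the sketch below (the -testmode flag and the RunTestCode helper are made up for the example):

protected override void OnLaunched(LaunchActivatedEventArgs e)
{
    // e.Arguments contains the string set in the Command Line Arguments debug property
    if (e.Arguments.Contains("-testmode"))   // "-testmode" is an illustrative flag
    {
        RunTestCode();                        // hypothetical helper for test-only behavior
    }

    // ... normal activation/navigation logic continues here
}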
UWP streaming install support
This version of Visual Studio 2017 is the first to let you create streaming UWP packages. To support streaming install in your app, you will need to create a SourceAppxContentGroupMap.xml file that defines your content groups and the files within them. From there, you can use Visual Studio to create the final AppxContentGroupMap.xml by right clicking on the project, and selecting Store -> Convert Content Group Map File. You can learn more here.
Python Tools
This preview includes the Python development workload, which includes support for editing and debugging Python scripts and web sites, as well as iterative development using the interactive window.
We want your feedback!
Go ahead and install Visual Studio Preview to start developing with the latest tools. Please keep in mind that Visual Studio Preview is for non-production development only. If you encounter any issues, please let us know using our Report-a-Problem tool to add or upvote an existing issue in our Developer Community. To make a suggestion or request to our engineering team, go to our UserVoice page.
Daniel Jacobson, Program Manager for Visual Studio @pmatmic Daniel is a Program Manager on Visual Studio focused on tools for Universal Windows Platform developers and NuGet. He found his passion in software development after graduating with an M.S. in Mechanical Engineering from Case Western Reserve University in 2014.
Visual Studio 2017 Poster
Since we launched Visual Studio 2017 last week, hundreds of thousands of customers like you have already installed and started using Visual Studio 2017, and we’re excited to see what you create. Visual Studio 2017 contains so many new features, from productivity enhancements to new C++ capabilities. Start using our ASP.NET Core tooling along with Live Unit Testing, and explore C# 7 and our new database DevOps features that help improve your productivity.
We thought it would be fun to put together a poster that shows off some of our favorite parts of Visual Studio 2017. You’ll also see some helpful tips and tricks to improve your productivity!
Go ahead and download this poster as a PDF, hang it up in your team room or wherever you code and tweet us a picture at @VisualStudio with the caption #MyVSStory. We also have a web infographic version, and if you just need a cheat sheet of handy keyboard shortcuts, you can just download that page for easy printing.
A quick reminder – we’d love your feedback on Visual Studio 2017. If you run into an issue, you can Report-a-Problem directly from within Visual Studio to our Developer Community. We are always listening so you can also submit suggestions or feature ideas to be considered on our UserVoice.
Tim Sneath, Principal Lead Program Manager, Visual Studio
Tim leads a team focused on Visual Studio acquisition and extensibility. His mission is to see developers create stunning applications built on the Microsoft platform, and to persuade his mother that her computer is not an enemy. Amongst other strange obsessions, Tim collects vintage releases of Windows, and has a near-complete set of shrink-wrapped copies that date back to the late 80s.
Publishing a .NET class library as a Function App
With our new Azure Functions runtime support for precompiled functions, you can now use familiar Visual Studio features such as IntelliSense and unit testing. These features were among the top requests after we launched the preview Visual Studio Tools for Azure Functions. This also enables an improved development workflow for customers who feel that class libraries are a better fit for their application.
Azure Functions now supports publishing a class library as the implementation for one or more functions in a Function App. We’ve also recently made several runtime improvements to make assemblies much more useful, such as shadow-copying managed assemblies to avoid file locking issues and restarting the Functions runtime to pick up newly deployed binaries.
Using Visual Studio 2015 or 2017, you can try precompiled functions today with a few manual tweaks to your project. The manual steps are needed because the existing Functions project type (.funproj) has no build action, since it contains only script files. With the setup below, you’ll instead use a web project which will provide the full development experience of IntelliSense, local debugging, and publishing to Azure.
The next version of Visual Studio tooling for Azure Functions won’t require this manual configuration, as it will be directly based on class libraries. In fact, the Visual Studio experience will be optimized around class libraries, while the Functions portal will continue to use script files.
Performance benefits of class libraries
Aside from the tooling experience, another advantage of using a class library is that you’ll see improvements to “cold start,” particularly when running on a Consumption plan. Cold start occurs on the first request to a Function App after it has gone idle, which happens after 5 minutes of inactivity. When a Function App starts up, it indexes all functions and compiles C# and F# scripts into an assembly in memory. So, compilation time will add to cold start time. Customers aren’t charged for cold start since function execution hasn’t started, but it does cause a delay in event processing.
Use a Web Application project with a customized start action
A Web Application project is really a special kind of class library, which means it produces an executable .dll as the build output. To enable local running and debugging, modify the start action of the web project to launch the Azure Functions CLI.
Here are some sample projects set up this way:
- HttpTrigger sample
- CoderCards – trading card generator with blob trigger
Project setup – Visual Studio 2015 or 2017
- From the New Project dialog, choose ASP.NET Web Application (.NET Framework) with the empty site template.
- Open the project properties. In the Web tab, choose Start External Program.
- For the program path, enter the path to func.exe for the Azure Functions CLI.
  - If you’ve installed the Azure Functions CLI through NPM, the path will be something like C:\Users\USERNAME\AppData\Roaming\npm\node_modules\azure-functions-cli\bin\func.exe
  - OR, if you’ve installed the Visual Studio Functions Tooling, the path will look something like C:\Users\USERNAME\AppData\Local\Azure.Functions.Cli\1.0.0-beta.93\func.exe
- For Command line arguments, set host start
- For Working directory, specify the root of the web project on your machine (unfortunately, this isn’t set automatically).
Here’s a YouTube video that walks through these steps.
Downloading existing function code
If you’ve already developed your functions in the Azure Portal, you can download your code and settings with the following steps.
- Install the Azure Functions CLI from npm.
- If you’ve installed the Visual Studio Tools for Azure Functions, just add func.exe to your path from %USERPROFILE%\AppData\Local\Azure.Functions.Cli\1.0.0-beta.93 (or the latest version on your machine).
- Go to the Kudu console for your Function App in Function App Settings -> Kudu. Navigate to site and click on the download icon to the left of wwwroot. Or, from an authenticated session, go to https://your-function-app.scm.azurewebsites.net/api/zip/site/wwwroot/.
- Unzip the file wwwroot.zip on your local machine. From that directory, run the following:

func azure login (optional)
func azure account list (optional)
func azure account set
func azure functionapp list
func azure functionapp fetch-app-settings [name]
This will create a local file called appsettings.json. These settings are only used locally by the Functions CLI. Since this file contains secrets, be sure not to check this file in to source control! (The Azure Functions CLI adds appsettings.json to .gitignore for you.)
Project content
- Copy your downloaded files to the web project folder (including appsettings.json). Include the script files and function.json in the project.
- F5 should now work and successfully attach a debugger. But, you’re still using script files.
Converting to class files
To convert script files to C# class files, most changes are straightforward. The one manual step is to modify function.json to point to a class library instead of a script file (step #7 below). The next version of the Visual Studio tooling will eliminate this manual step.
- Rename .csx to .cs and add a class declaration and optional namespace declaration. Remove #load and replace #r with assembly references.
- If you’re using TraceWriter, add the NuGet package Microsoft.Azure.WebJobs and the statement:
using Microsoft.Azure.WebJobs.Host
- If you’re using timer triggers, add the NuGet package Microsoft.Azure.WebJobs.Extensions.
- Add NuGet package references for the packages that are automatically referenced, such as Newtonsoft.Json and Microsoft.ServiceBus.
- Add explicit namespace import statements if you’re using any of the automatically imported namespaces, such as System.Threading.Tasks.
- Add any other NuGet packages specified in project.json using NuGet package manager.
- For each function, modify function.json and specify the scriptFile and entryPoint properties. scriptFile is a path to the assembly and entryPoint is the qualified name of the method. Make sure that function.json is in a directory that matches the function name. See documentation on precompiled functions.

"scriptFile": "..\\bin\\Assembly.dll", "entryPoint": "Namespace.ClassName.Run"
See sample function.json.
Once you compile and run, you should see your class library functions being loaded by the Functions Runtime.
Publish from Visual Studio
You can publish the project to a Function App using the same experience as App Service publish. The project template generates web.config, but publishing this file has no effect (it is not used by the Functions runtime). Each web project maps to one Function App. If you need to publish functions independently of one another, we recommend that you use separate Function Apps.
You can use the Azure portal to run precompiled functions and view execution logs. To make code changes, you should re-publish from Visual Studio.
Publish from Azure Functions CLI
The Azure Functions CLI also provides a publish command, via the following:
func azure login
func azure functionapp publish [name]
Building on the server using continuous integration and deployment
There’s another big advantage to using a web project—continuous integration with build on the server just works! Just follow the same steps as continuous deployment for Azure Functions. Your project will be built whenever there is a sync from source control. This will only work if you’re using a Web Application project, but not if you’re using a Functions Project (.funproj).
To see this in action, just fork the HttpTrigger sample project and set up GitHub as your continuous integration source.
Provide feedback
Ask questions in the comments section below or reach out to me on Twitter @lindydonna. For general feedback and questions about Azure Functions:
- Ask product questions on the Azure Functions MSDN forum and StackOverflow, where you’ll get answers directly from the engineering team.
- Submit feature requests on feedback.azure.com or the Azure Functions GitHub repo.
- Reach out to us on Twitter via @Azure with the hashtag #AzureFunctions.
Google reduces JPEG file size by 35%
Google has developed and open-sourced a new JPEG algorithm that reduces file size by about 35 percent—or alternatively, image quality can be significantly improved while keeping file size constant. Importantly, and unlike some of its other efforts in image compression (WebP, WebM), Google's new JPEGs are completely compatible with existing browsers, photo editing apps, and the JPEG standard.
The new JPEG encoder is called Guetzli, which is Swiss German for cookie (the project was led by Google Research's Zurich office). Don't pay too much attention to the name: after extensive analysis, I can't find anything in the Github repository related to cookies or indeed any other baked good.
There are numerous ways of tweaking JPEG image quality and file size, but Guetzli focuses on the quantization stage of compression. Put simply, quantization is a process that tries to reduce a large amount of disordered data, which is hard to compress, into ordered data, which is very easy to compress. In JPEGs, this process usually reduces gentle colour gradients to single blocks of colour, and often obliterates small details entirely.
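As a rough sketch of the general idea (a toy illustration of JPEG-style quantization, not Guetzli's actual algorithm), each frequency coefficient is divided by an entry from a quantization table and rounded, so small coefficients collapse to zero and the block becomes far easier to compress:

// Toy illustration of quantization: divide each DCT coefficient by a table
// entry and round. Larger table entries discard more detail but compress better.
static int[] Quantize(double[] dctCoefficients, int[] quantTable)
{
    var quantized = new int[dctCoefficients.Length];
    for (int i = 0; i < dctCoefficients.Length; i++)
    {
        // Small coefficients round to 0, producing long runs of zeros.
        quantized[i] = (int)Math.Round(dctCoefficients[i] / quantTable[i]);
    }
    return quantized;
}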
Options for CSS and JS Bundling and Minification with ASP.NET Core
Maria and I were updating the NerdDinner sample app (not done yet, but soon) and were looking at various ways to do bundling and minification of the JS and CSS. There's runtime bundling on ASP.NET 4.x but in recent years web developers have used tools like Grunt or Gulp to orchestrate a client-side build process to squish their assets. The key is to find a balance that gives you easy access to development versions of JS/CSS assets at dev time, while making it "zero work" to put minified stuff into production. Additionally, some devs don't need the Grunt/Gulp/npm overhead while others absolutely do. So how do you find balance? Here's how it works.
I'm in Visual Studio 2017 and I go File | New Project | ASP.NET Core Web App. Bundling isn't on by default but the configuration you need IS included by default. It's just minutes to enable and it's quite nice.
In my Solution Explorer is a "bundleconfig.json" like this:
// Configure bundling and minification for the project.
// More info at https://go.microsoft.com/fwlink/?LinkId=808241
[
{
"outputFileName": "wwwroot/css/site.min.css",
// An array of relative input file paths. Globbing patterns supported
"inputFiles": [
"wwwroot/css/site.css"
]
},
{
"outputFileName": "wwwroot/js/site.min.js",
"inputFiles": [
"wwwroot/js/site.js"
],
// Optionally specify minification options
"minify": {
"enabled": true,
"renameLocals": true
},
// Optionally generate .map file
"sourceMap": false
}
]
Pretty simple. Ins and outs. At the top of the VS editor you'll see this yellow prompt. VS knows you're in a bundleconfig.json and in order to use it effectively in VS you grab a small extension. To be clear, it's NOT required. It just makes it easier. The source is at https://github.com/madskristensen/BundlerMinifier. Skip this UI section if you just want build-time bundling.
If getting a prompt like this bugs you, you can turn all prompting off here:
Look at your Solution Explorer. See under site.css and site.js? There are associated minified versions of those files. They aren't really "under" them. They are next to them on the disk, but this hierarchy is a nice way to see that they are associated, and that one generates the other.
Right click on your project and you'll see this Bundler & Minifier menu:
You can manually update your Bundles with this item as well as see settings and have bundling show up in the Task Runner Explorer.
Build Time Minification
The VSIX (VS extension) gives you the small menu and some UI hooks, but if you want to have your bundles updated at build time (useful if you don't use VS!) then you'll want to add a NuGet package called BuildBundlerMinifier.
You can add this NuGet package SEVERAL ways. Which is awesome.
- Add it from the Manage NuGet Packages menu
- Add it from the command line via "dotnet add package BuildBundlerMinifier"
- Note that this adds it to your csproj without you having to edit it! It's like "nuget install" but adds references to projects! The dotnet CLI is lovely.
- If you have the VSIX installed, just right-click the bundleconfig.json and click "Enable bundle on build..." and you'll get the NuGet package.
Now bundling will run on build...
c:\WebApplication8\WebApplication8>dotnet build
Microsoft (R) Build Engine version 15
Copyright (C) Microsoft Corporation. All rights reserved.
Bundler: Begin processing bundleconfig.json
Bundler: Done processing bundleconfig.json
WebApplication8 -> c:\WebApplication8\bin\Debug\netcoreapp1.1\WebApplication8.dll
Build succeeded.
0 Warning(s)
0 Error(s)
...even from the command line with "dotnet build." It's all integrated.
This is nice for VS Code or users of other editors. Here's how it would work entirely from the command prompt:
$ dotnet new mvc
$ dotnet add package BuildBundlerMinifier
$ dotnet restore
$ dotnet run
Advanced: Using Gulp to handle Bundling/Minifying
If you outgrow this bundler or just like Gulp, you can right click and Convert to Gulp!
Now you'll get a gulpfile.js that uses the bundleconfig.json and you've got full control:
And during the conversion you'll get the npm packages you need to do the work automatically:
I've found this to be a good balance: I can quickly get productive with a project that does bundling without npm/node, but I can easily grow into a larger, more npm/bower/gulp-driven, front-end-developer-friendly app.
© 2017 Scott Hanselman. All rights reserved.
Planet scale aggregates with Azure DocumentDB
We’re excited to announce that we have expanded the SQL grammar in DocumentDB to support aggregate functions with the last service update. Support for aggregates is the most requested feature on the user voice site, so we are thrilled to roll this out to everyone who voted for it.
Azure DocumentDB is a fully managed NoSQL database service built for fast and predictable performance, high availability, elastic scaling, global distribution, and ease of development. DocumentDB provides rich and familiar SQL query capabilities with consistent low latencies on JSON data. These unique benefits make DocumentDB a great fit for web, mobile, gaming, IoT, and many other applications that need seamless scale and global replication.
DocumentDB is truly schema-free. By virtue of its commitment to the JSON data model directly within the database engine, it provides automatic indexing of JSON documents without requiring explicit schema or creation of secondary indexes. DocumentDB supports querying JSON documents using SQL. DocumentDB query is rooted in JavaScript's type system, expression evaluation, and function invocation. This, in turn, provides a natural programming model for relational projections, hierarchical navigation across JSON documents, self joins, spatial queries, and invocation of user defined functions (UDFs) written entirely in JavaScript, among other features. We have now expanded the SQL grammar to include aggregations in addition to these capabilities.
Aggregates for planet scale applications
Whether you’re building a mobile game that needs to calculate statistics based on completed games, designing an IoT platform that triggers actions based on the number of occurrences of a certain event, or building a simple website or paginated API, you need to perform aggregate queries against your operational database. With DocumentDB you can now perform aggregate queries against data of any scale with low latency and predictable performance.
Aggregate support has been rolled out to all DocumentDB production datacenters. You can start running aggregate queries against your existing DocumentDB accounts or provision new DocumentDB accounts via the SDKs, REST API, or the Azure Portal. You must however download the latest version of the SDKs in order to perform cross-partition aggregate queries or use LINQ aggregate operators in .NET.
Aggregates with SQL
DocumentDB supports the SQL aggregate functions COUNT, MIN, MAX, SUM, and AVG. These operators work just like in relational databases, and return the computed value over the documents that match the query. For example, the following query retrieves the number of readings from the device xbox-1001 from DocumentDB:
SELECT VALUE COUNT(1) FROM telemetry T WHERE T.deviceId = "xbox-1001"
(If you’re wondering about the VALUE keyword: all queries return JSON fragments back. By using VALUE, you can get the scalar value of the count, e.g. 100, instead of the JSON document {"$1": 100}.)
We extended aggregate support in a seamless way to work with the existing query grammar and capabilities. For example, the following query returns the average temperature reading among devices within a specific polygon boundary representing a site location (combines aggregation with geospatial proximity searches):
SELECT VALUE AVG(T.temperature ?? 0) FROM telemetry T WHERE ST_WITHIN(T.location, {"type": "polygon", … })
As an elastically scalable NoSQL database, DocumentDB supports storing and querying data of any storage or throughput. Regardless of the size or number of partitions in your collection, you can submit a simple SQL query and DocumentDB handles the routing of the query among data partitions, runs it in parallel against the local indexes within each matched partition, and merges intermediate results to return the final aggregate values. You can perform low latency aggregate queries using DocumentDB.
In the .NET SDK, this can be performed via the CreateDocumentQuery<T> method as shown below:
client.CreateDocumentQuery<int>(
    "/dbs/devicedb/colls/telemetry",
    "SELECT VALUE COUNT(1) FROM telemetry T WHERE T.deviceId = 'xbox-1001'",
    new FeedOptions { MaxDegreeOfParallelism = -1 });
For a complete example, you can take a look at our query samples in Github.
Aggregates with LINQ
With the .NET SDK 1.13.0, you can query for aggregates using LINQ in addition to SQL. The latest SDK supports the operators Count, Sum, Min, Max, Average and their asynchronous equivalents CountAsync, SumAsync, MinAsync, MaxAsync, AverageAsync. For example, the same query shown previously can be written as the following LINQ query:
client.CreateDocumentQuery<DeviceReading>(
        "/dbs/devicedb/colls/telemetry",
        new FeedOptions { MaxDegreeOfParallelism = -1 })
    .Where(r => r.DeviceId == "xbox-1001")
    .CountAsync();
Learn more about DocumentDB’s LINQ support, including how asynchronous pagination is performed during aggregate queries.
Aggregates using the Azure Portal
You can also start running aggregate queries using the Azure Portal right away.
Next Steps
In this blog post, we looked at support for aggregate functions and query in Azure DocumentDB. To get started running queries, create a new DocumentDB account from the Azure Portal.
Stay up-to-date on the latest DocumentDB news and features by following us on Twitter @DocumentDB or reach out to us on the developer forums on Stack Overflow.
Holi 2017: The Festival of Colors (28 photos)
This week Hindus around the world celebrate Holi, the Festival of Colors. Holi is a springtime celebration observed on the last full moon of the lunar month. Revelers traditionally throw bright colored powders at friends and strangers alike as they celebrate the arrival of spring, commemorate Krishna's pranks, and allow each other a momentary freedom—a chance to drop their inhibitions and simply play and dance. Gathered here are images of this year's Holi festival from India, Nepal, Pakistan, and Sri Lanka.
Open Spaces Are Bad for Productivity and Memory
A recent BBC article has reignited the discussion about the best format for workspaces, and how the layout affects culture, productivity, and collaboration.
By Shane Hastie, translated by Nicolas Frankel
New Git Features in Visual Studio 2017
We’ve added new Git features to Visual Studio 2017 that allow you to do more of your end-to-end workflow without leaving the IDE. You can perform a force push to complete a rebase or push an amended commit, easily view the diff for outgoing commits, unset your upstream branch, and continue patch rebase from VS. Additionally, because we moved to git.exe–which allows us to provide the most up-to-date features–we support SSH, respect your config options, and show in Team Explorer exactly what you see in the command line. Learn more about all of our Git features in Visual Studio and check out the Visual Studio release notes for what’s new in Visual Studio 2017.
Push force
When you rebase your branch or amend a commit, you’ll need to force push your changes to the remote branch. In Visual Studio 2017, you can now push --force-with-lease from the IDE. Rather than expose push --force, we’ve built the safer push --force-with-lease option. With --force-with-lease, your force push will only complete if the upstream branch has not been updated by someone else. This option safeguards you from accidentally overwriting someone else’s work.
To enable push --force-with-lease, you will need to enable it in Git Global Settings:
Now, when you initiate a push that will require --force-with-lease (such as a rebase or amend), you’ll be notified and asked to confirm if you want to proceed with push --force-with-lease:
View Commit Diff
When you’re ready to push your commits, it can be helpful to review your changes. With Visual Studio 2017, you can now easily view the diff for your outgoing commits. If you go to the Sync page and choose “View Summary” under Outgoing Commits, you will see the diff.
You can also view the diff between any two commits. On the History page, select two commits then choose “Compare Commits…” in the context menu. In Team Explorer, you will now see the diff between these two commits.
Unset Upstream Branch
In the event that you want to stop tracking an upstream branch, go to the Branches page, right click on a local branch, and unset its upstream:
SSH Support
Visual Studio 2017 supports SSH! In Repository Settings, you can set your remotes to use SSH:
There is a known issue where cloning via SSH on the Connect page does not work. A fix will be available in an update.
As always, please leave your feedback in the comments, on UserVoice, or Report a Problem in the top right of Visual Studio.
New Features in C# 7.0
Here is a description of all the new language features in C# 7.0, which came out last Tuesday as part of the Visual Studio 2017 release.
C# 7.0 adds a number of new features and brings a focus on data consumption, code simplification and performance. Perhaps the biggest features are tuples, which make it easy to have multiple results, and pattern matching which simplifies code that is conditional on the shape of data. But there are many other features big and small. We hope that they all combine to make your code more efficient and clear, and you more happy and productive.
If you are curious about the design process that led to this feature set, you can find design notes, proposals and lots of discussion at the C# language design GitHub site.
If this post feels familiar, it may be because a preliminary version went out last August. In the final version of C# 7.0 a few details have changed, some of them in response to great feedback on that post.
Have fun with C# 7.0, and happy hacking!
Mads Torgersen, C# Language PM
Out variables
In older versions of C#, using out parameters isn’t as fluid as we’d like. Before you can call a method with out parameters you first have to declare variables to pass to it. Since you typically aren’t initializing these variables (they are going to be overwritten by the method after all), you also cannot use var to declare them, but need to specify the full type:
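For illustration, assume a hypothetical GetCoordinates(out int x, out int y) method; the pre-C# 7.0 call site looks like this:

// C# 6 and earlier: declare the variables first, then pass them as out arguments
int x, y;
GetCoordinates(out x, out y);
Console.WriteLine($"({x}, {y})");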
In C# 7.0 we have added out variables; the ability to declare a variable right at the point where it is passed as an out argument:
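With the same hypothetical GetCoordinates method, the call collapses to:

GetCoordinates(out int x, out int y);
Console.WriteLine($"({x}, {y})");   // the subsequent line can use x and y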
Note that the variables are in scope in the enclosing block, so that the subsequent line can use them. Many kinds of statements do not establish their own scope, so out variables declared in them are often introduced into the enclosing scope.
Since the out variables are declared directly as arguments to out parameters, the compiler can usually tell what their type should be (unless there are conflicting overloads), so it is fine to use var instead of a type to declare them:
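Continuing the hypothetical GetCoordinates example:

GetCoordinates(out var x, out var y);
Console.WriteLine($"({x}, {y})");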
A common use of out parameters is the Try... pattern, where a boolean return value indicates success, and out parameters carry the results obtained:
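For instance, int.TryParse fits this shape:

if (int.TryParse(input, out int result))
{
    Console.WriteLine(result);
}
else
{
    Console.WriteLine("Could not parse input");
}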
We allow "discards" as out parameters as well, in the form of a _, to let you ignore out parameters you don’t care about:
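For example, if only the x coordinate from the hypothetical GetCoordinates method matters:

GetCoordinates(out int x, out _);   // the y value is discarded
Console.WriteLine(x);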
Pattern matching
C# 7.0 introduces the notion of patterns, which, abstractly speaking, are syntactic elements that can test that a value has a certain "shape", and extract information from the value when it does.
Examples of patterns in C# 7.0 are:
- Constant patterns of the form c (where c is a constant expression in C#), which test that the input is equal to c
- Type patterns of the form T x (where T is a type and x is an identifier), which test that the input has type T, and if so, extracts the value of the input into a fresh variable x of type T
- Var patterns of the form var x (where x is an identifier), which always match, and simply put the value of the input into a fresh variable x with the same type as the input.
This is just the beginning – patterns are a new kind of language element in C#, and we expect to add more of them to C# in the future.
In C# 7.0 we are enhancing two existing language constructs with patterns:
- is expressions can now have a pattern on the right hand side, instead of just a type
- case clauses in switch statements can now match on patterns, not just constant values
In future versions of C# we are likely to add more places where patterns can be used.
Is-expressions with patterns
Here is an example of using is expressions with constant patterns and type patterns:
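A small sketch (the PrintStars method and its argument are just for illustration):

public void PrintStars(object o)
{
    if (o is null) return;      // constant pattern "null"
    if (!(o is int i)) return;  // type pattern "int i" introduces the variable i
    Console.WriteLine(new string('*', i));
}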
As you can see, the pattern variables – the variables introduced by a pattern – are similar to the out variables described earlier, in that they can be declared in the middle of an expression, and can be used within the nearest surrounding scope. Also like out variables, pattern variables are mutable. We often refer to out variables and pattern variables jointly as "expression variables".
Patterns and Try-methods often go well together:
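For example, combining a type pattern with int.TryParse (o is any object, as in the sketch above):

if (o is int i || (o is string s && int.TryParse(s, out i)))
{
    Console.WriteLine(new string('*', i));
}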
Switch statements with patterns
We’re generalizing the switch statement so that:
- You can switch on any type (not just primitive types)
- Patterns can be used in case clauses
- Case clauses can have additional conditions on them
Here’s a simple example:
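A sketch along those lines, assuming simple Circle and Rectangle classes with the obvious properties, and a shape variable of their common base type:

switch (shape)
{
    case Circle c:
        Console.WriteLine($"circle with radius {c.Radius}");
        break;
    case Rectangle s when (s.Length == s.Height):
        Console.WriteLine($"{s.Length} x {s.Height} square");
        break;
    case Rectangle r:
        Console.WriteLine($"{r.Length} x {r.Height} rectangle");
        break;
    default:
        Console.WriteLine("<unknown shape>");
        break;
    case null:
        throw new ArgumentNullException(nameof(shape));
}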
There are several things to note about this newly extended switch statement:
- The order of case clauses now matters: Just like catch clauses, the case clauses are no longer necessarily disjoint, and the first one that matches gets picked. It’s therefore important that the square case comes before the rectangle case above. Also, just like with catch clauses, the compiler will help you by flagging obvious cases that can never be reached. Before this you couldn’t ever tell the order of evaluation, so this is not a breaking change of behavior.
- The default clause is always evaluated last: Even though the null case above comes last, it will be checked before the default clause is picked. This is for compatibility with existing switch semantics. However, good practice would usually have you put the default clause at the end.
The null clause at the end is not unreachable: This is because type patterns follow the example of the current
is
expression and do not match null. This ensures that null values aren’t accidentally snapped up by whichever type pattern happens to come first; you have to be more explicit about how to handle them (or leave them for the default clause).
Pattern variables introduced by a case ...: label are in scope only in the corresponding switch section.
Tuples
It is common to want to return more than one value from a method. The options available in older versions of C# are less than optimal:
- Out parameters: Use is clunky (even with the improvements described above), and they don’t work with async methods.
- System.Tuple<...> return types: Verbose to use and require an allocation of a tuple object.
- Custom-built transport type for every method: A lot of code overhead for a type whose purpose is just to temporarily group a few values.
- Anonymous types returned through a dynamic return type: High performance overhead and no static type checking.
To do better at this, C# 7.0 adds tuple types and tuple literals:
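A sketch of a method returning three values (LookupName and the hard-coded strings are illustrative stand-ins for a real data lookup):

(string, string, string) LookupName(long id)                // tuple return type
{
    string first = "John", middle = "Q", last = "Public";   // stand-in for data retrieval
    return (first, middle, last);                           // tuple literal
}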
The method now effectively returns three strings, wrapped up as elements in a tuple value.
The caller of the method will receive a tuple, and can access the elements individually:
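Continuing the LookupName sketch:

var names = LookupName(42);
Console.WriteLine($"found {names.Item1} {names.Item3}.");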
Item1 etc. are the default names for tuple elements, and can always be used. But they aren’t very descriptive, so you can optionally add better ones:
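For example, the return type of the LookupName sketch can name its elements:

(string first, string middle, string last) LookupName(long id)   // tuple elements have names
{
    string first = "John", middle = "Q", last = "Public";
    return (first, middle, last);
}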
Now the recipient of that tuple has more descriptive names to work with:
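For instance:

var names = LookupName(42);
Console.WriteLine($"found {names.first} {names.last}.");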
You can also specify element names directly in tuple literals:
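For instance, the return statement in the sketch above could name the elements itself:

return (first: first, middle: middle, last: last);   // named tuple literal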
Generally you can assign tuple types to each other regardless of the names: as long as the individual elements are assignable, tuple types convert freely to other tuple types.
Tuples are value types, and their elements are simply public, mutable fields. They have value equality, meaning that two tuples are equal (and have the same hash code) if all their elements are pairwise equal (and have the same hash code).
This makes tuples useful for many other situations beyond multiple return values. For instance, if you need a dictionary with multiple keys, use a tuple as your key and everything works out right. If you need a list with multiple values at each position, use a tuple, and searching the list etc. will work correctly.
Tuples rely on a family of underlying generic struct types called ValueTuple<...>. If you target a Framework that doesn’t yet include those types, you can instead pick them up from NuGet:
- Right-click the project in the Solution Explorer and select "Manage NuGet Packages…"
- Select the "Browse" tab and select "nuget.org" as the "Package source"
- Search for "System.ValueTuple" and install it.
Deconstruction
Another way to consume tuples is to deconstruct them. A deconstructing declaration is a syntax for splitting a tuple (or other value) into its parts and assigning those parts individually to fresh variables:
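Using the LookupName sketch from above:

(string first, string middle, string last) = LookupName(42);   // deconstructing declaration
Console.WriteLine($"found {first} {last}.");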
In a deconstructing declaration you can use var for the individual variables declared, or even put a single var outside of the parentheses as an abbreviation:
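For example (two equivalent spellings, shown with different variable names so both lines compile side by side):

(var first, var middle, var last) = LookupName(42);    // var for each variable
var (first2, middle2, last2) = LookupName(43);         // a single var outside the parentheses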
You can also deconstruct into existing variables with a deconstructing assignment:
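For example:

string first = "", middle = "", last = "";
(first, middle, last) = LookupName(42);   // deconstructing assignment into existing variables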
Deconstruction is not just for tuples. Any type can be deconstructed, as long as it has an (instance or extension) deconstructor method of the form:
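The general shape is a Deconstruct method whose out parameters produce the individual values; a small illustrative Point class:

class Point
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y) { X = x; Y = y; }

    // The deconstructor: one out parameter per value produced
    public void Deconstruct(out int x, out int y) { x = X; y = Y; }
}

(var myX, var myY) = new Point(1, 2);   // calls Deconstruct(out myX, out myY)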
The out parameters constitute the values that result from the deconstruction.
(Why does it use out parameters instead of returning a tuple? That is so that you can have multiple overloads for different numbers of values).
It will be a common pattern to have constructors and deconstructors be "symmetric" in this way.
Just as for out variables, we allow "discards" in deconstruction, for things that you don’t care about:
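For example, keeping only the last element of the LookupName sketch:

(_, _, var last) = LookupName(42);   // only the last name is wanted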
Local functions
Sometimes a helper function only makes sense inside of a single method that uses it. You can now declare such functions inside other function bodies as a local function:
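A sketch of the idea, computing Fibonacci numbers with a nested helper:

public int Fibonacci(int x)
{
    if (x < 0) throw new ArgumentException("Less negativity please!", nameof(x));
    return Fib(x).current;

    // Local function: visible only inside Fibonacci
    (int current, int previous) Fib(int i)
    {
        if (i == 0) return (1, 0);
        var (p, pp) = Fib(i - 1);
        return (p + pp, p);
    }
}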
Parameters and local variables from the enclosing scope are available inside of a local function, just as they are in lambda expressions.
As an example, methods implemented as iterators commonly need a non-iterator wrapper method for eagerly checking the arguments at the time of the call. (The iterator itself doesn’t start running until MoveNext is called). Local functions are perfect for this scenario:
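A sketch of that pattern (requires using System and System.Collections.Generic):

public IEnumerable<T> Filter<T>(IEnumerable<T> source, Func<T, bool> filter)
{
    if (source == null) throw new ArgumentNullException(nameof(source));
    if (filter == null) throw new ArgumentNullException(nameof(filter));

    return Iterator();

    IEnumerable<T> Iterator()
    {
        foreach (var element in source)
        {
            if (filter(element)) { yield return element; }
        }
    }
}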
If Iterator had been a private method next to Filter, it would have been available for other members to accidentally use directly (without argument checking). Also, it would have needed to take all the same arguments as Filter instead of having them just be in scope.
Literal improvements
C# 7.0 allows _ to occur as a digit separator inside number literals:
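For example:

var million = 1_000_000;
var hexBytes = 0xAB_CD_EF;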
You can put them wherever you want between digits, to improve readability. They have no effect on the value.
Also, C# 7.0 introduces binary literals, so that you can specify bit patterns directly instead of having to know hexadecimal notation by heart.
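For example (digit separators work here too):

var mask = 0b1010_1011_1100_1101;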
Ref returns and locals
Just like you can pass things by reference (with the ref modifier) in C#, you can now return them by reference, and also store them by reference in local variables.
This is useful for passing around placeholders into big data structures. For instance, a game might hold its data in a big preallocated array of structs (to avoid garbage collection pauses). Methods can now return a reference directly to such a struct, through which the caller can read and modify it.
There are some restrictions to ensure that this is safe:
- You can only return refs that are "safe to return": Ones that were passed to you, and ones that point into fields in objects.
- Ref locals are initialized to a certain storage location, and cannot be mutated to point to another.
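As an illustration, a sketch of returning a reference into an array and mutating the element through it:

public ref int Find(int number, int[] numbers)
{
    for (int i = 0; i < numbers.Length; i++)
    {
        if (numbers[i] == number)
        {
            return ref numbers[i];   // return the storage location, not the value
        }
    }
    throw new IndexOutOfRangeException($"{number} not found");
}

int[] array = { 1, 15, -39, 0, 7, 14, -12 };
ref int place = ref Find(7, array);   // aliases the slot holding 7
place = 9;                            // replaces 7 with 9 in the array itself
Console.WriteLine(array[4]);          // prints 9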
Generalized async return types
Up until now, async methods in C# must either return void, Task or Task<T>. C# 7.0 allows other types to be defined in such a way that they can be returned from an async method.
For instance we now have a ValueTask<T> struct type. It is built to prevent the allocation of a Task<T> object in cases where the result of the async operation is already available at the time of awaiting. For many async scenarios where buffering is involved for example, this can drastically reduce the number of allocations and lead to significant performance gains.
There are many other ways that you can imagine custom "task-like" types being useful. It won’t be straightforward to create them correctly, so we don’t expect most people to roll their own, but it is likely that they will start to show up in frameworks and APIs, and callers can then just return and await them the way they do Tasks today.
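A sketch of the shape such a method might take (the caching fields and LoadCacheAsync are illustrative; ValueTask<T> comes from the System.Threading.Tasks.Extensions NuGet package):

private bool cached;
private int cachedResult;

public async ValueTask<int> CachedFuncAsync()
{
    // When the value is already cached, no Task<int> needs to be allocated.
    return cached ? cachedResult : await LoadCacheAsync();
}

private async Task<int> LoadCacheAsync()
{
    await Task.Delay(100);   // stand-in for real asynchronous work
    cachedResult = 42;
    cached = true;
    return cachedResult;
}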
More expression bodied members
Expression bodied methods, properties etc. are a big hit in C# 6.0, but we didn’t allow them in all kinds of members. C# 7.0 adds accessors, constructors and finalizers to the list of things that can have expression bodies:
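A sketch (requires using System.Collections.Concurrent; the Person class and its id generator are illustrative):

class Person
{
    private static ConcurrentDictionary<int, string> names = new ConcurrentDictionary<int, string>();
    private static int nextId;
    private static int GetId() => nextId++;                // expression-bodied method (C# 6 style)

    private int id = GetId();

    public Person(string name) => names.TryAdd(id, name);  // constructor with an expression body
    ~Person() => names.TryRemove(id, out _);               // finalizer with an expression body

    public string Name
    {
        get => names[id];                                   // expression-bodied getter
        set => names[id] = value;                           // expression-bodied setter
    }
}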
This is an example of a feature that was contributed by the community, not the Microsoft C# compiler team. Yay, open source!
Throw expressions
It is easy to throw an exception in the middle of an expression: just call a method that does it for you! But in C# 7.0 we are directly allowing throw as an expression in certain places:
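A sketch showing throw used in a null-coalescing expression, a conditional expression, and an expression-bodied member (the Person class here is illustrative):

class Person
{
    public string Name { get; }

    public Person(string name) => Name = name ?? throw new ArgumentNullException(nameof(name));

    public string GetFirstName()
    {
        var parts = Name.Split(' ');
        return (parts.Length > 0) ? parts[0] : throw new InvalidOperationException("No name!");
    }

    public string GetLastName() => throw new NotImplementedException();
}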
Password Rules Are Bullshit
Of the many, many, many bad things about passwords, you know what the worst is? Password rules.
If we don't solve the password problem for users in my lifetime I am gonna haunt you from beyond the grave as a ghost pic.twitter.com/Tf9EnwgoZv
— Jeff Atwood (@codinghorror) August 11, 2015
Let this pledge be duly noted on the permanent record of the Internet. I don't know if there's an afterlife, but I'll be finding out soon enough, and I plan to go out mad as hell.
The world is absolutely awash in terrible password rules:
But I don't need to tell you this. The more likely you are to use a truly random password generation tool, like us über-geeks are supposed to, the more likely you have suffered mightily – and daily – under this regime.
Have you seen the classic XKCD about passwords?
We can certainly debate whether "correct horse battery staple" is a viable password strategy or not, but the argument here is mostly that length matters.
No, seriously, it does. I'll go so far as to say your password is too damn short. These days, given the state of cloud computing and GPU password hash cracking, any password of 8 characters or less is perilously close to no password at all.
So then perhaps we have one rule, that passwords must not be short. A long password is much more likely to be secure than a short one … right?
What about this four character password?
✅🐎🔋🖇️
What about this eight character password?
正确马电池订书钉
Or this (hypothetical, but all too real) seven character password?
ش导พิ한✌︎🚖
@codinghorror I'm sorry but your password must contain 1 char each from: Arabic, Chinese, Thai, Korean, Klingon, Wingdings and an emoji
— Finley Creative (@FinleyCreative) March 3, 2016
You may also be surprised, if you paste the above four Unicode emojis into your favorite login dialog (go ahead – try it), to discover that it … isn't in fact four characters.
Oh dear.
"💩".length === 2
Our old pal Unicode strikes again.
As it turns out, even the simple rule that "your password must be of reasonable length" … ain't necessarily so. Particularly if we stop thinking like Ugly ASCII Americans.
And what of those nice, long passwords? Are they always secure?
aaaaaaaaaaaaaaaaaaa
0123456789012345689
passwordpassword
usernamepassword
Of course not, because have you met any users lately?
They consistently ruin every piece of software I've ever written. Yes, yes, I know you, Mr. or Ms. über-geek, know all about the concept of entropy. But expressing your love of entropy as terrible, idiosyncratic password rules …
- must contain uppercase
- must contain lowercase
- must contain a number
- must contain a special character
… is a spectacular failure of imagination in a world of Unicode and Emoji.
As we built Discourse, I discovered that the login dialog was a remarkably complex piece of software, despite its surface simplicity. The primary password rule we used was also the simplest one: length. Since I wrote that, we've already increased our minimum password default length from 8 to 10 characters. And if you happen to be an admin or moderator, we decided the minimum has to be even more, 15 characters.
I also advocated checking passwords against the 100,000 most common passwords. If you look at 10 million passwords from data breaches in 2016, you'll find the top 25 most used passwords are:
123456 123456789 qwerty 12345678 111111 1234567890 1234567 password 123123 987654321 qwertyuiop mynoob
123321 666666 18atcskd2w 7777777 1q2w3e4r 654321 555555 3rjs1la7qe google 1q2w3e4r5t 123qwe zxcvbnm 1q2w3e
Even this data betrays some ASCII-centrism. The numbers are the same in any culture I suppose, but I find it hard to believe the average Chinese person will ever choose the passwords "password", "qwertyuiop", or "mynoob". So this list has to be customizable, localizable.
(One interesting idea is to search for common shorter password matches inside longer passwords, but I think this would cause too many false positives.)
If you examine the data, this also turns into an argument in favor of password length. Note that only 5 of the top 25 passwords are 10 characters, so if we require 10 character passwords, we've already reduced our exposure to the most common passwords by 80%. I saw this originally when I gathered millions and millions of leaked passwords for Discourse research, then filtered the list down to just those passwords reflecting our new minimum requirement of 10 characters or more.
It suddenly became a tiny list. (If you've done similar common password research, please do share your results in the comments.)
I'd like to offer the following common sense advice to my fellow developers:
1. Password rules are bullshit
- They don't work.
- They heavily penalize your ideal audience, people that use real random password generators. Hey guess what, that password randomly didn't have a number or symbol in it. I just double checked my math textbook, and yep, it's possible. I'm pretty sure.
- They frustrate average users, who then become uncooperative and use "creative" workarounds that make their passwords less secure.
- They are often wrong, in the sense that the rules chosen are grossly incomplete and/or insane, per the many shaming links I've shared above.
- Seriously, for the love of God, stop with this arbitrary password rule nonsense already. If you won't take my word for it, read this 2016 NIST password rules recommendation. It's right there, "no composition rules". However, I do see one error, it should have said "no bullshit composition rules".
2. Enforce a minimum Unicode password length
One rule is at least easy to remember, understand, and enforce. This is the proverbial one rule to bring them all, and in the darkness bind them.
- It's simple. Users can count. Most of them, anyway.
- It works. The data shows us it works; just download any common password list of your choice and group by password length.
- The math doesn't lie. All other things being equal, a longer password will be more random – and thus more secure – than a short password.
- Accept that even this one rule isn't inviolate. A minimum password length of 6 on a Chinese site might be perfectly reasonable. A 20 character password can be ridiculously insecure.
- If you don't allow (almost) every single unicode character in the password input field, you are probably doing it wrong.
- It's a bit of an implementation detail, but make sure maximum password length is reasonable as well.
3. Check for common passwords
As I've already noted, the definition of "common" depends on your audience, and language, but it is a terrible disservice to users when you let them choose passwords that exist in the list of 10k, 100k, or million most common known passwords from data breaches. There's no question that a hacker will submit these common passwords in a hack attempt – and it's shocking how far you can get, even with aggressive password attempt rate limiting, using just the 1,000 most common passwords.
- 1.6% have a password from the top 10 passwords
- 4.4% have a password from the top 100 passwords
- 9.7% have a password from the top 500 passwords
- 13.2% have a password from the top 1,000 passwords
- 30% have a password from the top 10,000 passwords
Lucky you, there are millions and millions of real breached password lists out there to sift through. It is sort of fun to do data forensics, because these aren't hypothetical synthetic Jack the Ripper password rules some bored programmer dreamed up, these are real passwords used by real users.
Do the research. Collect the data. Protect your users from themselves.
4. Check for basic entropy
No need to get fancy here; pick the measure of entropy that satisfies you deep in the truthiness of your gut. But remember you have to be able to explain it to users when they fail the check, too.
I had a bit of a sad when I realized that we were perfectly fine with users selecting a 10 character password that was literally "aaaaaaaaaa". In my opinion, the simplest way to do this is to ensure that there are at least (x) unique characters out of (y) total characters. And that's what we do as of the current beta version of Discourse. But I'd love your ideas in the comments, too. The simpler and clearer the better!
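A minimal sketch of that kind of check (the threshold of 5 is arbitrary; requires using System.Linq):

// Reject passwords that don't contain at least minUnique distinct characters.
static bool HasEnoughUniqueCharacters(string password, int minUnique = 5)
{
    return password.Distinct().Count() >= minUnique;
}

// HasEnoughUniqueCharacters("aaaaaaaaaa")                   -> false
// HasEnoughUniqueCharacters("correct horse battery staple") -> true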
5. Check for special case passwords
I'm embarrassed to admit that when building the Discourse login, as I discussed in The God Login, we missed two common cases that you really have to block:
- password equal to username
- password equal to email address
🤦 If you are using Discourse versions earlier than 1.4, I'm so sorry and please upgrade immediately.
Similarly, you might also want to block other special cases like
- password equal to URL or domain of website
- password equal to app name
In short, try to think outside the password input box, like a user would.
🔔 Clarification
A few people have interpreted this post as "all the other password rules are bullshit, except these four I will now list." That's not what I'm trying to say here.
The idea is to focus on the one understandable, simple, practical, works-in-real-life-in-every-situation rule: length. Users can enter (almost) anything, in proper Unicode, provided it's long enough. That's the one rule to bind them all that we need to teach users: length!
Items #3 through #5 are more like genie-special-exception checks, a you can't wish for infinite wishes kind of thing. It doesn't need to be discussed up front because it should be really rare. Yes, you must stop users from having comically bad passwords that equal their username, or aaaaaaaaaaa, or 0123456789, but only as post-entry checks, not as rules that need to be explained in advance.
So TL;DR: one rule. Length. Enter whatever you want, just make sure it's long enough to be a reasonable password.
Announcing Microsoft Azure Storage Explorer 0.8.9
We just released Microsoft Azure Storage Explorer 0.8.9 last week. You can download it from http://storageexplorer.com/.
Recent new features in the past two releases:
- Automatically download the latest version when it is available
- Create, manage, and promote blob snapshots
- Sign in to sovereign clouds like Azure China, Azure Germany, and Azure US Government
- Zoom In, Zoom Out, and Reset Zoom from View menu
Try it out and send us feedback via the links in the bottom-left corner of the app.
The 2017 Sony World Photography Awards (20 photos)
The Sony World Photography Awards, an annual competition hosted by the World Photography Organisation, just announced its shortlist of winners for 2017. This year's contest attracted 227,596 entries from 183 countries. The organizers have again been kind enough to share some of their shortlisted and commended images with us, gathered below. Overall winners are scheduled to be announced on April 20. All captions below come from the photographers.
New Azure Storage JavaScript client library for browsers - Preview
Today we are announcing our newest library: Azure Storage Client Library for JavaScript. The demand for the Azure Storage Client Library for Node.js, as well as your feedback, has encouraged us to work on a browser-compatible JavaScript library to enable web development scenarios with Azure Storage. With that, we are now releasing the preview of Azure Storage JavaScript Client Library for Browsers.
Enables web development scenarios
The JavaScript Client Library for Azure Storage enables many web development scenarios using storage services like Blob, Table, Queue, and File, and is compatible with modern browsers. Examples include a web-based game that stores state information in the Table service, a mobile app that uploads photos to a Blob account, or an entire website backed by dynamic data stored in Azure Storage.
As part of this release, we have also reduced the footprint by packaging each of the service APIs in a separate JavaScript file. For instance, a developer who needs access to Blob storage only needs to require the following scripts:
<script type="text/javascript" src="azure-storage.common.js"></script>
<script type="text/javascript" src="azure-storage.blob.js"></script>
Full service coverage
The new JavaScript Client Library for Browsers supports all the storage features available in the latest REST API version 2016-05-31 since it is built with Browserify using the Azure Storage Client Library for Node.js. All the service features you would find in our Node.js library are supported. You can also use the existing API surface, and the Node.js Reference API documents to build your app!
Built with Browserify
Browsers today don’t support the require method, which is essential in every Node.js application. Hence, including a JavaScript file written for Node.js won’t work in browsers. One of the popular solutions to this problem is Browserify. The Browserify tool bundles your required dependencies in a single JS file for you to use in web applications. It is as simple as installing Browserify and running browserify node.js -o browser.js and you are set. However, we have already done this for you. Simply download the JavaScript Client Library.
Recommended development practices
We highly recommend using SAS tokens to authenticate with Azure Storage, since the JavaScript Client Library exposes whatever credential it uses to the user in the browser. A SAS token with limited scope and lifetime is strongly recommended. In an ideal web application, the backend authenticates users when they log on and then provides a SAS token to the client to authorize access to the Storage account. This removes the need to authenticate using an account key. Check out the Azure Function sample in our GitHub repository that generates a SAS token upon an HTTP POST request.
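For illustration, here is a rough sketch of what that backend step might look like using the azure-storage package for Node.js; the container/blob names and the 15-minute lifetime are placeholder choices, and you should confirm the details against the linked Azure Function sample:

import * as azure from "azure-storage";

// Runs on the server, where the account key is safe to use.
function generateUploadSas(containerName: string, blobName: string): { uri: string; sasToken: string } {
    const blobService = azure.createBlobService(); // reads AZURE_STORAGE_CONNECTION_STRING from the environment
    const startDate = new Date();
    const expiryDate = new Date(startDate.getTime() + 15 * 60 * 1000); // short-lived: 15 minutes

    const sharedAccessPolicy = {
        AccessPolicy: {
            // Limited scope: allow writing this one blob, nothing else.
            Permissions: azure.BlobUtilities.SharedAccessPermissions.WRITE,
            Start: startDate,
            Expiry: expiryDate
        }
    };

    const sasToken = blobService.generateSharedAccessSignature(containerName, blobName, sharedAccessPolicy);
    return { uri: blobService.getUrl(containerName, blobName), sasToken };
}

The returned uri and sasToken are what the browser-side code below plugs into AzureStorage.createBlobServiceWithSas.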
Use of the stream APIs is highly recommended, because the browser sandbox blocks access to the local filesystem. This makes stream-less APIs like getBlobToLocalFile and createBlockBlobFromLocalFile unusable in browsers. See the samples linked below, which use the createBlockBlobFromStream API instead.
Sample usage
Once you have a web app that can generate a limited scope SAS Token, the rest is easy! Download the JavaScript files from the repository on GitHub and include them in your code.
Here is a simple sample that can upload a blob from a given text:
1. Insert the following script tags in your HTML code. Make sure the JavaScript files are located in the same folder.
<script src="azure-storage.common.js"></script/> <script src="azure-storage.blob.js"></script/>
2. Let’s now add a few items to the page to initiate the transfer. Add the following tags inside the BODY tag. Notice that the button calls the uploadBlobFromText method when clicked. We will define this method in the next step.
<input type="text" id="text" name="text" value="Hello World!" /> <button id="upload-button" onclick="uploadBlobFromText()">Upload</button>
3. So far, we have included the client library and added the HTML code to show the user a text input and a button to initiate the transfer. When the user clicks on the upload button, uploadBlobFromText will be called. Let’s define that now:
<script>
function uploadBlobFromText() {
    // your account and SAS information
    var sasKey = "....";
    var blobUri = "http://<accountname>.blob.core.windows.net";
    var blobService = AzureStorage.createBlobServiceWithSas(blobUri, sasKey)
        .withFilter(new AzureStorage.ExponentialRetryPolicyFilter());

    var text = document.getElementById('text');
    var btn = document.getElementById("upload-button");
    blobService.createBlockBlobFromText('mycontainer', 'myblob', text.value, function (error, result, response) {
        if (error) {
            alert('Upload failed; open the browser console for more detailed info.');
            console.log(error);
        } else {
            alert('Upload succeeded!');
        }
    });
}
</script>
Of course, it is not that common to upload blobs from text. See the following samples for uploading from stream as well as a sample for progress tracking.
• JavaScript Sample for Blob
• JavaScript Sample for Queue
• JavaScript Sample for Table
• JavaScript Sample for File
Share
Finally, join our Slack channel to share with us your scenarios, issues, or anything, really. We’ll be there to help!
TDD Harms Architecture
The idea that TDD damages design and architecture is not new. DHH suggested as much several years ago with his notion of Test Induced Design Damage; in which he compares the design he prefers to a design created by Jim Weirich that is “testable”. The argument boils down to separation and indirection. DHH’s concept of good design minimizes these attributes, whereas Weirich’s maximizes them.
I strongly urge you to read DHH’s article, and watch Weirich’s video, and judge for yourself which design you prefer.
Recently I’ve seen the argument resurface on Twitter, though not in reference to DHH’s ideas, but instead in reference to a very old interview between James Coplien and myself. In this case the argument is about using TDD to allow architecture to emerge. As you’ll discover, if you read through that interview, Cope and I agree that architecture does not emerge from TDD. The term I used in that interview was, I believe, “Horse shit.”
Still another common argument is that as the number of tests grows, a single change to the production code can cause hundreds of tests to require corresponding changes. For example, if you add an argument to a method, every test that calls that method must be changed to add the new argument. This is known as The Fragile Test Problem.
A related argument is: The more tests you have, the harder it is to change the production code; because so many tests can break and require repair. Thus, tests make the production code rigid.
What’s behind this?
Is there anything to these concerns? Are they real? Does TDD really damage design and architecture?
There are too many issues to simply disregard. So what’s going on here?
Before I answer that, let’s look at a simple diagram. Which of these two designs is better?
Yes, it’s true, I’ve given you a hint by coloring the left (sinister) side red, and the right (dexter) side green. I hope it is clear that the right hand solution is generally better than the left.
Why? Coupling, of course. In the left solution the users are directly coupled to a multitude of services. Any change to a service, regardless of how trivial, will likely cause many users to require change. So the left side is fragile.
Worse, the left side users act as anchors that impede the ability of the developers to make changes to the services. Developers fear that too many users may be affected by simple changes. So the left side is rigid.
The right side, on the other hand, decouples the users from the services by using an API. What’s more, the services implement the API using inheritance, or some other form of polymorphism. (That is the meaning of the closed triangular arrows – a UMLism.) Thus a large number of changes can be made to the services without affecting either the API or the users. What’s more the users are not an anchor making the services rigid.
The principles at play here are the Open-Closed Principle (OCP) and the Dependency Inversion Principle (DIP).
Note, that the design on the left is the design that DHH was advocating in his article; whereas the design on the right was the topic of Weirich’s exploration. DHH likes the directness of the design on the left. Weirich likes the separation and isolation of the design on the right.
The Critical Substitution
Now, in your mind, I want you to make a simple substitution. Look at that diagram, and substitute the word “TEST” for the word “USER” – and then think.
Yes. That’s right. Tests need to be designed. Principles of design apply to tests just as much as they apply to regular code. Tests are part of the system; and they must be maintained to the same standards as any other part of the system.
One-to-One Correspondence.
If you’ve been following me for any length of time you know that I describe TDD using three laws. These laws force you to write your tests and your production code simultaneously, virtually line by line. One line of test, followed by one line of production code, around, and around and around. If you’ve never seen or experienced this, you might want to watch this video.
Most people who are new to TDD, and the three laws, end up writing tests that look like the diagram on the left. They create a kind of one-to-one correspondence between the production code and the test code. For example, they may create a test class for every production code class. They may create test methods for every production code method.
Of course this makes sense, at first. After all, the goal of any test suite is to test the elements of the system. Why wouldn’t you create tests that had a one-to-one correspondence with those elements? Why wouldn’t you create a test class for each class, and a set of test methods for each method? Wouldn’t that be the correct solution?
And, indeed, most of the books, articles, and demonstrations of TDD show precisely that approach. They show tests that have a strong structural correlation to the system being tested. So, of course, developers trying to adopt TDD will follow that advice.
The problem is – and I want you to think carefully about this next statement – a one-to-one correspondence implies extremely tight coupling.
Think of it! If the structure of the tests follows the structure of the production code, then the tests are inextricably coupled to the production code – and they follow the sinister red picture on the left!
FitNesse
It, frankly, took me many years to realize this. If you look at the structure of FitNesse, which we began writing in 2001, you will see a strong one-to-one correspondence between the test classes and the production code classes. Indeed, I used to tout this as an advantage because I could find every unit test by simply putting the word “Test” after the class that was being tested.
And, of course, we experienced some of the problems that you would expect with such a sinister design. We had fragile tests. We had structures made rigid by the tests. We felt the pain of TDD. And, after several years, we started to understand that the cause of that pain was that we were not designing our tests to be decoupled.
If you look at part of FitNesse written after 2008 or so, you’ll see that there is a significant drop in the one-to-one correspondence. The tests and code look more like the green design on the right.
Emergence.
The idea that the high level design and architecture of a system emerge from TDD is, frankly, absurd. Before you begin to code any software project, you need to have some architectural vision in place. TDD will not, and can not, provide this vision. That is not TDD’s role.
However, this does not mean that designs do not emerge from TDD – they do; just not at the highest levels. The designs that emerge from TDD are one or two steps above the code; and they are intimately connected to the code, and to the red-green-refactor cycle.
It works like this: As some programmers begin to develop a new class or module, they start by writing simple tests that describe the most degenerate behaviors. These tests check the absurdities, such as what the system does when no input is provided. The production code that solves these tests is trivial, and gradually grows as more and more tests are added.
At some point, relatively early in the process, the programmers look at the production code and decide that the structure is a bit messy. So the programmers extract a few methods, rename a few others, and generally clean things up. This activity will have little or no effect on the tests. The tests are still testing all that code, regardless of the fact that the structure of that code is changing.
This process continues. As tests of ever greater complexity and constraint are added to the suite, the production code continues to grow in response. From time to time, relatively frequently, the programmers clean that production code up. They may extract new classes. They may even pull out new modules. And yet the tests remain unchanged. The tests still cover the production code; but they no longer have a similar structure.
And so, to bridge the different structure between the tests and the production code, an API emerges. This API serves to allow the two streams of code to evolve in very different directions, responding to the opposing forces that press upon tests and production code.
Forces in Opposition
I said, above, that the tests remain unchanged during the process. This isn’t actually true. The tests are also refactored by the programmers on a fairly frequent basis. But the direction of the refactoring is very different from the direction that the production code is refactored. The difference can be summarized by this simple statement:
As the tests get more specific, the production code gets more generic.
This is (to me) one of the most important revelations about TDD in the last 16 years. These two streams of code evolve in opposite directions. Programmers refactor tests to become more and more concrete and specific. They refactor the production code to become more and more abstract and general.
Indeed, this is why TDD works. This is why designs can emerge from TDD. This is why algorithms can be derived by TDD. These things happen as a direct result of programmers pushing the tests and production code in opposite directions.
Of course designs emerge, if you are using design principles to push the production code to be more and more generic. Of course APIs emerge if you are pulling these two streams of communicating code towards opposite extremes of specificity and generality. Of course algorithms can be derived if the tests grow ever more constraining while the production code grows ever more general.
And, of course, highly specific code cannot have a one-to-one correspondence with highly generic code.
Conclusion
What makes TDD work? You do. Following the three laws provides no guarantee. The three laws are a discipline, not a solution. It is you, the programmer, who makes TDD work. And you make it work by understanding that tests are part of the system, that tests must be designed, and that test code evolves towards ever greater specificity, while production code evolves towards ever greater generality.
Can TDD harm your design and architecture? Yes! If you don’t employ design principles to evolve your production code, if you don’t evolve the tests and code in opposite directions, if you don’t treat the tests as part of your system, if you don’t think about decoupling, separation and isolation, you will damage your design and architecture – TDD or no TDD.
You see, it is not TDD that creates bad designs. It is not TDD that creates good designs. It’s you. TDD is a discipline. It’s a way to organize your work. It’s a way to ensure test coverage. It is a way to ensure appropriate generality in response to specificity.
TDD is important. TDD works. TDD is a professional discipline that all programmers should learn and practice. But it is not TDD that causes good or bad designs. You do that.
It is only programmers, not TDD, who can do harm to designs and architectures.
Necessary Comments
It is well known that I prefer code that has few comments. I code by the principle that good code does not require many comments. Indeed, I have often suggested that every comment represents a failure to make the code self explanatory. I have advised programmers to consider comments as a last resort.
I recently came across that last resort.
I was working with a member of our cleancoders.com team on a particular issue with our site. We were trying to implement a feature that would inform a customer who bought one of our videos what other videos they might want to purchase as well. The feature would find all the videos that had been purchased by others who had bought that video, and would pick the most popular and recommend it to the customer.
The algorithm to do that can take several seconds to run; and we didn’t want the customer to have to wait. So we decided to cache the result and run the function no more often than every N minutes.
Our strategy was to wrap the long-running function in another function that would return the previous result from the cache and, if more than N minutes had passed, would run the algorithm in a separate thread and cache the new result. We called this “choking”.
The long running function was called the “Chokeable Function”, and the wrapping function was called the “Choked Function”. The Choked Function had the same signature and return value as the Chokeable function; but implemented our choking behavior.
Trying to explain this in code is very difficult. So we wrote the following comment at the start of the choking module:
; A Choked function is a way to throttle the execution of a long running
; algorithm -- a so-called "Chokeable Function". The Choked function
; returns the last result from the Chokeable Function; and only allows
; the Chokeable function to be called if more than the Choke-time has
; elapsed since its last invocation.
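The actual module is Clojure, but the behavior that comment describes can be sketched in a few lines of TypeScript (purely illustrative, not the real code; setImmediate stands in for the separate thread used in the original):

// Wrap a long-running "chokeable" function so callers always get the cached
// result immediately; the chokeable function is re-run in the background at
// most once per chokeMs.
function choke<T>(chokeable: () => T, chokeMs: number): () => T | undefined {
    let lastResult: T | undefined; // starts as "nil", as in the timing diagram below
    let lastStart = 0;
    let running = false;

    return () => {
        const now = Date.now();
        if (!running && now - lastStart >= chokeMs) {
            running = true;
            lastStart = now;
            setImmediate(() => {        // run off the caller's path
                lastResult = chokeable();
                running = false;
            });
        }
        return lastResult;
    };
}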
Now imagine the unit tests! At first, as we were writing them, we tried to name those tests with nice long names. But the names kept getting longer and more obscure. Moreover, as we tried to explain the tests to each other, we found that we needed to fall back on diagrams and pictures.
In the end we drew a timing diagram that showed all the conditions that we’d have to deal with in the unit tests.
We realized that nobody else would understand our tests unless they could see that timing diagram as well. So we drew that timing diagram, along with explanatory text, into the comments in the test module.
; Below is the timing diagram for how Choked Functions are tested.
; (See the function-choke module for a description of Choked Functions.)
;
; The first line of the timing diagram represents calls to the choked
; function. Each such call is also the subject of a unit test; so they
; are identified by a test#
;
; In this test, the chokeable function counts the number of times it has
; been invoked, and the choked function returns that counter. The expected
; return value is below the first horizontal line in the timing
; diagram.
; 1 2 3 3.5 4 5 6 Test#
; aaaaa
;---------------------------
; n n 1 1 1 1 2
;---------------------------------------------
;AAAAA AAAAA
; CCCCCCC CCCCCCC
;
; Where: A is the algorithm time. (The chokeable function run time)
; C is the choke time
; n is nil
; 1 is "result-1"
; 2 is "result-2"
The names of the tests then became, simply, test1 and test2, up to test6; referring back to the diagram.
We were pretty pleased with the result; both the code and the comments.
So, this is one of those cases where there was no good way for the code to be self documenting. Had we left the comments out of these modules, we would have lost the intent and the rationale behind what we did.
This doesn’t happen all the time. Indeed, I have found this kind of thing to be relatively rare. But it does happen; and when it does nothing can be more helpful than a well written, well thought through, comment.
What’s brewing in Visual Studio Team Services: March 2017 Digest
This post series provides the latest updates and news for Visual Studio Team Services and is a great way for Azure users to keep up-to-date with new features being released every three weeks. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure. With the rapidly expanding list of features in Team Services, teams can start to leverage it more efficiently for all areas of their Azure workflow, for apps written in any language and deployed to any OS.
Delivery Plans
We are excited to announce the preview of Delivery Plans! Delivery Plans help you drive alignment across teams by overlaying several backlogs onto your delivery schedule (iterations). Tailor plans to include the backlogs, teams, and work items you want to view. 100% interactive plans allow you to make adjustments as you go. Head over to the marketplace to install the new Delivery Plans extension. For more information, see our blog post.
Mobile Work Item Form Preview
We’re releasing a preview of our mobile-friendly work item form for Visual Studio Team Services! This mobile work item form brings an optimized look and feel that’s both modern and useful. See our blog post for more information.
Updated Package Management experience
We’ve updated the Package Management user experience to make it faster, address common user-reported issues, and make room for upcoming package lifecycle features. Learn more about the update, or turn it on using the toggle in the Packages hub.
Release Views in Package Management
We’ve added a new feature to Package Management called release views. Release views represent a subset of package-versions in your feed that you’ve promoted into that release view. Creating a release view and sharing it with your package’s consumers enables you to control which versions they take a dependency on. This is particularly useful in continuous integration scenarios where you’re frequently publishing updated package versions, but may not want to announce or support each published version.
By default, every feed has two release views: Prerelease and Release.
To promote a package-version into the release view:
- Select the package
- Click the Promote button
- Select the view to promote to and select Promote
Check out the docs to get started.
Build editor preview
We’re offering a preview of a new design aimed at making it easier for you to create and edit build definitions. Click the switch to give it a try.
If you change your mind, you can toggle it off. However, eventually after we feel it’s ready for prime time, the preview editor will replace the current editor. So please give it a try and give us feedback.
The new editor has all the capabilities of the old editor along with several new capabilities and enhancements to existing features:
Search for a template
Search for the template you want and then apply it, or start with an empty process.
Quickly find and add a task right where you want it
Search for the task you want to use, and then after you’ve found it, you can add it after the currently selected task on the left side, or drag and drop it where you want it to go.
You can also drag and drop a task to move it, or drag and drop while holding the Ctrl key to copy the task.
Use process parameters to pass key arguments to your tasks
You can now use process parameters to make it easier for users of your build definition or template to specify the most important bits of data without having to go deep into your tasks.
For more details, see the post about the preview of our new build editor.
Pull Request: Improved support for Team Notifications
Working with pull requests that are assigned to teams is getting a lot easier. When a PR is created or updated, email alerts will now be sent to all members of all teams that are assigned to the PR.
This feature is in preview and requires an account admin to enable it from the Preview features panel (available under the profile menu).
After selecting "for this account" in that panel, switch on the Team expansion for notifications feature.
In a future release, we’ll be adding support for PRs assigned to Azure Active Directory (AAD) groups and teams containing AAD groups.
Pull Request: Actionable comments
In a PR with more than a few comments, it can be hard to keep track of all of the conversations. To help users better manage comments, we’ve simplified the process of resolving items that have been addressed with a number of enhancements:
- In the header for every PR, you’ll now see a count of the comments that have been resolved.
- When a comment has been addressed, you can resolve it with a single click.
- If you have comments to add while you’re resolving, you can reply and resolve in a single gesture.
Automatic GitHub Pull Request Builds
For a while we’ve provided CI builds from your GitHub repo. Now we’re adding a new trigger so you can build your GitHub pull requests automatically. After the build is done, we report back with a comment in your GitHub pull request.
For security, we only build pull requests when both the source and target are within the same repo. We don’t build pull requests from a forked repo.
Extension of the Month: Azure Build and Release Tasks
This extension has really been trending over the last month and it’s not hard to see why. If you’re building and publishing your applications with Microsoft Azure you’ll definitely want to give this 4.5 star rated extension a look. It is a small gold mine of tasks to use in your Build and Release definitions.
- Azure Web App Slots Swap: Swap two deployment slots of an Azure Web App
- Azure Web App Start: Start an Azure Web App, or one of its slots
- Azure Web App Stop: Stop an Azure Web App, or one of its slots
- Azure SQL Execute Query: Execute a SQL query on an Azure SQL Database
- Azure SQL Database Restore: Restore an Azure SQL Database to another Azure SQL Database on the same server using the latest point-in-time backup
- Azure SQL Database Incremental Deployment: Deploy an Azure SQL Database using multiple DACPACs, performing incremental deployments based on the current Data-tier Application version
- AzCopy: Copy blobs across Azure Storage accounts using AzCopy
Go to the Visual Studio Team Services Marketplace and install the extension.
There are many more updates, so I recommend taking a look at the full list of new features in the release notes for January 25th and February 15th.
Happy coding!
Azure Command Line 2.0 now generally available
Back in September, we announced Azure CLI 2.0 Preview. Today, we’re announcing the general availability of the vm, acs, storage and network commands in Azure CLI 2.0. These commands provide a rich interface for a large array of use cases, from disk and extension management to container cluster creation.
Today’s announcement means that customers can now use these commands in production, with full support from Microsoft through both our Azure support channels and GitHub. We don’t expect breaking changes for these commands in new releases of Azure CLI 2.0.
This new version of Azure CLI should feel much more native to developers who are familiar with command-line experiences in the bash environment on Linux and macOS, with simple commands that have smart defaults for most common operations and that support tab completion and pipeable output for interacting with other text-parsing tools like grep, cut, jq, and the popular JMESpath query syntax. It’s easy to install on the platform of your choice and easy to learn.
During the preview period, we’ve received valuable feedback from early adopters and have added new features based on that input. The number of Azure services supported in Azure CLI 2.0 has grown and we now have command modules for sql, documentdb, redis, and many other services on Azure. We also have new features to make working with Azure CLI 2.0 more productive. For example, we’ve added the "--wait" and "--no-wait" capabilities that enable users to respond to external conditions or continue the script without waiting for a response.
We’re also very excited about some new features in Azure CLI 2.0, particularly the combination of Bash and CLI commands, and support for new platform features like Azure Managed Disks.
Here’s how to get started using Azure CLI 2.0.
Installing the Azure CLI
The CLI runs on Mac, Linux, and of course, Windows. Get started now by installing the CLI on whatever platform you use. Also, review our documentation and samples for full details on getting started with the CLI, and how to access to services provided via Azure using the CLI in scripts.
Here’s an example of the features included with the "vm command":
Working with the Azure CLI
Accessing Azure and starting one or more VMs is easy. Here are two lines of code that will create a resource group (a way to group and manage Azure resources) and a Linux VM using Azure’s latest Ubuntu VM image in the westus2 region of Azure.
az group create -n MyResourceGroup -l westus2
az vm create -g MyResourceGroup -n MyLinuxVM --image ubuntults
Using the public IP address for the VM (which you get in the output of the vm create command or can look up separately using "az vm list-ip-addresses" command), connect directly to your VM from the command line:
ssh <public ip address>
For Windows VMs on Azure, you can connect using remote desktop ("mstsc <public ip address>" from Windows desktops).
The "create vm" command is a long running operation, and it may take some time for the VM to be created, deployed, and be available for use on Azure. In most automation scripting cases, waiting for this command to complete before running the next command may be fine, as the result of this command may be used in next command. However, in other cases, you may want to continue using other commands while a previous one is still running and waiting for the results from the server. Azure CLI 2.0 now supports a new "--no-wait" option for such scenarios.
az vm create -n MyLinuxVM2 -g MyResourceGroup --image UbuntuLTS --no-wait
As with Resource Groups and Virtual Machines, you can use the Azure CLI 2.0 to create other resource types in Azure using the "az <resource type name> create" naming pattern.
For example, you can create managed resources on Azure like WebApps within Azure AppServices:
# Create an Azure AppService plan that we can use to host multiple web apps
az appservice plan create -n MyAppServicePlan -g MyResourceGroup
# Create two web apps within the app service plan (note: the name param must be a unique DNS entry)
az appservice web create -n MyWebApp43432 -g MyResourceGroup --plan MyAppServicePlan
az appservice web create -n MyWebApp43433 -g MyResourceGroup --plan MyAppServicePlan
Read the CLI 2.0 reference docs to learn more about the create command options for various Azure resource types. The Azure CLI 2.0 lets you list your Azure resources and provides different output formats.
--output | Description
json | JSON string. json is the default. Best for integrating with query tools, etc.
jsonc | Colorized JSON string.
table | Table with column headings. Only shows a curated list of common properties for the selected resource type, in human-readable form.
tsv | Tab-separated values with no headers. Optimized for piping to other text-processing commands and tools like grep, awk, etc.
You can use the "--query" option with the list command to find specific resources, and to customize the properties that you want to see in the output. Here are a few examples:
# list all VMs in a given Resource Group
az vm list -g MyResourceGroup --output table
# list all VMs in a Resource Group whose name contains the string 'My'
az vm list --query "[?contains(resourceGroup,'My')]" --output tsv
# same as above, but only show the 'name' and 'osType' properties instead of all default properties for the selected VMs
az vm list --query "[?contains(resourceGroup,'My')].{name:name, osType:storageProfile.osDisk.osType}" --output table
Azure CLI 2.0 supports management operations against SQL Server on Azure. You can use it to create servers, databases, data warehouses, and other data sources; and to show usage, manage administrative logins, and run other management operations.
# Create a new SQL Server on Azure
az sql server create -n MySqlServer -g MyResourceGroup --administrator-login <admin login> --administrator-login-password <admin password> -l westus2
# Create a new SQL Server database
az sql db create -n MySqlDB -g MyResourceGroup --server-name MySqlServer -l westus2
# List available SQL databases on the server within a Resource Group
az sql db list -g MyResourceGroup --server-name MySqlServer
Scripting with the new Azure CLI 2.0 features
The new ability to combine Bash and Azure CLI 2.0 commands in the same script can be a big time saver, especially if you’re already familiar with Linux command-line tools like grep, cut, jq and JMESpath queries.
Let’s start with a simple example that stops a VM in a resource group using a VM’s resource ID (or multiple IDs by spaces):
az vm stop --ids '<one or more ids>'
You can also stop a VM in a resource group using the VM’s name. Here’s how to stop the VM we created above:
az vm stop -g resourceGroup -n simpleVM
For a more complicated use case, let’s imagine we have a large number of VMs in a resource group, running Windows and Linux. To stop all running Linux VMs in that resource group, we can use a JMESpath query, like this:
os="Linux" rg="resourceGroup" ps="VM running" rvq="[].{resourceGroup: resourceGroup, osType: storageProfile.osDisk.osType, powerState: powerState, id:id}| [?osType=='$os']|[?resourceGroup=='$rg']| [?powerState=='$ps']|[].id" az vm stop --ids $(az vm list --show-details --query "$rvq" --output tsv)
This script issues an az vm stop command, but only for the VMs returned by the JMESpath query (as defined in the rvq variable). The os, rg, and ps variables supply the values to match: each VM’s resourceGroup property is compared to $rg, its storageProfile.osDisk.osType property to $os, and its powerState to $ps; the ids of all matching VMs are returned (in tsv format) for use by the "az vm stop" command.
Azure Container Services in the CLI
Azure Container Service (ACS) simplifies the creation, configuration, and management of a cluster of virtual machines that are preconfigured to run container applications. You can use Docker images with DC/OS (powered by Apache Mesos), Docker Swarm or Kubernetes for orchestration.
The Azure CLI supports the creation and scaling of ACS clusters via the az acs command. You can discover full documentation for Azure Container Services, as well as a tutorial for deploying an ACS DC/OS cluster with Azure CLI commands.
Scale with Azure Managed Disks using the CLI
Microsoft recently announced the general availability of Azure Managed Disks to simplify the management and scaling of Virtual Machines. You can create a Virtual Machine with an implicit Managed Disk for a specific disk image, and also create managed disks from blob storage or standalone with the az vm disk command. Updates and snapshots are easy as well -- check out what you can do with Managed Disks from the CLI.
Start using Azure CLI 2.0 today!
Whether you are an existing CLI user or starting a new Azure project, it’s easy to get started with the CLI at http://aka.ms/CLI and master the command line with our updated docs and samples. Check out topics like installing and updating the CLI, working with Virtual Machines, creating a complete Linux environment including VMs, Scale Sets, Storage, and network, and deploying Azure Web Apps – and let us know what you think!
Azure CLI 2.0 is open source and on GitHub.
In the next few months, we’ll provide more updates. As ever, we want your ongoing feedback! Customers using the vm, storage and network commands in production can contact Azure Support for any issues, reach out via StackOverflow using the azure-cli tag, or email us directly at azfeedback@microsoft.com.
A Husband Captures His Wife's World (31 photos)
This photo essay features images of Melissa Eich, a speech pathologist in Charlottesville, Virginia, taken by her husband Matt Eich. In Matt’s words, here’s how the essay came about:
"When we met in 2005, I was 19 and Melissa was 18. I was a sophomore studying photojournalism and she was a freshman pursuing early childhood education. She has been my best friend — and muse — ever since.
Fast-forward 11 years: We are now 30 and 29. I work as a freelance photographer and my wife is a speech-language pathologist. We live in a modest townhouse with our two children, ages nine and four.
Like many couples, we part ways each morning with little knowledge of our partner’s day. For The Atlantic’s project, I proposed something close to home. Instead of documenting the work/life balance of a stranger, I wanted to better understand how my wife managed her work/life balance.
On an average day, Melissa is up before the sun, her alarm set for 5:18AM. Some mornings, her alarm does not even go off, she just shoots out of bed in a panic at 5:15AM. I bury myself deeper in the sheets while she dresses in the dark. She cherishes this quiet period; she reads, gathers her thoughts, and caffeinates. By 6:15AM the girls are up, and the morning ritual of getting them dressed and out the door commences.
At work, her job is essentially to trick kids into communicating clearly. Some of the students have articulation issues, while others have more difficult obstacles with expressive language skills. She treats them all with equality and fairness, but sets different goals for each.
When working with students, Melissa becomes animated and full of excitement. When working on paperwork, she is subdued and focused. It is easy to allow our jobs to influence every aspect of our lives and Melissa’s happiness is directly linked to how things went at work that day. Was there a conflict? Was it resolved? Did she experience a breakthrough with a student? Was there fun therapy time, or too much paperwork?
I feel incredibly fortunate to have a partner who loves her job, even when she is challenged by it. As we grow older, I continue to learn new things about her, and how she manages the delicate balance of work and motherhood. I never want to stop learning about her.”
Announcing new Azure Functions capabilities to accelerate development of serverless applications
Ever since the introduction of Azure Functions, we have seen customers build interesting and impactful solutions using it. The serverless architecture, ability to easily integrate with other solutions, streamlined development experience and on-demand scaling enabled by Azure Functions continue to find great use in multiple scenarios.
Today we are happy to announce preview support for some new capabilities that will accelerate development of serverless applications using Azure Functions.
Integration with Serverless Framework
Today we’re announcing preview support for Azure Functions integration with the Serverless Framework. The Serverless Framework is a popular open source tool which simplifies the deployment and monitoring of serverless applications in any cloud. It helps abstract away the details of the serverless resources and lets developers focus on the important part – their applications. This integration is powered by a provider plugin, that now makes Azure Functions a first-class participant in the serverless framework experience. Contributing to this community effort was a very natural choice, given the origin of Azure Functions was in the open-source Azure WebJobs SDK.
You can learn more about the plugin in the Azure Functions Serverless Framework documentation and in the Azure Functions Serverless Framework blog post.
Azure Functions Proxies
Functions provide a fantastic way to quickly express actions that need to be performed in response to some triggers (events). That sounds an awful lot like an API, which is what several customers are already using Functions for. We’re also seeing customers starting to use Functions for microservices architectures, with a need for deployment isolation between individual components.
Today, we are pleased to announce the preview of Azure Functions Proxies, a new capability that makes it easier to develop APIs using Azure Functions. Proxies let you define a single API surface for multiple function apps. Any function app can now define an endpoint that serves as a reverse proxy to another API, be that another function app, an API app, or anything else.
You can learn more about Azure Functions Proxies by going to our documentation page and in the Azure Functions Proxies public preview blog post. The feature is free while in preview, but standard Functions billing applies to proxy executions. See the Azure Functions pricing page for more information.
Integration with PowerApps and Flow
PowerApps and Flow are services that enable business users within an organization to turn their knowledge of business processes into solutions. Without writing any code, users can easily create apps and custom automated workflows that interact with a variety of enterprise data and services. While they can leverage a wide variety of built-in SaaS integrations, users often find the need to incorporate company-specific business processes. Such custom logic has traditionally been built by professional developers, but it is now possible for business users building apps to consume such logic in their workflows.
Azure App Service and Azure Functions are both great for building organizational APIs that express important business logic needed by many apps and activities. We've now extended the API Definition feature of App Service and Azure Functions to include an "Export to PowerApps and Microsoft Flow" gesture. This walks you through all the steps needed to make any API in App Service or Azure Functions available to PowerApps and Flow users. To learn more, see our documentation and read the APIs for PowerApps and Flow blog post.
We are excited to bring these new capabilities into your hands and look forward to hearing from you through our forums, StackOverFlow, or Uservoice.
Announcing TypeScript 2.2
For those who haven’t yet heard of it, TypeScript is a simple extension to JavaScript to add optional types along with all the new ECMAScript features. TypeScript builds on the ECMAScript standard and adds type-checking to make you way more productive through cleaner code and stronger tooling. Your TypeScript code then gets transformed into clean, runnable JavaScript that even older browsers can run.
While there are a variety of ways to get TypeScript set up locally in your project, the easiest way to get started is to try it out on our site or just install it from npm:
npm install -g typescript
If you’re a Visual Studio 2015 user with update 3, you can install TypeScript 2.2 from here. You can also grab this release through NuGet. Support in Visual Studio 2017 will come in a future update.
If you’d rather not wait for TypeScript 2.2 support by default, you can configure Visual Studio Code and our Sublime Text plugin to pick up whatever version you need.
As usual, we’ve written up about new features on our what’s new page, but we’d like to highlight a couple of them.
More quick fixes
One of the areas we focus on in TypeScript is its tooling – tooling can be leveraged in any editor with a plugin system. This is one of the things that makes the TypeScript experience so powerful.
With TypeScript 2.2, we’re bringing even more goodness to your editor. This release introduces some more useful quick fixes (also called code actions) which can guide you in fixing up pesky errors. This includes
- Adding missing imports
- Adding missing properties
- Adding forgotten this. to variables
- Removing unused declarations
- Implementing abstract members
With just a few of these, TypeScript practically writes your code for you.
As you write up your code, TypeScript can give suggestions each step of the way to help out with your errors.
Expect similar features in the future. The TypeScript team is committed to ensuring that the JavaScript and TypeScript community gets the best tooling we can deliver.
With that in mind, we also want to invite the community to take part in this process. We’ve seen that code actions can really delight users, and we’re very open to suggestions, feedback, and contributions in this area.
The object type
The object type is a new type in 2.2 that matches any type except for primitive types. In other words, you can assign anything to the object type except for string, boolean, number, symbol, and, when using strictNullChecks, null and undefined.
object is distinct from the {} and Object types in this respect due to structural compatibility. Because the empty object type ({}) also matches primitives, it couldn’t model APIs like Object.create which truly only expect objects – not primitives. object, on the other hand, does well here in that it can properly reject being assigned a number.
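A quick sketch of the difference (the variable names are just for illustration):

let anything: object;
anything = { name: "widget" };   // OK: object literals are non-primitive
anything = [1, 2, 3];            // OK: arrays are objects
anything = () => 42;             // OK: functions are objects too
// anything = 42;                // error: 'number' is not assignable to 'object'
// anything = "hello";           // error: 'string' is not assignable to 'object'

// By contrast, the empty object type accepts primitives,
// which is why {} couldn't model Object.create:
let loose: {} = 42;              // allowed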
We’d like to extend our thanks to members of our community who proposed and implemented the feature, including François de Campredon and Herrington Darkholme.
Easier string indexing behavior
TypeScript has a concept called index signatures. Index signatures are part of a type, and tell the type system what the result of an element access should be. For instance, in the following:
interface Foo {
    // Here is a string index signature:
    [prop: string]: boolean;
}

declare const x: Foo;
const y = x["hello"];
Foo has a string index signature that says “whenever indexing with a string, the output type is a boolean.” The core idea is that index signatures here are meant to model the way that objects often serve as maps/dictionaries in JavaScript.
Before TypeScript 2.2, writing something like x["propName"] was the only way you could make use of a string index signature to grab a property. A little surprisingly, writing a property access like x.propName wasn’t allowed. This is slightly at odds with the way JavaScript actually works, since x.propName is semantically the same as x["propName"]. There’s a reasonable argument to allow both forms when an index signature is present.
In TypeScript 2.2, we’re doing just that and relaxing the old restriction. What this means is that things like testing properties on a JSON object has become dramatically more ergonomic.
interface Config {
    [prop: string]: boolean;
}

declare const options: Config;

// Used to be an error, now allowed!
if (options.debugMode) {
    // ...
}
Better class support for mixins
We’ve always meant for TypeScript to support the JavaScript patterns you use no matter what style, library, or framework you prefer. Part of meeting that goal involves having TypeScript more deeply understand code as it’s written today. With TypeScript 2.2, we’ve worked to make the language understand the mixin pattern.
We made a few changes that involved loosening some restrictions on classes, as well as adjusting the behavior of how intersection types operate. Together, these adjustments actually allow users to express mixin-style classes in ES2015, where a class can extend anything that constructs some object type. This can be used to bridge ES2015 classes with APIs like Ember.Object.extend.
As an example of such a class, we can write the following:
type Constructable = new (...args: any[]) => object;

function Timestamped<BC extends Constructable>(Base: BC) {
    return class extends Base {
        private _timestamp = new Date();
        get timestamp() {
            return this._timestamp;
        }
    };
}
and dynamically create classes
class Point {
    x: number;
    y: number;
    constructor(x: number, y: number) {
        this.x = x;
        this.y = y;
    }
}

const TimestampedPoint = Timestamped(Point);
and even extend from those classes
class SpecialPoint extends Timestamped(Point) {
    z: number;
    constructor(x: number, y: number, z: number) {
        super(x, y);
        this.z = z;
    }
}

let p = new SpecialPoint(1, 2, 3);
// 'x', 'y', 'z', and 'timestamp' are all valid properties.
let v = p.x + p.y + p.z;
p.timestamp.getMilliseconds();
The react-native JSX emit mode
In addition to the preserve and react options for JSX, TypeScript now introduces the react-native emit mode. This mode is like a combination of the two, in that it emits to .js files (like --jsx react), but leaves JSX syntax alone (like --jsx preserve).
This new mode reflects React Native’s behavior, which expects all input files to be .js files. It’s also useful for cases where you want to just leave your JSX syntax alone but get .js files out from TypeScript.
Support for new.target
With TypeScript 2.2, we’ve implemented ECMAScript’s new.target meta-property. new.target is an ES2015 feature that lets constructors figure out if a subclass is being constructed. This feature can be handy since ES2015 doesn’t allow constructors to access this before calling super().
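For example (a minimal sketch; the class names are just for illustration):

class Widget {
    constructor() {
        // new.target is the constructor that `new` was actually invoked on,
        // so a base class can tell whether a subclass is being constructed.
        if (new.target !== Widget) {
            console.log(`constructed via subclass: ${new.target.name}`);
        }
    }
}

class FancyWidget extends Widget {}

new Widget();      // logs nothing
new FancyWidget(); // logs "constructed via subclass: FancyWidget"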
What’s next?
Our team is always looking forward, and is now hard at work on TypeScript 2.3. While our team’s roadmap should give you an idea of what’s to come, we’re excited for our next release, where we’re looking to deliver
- default types for generics
- async iterator support
- downlevel generator support
Of course, that’s only a preview for now.
We hope TypeScript 2.2 makes you even more productive, and allows you to be even more expressive in your code. Thanks for taking the time to read through, and as always, happy hacking!
What do you mean by “Event-Driven”
Towards the end of last year I attended a workshop with my colleagues in ThoughtWorks to discuss the nature of “event-driven” applications. Over the last few years we've been building lots of systems that make a lot of use of events, and they've been often praised, and often damned. Our North American office organized a summit, and ThoughtWorks senior developers from all over the world showed up to share ideas.
The biggest outcome of the summit was recognizing that when people talk about “events”, they actually mean some quite different things. So we spent a lot of time trying to tease out what some useful patterns might be. This note is a brief summary of the main ones we identified.
What’s new in Azure Active Directory B2C
Over the past few weeks, we have introduced new features in Azure AD B2C, a cloud identity service for app developers. Azure AD B2C handles all your app’s identity management needs, including sign-up, sign-in, profile management and password reset. In this post, you’ll read about these features:
- Single-page app (SPA) support
- Usage reporting APIs
- Friction-free consumer sign-up
Single-page app (SPA) support
A single-page app (SPA) is a web app that loads a single HTML page and dynamically updates the page as the consumer interacts with the app. It is written primarily in JavaScript, typically using a framework like AngularJS or Ember.js. Gmail and Outlook are two popular consumer-facing SPAs.
Since JavaScript code runs in a consumer’s browser, a SPA has different requirements for securing the frontend and calls to backend web APIs, compared to a traditional web app. To support this scenario, Azure AD B2C added the OAuth 2.0 implicit grant flow. Read more about using the OAuth 2.0 implicit grant flow or try out our samples:
- A SPA, implemented with an ASP.NET Web API backend
- A SPA, implemented with a Node.js Web API backend
Both samples use an open-source JavaScript SDK (hello.js). Note that the OAuth 2.0 implicit grant flow support is still in preview.
Usage reporting APIs
A frequent ask from developers is to get access to rich consumer activity reports on their Azure AD B2C tenants. We’ve now made those available to you, programmatically, via REST-based Azure AD reporting APIs. You can easily pipe the data from these reports into business intelligence and analytics tools, such as Microsoft’s Power BI, for detailed analyses. With the current release, 4 activity reports are available:
- tenantUserCount: Total number of consumers in your Azure AD B2C tenant (per day for the last 30 days). You can also get a breakdown by the number of local accounts (password-based accounts) and social accounts (Facebook, Google, etc.).
- b2cAuthenticationCount: Total number of successful authentications (sign-up, sign-in, etc.) within a specified period.
- b2cAuthenticationCountSummary: Daily count on successful authentications for the last 30 days.
- b2cMfaRequestCountSummary: Daily count of multi-factor authentications for the last 30 days.
Get started using the steps outlined in this article.
Friction-free consumer sign-up
By default, Azure AD B2C verifies email addresses provided by consumers during the sign-up process. This is to ensure that valid, and not fake, accounts are in use on your app. However, some developers prefer to skip the upfront email verification step and do it themselves later. This friction-free sign-up experience makes sense for certain app types. We’ve added a way for you to do this on your “Sign-up policies” or “Sign-up or sign-in policies”. Learn more about disabling email verification during consumer sign-up.
Feedback
Keep your great feedback coming on UserVoice or Twitter (@azuread, @swaroop_kmurthy). If you have questions, get help on Stack Overflow (use the ‘azure-active-directory’ tag).
Dear #MongoDB users, we welcome you in #Azure #DocumentDB
First and foremost, security is our priority
Microsoft makes security a priority at every step, from code development to incident response. Azure code development adheres to Microsoft’s Security Development Lifecycle (SDL) - a software development process that helps developers build more secure software and address security compliance requirements while reducing development cost. Azure Security Center makes Azure the only public cloud platform to offer continuous security-health monitoring. Azure is ubiquitous, with a global footprint approaching 40 geographical regions and continuously expanding. With its worldwide presence, one of the differentiated capabilities Azure offers is the ability to easily build, deploy, and manage globally distributed, data-driven applications that are secure.
Azure DocumentDB is Microsoft's multi-tenant, globally distributed database system designed to enable developers to build planet-scale applications. DocumentDB allows you to elastically scale both throughput and storage across any number of geographical regions. The service offers guaranteed low latency at the 99th percentile, 99.99% high availability, predictable throughput, and multiple well-defined consistency models – all backed by comprehensive enterprise-level SLAs. By virtue of its schema-agnostic, write-optimized database engine, DocumentDB by default automatically indexes all the data it ingests and serves SQL, MongoDB, and JavaScript language-integrated queries in a scale-independent manner.
DocumentDB has a number of powerful security features built in. To secure data stored in an Azure DocumentDB database account, DocumentDB provides support for a secret-based authorization model that utilizes a strong hash-based message authentication code (HMAC). In addition to the secret-based authorization model, DocumentDB also supports policy-driven, IP-based access controls for inbound firewall support. This model is very similar to the firewall rules of a traditional database system and provides an additional level of security to the DocumentDB database account. With this model, you can now configure a DocumentDB database account to be accessible only from an approved set of machines and/or cloud services. Once this configuration is applied, all requests originating from machines outside this allowed list will be blocked by the server. Access to DocumentDB resources from these approved sets of machines and services still requires the caller to present a valid authorization token. All communication inside the cluster in DocumentDB (e.g., replication traffic) uses SSL, and all communication from MongoDB clients (or any other clients) to the DocumentDB service is always over SSL. To learn more about securing access to your data in DocumentDB, see Securing Access to DocumentDB Data.
The table below maps current DocumentDB features to the security checklist that MongoDB recommends.
| Checklist Item | Status |
| --- | --- |
| Enable Access Control and Enforce Authentication | Enabled by default. Only discovery/authentication commands such as IsMaster/GetLastError/WhatsMyUri are supported before authentication. |
| Configure Role-Based Access Control | Each database account has its own key, with support for read-only keys to limit access; no default user/account is present (see the sketch after this table). |
| Encrypt Communication | Non-SSL communication is not allowed; all communication with the service is always over SSL. DocumentDB requires TLS 1.2, which is more secure than TLS 1.0 or SSL 3.0. |
| Encrypt and Protect Data | Encryption at rest. |
| Limit Network Exposure | IP filtering. |
| Audit System Activity | All APIs and system activities are audited; we plan to expose these audit logs to customers through the Portal shortly (today we already share them with customers who ask for them). |
| Run MongoDB with a Dedicated User | DocumentDB is a multi-tenant service, so no account has direct access to the core operating system resources. |
| Run MongoDB with Secure Configuration Options | DocumentDB supports only the MongoDB wire protocol and does not enable HTTP/JSONP endpoints. |
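As a small illustration of the role-based access row above, the sketch below uses the pydocumentdb Python SDK to show how a read-only key permits reads while writes are rejected by the service; the endpoint, keys, and resource links are placeholders.

```python
# Minimal sketch, assuming the pydocumentdb package is installed and the
# placeholder endpoint/key/links are replaced with real values from the portal.
import pydocumentdb.document_client as document_client
import pydocumentdb.errors as errors

ENDPOINT = "https://<account>.documents.azure.com:443/"
READ_ONLY_KEY = "<read-only key from the Azure portal>"

client = document_client.DocumentClient(ENDPOINT, {"masterKey": READ_ONLY_KEY})

# Reads succeed with a read-only key...
for db in client.ReadDatabases():
    print(db["id"])

# ...but writes are refused by the service.
try:
    client.CreateDocument("dbs/<db>/colls/<coll>", {"id": "doc1"})
except errors.HTTPFailure as e:
    print("Write blocked with a read-only key:", e.status_code)
```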
The capabilities offered by DocumentDB go beyond the traditional geographical disaster recovery (Geo-DR) offered by "single-site" databases; single-site databases offering Geo-DR are a strict subset of globally distributed databases. With DocumentDB's turnkey global distribution, developers do not have to build their own replication scaffolding by employing either the Lambda pattern over the database log (for example, AWS DynamoDB replication) or "double writes" across multiple regions. We do not recommend these approaches, since it is impossible to ensure their correctness or to provide sound SLAs with them.
DocumentDB gives you policy-based geo-fencing capabilities. Geo-fencing is an important capability that enforces data governance and compliance restrictions, and may prevent a specific region from being associated with your account. Examples of geo-fencing include (but are not restricted to) scoping global distribution to the regions within a sovereign cloud (for example, China or Germany), or within a government taxation boundary (for example, Australia). These policies are controlled using the metadata of your Azure subscription.
For failover, you can specify an exact sequence of regional failovers in case of a multi-regional outage by assigning a priority to each region associated with the database account. DocumentDB ensures that the automatic failover sequence occurs in the priority order you specified.
We are also working on encryption at rest and in motion. Customers will be able to encrypt data in DocumentDB to align with best practices for protecting confidentiality and data integrity. Stay tuned for that.
Second, you don’t have to rewrite your Apps
Moving to DocumentDB doesn’t require you to rewrite your apps or throw away your existing tools. DocumentDB provides protocol support for MongoDB, which means DocumentDB databases can be used as the data store for apps written for MongoDB. This also means that, by using existing MongoDB drivers, applications written for MongoDB can communicate with DocumentDB and use DocumentDB databases instead of MongoDB databases. In many cases, you can switch from MongoDB to DocumentDB by simply changing a connection string (a minimal example follows the list below). Using this functionality, you can easily build and run MongoDB database applications in the Azure cloud, leveraging DocumentDB's fully managed and scalable NoSQL databases while continuing to use familiar MongoDB skills and tools. Furthermore, for the benefit of all users, we support only SSL for MongoDB clients (not HTTP). Other benefits that you get right away (and that you can’t get anywhere else) include:
- No Server Management - DocumentDB is a fully managed service, which means you do not have to manage any infrastructure or Virtual Machines yourself. And DocumentDB is available in all Azure Regions, so your data will be available globally instantly.
- Limitless Scale - You can scale throughput and storage independently and elastically. You can add capacity to serve millions of requests per second with ease.
- Enterprise grade - DocumentDB supports multiple local replicas to deliver 99.99% availability and data protection in the face of both local and regional failures. You automatically get enterprise grade compliance certifications and security features.
- MongoDB Compatibility - DocumentDB's protocol support for MongoDB is designed for compatibility with MongoDB. You can use your existing code, applications, drivers, and tools to work with DocumentDB.
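As mentioned above, switching an existing MongoDB app over is often just a connection-string change. The snippet below is a minimal sketch using pymongo; the account name, password, port, and database/collection names are placeholders, and the real connection string comes from the Azure portal.

```python
from pymongo import MongoClient

# SSL is mandatory for DocumentDB's protocol support for MongoDB,
# so ssl=true stays in the URI.
uri = "mongodb://<account>:<password>@<account>.documents.azure.com:<port>/?ssl=true"

client = MongoClient(uri)
db = client["todo"]                      # hypothetical database name
db["items"].insert_one({"id": "1", "state": "created"})
print(db["items"].find_one({"id": "1"}))
```

Everything else in the app, including the driver itself, stays the same.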
Third, we do it with love…
Modern developers rely on dozens of different technologies to build apps, and that number is constantly expanding. These apps are often mission-critical and demand the best tools and technologies, regardless of vendor. That’s why we work so hard to find elegant, creative, and simple ways to enable our customers to build any app, using any model, with any language (e.g., Node.js, Java, Python, JavaScript, .NET, .NET Core, SQL) against DocumentDB. And that’s why there are thousands of apps built on top of DocumentDB for everything from IoT, advertising, marketing, e-commerce, customer support, and games to power grid surveillance. We are deeply committed to making your experience on DocumentDB simply stellar! We offer a platform that brings everything together to simplify the process of building distributed apps at planet scale. We agonize over the best way to give developers the best experience, making sure our service works seamlessly with the other services in Azure, like Azure Search, Azure Stream Analytics, Power BI, Azure HDInsight, and more. We strive for nearly instantaneous, yet thoughtful, human responses to each inquiry about DocumentDB that you post online. For us, this is not going above and beyond; it’s how we do things. This is who we are.
Welcome to the real planet-scale NoSQL revolution!
We’re thrilled you’re going to be helping us define our NoSQL product (which capabilities to add, which APIs to support, and how to integrate with other products and services) to make our service even better. DocumentDB powers the businesses of banking and capital markets, professional services and discrete manufacturers, startups and health solutions. It is used everywhere in the world, and we’re just getting started. We’ve created something that both customers and developers really love and something we are really proud of! The revolution that is leading thousands of developers to flock to Azure DocumentDB has just started, and it is driven by something much deeper than just our product features. Building a product that allows for significant improvements in how developers build modern applications requires a degree of thoughtfulness, craftsmanship and empathy towards developers and what they are going through. We understand that, because we ourselves are developers.
We want to enable developers to truly transform the world we live in through the apps they build, which is even more important than the individual features we are putting into DocumentDB. Developing applications is hard; developing distributed applications at planet scale that are fast, scalable, elastic, always available, and yet simple is even harder. Yet it is a fundamental prerequisite for reaching people globally in our modern world. We spend countless hours talking to customers every day and adapting DocumentDB to make the experience truly stellar and fluid. The agility, performance, and cost-effectiveness of apps built on top of DocumentDB are not an accident. Even tiny details make big differences.
So what are the next steps you should take? Here are a few that come to mind:
- First, go to the Create a DocumentDB account with protocol support for MongoDB tutorial to create a DocumentDB account.
- Then, follow the Connect to a DocumentDB account with protocol support for MongoDB tutorial to learn how to get your account connection string information.
- Afterwards, take a look at the Use MongoChef with a DocumentDB account with protocol support for MongoDB tutorial to learn how to create a connection between your DocumentDB database and MongoDB app in MongoChef.
- When you feel inspired (and you will be!), explore DocumentDB with protocol support for MongoDB samples.
Sincerely,
@rimmanehme + your friends @DocumentDB
Refugees Fleeing Into Canada From the United States (19 photos)
Reuters photographer Christinne Muschi recently spent time at the end of a small country road in Hemmingford, Quebec, that dead-ends at the U.S.-Canada border, just across from another dead-end road near Champlain, New York. She was photographing refugees, traveling alone or in small groups, who had taken taxis to the end of the road in the U.S., then walked across the border into Canada, into the custody of the RCMP. While the location is not an official border crossing, it is one of several spots that have become informal gateways for an increasing number of refugees choosing to leave the United States. Muschi reports that “in Quebec, 1,280 refugee claimants irregularly entered between April 2016 and January 2017, triple the previous year's total,” and that “the Canada Border Services Agency said in January that 452 people made a refugee claim at Quebec land border crossings.” Canadian advocacy groups say they are preparing for even more asylum-seekers, following increased anti-Muslim rhetoric in the U.S., and public expressions of welcome made by Canadian Prime Minister Justin Trudeau.
Winners of the 2017 World Press Photo Contest (35 photos)
The winning entries of the 60th annual World Press Photo Contest have just been announced. The 2017 Photo of the Year was taken by photographer Burhan Ozbilici as the Russian ambassador to Turkey was assassinated right in front of him. This year, according to organizers, 80,408 photos were submitted for judging, made by 5,034 photographers from 125 different countries. Winners in eight categories were announced, including Contemporary Issues, Daily Life, General News, Long-Term Projects, Nature, People, Sports, and Spot News. World Press Photo has once again been kind enough to allow me to share some of this year’s winning photos here with you.