Shared posts

20 May 06:27

Wizards and warriors, part one

by ericlippert

A common problem I see in object-oriented design is:

  • A wizard is a kind of player.
  • A warrior is a kind of player.
  • A staff is a kind of weapon.
  • A sword is a kind of weapon.
  • A player has a weapon.

But before we get into the details, I just want to point out that I am not really talking about anything specific to the fantasy RPG genre here. Everything in this series applies equally well to Papers and Paychecks, but wizards and warriors are more fun to write about, so there you go.

OK, great, we have five bullet points so let’s write some classes without thinking about it! What could possibly go wrong?

abstract class Weapon { }
sealed class Staff : Weapon { }
sealed class Sword : Weapon { }
abstract class Player 
{ 
  public Weapon Weapon { get; set; }
}
sealed class Wizard : Player { }
sealed class Warrior : Player { }

Designing good class hierarchies is all about capturing the semantics of the business domain in the type system, right? And we’ve done a great job here. If there is behavior common to all players, that goes in the abstract base class. If there is behavior unique to wizards or warriors, that can go in the derived classes. Clearly we’re on track for success here.

Right until we add…

  • A warrior can only use a sword.
  • A wizard can only use a staff.

What an unexpected development!

(As I have often pointed out, foreshadowing is the sign of a quality blog.)

Now what do we do? Readers familiar with type theory will know that the highfalutin name for the problem is that we’re in violation of the Liskov Substitution Principle. But we don’t need to understand the underlying type theory to see what’s going horribly wrong. All we have to do is try to modify the code to support these new criteria.

Attempt #1

abstract class Player 
{ 
  public abstract Weapon Weapon { get; set; }
}
sealed class Wizard : Player
{
  public override Staff Weapon { get; set; }
}

Nope, that’s illegal in C#. An overriding member must match the signature (and return type) of the overridden member.

Attempt #2

abstract class Player 
{ 
  public abstract Weapon Weapon { get; set; }
}
sealed class Wizard : Player
{
  private Staff weapon;
  public override Weapon Weapon 
  {
     get { return weapon; }
     set { weapon = (Staff) value; }
  } 
}

Now we’ve turned violations of the rule into runtime exceptions. This is incredibly error-prone; a caller could easily have a Wizard in hand and assign a Sword to Weapon. The whole point of capturing this thing in the type system is that the violation gets discovered at compile time.
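
To see how easily that runtime failure can be triggered, here is a hedged sketch of calling code (invented for illustration; it is not from the original post):

Wizard wizard = new Wizard();
wizard.Weapon = new Sword(); // compiles, because Wizard.Weapon is typed as Weapon...
                             // ...and throws InvalidCastException when the setter's cast runs

The compiler has no objection; only the cast inside Wizard's setter detects the violation, and only on the execution path that actually reaches it.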

Next time on FAIC: what other techniques do we have to try to represent this rule in the type system?


12 Nov 07:55

Dialog box may be displayed to users when opening projects in Microsoft Visual Studio after installation of Microsoft .NET Framework 4.6

by Xinyang Qiu (MSFT)

After the installation of the Microsoft .NET Framework 4.6, users may see the following dialog box in Microsoft Visual Studio when creating a new Web Site or Windows Azure project, or when opening existing projects.

Configuring Web http://localhost:64886/ for ASP.NET 4.5 failed. You must manually configure this site for ASP.NET 4.5 in order for the site to run correctly. ASP.NET 4.0 has not been registered on the Web server. You need to manually configure your Web server for ASP.NET 4.0 in order for your site to run correctly.

NOTE: Microsoft .NET Framework 4.6 may also be referred to as Microsoft .NET Framework 4.5.3

This issue may impact the following Microsoft Visual Studio versions: Visual Studio 2013, Visual Studio 2012, Visual Studio 2010 SP1

Workaround:

Select “OK” when the dialog is presented. The dialog box is benign and there will be no impact to the project once it is cleared. The dialog will continue to be displayed when Web Site or Windows Azure projects are created or opened until the fix has been installed on the machine.

Resolution:

Microsoft has published a fix for all impacted versions of Microsoft Visual Studio.

Visual Studio 2013

Visual Studio 2012

  • An update to address this issue for Microsoft Visual Studio 2012 has been published: KB3002339
  • The update can also be installed directly from the Microsoft Download Center

Visual Studio 2010 SP1

  • An update to address this issue for Microsoft Visual Studio 2010 SP1 has been published: KB3002340
  • This update is available from Windows Update
    • The update can also be installed directly from the Microsoft Download Center

07 Oct 11:40

Azure: Redis Cache, Disaster Recovery to Azure, Tagging Support, Elastic Scale for SQLDB, DocDB

Over the last few days we’ve released a number of great enhancements to Microsoft Azure.  These include:

  • Redis Cache: General Availability of Redis Cache Service
  • Site Recovery: General Availability of Disaster Recovery to Azure using Azure Site Recovery
  • Management: Tags support in the Azure Preview Portal
  • SQL DB: Public preview of Elastic Scale for Azure SQL Database (available through .NET lib, Azure service templates)
  • DocumentDB: Support for Document Explorer, Collection management and new metrics
  • Notification Hub: Support for Baidu Push Notification Service
  • Virtual Network: Support for static private IPs in the Azure Preview Portal
  • Automation updates: Active Directory authentication, PowerShell script converter, runbook gallery, hourly scheduling support

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Redis Cache: General Availability of Redis Cache Service

I’m excited to announce the General Availability of the Azure Redis Cache. The Azure Redis Cache service provides the ability for you to use a secure, dedicated Redis cache, managed as a service by Microsoft. Azure Redis Cache is now our recommended distributed cache solution for Azure applications.

Redis Cache

Unlike traditional caches which deal only with key-value pairs, Redis is popular for its support of high performance data types, on which you can perform atomic operations such as appending to a string, incrementing the value in a hash, pushing to a list, computing set intersection, union and difference, or getting the member with highest ranking in a sorted set.  Other features include support for transactions, pub/sub, Lua scripting, keys with a limited time-to-live, and configuration settings to make Redis behave more like a traditional cache.

Finally, Redis has a healthy, vibrant open source ecosystem built around it. This is reflected in the diverse set of Redis clients available across multiple languages. This allows it to be used by nearly any application, running on either Windows or Linux, that you host inside of Azure.

Redis Cache Sizes and Editions

The Azure Redis Cache Service is today offered in the following sizes:  250 MB, 1 GB, 2.8 GB, 6 GB, 13 GB, 26 GB, 53 GB.  We plan to support even higher-memory options in the future.

Each Redis cache size option is also offered in two editions:

  • Basic – A single cache node, without a formal SLA, recommended for use in dev/test or non-critical workloads.
  • Standard – A multi-node, replicated cache configured in a two-node Master/Replica configuration for high-availability, and backed by an enterprise SLA.

With the Standard edition, we manage replication between the two nodes and perform an automatic failover in the case of any failure of the Master node (because of either an un-planned server failure, or in the event of planned patching maintenance). This helps ensure the availability of the cache and the data stored within it. 

Details on Azure Redis Cache pricing can be found on the Azure Cache pricing page.  Prices start as low as $17 a month.

Create a New Redis Cache and Connect to It

You can create a new instance of a Redis Cache using the Azure Preview Portal.  Simply select the New->Redis Cache item to create a new instance. 

You can then use a wide variety of programming languages and corresponding client packages to connect to the Redis Cache you’ve provisioned.  You use the same Redis client packages to connect to an Azure Redis Cache instance that you’d use to connect to your own Redis server; the API and libraries are exactly the same.

Below we’ll use a .NET Redis client called StackExchange.Redis to connect to our Azure Redis Cache instance. First open any Visual Studio project and add the StackExchange.Redis NuGet package to it using the NuGet package manager.  Then obtain the cache endpoint and key from the Properties blade and the Keys blade, respectively, for your cache instance within the Azure Preview Portal.

Once you’ve retrieved these, create a connection instance to the cache with the code below:

var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, retrieve a reference to the Redis cache database, by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods (or their async counterparts – StringSetAsync and StringGetAsync).

cache.StringSet("Key1", "HelloWorld");

cache.StringGet("Key1");

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure. For an example of an end to end application using Azure Redis Cache, please check out the MVC Movie Application blog post.
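
The richer data types mentioned earlier are exposed through the same IDatabase interface. Here is a minimal sketch, assuming the same cache instance as above (the key names are invented for illustration):

cache.ListLeftPush("recent-visitors", "alice");            // push onto a list
cache.HashIncrement("page-stats", "visits");               // atomically increment a hash field
cache.SortedSetAdd("leaderboard", "alice", 125);           // add or update a member's score
var topTen = cache.SortedSetRangeByRank("leaderboard", 0, 9, Order.Descending);

Each of these also has an async counterpart (ListLeftPushAsync and so on), which is generally what you want when calling the cache from ASP.NET request code.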

Using Redis for ASP.NET Session State and Output Caching

You can also take advantage of Redis to store out-of-process ASP.NET Session State as well as to share Output Cached content across web server instances. 

For more details on using Redis for Session State, check out this blog post: ASP.NET Session State for Redis

For details on using Redis for Output Caching, check out this MSDN post: ASP.NET Output Cache for Redis

Monitoring and Alerting

Every Azure Redis cache instance has built-in monitoring enabled by default. Currently you can track Cache Hits, Cache Misses, Get/Set Commands, Total Operations, Evicted Keys, Expired Keys, Used Memory, Used Bandwidth and Used CPU, and you can easily visualize these metrics in the Azure Preview Portal.

You can also create alerts on metrics or events (just click the “Add Alert” button above). For example, you could create an alert rule to notify the cache administrator when the cache is seeing evictions. This in turn might signal that the cache is running hot and needs to be scaled up with more memory.

Learn more

For more information about the Azure Redis Cache, please visit the following links:

Site Recovery: Announcing the General Availability of Disaster Recovery to Azure

I’m excited to announce the general availability of the Azure Site Recovery Service’s new Disaster Recovery to Azure functionality.  The Disaster Recovery to Azure capability enables consistent replication, protection, and recovery of on-premises VMs to Microsoft Azure. With support for both Disaster Recovery and Migration to Azure, the Azure Site Recovery service now provides a simple, reliable, and cost-effective DR solution for enabling Virtual Machine replication and recovery between on-premises private clouds across different enterprise locations, or directly to the cloud with Azure.

This month’s release builds upon our recent InMage acquisition, and the integration of InMage Scout with Azure Site Recovery enables us to provide hybrid cloud business continuity solutions for any customer IT environment – regardless of whether it is Windows or Linux, running on physical servers or virtualized servers using Hyper-V, VMware or other virtualization solutions. Microsoft Azure is now the ideal destination for disaster recovery for virtually every enterprise server in the world.

In addition to enabling replication to and disaster recovery in Azure, the Azure Site Recovery service also enables the automated protection of VMs, remote health monitoring of them, no-impact disaster recovery plan testing, and single click orchestrated recovery - all backed by an enterprise-grade SLA. A new addition with this GA release is the ability to also invoke Azure Automation runbooks from within Azure Site Recovery Plans, enabling you to further automate your solutions.

Learn More about Azure Site Recovery

For more information on Azure Site Recovery, check out the recording of the Azure Site Recovery session at TechEd 2014 where we discussed the preview.  You can also visit the Azure Site Recovery forum on MSDN for additional information and to engage with the engineering team or other customers.

Once you’re ready to get started with Azure Site Recovery, check out additional pricing or product information, and sign up for a free Azure trial.

Beginning this month, Azure Backup and Azure Site Recovery will also be available in a convenient and economical promotional offer available for purchase via a Microsoft Enterprise Agreement.  Each unit of the Azure Backup & Site Recovery annual subscription offer covers protection of a single instance to Azure with Site Recovery, as well as backup of data with Azure Backup.  You can contact your Microsoft Reseller or Microsoft representative for more information.

Management: Tag Support with Resources

I’m excited to announce the support of tags in the Azure management platform and in the Azure preview portal.

Tags provide an easy way to organize your Azure resources and resource groups, by allowing you to tag your resources with name/value pairs to further categorize and view resources across resource groups and across subscriptions.  For example, you could use tags to identify which of your resources are used for “production” versus “dev/test” – and enable easy filtering/searching of the resources based on which tag you were interested in – regardless of which application or resource group they were in.

Using Tags

To get started with the new Tag support, browse to any resource or resource group in the Azure Preview Portal and click on the Tags tile on the resource.

On the Tags blade that appears, you'll see a list of any tags you've already applied. To add a new tag, simply specify a name and value and press enter. After you've added a few tags, you'll notice autocomplete options based on pre-existing tag names and values to better ensure a consistent taxonomy across your resources and to avoid common mistakes, like misspellings.

You can also use our command-line tools to tag resources; for example, the Azure PowerShell module lets you quickly tag all of the resources in your Azure subscription.

Once you've tagged your resources and resource groups, you can view the full list of tags across all of your subscriptions using the Browse hub.

You can also “pin” tags to your Startboard for quick access.  This provides a really easy way to quickly jump to any resource in a tag you’ve pinned.

SQL Databases: Public Preview of Elastic Scale Support

I am excited to announce the public preview of Elastic Scale for Azure SQL Database. Elastic Scale enables the data-tier of an application to scale out via industry-standard sharding practices, while significantly streamlining the development and management of your sharded cloud applications. The new capabilities are provided through .NET libraries and Azure service templates that are hosted in your own Azure subscription to manage your highly scalable applications. Elastic Scale implements the infrastructure aspects of sharding and thus allows you to instead focus on the business logic of your application.

Elastic Scale allows developers to establish a “contract” that defines where different slices of data reside across a collection of database instances.  This enables applications to easily and automatically direct transactions to the appropriate database (shard) and perform queries that cross many or all shards using simple extensions to the ADO.NET programming model. Elastic Scale also enables coordinated data movement between shards to split or merge ranges of data among different databases and satisfy common scenarios such as pulling a busy tenant into its own shard. 
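
As a rough illustration of what that “contract” looks like from application code, here is a hedged sketch of the data-dependent routing pattern (it assumes the Elastic Scale client library and System.Data.SqlClient are referenced; the type and method names follow the preview library’s documentation and may change, and the shard map, key and connection string are invented for illustration):

// Resolve the shard that owns this customer and open a connection to it.
using (SqlConnection connection = customerShardMap.OpenConnectionForKey(
    customerId,                      // the sharding key for this operation
    credentialsConnectionString,     // credentials only; server/database come from the shard map
    ConnectionOptions.Validate))
{
    // Run ordinary ADO.NET commands against the resolved shard.
}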

We are also announcing the Federation Migration Utility which is available as part of the preview. This utility will help current SQL Database Federations customers migrate their Federations application to Elastic Scale without having to perform any data movement.

Get Started with the Elastic Scale preview today, and watch our Channel 9 video to learn more.

DocumentDB: Document Explorer, Collection management and new metrics

Last week we released a bunch of updates to the Azure DocumentDB service experience in the Azure Preview Portal. We continue to improve the developer and management experiences so you can be more productive and build great applications on DocumentDB. These improvements include:

  • Document Explorer: View and access JSON documents in your database account
  • Collection management: Easily add and delete collections
  • Database performance metrics and storage information: View performance metrics and storage consumed at a Database level
  • Collection performance metrics and storage information: View performance metrics and storage consumed at a Collection level
  • Support for Azure tags: Apply custom tags to DocumentDB Accounts

Document Explorer

Near the bottom of the DocumentDB Account, Database, and Collection blades, you’ll now find a new Developer Tools lens with a Document Explorer part.

This part provides you with a read-only document explorer experience. Select a database and collection within the Document Explorer and view documents within that collection.

Note that the Document Explorer will load up to the first 100 documents in the selected Collection. You can load additional documents (in batches of 100) by selecting the “Load more” option at the bottom of the Document Explorer blade. Future updates will expand Document Explorer functionality to enable document CRUD operations as well as the ability to filter documents.

Collection Management

The DocumentDB Database blade now allows you to quickly create a new Collection through the Add Collection command found on the top left of the Database blade.

Health Metrics

We’ve added a new Collection blade which exposes Collection level performance metrics and storage information. You can access this new blade by selecting a Collection from the list of Collections on the Database blade.

The Database and Collection level metrics are available via the Database and Collection blades.

As always, we’d love to hear from you about the DocumentDB features and experiences you would find most valuable within the Azure portal. You can submit your suggestions on the Microsoft Azure DocumentDB feedback forum.

Notification Hubs: support for Baidu Cloud Push

Azure Notification Hubs enable cross platform mobile push notifications for Android, iOS, Windows, Windows Phone, and Kindle devices. Thousands of customers now use Notification Hubs for instant cross platform broadcast, personalized notifications to dynamic segments of their mobile audience, or simply to reach individual customers of their mobile apps regardless of which device they use.  Today I am excited to announce support for another mobile notifications platform, Baidu Cloud Push, which will help Notification Hubs customers reach the diverse family of Android devices in China.

Delivering push notifications to Android devices in China is no easy task, due to a diverse set of app stores and push services. Pushing notifications to an Android device via Google Cloud Messaging Service (GCM) does not work, as most Android devices in China are not configured to use GCM.  To help app developers reach every Android device independent of which app store they’re configured with, Azure Notification Hubs now supports sending push notifications via the Baidu Cloud Push service.

To use Baidu from your Notification Hub, register your app with Baidu, and obtain the appropriate identifiers (UserId and ChannelId) for your application.

Then configure your Notification Hub within the Azure Management Portal with these identifiers.

For more details, follow the tutorial in English & Chinese. You can learn more about Push Notifications using Azure at the Notification Hubs dev center.

Virtual Machines: Instance-Level Public IPs generally available

Azure now supports the ability for you to assign public IP addresses to VMs and web or worker roles so they become directly addressable on the Internet - without having to map a virtual IP endpoint for access. With Instance-Level Public IPs, you can enable scenarios like running FTP servers in Azure and monitoring VMs directly using their IPs.

For more information, please visit the Instance-Level Public IP Addresses webpage.

Automation: Updates

Earlier this year, we introduced preview availability of Azure Automation, a service that allows you to automate the deployment, monitoring, and maintenance of your Azure resources. I am excited to announce several new features in Azure Automation:

  • Active Directory Authentication
  • PowerShell Script Converter
  • Runbook Gallery
  • Hourly Scheduling

Active Directory Authentication

We now offer an easier alternative to using certificates to authenticate from the Azure Automation service to your Azure environment. You can now authenticate to Azure using an Azure Active Directory organization identity which provides simple, credential-based authentication.

If you do not have an Active Directory user set up already, simply create a new user and provide the user with access to manage your Azure subscription. Once you have done this, create an Automation Asset with its credentials and reference the credential in your runbook. You need to do this setup only once and can then use the stored credentials going forward, greatly simplifying the number of steps that you need to take to start automating. You can read this blog to learn more about getting set up with Active Directory Authentication.

PowerShell Script Converter

Azure Automation now supports importing PowerShell scripts as runbooks. When a PowerShell script is imported that does not contain a single PowerShell Workflow, Automation will attempt to convert it from PowerShell script to PowerShell Workflow, and then create a runbook from the result. This allows the vast amount of PowerShell content and knowledge that exists today to be more easily leveraged in Azure Automation, despite the fact that Automation executes PowerShell Workflow and not PowerShell.

Runbook Gallery

The Runbook Gallery allows you to quickly discover Automation sample, utility, and scenario runbooks from within the Azure management portal. The Runbook Gallery consists of runbooks that can be used as is or with minor modification, and runbooks that can serve as examples of how to create your own runbooks. The Runbook Gallery features content not only by Microsoft, but also by active members of the Azure community. If you have created a runbook that you think other users may benefit from, you can share it with the community on Script Center and it will show up in the Gallery. If you are interested in learning more about the Runbook Gallery, this TechNet article describes how the Gallery works in more detail and provides information on how you can contribute.

You can access the Gallery from +New by selecting App Services > Automation > Runbook > From Gallery.

In the Gallery wizard, you can browse for runbooks by selecting a category in the left-hand pane and then view the description of the selected runbook in the right pane. You can then preview the code and finally import the runbook into your personal space.

We will be adding the ability to expand the Gallery to include PowerShell scripts in the near future. These scripts will be converted to Workflows when they are imported to your Automation Account using the new PowerShell Script Converter. This means that you will have more content to choose from and a tool to help you get your PowerShell scripts running in Azure.

Hourly Scheduling

Based on popular request from our users, hourly scheduling is now available in Azure Automation. This feature allows you to schedule your runbook hourly or every X hours, making it that much easier to start runbooks at a regular frequency that is smaller than a day.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

25 Sep 18:01

Confusing errors for a confusing feature, part one

by ericlippert

There’s a saying amongst programming language designers that every language is a response to previous languages; the designers of C# were, and still are, very deliberate about learning from the mistakes and successes of similar languages such as C, C++, Java, Scala and so on. One feature of C# that I have a love-hate relationship with is a direct response to a dangerous feature of C++, whereby the same name can be used to mean two different things throughout a block. I’ve already discussed the relevant rules of C# at length, so review my earlier posting before you read on.

OK, welcome back. Summing up:

  • C++ allows one name to mean two things when one local variable shadows another.
  • C++ allows one name to mean two things when one usage of a name refers to a member and a local variable of the same name is declared later.
  • Both of these features make it harder to understand, debug and maintain programs.
  • C# makes all that illegal; every simple name must have a unique meaning throughout its containing block, which implies that the name of a local variable may not shadow any other local or be used to refer to any member.

I have a love-hate relationship with this “unique meaning” feature, which we are going to look at in absurd depth in this series.

On the “love” side, this really does address a real source of bugs. I have myself accidentally-and-not-on-purpose introduced a local variable in one scope that shadows another and it took me literally hours of debugging to discover why it was that what I thought was one variable was suddenly changing values halfway through a function. On the “hate” side, based on my mail and questions on StackOverflow, misunderstanding what the feature is and why C# has this limitation is a frequent source of user confusion. It was also a rich source of bugs in the original C# compiler, spec problems and design problems in the Roslyn compiler.
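
To make that hazard concrete, here is a hedged sketch (the class and names are invented for illustration) of the pattern that produces exactly that kind of bug. In C++ the two meanings of balance silently coexist; C# refuses to compile it, which is the “love” side of the feature:

class Account
{
  int balance; // a field
  void Deposit(int amount)
  {
    balance += amount; // the simple name means the field here...
    int balance = 0;   // ...so this later declaration is rejected at compile time
  }
}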

But what makes it far, far worse for users is that the error messages given when a program violates these rules of C# are… well, you can see for yourself. Let’s start with the simplest possible case: a local variable which shadows another.

class C
{
  static void M1()
  {
    int x2;
    {
      int x2;
    }
  }
}

This gives the error on the inner x2:

 
error CS0136: A local variable named 'x2' cannot be declared in
this scope because it would give a different meaning to 'x2', which is
already used in a 'parent or current' scope to denote something else

Um… what?

It is no wonder I get mail from confused programmers when they see that crazy error message! And while we’re looking at this, why is 'parent or current' in quotes, and why doesn’t the compiler differentiate between whether it is the parent scope or the current scope? This is already weird, but don’t worry, it’ll get worse.

What if we cause the same bug the other way? Remember, a local variable is in scope throughout its block, not at the moment where it is declared; this means that you can move the location of a declaration around without changing the legality or meaning of the program, so we should expect this to be the same error:

  static void M3()
  {
    {
      int x4;
    }
    int x4;
  }

This gives the error on the outer x4:

 
error CS0136: A local variable named 'x4' cannot be declared in
this scope because it would give a different meaning to 'x4', which is
already used in a 'child' scope to denote something else

Aha! This is still terrible but at least now the silly quoted string is making some sense; the compiler is filling in either “parent or current” or “child” into the error message, and whoever wrote the error message felt that such a substitution ought to be called out in quotation marks for some reason.

It’s still weird that they went to all that trouble to differentiate between “parent or current” and “child” but didn’t bother to differentiate between “parent” and “current”. (And it is also weird that apparently scopes have a parent-child relationship; when a mommy and a daddy scope love each other very much… no, let’s not even go there. I think of scopes as having a container-contained relationship, not a parent-child relationship, but perhaps I am in a minority there.)

Another subtle thing to notice here: the error message is given on the second usage of the simple name in both cases; is that really the best choice? Ideally an error message should be on the line that needs to change. Which is more likely, that the user will wish to change the x4 in the inner scope or the outer? I would guess the inner, but here the error message is on the outer.

And finally: “something else” ?! Really, C#? That’s the best you can do? The compiler knows exactly what this conflicted with; if it didn’t then it wouldn’t know to make an error! Let’s just make the user guess what the conflict is, why don’t we? Yuck.

OK, that deals with local variables conflicting with other local variables. Next problem: a local variable conflicts with a simple name that was used to mean something else:

static int x5;
static void M6()
{
    int x7 = x5;
    {
        int x5;
    }
}

This gives the error on the declaration of the local:

error CS0136: A local variable named 'x5' cannot be declared in
this scope because it would give a different meaning to 'x5', which is
already used in a 'parent or current' scope to denote something else

Ah, same thing. This error message makes a lot more sense for this scenario than it does for the “hiding a local with a local” previous scenario. And of course if we reverse the order, putting the nested-block local before the field reference…

static int x8;
static void M9()
{
    {
        int x8;
    }
    int x10 = x8;
}

We get this error on the initialization of x10:

error CS0135: 'x8' conflicts with the declaration 'C.x8'

WHAT!?!?!?!?!? Why on earth did we not get the same message as before with “child” substituted in?!!?

This error message makes no sense at all; even an expert on the C# specification would be forgiven for not knowing what on earth this means.

So far I’ve shown cases where a local variable declaration conflicts with something else. Next time we’ll look at scenarios where the same simple name is used to mean two different things, but no local variable (or local constant, formal parameter or range variable) is involved. It might not be immediately obvious how it is possible that one simple name can be used to mean two completely different things without introducing a new name in a local scope, but it is possible. Your challenge is to find one that produces a compiler error before the next episode!


09 Sep 20:14

Farewell, EnableViewStateMac!

by levibroderick

The ASP.NET team is making an important announcement regarding the September 2014 security updates.

All versions of the ASP.NET runtime 1.1 - 4.5.2 now forbid setting <%@ Page EnableViewStateMac="false" %> and <pages enableViewStateMac="false" />.

If you have set EnableViewStateMac="false" anywhere in your application, your application will be affected by this change and you should read on. Otherwise there is no action you must take, and the behavior of your application will not be affected.

In December 2013, we released a security advisory warning that the configuration setting EnableViewStateMac=false is dangerous and could allow an elevation of privilege attack against the web site. At the time we issued a statement that the next version of the ASP.NET runtime would forbid setting this switch, and indeed when ASP.NET 4.5.2 was released it forbade applications from setting this insecure switch.

Along with the December 2013 advisory, we issued KB 2905247. This KB was an optional patch that customers could install if they immediately needed to deploy the "forbid setting this dangerous switch" logic throughout their infrastructure but couldn't wait for ASP.NET 4.5.2.

Today we are enforcing this previously optional patch for all versions of the ASP.NET framework. If you are running the ASP.NET framework on your machine, this behavior will be picked up automatically the next time you check for updates.

We're aware that this change could affect a substantial number of web applications. It is never our intention to break web applications in an in-place update, but we felt it necessary to address this issue head-on due to the prevalence of misinformation regarding this switch and the number of customers who are running with it set to an insecure setting.

Just what is a "view state MAC" in the first place?

MAC in this context stands for message authentication code, which is a cryptographic code generated by the server and appended to the __VIEWSTATE hidden form field. The MAC ensures that the client hasn't tampered with these fields.

When EnableViewStateMac is set to true, this code is validated by the server when the client submits the __VIEWSTATE hidden form field during post back. This setting has been enabled (true) by default for all versions of ASP.NET.
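
In rough conceptual terms (a deliberate simplification; the real payload includes additional metadata), the hidden field carries the serialized state plus a keyed hash computed over it:

__VIEWSTATE  =  base64( stateBytes + HMAC(validationKey, stateBytes) )

On postback the server recomputes the HMAC over the submitted bytes using its own validation key and rejects the field if the codes don't match, so a client that doesn't know the key can't forge or tamper with the state.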

Why forbid setting EnableViewStateMac=false?

We're forbidding it because it is never safe to set EnableViewStateMac="false". As called out in the December 2013 advisory, an attacker may be able to leverage this setting to upload and execute arbitrary code on the web server. This would grant her complete control over the web application subject to the permissions of the web worker process.

After the initial advisory was released, we received questions from customers regarding conditional cases. What if I have set EnableViewState="false"? What if I'm running over SSL? What if I've only set this on one page and it's an admin-only page? What if my site just runs on the local intranet?

In all of these cases, the answer remains the same: not a single one of these "what ifs?" will prevent an attacker from exploiting this vulnerability. The web site is still open to remote code execution attack as long as EnableViewStateMac="false" is present.

Will installing this patch break my application?

If you have set EnableViewStateMac=false anywhere in your application, this change will affect you. Whether the new behavior breaks the application is scenario-dependent. If the reason the MAC was disabled was to enable cross-page posts, the application will generally continue to work correctly. If the reason the MAC was disabled was to avoid synchronizing the <machineKey> setting in a web farm, the application will probably break. See the below sections for more information on these particular scenarios.

If you are using EnableViewStateMac=true throughout your application, this change will not affect you.

Additionally, if you never touch the EnableViewStateMac switch at all in your application, this change will not affect you. Remember: the default value for this setting is true, and this change only affects applications where the developer has explicitly set the value to false.

What about cross-page posts?

Once KB 2905247 is installed, the ASP.NET framework treats cross-page posts specially to minimize the risk of errors at postback time. However, setting <form action="some-different-page.aspx" /> has never been the recommendation for cross-page posts in WebForms. Consider using PostBackUrl instead to make this scenario work.
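
As a minimal sketch of the recommended alternative (the control and page names are invented for illustration), set PostBackUrl on the control that initiates the cross-page post, either declaratively in markup or in the source page's code-behind:

SubmitButton.PostBackUrl = "~/TargetPage.aspx";

The target page can then read the posted values through its PreviousPage property, and the runtime handles the __PREVIOUSPAGE plumbing for you.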

What if my servers are in a farm?

You are required to create a <machineKey> element and synchronize it across machines in your farm. See KB 2915218, Appendix A for full instructions on how to generate a <machineKey> element. That appendix contains a block of code that you can copy and paste into a PowerShell window. Then you can run the Generate-MachineKey command from within the PowerShell window to generate a <machineKey> element.

PS> Generate-MachineKey
<machineKey decryption="AES" decryptionKey="..." validation="HMACSHA256" validationKey="..." />

You can then copy and paste this <machineKey> element into your application's Web.config file.

See KB 2915218, Appendix A for more information on the parameters that can be passed to the Generate-MachineKey function. See also Appendix C for information about using protected configuration to encrypt the contents of the <machineKey> element in the Web.config file.
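
If you prefer not to use PowerShell, keys of the same shape can be generated with a few lines of C#. This is only a hedged sketch that mirrors the HMACSHA256/AES configuration shown above; treat the KB's script as the authoritative method:

using System;
using System.Security.Cryptography;

class MachineKeyGenerator
{
    static string NewKey(int byteCount)
    {
        byte[] data = new byte[byteCount];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(data); // cryptographically strong random bytes
        }
        return BitConverter.ToString(data).Replace("-", "");
    }

    static void Main()
    {
        // For example: 64 random bytes for the HMACSHA256 validation key,
        // 32 random bytes (AES-256) for the decryption key.
        Console.WriteLine(
            "<machineKey decryption=\"AES\" decryptionKey=\"{0}\" " +
            "validation=\"HMACSHA256\" validationKey=\"{1}\" />",
            NewKey(32), NewKey(64));
    }
}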

Note: these keys are sensitive. Anybody who has access to them could end up with full administrative control over your web application. For this reason we suggest that you never use an online "click here to generate a <machineKey> element" utility. Only ever use <machineKey> elements that you generated yourself on your own machine. And use caution not to leak these keys in public places like online forums.

What if I encounter MAC validation errors as a result of this change?

We updated the "validation of view state MAC failed" error message to contain a link to KB 2915218, which lists common reasons that the MAC verification step might fail. Check that article to see if it calls out an item specific to your scenario.

Are there any other behavioral changes as a result of this?

Yes, there is a minor behavioral difference as a result of this patch, but this behavior will not break existing applications. If a __VIEWSTATE form field is written out to the response, it will now be accompanied by a new form field <input type="hidden" name="__VIEWSTATEGENERATOR" ... />. This new form field is used by the ASP.NET runtime to help determine whether a postback is going to the same page or is cross-page. It's similar in concept to the __PREVIOUSPAGE form field that is written when a control's PostBackUrl property is set.

Is today's rerelease of KB 2905247 different than the original December 2013 release?

Yes. The original release of the patch contained a bug where the value of the Page.IsPostBack property could be incorrect. The new version of the patch released today contains a fix for this issue.

If you are a server administrator and you have already deployed an earlier version of the KB, it is recommended that you deploy the latest version so that you get this bug fix.

What versions of ASP.NET is KB 2905247 applicable to?

This KB is applicable to all versions of ASP.NET from 1.1 through 4.5.1. The KB is not applicable to ASP.NET 4.5.2 or later, as the ASP.NET runtime already contains the patch built-in starting with version 4.5.2.

Why is KB 2905247 being made available on Windows Update now? Why not back in December 2013?

There are many factors that are taken into account when making a decision like this: the severity of the vulnerability, the number of customers who are vulnerable, and the impact of patching affected customers' machines.

In the case of this specific vulnerability, the official Microsoft guidance many years ago was that web applications could sometimes get away with setting EnableViewStateMac="false" in certain cases and that performance would be improved if the application were to do this. The original guidance was bad: this setting is categorically insecure, and this setting doesn't improve performance. We have since corrected this guidance, but the damage was done. There was now a very large number of ASP.NET web sites which ran with this insecure setting.

Given the severity of the issue, when we released the original advisory back in December 2013, we wanted to push out the update to all customers worldwide and fix the issue immediately. However, given the large number of customers involved and the potential impact this change brings, we didn't want to risk surprising customers with something this large. So we compromised: release an advisory stating that the configuration is insecure, allowing customers to proactively (rather than reactively) fix their sites. At the same time we made available to system administrators an optional patch which they could deploy on their servers to close the vulnerability outright.

It has now been nine months since the original advisory. This has given customers who wanted to secure their sites proactively ample time to test and deploy their changes. For our many customers who simply depend on the normal Windows Update / Microsoft Update mechanisms, however, the time has come for us to make sure that their applications are also protected against this.

Further resources

More information about the patch can be found at the original advisory page or on the Microsoft Support site.

Information about resolving "validation of view state MAC failed" error messages can be found on the Microsoft Support site under KB 2915218.

The ASP.NET team also maintains a document on common mistakes we see in typical ASP.NET applications and how they can be corrected.

Finally, you can get in touch with us via the ASP.NET web site. We're also active on StackOverflow. Prefer Twitter instead? The team is available at @aspnet.

Thanks for reading, and happy coding!

09 Sep 14:06

Off to VMworld Barcelona as an ITQ'er!

by Paul Geerlings

As a virtualization consultant you simply have to be at VMworld. At least, that's how we see it at ITQ. For us the question is not whether you get to go to VMworld, but which edition you attend (US or EU). This year the entire ITQ vUnit is going to VMworld in Barcelona. If you think you belong there too, this is your chance: we still have one VMworld Barcelona ticket available for a prospective vUnit candidate (travel and accommodation included).

If you are the virtualization specialist we are looking for and you are interested in a new challenge, apply NOW. Based on the most interesting CVs and accompanying motivation letters, we will shortlist the best candidates and invite them for an interview.

We are looking for a new colleague with the following knowledge and experience:
VMware products:
• vSphere
• Horizon View
• vCOPS
• SRM
General:
• Storage
• Networking
• Windows / Linux
• Scripting

Send us your CV and motivation as soon as possible, and you might find yourself on 13 October at the event that every virtualization expert needs to attend!

05 Sep 13:53

What the f*** were they thinking?! Crazy website biases exposed by naughty words lists (the NSFW version)

by Troy Hunt
Rwwilden

Hilarious :)

I’ve long held the view that passwords should consist of as many crazy things as the owner deems fit. If I want to create a password that looks like a dog just ate the keyboard and threw up all the keys, then good for me. (Chances are that Fido is going to cough up a pretty unique password too but before PETA gets on my case, try using a password manager like 1Password instead.)

Now I’m used to seeing all sorts of ridiculous limits on passwords – no “special” character, limit of 12 chars, no spaces, can’t use letters “q” or “z”, can’t use letters at all – but the banning of specific words is something else altogether. I don’t mean words like “select” or “drop” either, you know, the kind that shows someone has done a sloppy job of their SQL injection mitigations, I mean words like these:

A jar of Extreme Nut Butter

I’ll come back to the impact of passwords named after this particular sandwich spread. Banning certain words is one thing, but inadvertently publishing the entire list is quite another and it discloses some very interesting biases on behalf of the site.

Biases implied by the words a site allows versus those it blocks don’t need to remain the domain of passwords alone. There are other cases where words are blocked and again, the list is exposed publicly for (assumedly unintentional) scrutiny. When I say “biases”, I’m talking about everything from religious views to gender equality to which animals zoophilia may be off limits for. Yep, it’s that weird and it all begs the question – what the fuck are they thinking?! (Get used to the language, the title of the post warned you!)

The perpetrators

I’m going to single out two here and the first is Virgin. Hang on – Virgin has an issue with something naughty?! You mean the guys who thought this would be a good idea?!

Virgin's urinals which appear like an open woman's mouth

Surely I can’t be talking about the same company whose very ethos is grounded on the principles of clean living and a healthy respect for women?!

Richard Branson with bikini babes

Yep, those guys. As it happens, they’re more easily offended than you’d expect. Not only are they offended by naughty passwords, they were so offended that I pointed out they were easily offended that they removed the offending offensive passwords (for posterity’s sake, they were originally located here but have now been removed from within the remaining JavaScript). That’s right folks, the evidence of their dislike of “wanker”, “hardon” and “fart” (oh c’mon, even my 2 year old says fart!) has been removed from their client side script and relegated to the server side… except for the backup I put on Pastebin in case they need it again later.

And then there’s PayPal. Yep, those guys, the financial one. Turns out that PayPal has this Create Your Own site designed where you can sell some stuff or buy some stuff or, well, to be honest I don’t quite know, all I know is that it can’t be a “shlong” or a “sausage queen” or a “fart” (oh c’mon, you guys too?!) I know this because they too have a publicly accessible bad words list (and a Pastebin backup should they later think better of it).

Now this isn’t passwords per se, rather a publicly facing facility which attempts to censor words it deems inappropriate, for example:

Creating a PayPal entry for "Super awesome facial cream"

Clearly, this term is an affront to humanity and must be censored for our own protection:

PayPal's "Oop, no naughty words please" response

You may not have known that was a “naughty” word, am I right? Oh, which word? Facial, of course (you might need to Google some of these – from the privacy of your own home – after the kids go to bed – grab a stiff drink first – oh, and it’s ok to use the word “stiff”, just sayin’)

But on a (very slightly) more serious note, by exposing their swear list on the client side where it’s easy to grab hold of, both Virgin and PayPal also disclose some totally bizarre reasoning, teach us (or at least teach me) some terms I’d never heard of and in all likelihood, demonstrate some pretty discriminatory behaviour. Grab that stif… uh, har… ah stuff it, get a beer and read on.

Why ban naughty words in passwords?

This question has quite rightly been asked a few times now and the answer is both profound and simple – because you don’t want to upset the operator who reads it. Hang on – wait – what?! Yeah, you know that password you created on the website and expected to be hashed with a strong algorithm suitable for password storage (you were thinking this, right?), yeah, not so much and apparently they have some pretty serious security deficiencies when it comes to how they handle them.

Of course it must be people that are likely offended by words because frankly, computers couldn’t care less. No really, try it on a site we know is taking security seriously like Google or Microsoft or Appl… well try it on Google or Microsoft and see how you go. You see, the problem is that if you create a password using one of those “pass phrases” that are all the rage these days and you choose something like “this password is totally shit”, then you call up Virgin and they want to validate your identity so they ask for your password and you say “this password is totally shit”, you might crush the gentle spirit of the operator on the other end of the phone as they learn of your foul-mouthed preferences (and we’re starting out mildly here).

This, of course, raises all sorts of interesting questions; why can they see my password? Why do they need to see my password? And in the case of both Virgin and PayPal, how the hell do you even decide what’s in and what’s out without frankly, looking a bit ridiculous? Speaking of which, let’s just cover off some of the insights their word lists give us.

The insanity of banned word selection

When you see PayPal excluding such words as “assfukka” and “cocksukka” (and equivalents with only a single “k”), you have to wonder just what the point of it all is. I mean if it’s coming down to excluding words which might phonetically be off limits, you’re well and truly fighting a losing battle because there’s nothing to stop someone from choosing “assfukker” and “cocksukker”. It’s utter madness too because they’ve covered “assfucker” and “assfukka” but not “assfukker” – consistency guys!

Then you’ve even got stuff like brand names – you can’t have “viagra” but you can have the competitor’s “cialis”. Pluralisation is also an oddity in that you can’t have one “blowjob” nor multiple “blowjobs” yet whilst you can’t have a “dick”, you can have multiple “dicks”. If your name really is “Dick”, you’re gonna have problems and if it’s “Dick Van Dyke” you’re completely stuffed!

And really – “bunny fucker”?! But don’t despair if you have a penchant for small furry mammals, because you’re ok with “rabbit fucker”. Maybe there’s a greater likelihood of “bunny” being the word of choice because of the whole Playboy thing, I don’t know, I can only guess at the insanity behind it.

While we’re on animals, you can’t choose “dog-fucker” but “dog fucker” without the hyphen hasn’t been called out as per the bunny equivalent. Another off limits animal is “pigfucker”, oddly enough all as one word and then just to add to all the fucking weirdness, you also can’t have “cyberfucker”, “cyberfuc”, “cyberfuck”, “cyberfucked” (one of a number of past tenses), “cyberfuckers” (because sometimes there’s more than one of them) or cyberfucking (in case you’re doing it right now).

You can’t imply that someone might be a “cnut” but they could be a “kunt” – no really, they literally could be a Kunt because apparently that’s a popular surname in Turkey and they wouldn’t want to exclude someone simply because of an ethnic name, would they? Right?

Other absurdities that are off limits include “nut butter” per the opening image in this blog, “flog the log” (ok, I get it, but where do you stop with euphemisms of that genre?!) and “kinky Jesus” (I had to Google it – it’s a thing).

Then there’s the really, really oddly specific stuff like “fuckingshitmotherfucker” – but you’re ok if you’d like to rearrange your profanities a little (apparently). The “bang (one's) box” is also a little odd in its specificity, all the way down to the punctuation which assumedly could be omitted if you genuinely wanted to express your fondness of box banging.

While I mention box banging, that’s fine so long as only one box is involved because if there’s two boxes (or presumably more), that opens up a whole other can of worms that may actually have a serious side.

On potential discrimination…

Let’s just get ever so slightly more serious for a moment and talk about gay. No, not the kind of gay that keeps popping up in the oldie worldy kids books and seems so out of place in the modern context, but the sexual orientation definition. At least that’s what I assume Virgin is referring to when they disallow the word “gay” in their list. They’ve also ruled out “queer”. No “lesbo” either, apparently, although I admit I’m unsure as to whether that’s accepted slang in today’s vernacular or if it has derogatory intent. “Lezbo” and “lezzer” are off limits but you’re fine if you prefer “lezzo” which, as I understand it, remains popular in some circles.  “Faggot” is out so if you’re fond of bundles of sticks then tough luck (you knew it once meant that, right?), but arguably it’s frequently used as an insult too so I can kind of get that.

The thing is though, some of these words are perfectly reasonable, socially acceptable ways of describing same-sex relationships. However, if you’ve chosen the more traditional man / woman path then you’re most welcome to use the words “hetro” or “hetrosexual” or “straight” or any other term that came to mind. Seems to me that this is the sort of thing that rightly or wrongly, often cops an organisation some seriously bad press.

In some ways, PayPal is even worse; you can have “sex”, but you can’t have “gaysex”. There’s also no “fag”, “fagging”, “faggitt”, “faggot”, “faggs”, “fagot”, “fagots” or “fags” – homophobes rejoice! But seriously, that’s a lot of effort to go to in order to keep someone with a homosexual persuasion from expressing their views.

Whilst we’re talking discrimination, PayPal has a distinct dislike of “god” (it’s not clear which one) yet “allah” is good to go. “Buddha” is also good as are a broad range of deities. “Jesus” is ok but as we’ve already established, only if he’s not kinky. How they decided whose belief system was in and whose was out remains a mystery.

Upsetting Scunthorpe

By now you’re probably looking at this and thinking “Oh yeah, I think a Scunthorpe is where the girl goes like this and the guy, well…”. No, not quite, Scunthorpe is actually a place and as you can see, it’s totally obscene. Hang on – wait – it’s what now? Yeah, can’t you see it? Read the word carefully…

Yes, the Scunthorpe Problem is a thing and it impedes otherwise well-intentioned citizens the world over. That may be because of a partial word match, the acronym the words form, or simply a dislike of the word because “reasons”.

Yes, a “flange” can have a phallic connotation but it’s also a pretty damn essential piece of a heap of very well engineered machinery. A “chink” might sometimes be a racial slur, but you can also get one in your armour and leave you vulnerable to a well-aimed sabre attack.

I get that “fook” is sometimes used as a colloquialism, but it’s also a very nice Chinese restaurant:

Kam Fook Restaurant

Other Scunthorpian errors mean you can’t have our tastiest cheese:

Coon cheese

Or this:

Horny lizard

Or even this:

Ballbag

That’s right, think about it.

Go forth and discover more craziness

Don’t for one moment think that naughty word shenanigans remain the domain of Virgin and PayPal, far from it. In fact all it takes is a casual GitHub search for some everyday words – try fuck shit dickhead – and along with comments I’ve made when trying to get CSS working in Internet Explorer, you’ll find a great many repositories with banned words (approaching 5k at the time of writing). Of course many of those appear in code that remains within the domain of the server without ever seeing the light of day by client script – and that’s a very smart idea!

I’m told Virgin has now moved to keep their bigotry on the server rather than expose it on the client and whilst I can’t test it myself without an account, it makes a lot of sense. It’s now a whole lot harder to discover which kind of homosexual they deem inappropriate or what other Scunthorpian faux pas they may have committed.

As for PayPal though, they’re still loudly and proudly announcing their displeasure of the “wang”, the “schlong” and the “boner”. But don’t despair if you want to express your name via their service and you’re a “Johnson” or a “Peter”, just so long as you’re not a “Willy”!

29 Aug 06:49

Azure Search Scenarios and Capabilities

by Pablo Castro

To follow up on the announcement of Azure Search last week, I wanted to take a minute to give a bit of framing on why we built Azure Search and how we picked its current capabilities.

Context

Many applications use search as the primary interaction pattern for their users. When it comes to full-text search, user expectations are high. Users are used to Web search engines, sophisticated ecommerce websites and social apps that offer great relevance, search suggestions as you type, faceted navigation, highlighting and more, all with near-instantaneous response times.

With Azure Search we wanted to enable developers to incorporate great search experiences into their applications without having to become experts on search. Solid search experiences bring challenges both on the information retrieval front, where you need to deal with text analysis, ranking and the like, and on the distributed systems front, where you have to manage scalability, reliability and so on. Offering search as a service is a natural way to address both of these aspects, freeing developers up to focus on their applications.

Scenarios

Search technology is used to solve a broad variety of challenges. We started with a specific set of target scenarios so we could focus our efforts on a cohesive feature set. Of course you can use Azure Search for a lot more than this, but these served as inspiration to choose a set of capabilities for the service:

Online retail/ecommerce. Most customers of ecommerce applications/sites will find products by using search first. The product catalog is what is indexed, sometimes expanded to include tags, descriptions, feedback from users, etc. This scenario is characterized by fine-tuned ranking models that take into account aspects such as star rating, price/margin and promotions. The search experience often includes faceted navigation (totals per category), filters and various sorting options. Frequent updates are also common. While the product catalog does not change that often, product prices, stock levels and whether items are on sale are attributes that can change many times within a day and need to surface quickly in search results.

We see a concrete example of using search at the intersection of ecommerce and modern mobile applications with AutoTrader.ca. Allen Wales, VP Technology at Trader Corporation, captures this really well: “autoTRADER.ca is the largest new and used car marketplace in Canada. The autoTRADER.ca marketplace has had a tremendous shift towards mobile usage, with over 50% of our users now using mobile instead of desktop. Our mobile apps have been downloaded over two million times by Canadian automotive shoppers. Our mobile users visit more frequently and do more searches than the traditional desktop website visitor. We need a search engine that can scale with this massive increase in searches.”

User generated/social content. There are many different flavors of user-generated content applications, but most share similar requirements when it comes to search. Examples of this kind of application include recipe sites, photo sharing sites, user-contributed news sites and social network applications that have a Web and mobile presence. These applications deal with a large volume of documents, sometimes many millions, particularly when they allow users to comment on and discuss items. Geo-spatial data is often involved, related to the location of people or things. Relevance tends to be driven by text statistics in addition to domain-specific aspects such as document freshness and author popularity.

A great example of this is Photosynth, where users can capture and share panoramas and “synths”. They are already running on Azure Search for both keyword search and for their Geospatial experience. All assets contributed by users are indexed in Azure Search, including their titles and tags (for keyword search) and geolocation information (for bounding-box queries).

Business applications. Users of line of business applications often navigate through their content using pre-defined menus and other structured access paths. However, when search is incorporated into these applications a lot of friction can be removed from general user interaction making it quicker and more efficient to retrieve this information. This type of application typically has many different types of entities that need to be searched together, providing a single entry point to discover content throughout the system.

In all these scenarios the endpoints tend to be a mix of mobile devices and Web sites. Most modern applications either have both or simply blur the line between them. We’ve designed Azure Search so it can be used directly from clients or from a backend that supports either type of application.

Capabilities

A brief summary of the capabilities of Azure Search follows. The service documentation and future posts will drill into details, but I thought I’d include a summary here to put them in context of the scenarios listed above.

  • Simple HTTP/JSON API. Makes it accessible from any platform, any device.
  • Keyword, phrase and prefix search. Users can express what they are looking for, without needing to learn a query language. Just a few words will do, and they can use “+”, “-”, quotes and star (for prefix) if they’d like.
  • Hit highlighting. Helps when searching through lots of text, such as in forums or when documents have long descriptions.
  • Faceting. Computes hit counts by category as seen in most ecommerce websites.
  • Suggestions. Building block for implementing auto-complete, helping guide users to a successful search before they hit enter.
  • Rich structured queries. Combine search with structured filters, sorting, paging and projection to introduce application-defined restrictions and presentation options.
  • Geo-spatial support integrated in filtering, sorting and scoring.
  • Scoring profiles offer a simple way of modeling relevance based on aspects such as freshness, distance or numeric magnitudes such as popularity, star-rating, etc.
  • Elastic scalability. You can use APIs or the Azure portal to scale the service up and down both in terms of partitions (to handle more or fewer documents) and replicas (for higher queries/second and higher availability). This can be done without impacting availability.
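
To make the shape of the API a little more concrete, here is a minimal sketch of a single request that combines keyword search, a filter, a facet, highlighting and sorting over a hypothetical "products" index. The service name, index name, field names and key are made up for illustration; only the query parameters and the preview api-version (used again later in this post) come from the service itself.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class SearchQuerySketch
{
    static async Task Main()
    {
        var client = new HttpClient();
        // Keys are issued per service in the Azure portal; this one is a placeholder.
        client.DefaultRequestHeaders.Add("api-key", "<your-query-key>");

        // Keyword search for "mountain bike", filtered to in-stock items under 500,
        // with per-category facet counts, highlighting on the description field,
        // sorted by rating and limited to the first 10 hits.
        var url = "https://contoso-demo.search.windows.net/indexes/products/docs" +
                  "?api-version=2014-07-31-Preview" +
                  "&search=mountain%20bike" +
                  "&$filter=" + Uri.EscapeDataString("price lt 500 and inStock eq true") +
                  "&facet=category" +
                  "&highlight=description" +
                  "&$orderby=" + Uri.EscapeDataString("rating desc") +
                  "&$top=10";

        var response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}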

We ran an early adopter program for some time to validate our choice of features in real-world scenarios. This comment is from Sridhar Potineni, Director of Innovation at Jones Lang LaSalle (JLL):

“As a global financial and professional services firm that specializes in commercial real estate services and investment management, JLL (Jones Lang LaSalle) has a large number of customer focused applications that make heavy use of search and can scale into the 10s of queries per second (QPS). Since Azure Search is a cloud based managed service we are able to quickly build out new search based applications around the world to reduce latency for our users.
Azure Search matched all of the features we are using in our existing on-premises search solution such as faceted navigation, query completion and geo-spatial search, particularly polygon search. Polygon search allows applications such as “Property Search”, to allow customers to quickly find items within different geographic boundaries. We rely on features such as these to drive demand and inbound requests for the 10s of thousands of properties that we manage.”

We’re excited to finally have Azure Search out there. We’re looking forward to hearing your feedback and will be writing about specific topics often. In the meanwhile, give it a try and let us know what you think.

22 Aug 07:16

Azure: New DocumentDB NoSQL Service, New Search Service, New SQL AlwaysOn VM Template, and more

Today we released a major set of updates to Microsoft Azure. Today’s updates include:

  • DocumentDB: Preview of a New NoSQL Document Service for Azure
  • Search: Preview of a New Search-as-a-Service offering for Azure
  • Virtual Machines: Portal support for SQL Server AlwaysOn + community-driven VMs
  • Web Sites: Support for Web Jobs and Web Site processes in the Preview Portal
  • Azure Insights: General Availability of Microsoft Azure Monitoring Services Management Library
  • API Management: Support for API Management REST APIs

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

DocumentDB: Announcing a New NoSQL Document Service for Azure

I’m excited to announce the preview of our new DocumentDB service - a NoSQL document database service designed for scalable and high performance modern applications.  DocumentDB is delivered as a fully managed service (meaning you don’t have to manage any infrastructure or VMs yourself) with an enterprise grade SLA.

As a NoSQL store, DocumentDB is truly schema-free. It allows you to store and query any JSON document, regardless of schema. The service provides built-in automatic indexing support – which means you can write JSON documents to the store and immediately query them using a familiar document oriented SQL query grammar. You can optionally extend the query grammar to perform service side evaluation of user defined functions (UDFs) written in server-side JavaScript as well. 

DocumentDB is designed to linearly scale to meet the needs of your application. The DocumentDB service is purchased in capacity units, each offering a reservation of high performance storage and dedicated performance throughput. Capacity units can be easily added or removed via the Azure portal or REST based management API based on your scale needs. This allows you to elastically scale databases in fine grained increments with predictable performance and no application downtime simply by increasing or decreasing capacity units.

Over the last year, we have used DocumentDB internally within Microsoft for several high-profile services.  We now have DocumentDB databases that are each 100s of TBs in size, each processing millions of complex DocumentDB queries per day, with predictable performance of low single digit ms latency.  DocumentDB provides a great way to scale applications and solutions like this to an incredible size.

DocumentDB also enables you to tune performance further by customizing the index policies and consistency levels you want for a particular application or scenario, making it an incredibly flexible and powerful data service for your applications.   For queries and read operations, DocumentDB offers four distinct consistency levels - Strong, Bounded Staleness, Session, and Eventual. These consistency levels allow you to make sound tradeoffs between consistency and performance. Each consistency level is backed by a predictable performance level ensuring you can achieve reliable results for your application.
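
As a small illustration, in the .NET SDK the desired consistency level can be passed when the client is created. This is only a sketch; the endpoint and key below are placeholders, and the constructor overload shown is an assumption based on the preview SDK rather than anything stated in this post.

using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class ConsistencySketch
{
    static void Main()
    {
        // Placeholders - use the URI and key from your DocumentDB account's Keys tile.
        var endpoint = new Uri("https://contoso-docdb.documents.azure.com");
        var authKey = "<your-auth-key>";

        // Ask for Session consistency explicitly; Strong, BoundedStaleness and
        // Eventual are the other levels described above.
        var client = new DocumentClient(endpoint, authKey, new ConnectionPolicy(), ConsistencyLevel.Session);

        Console.WriteLine("DocumentClient created with Session consistency.");
    }
}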

DocumentDB has made a significant bet on ubiquitous formats like JSON, HTTP and REST – which makes it easy to start taking advantage of from any Web or Mobile applications.  With today’s release we are also distributing .NET, Node.js, JavaScript and Python SDKs.  The service can also be accessed through RESTful HTTP interfaces and is simple to manage through the Azure preview portal.

Provisioning a DocumentDB account

To get started with DocumentDB you provision a new database account. To do this, use the new Azure Preview Portal (http://portal.azure.com), click the Azure gallery and select the Data, storage, cache + backup category, and locate the DocumentDB gallery item.

Once you select the DocumentDB item, choose the Create command to bring up the Create blade for it.

In the create blade, specify the name of the service you wish to create, the amount of capacity you wish to scale your DocumentDB instance to, and the location around the world where you want to deploy it (e.g. the West US Azure region):

Once provisioning is complete, you can start to manage your DocumentDB account by clicking the new instance icon on your Azure portal dashboard. 

The keys tile can be used to retrieve the security keys to use to access the DocumentDB service programmatically.

Developing with DocumentDB

DocumentDB provides a number of different ways to program against it. You can use the REST API directly over HTTPS, or you can choose from either the .NET, Node.js, JavaScript or Python client SDKs.

The JSON data I am going to use for this example are two families:

// AndersonFamily.json file
{
    "id": "AndersenFamily",
    "lastName": "Andersen",
    "parents": [
        { "firstName": "Thomas" },
        { "firstName": "Mary Kay" }
    ],
    "children": [
        { "firstName": "John", "gender": "male", "grade": 7 }
    ],
    "pets": [
        { "givenName": "Fluffy" }
    ],
    "address": { "country": "USA", "state": "WA", "city": "Seattle" }
}

and

// WakefieldFamily.json file
{
    "id": "WakefieldFamily",
    "parents": [
        { "familyName": "Wakefield", "givenName": "Robin" },
        { "familyName": "Miller", "givenName": "Ben" }
    ],
    "children": [
        {
            "familyName": "Wakefield",
            "givenName": "Jesse",
            "gender": "female",
            "grade": 1
        },
        {
            "familyName": "Miller",
            "givenName": "Lisa",
            "gender": "female",
            "grade": 8
        }
    ],
    "pets": [
        { "givenName": "Goofy" },
        { "givenName": "Shadow" }
    ],
    "address": { "country": "USA", "state": "NY", "county": "Manhattan", "city": "NY" }
}

Using the NuGet package manager in Visual Studio, I can search for and install the DocumentDB .NET package into any .NET application. With the URI and Authentication Keys for the DocumentDB service that I retrieved earlier from the Azure Management portal, I can then connect to the DocumentDB service I just provisioned, create a Database, create a Collection, Insert some JSON documents and immediately start querying for them:

using (var client = new DocumentClient(new Uri(endpoint), authKey))
{
    var database = new Database { Id = "ScottsDemoDB" };
    database = await client.CreateDatabaseAsync(database);

    var collection = new DocumentCollection { Id = "Families" };
    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);

    // DocumentDB supports strongly typed POCO objects and also dynamic objects
    dynamic andersonFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\AndersonFamily.json"));
    dynamic wakefieldFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\WakefieldFamily.json"));

    // persist the documents in DocumentDB
    await client.CreateDocumentAsync(collection.SelfLink, andersonFamily);
    await client.CreateDocumentAsync(collection.SelfLink, wakefieldFamily);

    // very simple query returning the full JSON document matching a simple WHERE clause
    var query = client.CreateDocumentQuery(collection.SelfLink, "SELECT * FROM Families f WHERE f.id = 'AndersenFamily'");
    var family = query.AsEnumerable().FirstOrDefault();

    Console.WriteLine("The Anderson family have the following pets:");
    foreach (var pet in family.pets)
    {
        Console.WriteLine(pet.givenName);
    }

    // select JUST the child record out of the Family record where the child's gender is male
    query = client.CreateDocumentQuery(collection.DocumentsLink, "SELECT * FROM c IN Families.children WHERE c.gender='male'");
    var child = query.AsEnumerable().FirstOrDefault();

    Console.WriteLine("The Andersons have a son named {0} in grade {1}", child.firstName, child.grade);

    // cleanup test database
    await client.DeleteDatabaseAsync(database.SelfLink);
}

As you can see above – the .NET API for DocumentDB fully supports the .NET async pattern, which makes it ideal for use with applications you want to scale well. 
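
The comment in the snippet above about strongly typed POCO objects deserves a quick illustration. The Family and Pet classes below are made up to mirror the sample JSON (they are not part of the SDK), and the generic CreateDocumentQuery<T> overload is assumed to behave like its dynamic counterpart shown above:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Azure.Documents.Client;
using Newtonsoft.Json;

public class Pet
{
    [JsonProperty("givenName")]
    public string GivenName { get; set; }
}

public class Family
{
    [JsonProperty("id")]
    public string Id { get; set; }

    [JsonProperty("lastName")]
    public string LastName { get; set; }

    [JsonProperty("pets")]
    public List<Pet> Pets { get; set; }
}

public static class TypedQuerySketch
{
    // Runs the same SELECT used earlier, but deserializes the result into the Family POCO.
    public static Family GetAndersenFamily(DocumentClient client, string collectionSelfLink)
    {
        return client.CreateDocumentQuery<Family>(collectionSelfLink,
                "SELECT * FROM Families f WHERE f.id = 'AndersenFamily'")
            .AsEnumerable()
            .FirstOrDefault();
    }
}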

Server-side JavaScript Stored Procedures

If I want to perform updates affecting multiple documents within a transaction, I can define a stored procedure in JavaScript that swaps pets between families. In this scenario it is important to ensure that one family doesn’t end up with all the pets and the other with none because something unexpected happened. Therefore, if an error occurs during the swap process, it is crucial that the database roll back the transaction and leave things in a consistent state. I can do this with the following stored procedure, which I run within the DocumentDB service:

function SwapPets(family1Id, family2Id) {
    var context = getContext();
    var collection = context.getCollection();
    var response = context.getResponse();

    collection.queryDocuments(collection.getSelfLink(), 'SELECT * FROM Families f WHERE f.id = "' + family1Id + '"', {},
        function (err, documents, responseOptions) {
            var family1 = documents[0];

            collection.queryDocuments(collection.getSelfLink(), 'SELECT * FROM Families f WHERE f.id = "' + family2Id + '"', {},
                function (err2, documents2, responseOptions2) {
                    var family2 = documents2[0];

                    var itemSave = family1.pets;
                    family1.pets = family2.pets;
                    family2.pets = itemSave;

                    collection.replaceDocument(family1._self, family1,
                        function (err3, docReplaced) {
                            collection.replaceDocument(family2._self, family2, {});
                        });

                    response.setBody(true);
                });
        });
}

 

If an exception is thrown in the JavaScript function (due to, for instance, a concurrency violation when updating a record), the transaction is reversed and the system is returned to the state it was in before the function began.

It’s easy to register the stored procedure in code like below (for example: in a deployment script or app startup code):

    //register a stored procedure
    StoredProcedure storedProcedure = new StoredProcedure
    {
        Id = "SwapPets",
        Body = File.ReadAllText(@".\JS\SwapPets.js")
    };

    storedProcedure = await client.CreateStoredProcedureAsync(collection.SelfLink, storedProcedure);

 

And just as easy to execute the stored procedure from within your application:

    //execute stored procedure passing in the two family documents involved in the pet swap
    dynamic result = await client.ExecuteStoredProcedureAsync<dynamic>(storedProcedure.SelfLink, "AndersenFamily", "WakefieldFamily");

If we checked the pets now linked to the Anderson Family we’d see they have been swapped.
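
One way to convince yourself of that is to simply re-run the earlier query and print the pets again. This assumes the client and collection objects from the first snippet are still in scope (and that the cleanup call deleting the database has not run yet):

    // Re-query the Andersen family and print its pets to confirm the swap took effect.
    var verifyQuery = client.CreateDocumentQuery(collection.SelfLink,
        "SELECT * FROM Families f WHERE f.id = 'AndersenFamily'");
    dynamic andersenAfterSwap = verifyQuery.AsEnumerable().FirstOrDefault();

    Console.WriteLine("After the swap, the Andersen family now has the following pets:");
    foreach (var pet in andersenAfterSwap.pets)
    {
        Console.WriteLine(pet.givenName);
    }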

Learning More

It’s really easy to get started with DocumentDB and create a simple working application in a couple of minutes.  The above was but one simple example of how to start using it.  Because DocumentDB is schema-less you can use it with literally any JSON document.  Because it performs automatic indexing on every JSON document stored within it, you get screaming performance when querying those JSON documents later. Because it scales linearly with consistent performance, it is ideal for applications you think might get large.

You can learn more about DocumentDB from the new DocumentDB development center here.

Search: Announcing preview of new Search as a Service for Azure

I’m excited to announce the preview of our new Azure Search service.  Azure Search makes it easy for developers to add great search experiences to any web or mobile application.   

Azure Search provides developers with all of the features needed to build out their search experience without having to deal with the typical complexities that come with managing, tuning and scaling a real-world search service.  It is delivered as a fully managed service with an enterprise grade SLA.  We also are releasing a Free tier of the service today that enables you to use it with small-scale solutions on Azure at no cost.

Provisioning a Search Service

To get started, let’s create a new search service.  In the Azure Preview Portal (http://portal.azure.com), navigate to the Azure Gallery, and choose the Data storage, cache + backup category, and locate the Azure Search gallery item.

Locate the “Search” service icon and select Create to create an instance of the service:

You can choose from two Pricing Tier options: Standard which provides dedicated capacity for your search service, and a Free option that allows every Azure subscription to get a free small search service in a shared environment.

The standard tier can be easily scaled up or down and provides dedicated capacity guarantees to ensure that search performance is predictable for your application.  It also supports the ability to index 10s of millions of documents with lots of indexes.

The free tier is limited to 10,000 documents, up to 3 indexes and has no dedicated capacity guarantees. However it is also totally free, and also provides a great way to learn and experiment with all of the features of Azure Search.

Managing your Azure Search service

After provisioning your Search service, you will land in the Search blade within the portal - which allows you to manage the service, view usage data and tune the performance of the service:

I can click on the Scale tile above to bring up the details of the number of resources allocated to my search service. If I had created a Standard search service, I could use this to increase the number of replicas allocated to my service to support more searches per second (or to provide higher availability) and the number of partitions to give me support for higher numbers of documents within my search service.

Creating a Search Index

Now that the search service is created, I need to create a search index that will hold the documents (data) that will be searched. To get started, I need two pieces of information from the Azure Portal: the service URL to access my Azure Search service (accessed via the Properties tile) and the Admin Key to authenticate against the service (accessed via the Keys tile).

Using this search service URL and admin key, I can start using the search service APIs to create an index and later upload data and issue search requests. I will be sending HTTP requests against the API using that key, so I’ll setup a .NET HttpClient object to do this as follows:

HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Add("api-key", "19F1BACDCD154F4D3918504CBF24CA1F");

I’ll start by creating the search index. In this case I want an index I can use to search for contacts in my dataset, so I want searchable fields for their names and tags; I also want to track the last contact date (so I can filter or sort on that later on) and their address as a lat/long location so I can use it in filters as well. To make things easy I will be using JSON.NET (to do this, add the NuGet package to your VS project) to serialize objects to JSON.

var index = new
{
    name = "contacts",
    fields = new[]
    {
        new { name = "id", type = "Edm.String", key = true },
        new { name = "fullname", type = "Edm.String", key = false },
        new { name = "tags", type = "Collection(Edm.String)", key = false },
        new { name = "lastcontacted", type = "Edm.DateTimeOffset", key = false },
        new { name = "worklocation", type = "Edm.GeographyPoint", key = false },
    }
};

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/?api-version=2014-07-31-Preview",
    new StringContent(JsonConvert.SerializeObject(index), Encoding.UTF8, "application/json")).Result;
response.EnsureSuccessStatusCode();

You can run this code as part of your deployment code or as part of application initialization.

Populating a Search Index

Azure Search uses a push API for indexing data. You can call this API with batches of up to 1000 documents to be indexed at a time. Since it’s your code that pushes data into the index, the original data may be anywhere: in a SQL Database in Azure, a DocumentDB database, blob/table storage, etc. You can even populate it with data stored on-premises or in a non-Azure cloud provider.

Note that indexing is rarely a one-time operation. You will probably have an initial set of data to load from your data source, but then you will want to push new documents as well as update and delete existing ones. If you use Azure Websites, this is a natural scenario for Webjobs that can run your indexing code regularly in the background.

Regardless of where you host it, the code to index data needs to pull data from the source and push it into Azure Search. In the example below I’m just making up data, but you can see how I could be using the result of a SQL or LINQ query or anything that produces a set of objects that match the index fields we identified above.

var batch = new

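Purely as a sketch, that batch object for the "contacts" index might look something like the following. The "@search.action" property and the /docs/index endpoint are my assumptions about the preview indexing API rather than something shown in this post, the contact values are invented, and the HttpClient is the one created earlier:

var batch = new
{
    value = new object[]
    {
        // Each document in the batch carries an action; "upload" adds or replaces it.
        new Dictionary<string, object>
        {
            { "@search.action", "upload" },
            { "id", "1" },
            { "fullname", "Ada Lovelace" },
            { "tags", new[] { "analytical engines", "pioneers" } },
            { "lastcontacted", "2014-08-21T00:00:00Z" },
            // Edm.GeographyPoint values are GeoJSON points: [longitude, latitude].
            { "worklocation", new { type = "Point", coordinates = new[] { -122.131577, 47.678581 } } }
        }
    }
};

var indexResponse = client.PostAsync(
    "https://scottgu-dev.search.windows.net/indexes/contacts/docs/index?api-version=2014-07-31-Preview",
    new StringContent(JsonConvert.SerializeObject(batch), Encoding.UTF8, "application/json")).Result;
indexResponse.EnsureSuccessStatusCode();
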
15 Jul 07:23

What’s New in ADAL v2 RC

by vibro

ADAL v2 RC is here, and it is packed with new features! These are the last planned changes we are making to the library surface for v2, hence you should expect this to be a close preview of what you’ll get at GA time.

Here is a list of the main changes. Some of the new features are significant enough that I wrote an entire post just for them – watch for the links in the descriptions.

The list is pretty long! We are in the process of updating the samples on GitHub: hopefully that will help you to follow the changes. As mentioned above, we are not planning any additional changes before GA – hence you can expect the current surface to be the one that will end up in the documentation.

If you have issues migrating beta code to RC, feel free to drop me a line.

Thanks and enjoy!