Shared posts

09 Dec 14:14

ISESteroids RC

by Jan Egil Ring

ISESteroids originally started as a simple add-on that brought professional editor capabilities to the built-in PowerShell ISE. Over time, it has evolved into a slick, high-end PowerShell script editor. PowerShell Magazine has covered the basic highlights … [visit site to read more]

01 Dec 16:36

Active Directory Week: Essential Steps for PowerShell when Upgrading

by The Scripting Guys

Summary: Learn three essential steps for Windows PowerShell when upgrading from Windows Server 2003.

Microsoft Scripting Guy, Ed Wilson, is here. Today we have the final post in the series about Active Directory PowerShell by Ashley McGlone. Before you begin, you might enjoy reading these posts from the series:

Over the years Microsoft has released a number of new features to enhance Active Directory functionality. (For more information, see Active Directory Features in Different Versions of Windows Server.) If you are just now upgrading from Windows Server 2003, you have much to be thankful for. You will get to use new features like the Active Directory Recycle Bin and “Protect from accidental deletion.” But first you must raise the forest functional level to at least Windows Server 2008 R2. Let’s look at how to turn on these features.

Raise the functional level

In the Windows Server 2008 R2 era, many new Active Directory features were dependent on domain or forest functional level. One significant change with Windows Server 2012 and Windows Server 2012 R2 is that the product group tried to reduce the dependency on functional level for new features. At a minimum, you want to move your forest functional level to Windows Server 2008 R2. You can raise it to Windows Server 2012 R2 if all of your domain controllers are on the current release.

Of course, these steps can be done in the graphical interface, but this post is about Windows PowerShell. It is actually quite easy to do from the Windows PowerShell console. First, let’s check the current functional modes:

PS C:\> (Get-ADDomain).DomainMode

PS C:\> (Get-ADForest).ForestMode

   Note  If you are running these commands on Windows Server 2008 R2, you must first run this line:

Import-Module ActiveDirectory

DomainMode and ForestMode are properties of the ADDomain and ADForest objects, respectively. Lucky for us, there is a cmdlet to set each of these. Look at this syntax:

$domain = Get-ADDomain

Set-ADDomainMode -Identity $domain -Server $domain.PDCEmulator -DomainMode Windows2012Domain

$forest = Get-ADForest

Set-ADForestMode -Identity $forest -Server $forest.SchemaMaster -ForestMode Windows2012Forest

   Note  You must target the PDC Emulator for domain mode changes and the Schema Master for forest mode changes.
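
If you are not sure which servers hold those roles, the domain and forest objects expose them. Here is a quick check, a minimal sketch using the same Get-ADDomain and Get-ADForest cmdlets shown earlier:

# Show the FSMO role holders that these mode changes must target
(Get-ADDomain).PDCEmulator

(Get-ADForest).SchemaMaster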

The following lists show the available domain and forest mode parameter values:

Set-ADDomainMode accepts these -DomainMode values:

  • Windows2003Domain
  • Windows2008Domain
  • Windows2008R2Domain
  • Windows2012Domain
  • Windows2012R2Domain

Set-ADForestMode accepts these -ForestMode values:

  • Windows2000Forest
  • Windows2003InterimForest
  • Windows2003Forest
  • Windows2008Forest
  • Windows2008R2Forest
  • Windows2012Forest
  • Windows2012R2Forest

Here are some points to consider:

  • If you raise the forest functional level, it will automatically attempt to raise the level of all the domains first.
  • Generally, these commands only raise functional level. You cannot lower the level. (There is a minor exception, which is documented in How to Revert Back or Lower the Active Directory Forest and Domain Functional Levels in Windows Server 2008 R2.)
  • All domain controllers must be running an operating system at the same level as, or higher than, the target functional mode (see the inventory check sketched after this list).
  • Be sure that you have a good backup of the forest for any possible recovery scenario afterward.
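
Here is a minimal inventory sketch for that operating system check, using the Get-ADDomainController cmdlet:

# List every domain controller with its operating system before raising the level
Get-ADDomainController -Filter * |
    Select-Object Name, Site, OperatingSystem, OperatingSystemVersion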

For more information about raising functional level, see What is the Impact of Upgrading the Domain or Forest Functional Level?

Enable the Active Directory Recycle Bin

Hopefully, this feature is old news to you by now. The key point is that it is not automatic. You must enable the Active Directory Recycle Bin before you can restore a deleted account. Here is the easiest way to enable the Active Directory Recycle Bin from Windows PowerShell:

Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet `

    -Target (Get-ADForest).RootDomain -Server (Get-ADForest).DomainNamingMaster

This command is written so that it will work in any environment. Note that it must target the forest Domain Naming Master role holder.
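
To confirm that the feature took effect, you can query it back. In this minimal sketch, a populated EnabledScopes property indicates that the Recycle Bin is on:

# EnabledScopes stays empty until the optional feature has been enabled
Get-ADOptionalFeature 'Recycle Bin Feature' | Select-Object Name, EnabledScopes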

For more information and potential troubleshooting steps, see:

Now you can use the Restore-ADObject cmdlet or the Active Directory Administrative Center (ADAC) graphical interface to recover deleted objects. This is so much easier than an Active Directory authoritative restore!
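
As a quick sketch of that recovery (the account name here is illustrative), a deleted account can be located with Get-ADObject and piped straight to Restore-ADObject:

# Find a deleted user by name and restore it from the Deleted Objects container
Get-ADObject -Filter 'isDeleted -eq $true -and Name -like "ProtectMe*"' -IncludeDeletedObjects |
    Restore-ADObject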

Protect from accidental deletion

Have you noticed a theme yet? “Recycle bin” and “accidental deletion”...

We want to help you recover faster. The “Protect from accidental deletion” feature will hopefully keep you from needing the Recycle Bin. The following image shows the check box for the setting in the graphical interface:

Image of menu

With the Active Directory cmdlets, we can find the status by using the ProtectedFromAccidentalDeletion object property like this:

Get-ADUser ProtectMe -Properties ProtectedFromAccidentalDeletion

This value will be True or False, depending on whether the box is selected. To turn on the protection, we can use this syntax:

Get-ADUser -Identity ProtectMe | Set-ADObject -ProtectedFromAccidentalDeletion:$true

It would be inefficient to do this one at a time for all objects, wouldn't it? Here are some commands you could use to turn it on more broadly across your environment:

Get-ADUser -Filter * | Set-ADObject -ProtectedFromAccidentalDeletion:$true

Get-ADGroup -Filter * | Set-ADObject -ProtectedFromAccidentalDeletion:$true

Get-ADOrganizationalUnit -Filter * | Set-ADObject -ProtectedFromAccidentalDeletion:$true
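
If you first want a report of what is currently unprotected, here is a minimal sketch for organizational units:

# List OUs that do not yet have accidental-deletion protection
Get-ADOrganizationalUnit -Filter * -Properties ProtectedFromAccidentalDeletion |
    Where-Object { -not $_.ProtectedFromAccidentalDeletion } |
    Select-Object Name, DistinguishedName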

The next logical question would be, “OK. Then how do I delete something when it is not an accident?”

I am glad you asked. We can turn off the protection and delete an object like this:

Get-ADUser ProtectMe |

    Set-ADObject -ProtectedFromAccidentalDeletion:$false -PassThru |

    Remove-ADUser -Confirm:$false

Notice that we use the -PassThru switch to keep the user object moving through the pipeline after the Set command.

This delete protection is not enabled by default. It must be explicitly set on each object that you want to protect. For information about how to make this automatic for new objects, you can read the comments that follow this post on the Ask the Directory Services Team blog: Two lines that can save your AD from a crisis.

Note  If you would like to know more about how this feature works, we explain this topic in greater detail in Module 7 of the Microsoft Virtual Academy videos, Active Directory Attribute Recovery With PowerShell.

Bonus tips

In this post, we discussed three essential steps when upgrading from Windows Server 2003:

  1. Raise the domain and forest functional level
  2. Enable Recycle Bin
  3. Protect from accidental deletion

Of course, there are many other new features to leverage. I recommend that you check out the following resources in the Microsoft Virtual Academy videos:

  • In Module 7, we discuss a recovery strategy that uses Active Directory snapshots. This is a friendly way to recover corrupted Active Directory properties without the hassle of a full authoritative restoration. I recommend that all customers start taking Active Directory snapshots (not to be confused with virtual machine snapshots) on a regular basis to aid in the recovery process.
  • In Module 8, we discuss three tips to help you deploy domain controllers faster during your upgrade. Note that DCPROMO was deprecated beginning with Windows Server 2012.

In addition, you should consider migrating SYSVOL from NTFRS to DFSR replication. This is another benefit available after the functional level change, and it requires a manual step to turn on. This is not addressed in the videos, but the steps are documented on TechNet and in a number of blog posts. For example, see SYSVOL Replication Migration Guide: FRS to DFS Replication.

Congratulations on your move from Windows Server 2003! You will find that the later operating systems have many more features and tools to help with routine administration, maintenance, and security. With the tips from this post, you have a jumpstart for automating new features to aid in recovery scenarios.

Watch my free training videos for Active Directory PowerShell on Microsoft Virtual Academy to learn more insider tips on topics such as getting started with Active Directory PowerShell, routine administration, stale accounts, managing replication, disaster recovery, and domain controller deployment.

~Ashley

And that ends our series about Active Directory PowerShell by Ashley McGlone! Join me tomorrow when I seek a way to find the latitude and longitude for a specific address.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

01 Dec 15:50

Set-NTP

by Mickaël LOPES
This script is used to set NTP on computers. It is the answer to the question: How do I configure my NTP server?

Created by: Mickaël LOPES
Published date: 11/26/2014
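
The script itself is the download above. As a generic sketch of what configuring NTP from PowerShell typically involves (the peer list below is a placeholder, not taken from the script):

# Point the Windows Time service at a manual NTP peer and resynchronize
w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /update
Restart-Service w32time
w32tm /resync
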
29 Nov 17:02

Writing Classes With PowerShell V5 – Part 2

by Thomas Lee
In a previous article, I set out what a class is and what it contains, and showed examples of using those classes in your PowerShell scripts. As I mentioned last time, you should use cmdlets where possible to create and manage objects. But when you can't, you can always delve into the .NET Framework and its huge class library.
But what if there are no applicable .NET objects and you need to create your own class? Some admins might be asking: why bother? The answer is one of flexibility and reuse. If you are writing scripts to automate your day-to-day operations, you are inevitably passing objects between scripts, functions, and cmdlets. There are always going to be cases where you'd like to create your own object simply as a means of transporting sets of data between hosts/scripts/etc.
In PowerShell V5, Microsoft has included the ability to create your own classes. When I started writing this set of articles, I had initially intended to just introduce classes in V5, but as I looked into it, I realised you can already create your own objects using earlier versions of PowerShell. These are not fully fledged classes, but they are more than adequate when you just want to create a simple object to pass between your scripts/functions.
Creating Customised Objects
There are several ways you can achieve this. The first, but possibly hardest for the IT pro: use Visual Studio, author your classes in C#, then compile them into a DLL. Then in PowerShell, you use Add-Type to add the classes to your PowerShell environment. The fuller details of this, and how to speed up loading by using Ngen.Exe, are outside the scope of this blog post.
Bringing C# Into PowerShell
Now for the semi-developer audience amongst you, there's a kind of halfway house. In my experience, IT pros typically want what I call data-only classes – that is, a class that just holds data and can be exchanged between scripts. For such cases, there's a simple way to create your class, although it does require a bit of C# knowledge (and some trial and error).
Here's a simple example of how to do this:
Add-Type @' 
public class Aeroplane
   {
     public string Model = "Boeing 737";   
     public int    InFleet = 12;
     public int    Range = 2400;
     public int    Pax   = 135;
   } 
'@
This code fragment defines a very small class – one with just four members (model, number in fleet, range, and maximum number of passengers). Once you run this, you can create objects of the type Aeroplane, like this:
image
As you can see from the screen shot, you can create a new instance of the class by using New-Object and selecting your newly created class.
If you are just creating a data-only class – one that you might pass from a lower-level working function or script to some higher-level bit of code – then this method works acceptably. Of course, you have to be quite careful with C# syntax. Little things like capitalising the token Namespace or String will create what I can only term unhelpful error messages.
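If the screen shot is not to hand, here is a minimal usage sketch of the class defined above:
$plane = New-Object -TypeName Aeroplane   # instantiate the C# class
$plane.Model                              # returns the default, "Boeing 737"
$plane.InFleet = 14                       # members are public fields, so they can be updated
$plane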
Using Select-Object and Hash Tables
Another way to create a custom object is to use Select-Object. Usually, Select-Object is used to take a subset – to select just a few properties from an object in order to reduce the amount of data to be transferred. In some cases, this may be good enough, and it would look like this:
Dir c:\foo\*.ps1 | Select-Object name,fullname | gm
  TypeName: Selected.System.IO.FileInfo
Name        MemberType   Definition                                  
----        ----------   ----------                                  
Equals      Method       bool Equals(System.Object obj)              
GetHashCode Method       int GetHashCode()                           
GetType     Method       type GetType()                              
ToString    Method       string ToString()                           
FullName    NoteProperty System.String FullName=C:\foo\RESTART-DNS.PS1
Name        NoteProperty System.String Name=RESTART-DNS.PS1          
Note that when you use Select-Object like this, the object's type name changes. In this case, the dir (Get-ChildItem) cmdlet was run against the FileSystem provider and yielded objects of the type System.IO.FileInfo. Select-Object, however, changes the type name to SELECTED.System.IO.FileInfo (emphasis here is mine). This usually is no big deal, but it might affect formatting in some cases.
But you can also pass a hash table to Select-Object to create new properties, like this:
PSH [C:\foo]: $Filesize = @{
    Name = 'FileSize   '
    Expression = { '{0,8:0.0} kB' -f ($_.Length/1kB) }
}

Dir c:\foo\*.ps1 | Select-Object name,fullname, $Filesize
Name                   FullName                  FileSize               
----                   --------                  -----------               
RESTART-DNS.PS1       C:\foo\RESTART-DNS.PS1         1.1 kB               
s1.ps1                C:\foo\s1.ps1                  0.1 kB               
scope.ps1             C:\foo\scope.ps1               0.1 kB               
script1.ps1           C:\foo\script1.ps1             0.5 kB               

PSH [C:\foo]: Dir c:\foo\*.ps1 | Select-Object name,fullname, $Filesize| gm
   TypeName: Selected.System.IO.FileInfo
Name        MemberType   Definition                                  
----        ----------   ----------                                  
Equals      Method       bool Equals(System.Object obj)              
GetHashCode Method       int GetHashCode()                           
GetType     Method       type GetType()                              
ToString    Method       string ToString()                           
FileSize    NoteProperty System.String FileSize = 1.1 kB       
FullName    NoteProperty System.String FullName=C:\foo\RESTART-DNS.PS1
Name        NoteProperty System.String Name=RESTART-DNS.PS1          
As you can see from this code snippet, you can use Select-Object to create subset objects and to extend the object using a hash table. One issue with this approach is that the member type for the selected properties (both the ones included from the original object and those added) becomes NoteProperty, rather than String, Int, etc. In most cases, IT pros will find this good enough.
In the next instalment in this series, I will look at using New-Object to create a bare-bones object, adding members to it with the Add-Member cmdlet, and changing the generated type name to be more format-friendly.

29 Nov 16:52

boosting the powershell ise with ise steroids

by marcus oh

Ever since the PowerShell ISE was released, I've slowly moved away from using some of the other tools I was pretty fond of, like PowerShellPlus and PrimalScript. It's mostly because the ISE is super convenient.

Along came ISE Steroids. I can't really speak to 1.0, since I started on 2.0, and only very recently at that. So far, I'm pretty impressed. The best part of using it is that it doesn't force the convenience factor to change at all. Installing it is as simple as unzipping the files to your module path ($env:PSModulePath -split ';'). After that, you launch it with Start-Steroids. That gives me the convenience of using the plain ol' ISE or switching into a hyper-capable ISE.

I’ve only begun scratching the surface of its capabilities, but here are some things I’ve been using so far:

VERTICAL ADD-ON TOOLS PANE

Help. I love this feature. Whatever I click on in a script, the help add-on will attempt to look up and present relevant information.

image

Variables. This is another feature I love. Having a variables window makes debugging so much easier.

image

REFACTORING

Is there someone on your team who codes in a manner that only their mother could love? If so, you might benefit from using the Refactor process. It’s basically a series of scripts that will comb the hair and wash behind the ears of your PowerShell script. It’s not perfect, but it performs admirably. It’s also configurable if you need to tune things down from the defaults. Here’s an example:

Bad

foreach ($item in $smsobjects){
#write-host $item.name
$machinesfromSMS = $machinesfromSMS + $item.name}

foreach ($item in $sms2012objects){
#write-host $item.name
$machinesfromSMS = $machinesfromSMS + $item.name}

Better

foreach ($item in $smsobjects)
{
    #write-host $item.name
    $machinesfromsms = $machinesfromsms + $item.name
}

foreach ($item in $sms2012objects)
{
    #write-host $item.name
    $machinesfromsms = $machinesfromsms + $item.name
}

Which would you rather read and interpret?

ROOM FOR IMPROVEMENT

I would love to see the context-sensitive help add-on retrieve things from the console, or at least offer a search box for looking up information manually. At this time, I have an empty script where I type in the command to make it show me help information.

SUMMARY

ISE Steroids isn’t a new shell, a giant development environment, or anything that fancy. It’s a lot of little things that turn the default PowerShell ISE into a highly functional shell and scripting environment. It’s extensible with other add-ons and supports launching applications from the ISE. (ILSpy and WinMerge come loaded.)

It’s my new favorite. I’m hooked. If you like the PowerShell ISE environment, you should check it out. There are many more features I haven’t brought up (signing, version control, wizards, etc.).

24 Nov 14:25

Active Directory Week: Get Started with Active Directory PowerShell

by The Scripting Guys

Summary: Microsoft premier field engineer (PFE), Ashley McGlone, discusses the Active Directory PowerShell cmdlets.

Microsoft Scripting Guy, Ed Wilson, is here. Today we start a series about Active Directory PowerShell, written by Ashley McGlone...

Ashley is a Microsoft premier field engineer (PFE) and a frequent speaker at events like PowerShell Saturday, Windows PowerShell Summit, and TechMentor. He has been working with Active Directory since the release candidate of Windows 2000. Today he specializes in Active Directory and Windows PowerShell, and he helps Microsoft premier customers reach their full potential through risk assessments and workshops. Ashley’s TechNet blog focuses on real-world solutions for Active Directory using Windows PowerShell.  You can follow Ashley on Twitter, Facebook, or TechNet as GoateePFE.

Since I joined Microsoft, the Scripting Guy and the Scripting Wife have become dear friends. Ed has mentored my career and opened doors for me as I engaged the Windows PowerShell community. It is an honor for me to write this week’s blog series about Active Directory PowerShell as Ed is taking some personal time off. Thank you, Ed.

Active Directory PowerShell

Perhaps your job responsibilities now include Active Directory, or perhaps you are finally moving off of Windows Server 2003. There is no better time than the present to learn how to use Windows PowerShell with Active Directory. You will find that you can quickly bulk load users, update attributes, install domain controllers, and much more by using the modules provided.

Background

The Active Directory PowerShell module was first released with Windows Server 2008 R2. Prior to that, we used the Active Directory Services Interface (ADSI) to script against Active Directory. I did that for years with VBScript, and I was glad to see the Windows PowerShell module. It certainly makes things much easier.

In Windows Server 2012, we added several cmdlets to round out the core functionality. But we also released a companion module called ADDSDeployment. This module replaced the functionality we had in DCPROMO. With Windows Server 2012 R2 we added some cmdlets for the new authentication security features.

Image of flow chart

Now we have a fairly robust set of cmdlets to manage directory services in Windows.

How do I get these cmdlets?

The version of the cmdlets you use depends on the Remote Server Administration Tools (RSAT) that you install, and that depends on the operating system you have. See the following graphic to determine which version of the cmdlets you should use.

Image of flow chart

For example, if you have Windows 7 on your administrative workstation, you can use the first release of the ActiveDirectory module. The cmdlets can target any domain controller that has the AD Web Service.  (Windows Server 2008 and Windows Server 2003 require the AD Management Gateway Service as a separate installation. For more information, see Step-by-Step: How to Use Active Directory PowerShell Cmdlets against Windows Server 2003 Domain Controllers.)

If you have a Windows 8.1 workstation, you can install the latest version of the RSAT and get all the fun new cmdlets, including the ADDSDeployment module. If you are stuck on Windows 7 and you want to use the latest cmdlets, see How to Use The 2012 Active Directory PowerShell Cmdlets from Windows 7 for a workaround.

Alternatively, if you use a Windows Server operating system for your tools box, you can install the AD RSAT like this:

Install-WindowsFeature RSAT-AD-PowerShell

The following command will give you all of the graphical administrative tools and the Windows PowerShell modules:

Install-WindowsFeature RSAT-AD-Tools -IncludeAllSubFeature
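
Either way, you can confirm that the module is available and see roughly what it gives you. A minimal check:

# Confirm the module is installed and count the commands it exposes
Get-Module -ListAvailable ActiveDirectory

(Get-Command -Module ActiveDirectory).Count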

Where do I begin?

I recommend that most people start with the Active Directory Administrative Center (ADAC). This is the graphical management tool introduced in Windows Server 2012 that uses Windows PowerShell to run all administrative tasks.

The nice part is that you can see the actual Windows PowerShell commands at the bottom of the screen. Find the WINDOWS POWERSHELL HISTORY pane at the bottom of the tool, and click the Up arrow at the far right of the window. Select the Show All box. Then start clicking through the administrative interface. You can see the actual Windows PowerShell commands being used:

Image of menu

Yes. Read the Help.

If you are using Windows PowerShell 3.0 or newer, you need to install the Help content. From an elevated Windows PowerShell console, type:

Update-Help -Module ActiveDirectory -Verbose

Although it is optional, I usually add the ‑Verbose switch so that I can tell what was updated. This also installs the Active Directory Help topics:

Get-Help about_ActiveDirectory -ShowWindow

Get-Help about_ActiveDirectory_Filter -ShowWindow

Get-Help about_ActiveDirectory_Identity -ShowWindow

Get-Help about_ActiveDirectory_ObjectModel -ShowWindow

Note that you may have to import the Active Directory module before you can discover the about_help topics:

Import-Module ActiveDirectory

You can also find these Help topics on TechNet: Active Directory for Windows PowerShell About Help Topics.

I strongly advise reading through these about_help topics as you get started. They explain a lot about how the cmdlets work, and it will save you much trial and error as you learn about new cmdlets.

Type your first commands

Now that you have the module imported, you can try the following commands from the Windows PowerShell console:

Get-ADForest

Get-ADDomain

Get-ADGroup "Domain Admins"

Get-ADUser Guest

Congratulations! You are now on your way to scripting Active Directory.

Move to Active Directory PowerShell cmdlets

The next step is to replace the familiar command-line utilities you have used for years with new Windows PowerShell commands. I have published a four-page reference chart to help you get started: CMD to PowerShell Guide for Active Directory.

For example, instead of the DSGET or DSQUERY command-line utilities, you can use Get‑ADUser, Get‑ADComputer, or Get‑ADGroup. Instead of CSVDE, you can use Get‑ADUser | Export‑CSV.
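
For instance, a CSVDE-style export might look like this minimal sketch (the attribute list and output path are illustrative):

# Export basic user attributes to CSV, roughly what CSVDE was often used for
Get-ADUser -Filter * -Properties DisplayName, mail |
    Select-Object SamAccountName, DisplayName, mail |
    Export-Csv -Path .\users.csv -NoTypeInformation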

With this knowledge, you can find some of your batch files or VBScripts for Active Directory and start converting them to Windows PowerShell. Beginning with a goal is a great way to learn.

Ready, Set, Go!

I hope you have enjoyed this quick start for Active Directory PowerShell. You now have the necessary steps to get started on the journey. Stay tuned this week for more posts about scripting with Active Directory. You can also check out four years of Active Directory scripts over at the GoateePFE blog.

~ Ashley

Thanks for the beginning of a great series, Ashley! Ashley recently recorded a full day of free Active Directory PowerShell training: Microsoft Virtual Academy: Using PowerShell for Active Directory. Watch these videos to learn more insider tips on topics like getting started with Active Directory PowerShell, routine administration, stale accounts, managing replication, disaster recovery, and domain controller deployment.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.

Ed Wilson, Microsoft Scripting Guy 

22 Nov 15:17

Managing Azure IaaS with Windows PowerShell: Part 5

by The Scripting Guys

Summary: Use Windows PowerShell to manage virtual machines in Azure.

Honorary Scripting Guy, Sean Kearney, is here flying through the digital stratosphere on our final day with Windows PowerShell and Azure!

We started by creating a virtual network for our Azure workstations and escalated to spinning up some virtual machines. To read more, see the previous topics in this series:

Pretty cool! But we're dealing with the fun stuff today...

Normally in Azure, if you were to initiate a shutdown in Windows or Linux, the operating system shuts down, but the Azure resources remain live in the service. The answer to this issue is to access your management portal and shut down the virtual machine from there, in the following manner.

First select the virtual machine (in this case, the one we previously created called 'brandnew1'):

Image of menu

Then choose the Shut Down option at the bottom of the management portal:

Image of menu

This process is pretty simple and it doesn't take more than a minute or so. But, of course, it would be so much nicer to have the ability to shut down or start environments by using a script. This allows for better cost control in Azure and less time with you at the mouse going clickity click click click.

In Hyper-V, we would normally identify the virtual machines with Get-VM and then pass the output down the pipeline to Stop-VM.

Azure is not too different—other than the names of the cmdlets and the visual output.

With Azure, we have a cmdlet called Get-AzureVM. If you are properly authenticated, you can get a list of all virtual machines that are tied to your subscription.

With a magic wave of my wand, I cast the magical cmdlet:

Get-AzureVM

Image of command output

Now note that the state of the virtual machine reads a little differently here. In Hyper-V, I would see a virtual machine that is operating like this:

Image of command output

In Hyper-V, it shows up as Running. In Azure, it shows up as ReadyRole. But filtering is similar. In Azure, if I need to show the virtual machines that are running, I run this command:

Get-AzureVM | where { $_.Status -eq 'ReadyRole' }

This will produce only the virtual machines that are operating. I can now pipe this directly to a cmdlet from Azure called (Oh! Hello, Captain Obvious!) Stop-AzureVM:

Get-AzureVM | where { $_.Status -eq 'ReadyRole' } | Stop-AzureVM

Odds are that you don't want to shut down your entire Azure infrastructure in one command. You're probably trying to shut down a single virtual machine (or a set of virtual machines).

In that case, target the name provided by the Get-AzureVM cmdlet. In our example, it's called 'brandnew1'.

There's one more piece. We need to tell Azure which Azure service we are targeting. Remember, you can identify your current service by using the Get-AzureService cmdlet:

$Service=(Get-AzureService).Label

You can then stop that virtual machine by its name and Azure service, for example:

Stop-AzureVM -Name 'brandnew1' -ServiceName $service

This will eventually yield a state of 'StoppedDeallocated', which means that you have the configuration and data from your C: partition, but the virtual machine is no longer active, consuming CPU time or active monitoring within the Azure environment.
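
You can watch for that state with a quick check like this sketch (the names come from this example):

# Check the power state of a single virtual machine
Get-AzureVM -ServiceName $service -Name 'brandnew1' | Select-Object Name, Status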

You would think 'StoppedDeallocated' and 'ReadyRole' are the only two states to consider. But this is not the case.

Let's start the previous virtual machine in Azure and review the states it reveals via the Get-AzureVM cmdlet.

To start a virtual machine in Azure, we use the Start-AzureVM cmdlet and provide the virtual machine name and Azure service name. It's identical to Stop-AzureVM in its use.

Start-AzureVM -Name 'brandnew1' -ServiceName $service

As the virtual machine is starting up in Azure, note the two additional states it yields. You can see a Status of CreatingVM. In the management portal, this would be displayed as "Starting (Provisioning)":

Image of command output

It will be followed by "StartingVM" (normally viewed as simply "Starting" in the management portal). At this point, the virtual machine has been created and the internal operating system is starting:

Image of command output

When the process is complete, the virtual machine will return to its normal state of 'ReadyRole'.

A third condition to be aware of is when the computer has received a shutdown command from within the operating system. In this condition, the resources are not deallocated within Azure and the virtual machine is not in a 'ReadyRole' state. Its status will show up as 'StoppedVM':

Image of command output

A virtual machine in this state is quicker to start; but be aware that in this state, it is still drawing money from your account, because you are being billed for it. If you just want it running again, run Start-AzureVM. If you'd like to save some money, use Stop-AzureVM to deallocate it.

Pretty simple.

If you're trying to pull information on using the Azure cmdlets, I invite you to check out the Microsoft Azure PowerShell Reference Guide on Michael Washam's blog (he is the author of the cmdlets). In addition, there are some excellent posts from Keith Mayer at Microsoft on the subject, including Microsoft Azure Virtual Machines: Reset Forgotten Admin Passwords with Windows PowerShell.

I invite you to follow The Scripting Guys on Twitter and Facebook. If you have any questions, send an email to The Scripting Guys at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, remember to eat your vmdlets every day with a dash of creativity.

Sean Kearney, Windows PowerShell MVP and Honorary Scripting Guy

20 Nov 14:21

How to Configure Internet Explorer 11 Enterprise Mode Logging

by Russell Smith

In a previous Ask the Admin, I showed you how to configure Enterprise Mode for Internet Explorer 11 on Windows 7 SP1 and Windows 8.1 Update. Today, I’m going to show you how to configure a web server to capture Enterprise Mode logs.


Install Internet Information Services (IIS)

Internet Explorer (IE) Enterprise Mode doesn’t use the Windows Event Log, but instead sends messages to an Active Server Pages (ASP) web page, which can be read in the web server’s log files. The quickest way to set up IIS on Windows Server 2012 R2 is to run the following PowerShell command as a local administrator:

Install-WindowsFeature -Name Web-Server, Web-ASP -IncludeManagementTools

The cmdlet installs IIS 8.5 on Windows Server 2012 R2 with all the default features, management tools, and ASP components.

Configure Internet Information Services

Once the installation has completed, follow the instructions below to set up an ASP page that will listen for messages from your IE Enterprise Mode clients. In the example below, we’ll configure the website to use a non-standard port to make it easier to separate Enterprise Mode log traffic.

Configure IIS for Internet Explorer Enterprise Mode logging (Image Credit: Russell Smith)

  • Open Server Manager from the Start screen or icon on the desktop taskbar.
  • In Server Manager, open Internet Information Services (IIS) Manager from the Tools menu.
  • In the left pane of IIS Manager, expand the local server and Sites, and then click Default Web Site.
  • On the right in the Actions pane, click Bindings under Edit Site.
  • In the Site Bindings dialog, select the http binding and click Edit… on the right.
  • In the Edit Site Binding dialog, type 81 in the Port field and click OK.
  • Click Close in the Site Bindings dialog.
  • In the central pane, double click the Logging icon.
  • Click Select Fields in the Log File section.
  • In the W3C Logging Fields dialog, make sure that only Date, Client IP, User Name, and URI Query are checked, and then click OK.
  • In the Actions pane on the right, click Apply.

To check that the web server is working, open Internet Explorer on the server and type http://localhost:81/ in the address bar. You should see the default IIS welcome page in IE. To test connectivity from a remote machine, replace localhost with the name of the server.
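
If you would rather script that check, here is a minimal sketch using Invoke-WebRequest (PowerShell 3.0 or later); a 200 status code confirms that the site is answering on the new port:

# Request the default page on port 81 and show the HTTP status code
(Invoke-WebRequest -Uri 'http://localhost:81/' -UseBasicParsing).StatusCode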


Create the ASP Page

Now that IIS is configured, let’s put the ASP page into the root of our webserver. The root of the default website is c:\inetpub\wwwroot. In the wwwroot folder, save a text file called EmIE.asp containing the code shown below:

<% @ LANGUAGE=javascript %> 
<% Response.AppendToLog(";" + Request.Form("URL") + ";" + Request.Form("EnterpriseMode")); %>

Create a Group Policy Object

The Let users turn on and use Enterprise Mode from the Tools menu policy setting can be enabled to allow users to manually toggle Enterprise Mode on and off. The logging URL in the Options section is the .asp web page we created in the previous steps, which listens for POST messages. If you just want to enable the toggle option in IE, leave the logging field blank.

Configure Group Policy for Internet Explorer Enterprise Mode logging (Image Credit: Russell Smith)

  • In the Group Policy Management Editor window for your Group Policy Object, expand Computer Configuration > Policies > Administrative Templates > Windows Components.
  • In the left pane, click Internet Explorer.
  • In the right pane, scroll down the list of policy settings and double click Let users turn on and use Enterprise Mode from the Tools menu.
  • In the Let users turn on and use Enterprise Mode from the Tools menu policy setting window, check Enabled.
  • In the Options section, type the URL for the ASP page created in the previous steps, for example http://contosodc1:81/emIE.asp on my server, and then click OK.
  • Close the Group Policy Management Editor window.

Link the GPO to a site, domain, or Organizational Unit (OU) in the Group Policy Management Console (GPMC) as required.
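
If you prefer to script that last step, the GroupPolicy module can do it. This is a hypothetical sketch, with the GPO name and OU as placeholders:

# Link an existing GPO to an OU (requires the Group Policy Management feature)
New-GPLink -Name 'IE11 Enterprise Mode Logging' -Target 'OU=Workstations,DC=contoso,DC=com'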


View the IIS Logs

Assuming the default logging location hasn’t been changed, you’ll find the IIS log files in c:\inetpub\logs\logfiles\w3svc1 on the web server.

Internet Explorer Enterprise Mode log data (Image Credit: Russell Smith)

Note that logs can take a few minutes to be updated, so be patient if you don’t see any new entries immediately.
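
To spot-check new entries from PowerShell, you can tail the newest log file. This sketch assumes the default path mentioned above:

# Show the last lines of the most recently written IIS log file
Get-ChildItem 'c:\inetpub\logs\logfiles\w3svc1' -Filter *.log |
    Sort-Object LastWriteTime |
    Select-Object -Last 1 |
    Get-Content -Tail 20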


12 Nov 21:53

Building a PowerShell Troubleshooting Toolkit Revisited

by Jeff Hicks

Recently we posted an article about a PowerShell script you could use to build a USB key that contains free troubleshooting and diagnostic utilities. Those scripts relied on a list of links that were processed sequentially by the Invoke-WebRequest cmdlet. The potential downside is that it might take a bit of time to download everything. Wouldn’t it be much nicer if you could download files in batches of, say, five?


Unfortunately, there are no cmdlets or parameters that you can use to throttle a set of commands. You could try to create some sort of throttling mechanism with PowerShell’s job infrastructure. If you are proficient with .NET, then you could try your hand at runspaces and runspace pools. Frankly, those make my head hurt, and I don’t expect IT pros to have to be .NET developers to use PowerShell. Fortunately, there is an alternative that I think is a good compromise between usability and systems programming: workflow.

PowerShell 3.0 brought us the ability to create workflows in PowerShell script. The premise of a workflow is that you can orchestrate a series of activities that can run unattended on 10, 100, or 1,000 remote computers. I don’t have space here to fully explain workflows. There is a chapter on workflow in the PowerShell in Depth book from Manning. But one of the great features in my opinion is the ability to execute multiple commands simultaneously.

In a workflow, you can use a ForEach construct with the -Parallel parameter.

Foreach -parallel ($item in $items) {
    do-something $item
}

The -Parallel parameter only works in a workflow. If there are 10 items, then the Do-Something command will run all 10 at the same time. Or you can throttle the number of simultaneous commands.

Foreach -parallel -throttle 5 ($item in $items) {
    do-something $item
}


Now only five commands will run at a time. As one command finishes, the next one in the queue is processed. You don’t have to write any complicated .NET code to take advantage of this feature. Use a workflow. And there’s no rule that says I have to use a workflow with a remote computer. I can create a workflow and run it locally, which is what I’ve done with my original script.

#requires -version 4.0

#create a USB tool drive using a PowerShell Workflow

Workflow Get-MyToolsWF {
<#

.Synopsis
Download tools from the Internet.
.Description
This PowerShell workflow will download a set of troubleshooting and diagnostic tools from the Internet. The path will typically be a USB thumbdrive. If you use the -Sysinternals parameter, then all of the SysInternals utilities will also be downloaded to a subfolder called Sysinternals under the given path.

You can limit the number of concurrent downloads with the ThrottleLimit parameter which has a default value of 5.
.Example
PS C:\> Get-MyToolsWF -path G: -sysinternals

Download all tools from the web and the Sysinternals utilities. Save to drive G:.
.Notes
Last Updated: September 30, 2014
Version     : 1.0

  ****************************************************************
  * DO NOT USE IN A PRODUCTION ENVIRONMENT UNTIL YOU HAVE TESTED *
  * THOROUGHLY IN A LAB ENVIRONMENT. USE AT YOUR OWN RISK.  IF   *
  * YOU DO NOT UNDERSTAND WHAT THIS SCRIPT DOES OR HOW IT WORKS, *
  * DO NOT USE IT OUTSIDE OF A SECURE, TEST SETTING.             *
  ****************************************************************

.Link
Invoke-WebRequest

#>

[cmdletbinding()]
Param(
[Parameter(Position=0,Mandatory=$True,HelpMessage="Enter the download path")]
[ValidateScript({Test-Path $_})]
[string]$Path,
[switch]$Sysinternals,
[int]$ThrottleLimit=5
)

Write-Verbose -Message "$(Get-Date) Starting $($workflowcommandname)"

Function _download {
    [cmdletbinding()]
    param([string]$Uri,[string]$Path)

    $out = Join-path -path $path -child (split-path $uri -Leaf)

    Write-Verbose -Message  "Downloading $uri to $out"

    #hash table of parameters to splat to Invoke-Webrequest
    $paramHash = @{
     UseBasicParsing = $True
     Uri = $uri
     OutFile = $out
     DisableKeepAlive = $True
     ErrorAction = "Stop"
    }

    Try {
       Invoke-WebRequest @paramHash
       Get-Item -Path $out
        }
    Catch {
        Write-Warning -Message "Failed to download $uri. $($_.exception.message)"
    }
    } #end function

Sequence {

<#
csv data of downloads
The product should be a name or description of the tool.
The URI is a direct download link. The link must end in the executable file name (or zip or msi).
The file will be downloaded and saved locally using the last part of the URI.
#>

$csv = @"
product,uri
HouseCallx64,http://go.trendmicro.com/housecall8/HousecallLauncher64.exe
HouseCallx32,http://go.trendmicro.com/housecall8/HousecallLauncher.exe
"RootKit Buster x32",http://files.trendmicro.com/products/rootkitbuster/RootkitBusterV5.0-1180.exe
"Rootkit Buster x64",http://files.trendmicro.com/products/rootkitbuster/RootkitBusterV5.0-1180x64.exe
RUBotted,http://files.trendmicro.com/products/rubotted/RUBottedSetup.exe
"Hijack This",http://go.trendmicro.com/free-tools/hijackthis/HiJackThis.exe
WireSharkx64,http://wiresharkdownloads.riverbed.com/wireshark/win64/Wireshark-win64-1.12.1.exe
WireSharkx32,http://wiresharkdownloads.riverbed.com/wireshark/win32/Wireshark-win32-1.12.1.exe
"WireShark Portable",http://wiresharkdownloads.riverbed.com/wireshark/win32/WiresharkPortable-1.12.1.paf.exe
SpyBot,http://spybotupdates.com/files/spybot-2.4.exe
CCleaner,http://download.piriform.com/ccsetup418.exe
"Malware Bytes",http://data-cdn.mbamupdates.com/v2/mbam/consumer/data/mbam-setup-2.0.2.1012.exe
"Emisoft Emergency Kit",http://download11.emsisoft.com/EmsisoftEmergencyKit.zip
"Avast! Free AV",http://files.avast.com/iavs5x/avast_free_antivirus_setup.exe
"McAfee Stinger x32",http://downloadcenter.mcafee.com/products/mcafee-avert/Stinger/stinger32.exe
"McAfee Stinger x64",http://downloadcenter.mcafee.com/products/mcafee-avert/Stinger/stinger64.exe
"Microsoft Fixit Portable",http://download.microsoft.com/download/E/2/3/E237A32D-E0A9-4863-B864-9E820C1C6F9A/MicrosoftFixit-portable.exe
"Cain and Abel",http://www.oxid.it/downloads/ca_setup.exe
"@

#convert CSV data into objects
#save converted objects to a variable seen in the entire workflow
$workflow:download  = $csv | ConvertFrom-Csv

} #end sequence


Sequence {

foreach -parallel -throttle $ThrottleLimit ($item in $download) {

    Write-Verbose -message "Downloading $($item.product)"
    _download -Uri $item.uri -Path $path

} #foreach item


} #end sequence


Sequence {
#region SysInternals

if ($Sysinternals) {
    #test if subfolder exists and create it if missing
    $subfolder = Join-Path -Path $path -ChildPath Sysinternals

    if (-Not (Test-Path -Path $subfolder)) {
        New-item -ItemType Directory -Path $subfolder
    }

    #get the page
    $sysint = Invoke-WebRequest "http://live.sysinternals.com/Tools" -DisableKeepAlive -UseBasicParsing

    #get the links
    $links = $sysint.links | Select -Skip 1

    foreach -parallel -throttle $ThrottleLimit ($item in $links) {
     #download files to subfolder
     $uri = "http://live.sysinternals.com$($item.href)"
     Write-Verbose -message "Downloading $uri"

     _download -Uri $uri -Path $subfolder

    } #foreach
} #if SysInternals

} #end sequence

Write-verbose -message "$(Get-Date) Finished $($workflowcommandname)"

} #end workflow

There are rules for a workflow, so I had to make a few adjustments. Workflows aren’t designed to run from pipelined input, so the CSV data is embedded in the script. Otherwise, you should recognize most of the code. The key difference is that downloads now happen in parallel and are throttled.

foreach -parallel -throttle $ThrottleLimit ($item in $download) {
  Write-Verbose -message "Downloading $($item.product)"
  _download -Uri $item.uri -Path $path
} #foreach item

I do the same thing for the SysInternals links.

foreach -parallel -throttle $ThrottleLimit ($item in $links) {
 #download files to subfolder
 $uri = "http://live.sysinternals.com$($item.href)"
 Write-Verbose -message "Downloading $uri"
 _download -Uri $uri -Path $subfolder
} #foreach


Once the workflow is loaded into my session, which you can do by dot-sourcing the script, I can run it the same way I would any command.

PS C:\> Get-MyToolsWF -path G: -sysinternals

Use the -Verbose parameter if you want to see more detail. Using the workflow and parallel processing, I can download everything in about five minutes! Personally, I don’t find much use for workflows, but I love this parallel feature and hope that someday we’ll see it as part of the main PowerShell language.


12 Nov 20:42

Windows 10 GodMode

by Michael Pietroforte


Michael Pietroforte is the founder and editor of 4sysops. He is a Microsoft Most Valuable Professional (MVP) with more than 30 years of experience in system administration.

The Windows 10 GodMode lists links to all administration tools in one folder. In addition, you can use Shell Commands and Shell Locations to create more shortcuts to system tools.

Background

The term “god mode” was originally used as a cheat code in video games, and sometimes also on UNIX systems for a user with no restrictions on the system. On Windows, it is just a shortcut to a folder that lists links to the vast majority of Windows configuration tools. It does not give you privileges that go beyond those of the administrator account.

… read more of Windows 10 GodMode



08 Nov 22:12

Build a Troubleshooting Toolkit using PowerShell

by Jeff Hicks

If you are an IT pro, then you are most likely the IT pro that’s on call for your family, friends and neighbors. You get a call that a neighbor’s computer is running slow or experiencing odd behavior. Virus? Malware? Rootkit? Application issues? If you are also like me, then you tend to rely on a collection of free and incredibly useful tools like Trend Micro’s HouseCall, Hijack This or CCleaner. Perhaps you might even need a copy of the latest tools from the Sysinternals site. In the past I’ve grabbed a spare USB key, plugged it in and started downloading files. But this is a time consuming and boring process, which makes it a prime candidate for automation. And in my case that means PowerShell.


Using Invoke-WebRequest

PowerShell 3.0 brought us a new command, Invoke-WebRequest. This cmdlet eliminated the need to use the .NET Framework directly in scripts. We no longer needed to figure out how to use the WebClient class. Cmdlets are almost always easier to use. If you look at the help for Invoke-WebRequest, you’ll see how easy it is. All you really need to specify is the URI of the web resource. So for my task, all I need is a direct download link to the tool I want to grab.

Invoke-WebRequest -Uri http://go.trendmicro.com/housecall8/HousecallLauncher64.exe

However, in this situation, I don’t want to write the result to the PowerShell pipeline; I want to save it to a file. Invoke-WebRequest has a parameter for that.

Invoke-WebRequest -Uri http://go.trendmicro.com/housecall8/HousecallLauncher64.exe -OutFile d:\HouseCallx64.exe -DisableKeepAlive -UseBasicParsing

I am using a few other parameters since I’m not doing anything else with the connection once I’ve downloaded the file. This should also make this command safer to run in the PowerShell ISE on 3.0 systems. In v3 there was a nasty memory leak when using Invoke-Webrequest in the PowerShell ISE. That has been fixed in v4. So within a few seconds I have the setup file downloaded to drive D:. That is the central part of my download script.

#requires -version 3.0

#create a USB tool drive

<#

.Synopsis
Download tools from the Internet.
.Description
This command will download a set of troubleshooting and diagnostic tools from the Internet. The path will typically be a USB thumbdrive. If you use the -Sysinternals parameter, then all of the SysInternals utilities will also be downloaded to a subfolder called Sysinternals under the given path.
.Example
PS C:\> c:\scripts\get-mytools.ps1 -path G: -sysinternals

Download all tools from the web and the Sysinternals utilities. Save to drive G:.
.Notes
Last Updated: September 30, 2014
Version     : 1.0

  ****************************************************************
  * DO NOT USE IN A PRODUCTION ENVIRONMENT UNTIL YOU HAVE TESTED *
  * THOROUGHLY IN A LAB ENVIRONMENT. USE AT YOUR OWN RISK.  IF   *
  * YOU DO NOT UNDERSTAND WHAT THIS SCRIPT DOES OR HOW IT WORKS, *
  * DO NOT USE IT OUTSIDE OF A SECURE, TEST SETTING.             *
  ****************************************************************

.Link
Invoke-WebRequest

#>

[cmdletbinding(SupportsShouldProcess=$True)]
Param(
[Parameter(Position=0,Mandatory=$True,HelpMessage="Enter the download path")]
[ValidateScript({Test-Path $_})]
[string]$Path,
[switch]$Sysinternals
)

Write-verbose "Starting $($myinvocation.MyCommand)"
#hashtable of parameters to splat to Write-Progress
$progParam = @{
    Activity = "$($myinvocation.MyCommand)"
    Status = $Null
    CurrentOperation = $Null
    PercentComplete = 0
}

#region data

<#
csv data of downloads
The product should be a name or description of the tool.
The URI is a direct download link. The link must end in the executable file name (or zip or msi).
The file will be downloaded and saved locally using the last part of the URI.
#>

$csv = @"
product,uri
HouseCallx64,http://go.trendmicro.com/housecall8/HousecallLauncher64.exe
HouseCallx32,http://go.trendmicro.com/housecall8/HousecallLauncher.exe
"RootKit Buster x32",http://files.trendmicro.com/products/rootkitbuster/RootkitBusterV5.0-1180.exe
"Rootkit Buster x64",http://files.trendmicro.com/products/rootkitbuster/RootkitBusterV5.0-1180x64.exe
RUBotted,http://files.trendmicro.com/products/rubotted/RUBottedSetup.exe
"Hijack This",http://go.trendmicro.com/free-tools/hijackthis/HiJackThis.exe
WireSharkx64,http://wiresharkdownloads.riverbed.com/wireshark/win64/Wireshark-win64-1.12.1.exe
WireSharkx32,http://wiresharkdownloads.riverbed.com/wireshark/win32/Wireshark-win32-1.12.1.exe
"WireShark Portable",http://wiresharkdownloads.riverbed.com/wireshark/win32/WiresharkPortable-1.12.1.paf.exe
SpyBot,http://spybotupdates.com/files/spybot-2.4.exe
CCleaner,http://download.piriform.com/ccsetup418.exe
"Malware Bytes",http://data-cdn.mbamupdates.com/v2/mbam/consumer/data/mbam-setup-2.0.2.1012.exe
"Emisoft Emergency Kit",http://download11.emsisoft.com/EmsisoftEmergencyKit.zip
"Avast! Free AV",http://files.avast.com/iavs5x/avast_free_antivirus_setup.exe
"McAfee Stinger x32",http://downloadcenter.mcafee.com/products/mcafee-avert/Stinger/stinger32.exe
"McAfee Stinger x64",http://downloadcenter.mcafee.com/products/mcafee-avert/Stinger/stinger64.exe
"Microsoft Fixit Portable",http://download.microsoft.com/download/E/2/3/E237A32D-E0A9-4863-B864-9E820C1C6F9A/MicrosoftFixit-portable.exe
"Cain and Abel",http://www.oxid.it/downloads/ca_setup.exe
"@

#convert CSV data into objects
$download  = $csv | ConvertFrom-Csv

#endregion

#region private function to download files

Function _download {
[cmdletbinding(SupportsShouldProcess=$True)]
Param(
[string]$Uri,
[string]$Path
)

$out = Join-path -path $path -child (split-path $uri -Leaf)

Write-Verbose "Downloading $uri to $out"

#hash table of parameters to splat to Invoke-Webrequest
$paramHash = @{
 UseBasicParsing = $True
 Uri = $uri
 OutFile = $out
 DisableKeepAlive = $True
 ErrorAction = "Stop"
}

if ($PSCmdlet.ShouldProcess($uri)) {
    Try {
        Invoke-WebRequest @paramHash
        get-item -Path $out
        }
    Catch {
        Write-Warning "Failed to download $uri. $($_.exception.message)"
    }

} #should process

} #end download function

#endregion

#region process CSV data

$i=0
foreach ($item in $download) {
    $i++
    $percent = ($i/$download.count) * 100
    Write-Verbose "Downloading $($item.product)"

    $progParam.status = $item.Product
    $progParam.currentOperation = $item.uri
    $progParam.PercentComplete = $percent

    Write-Progress @progParam

    _download -Uri $item.uri -Path $path

} #foreach item

#endregion

#region SysInternals

if ($Sysinternals) {
    #test if subfolder exists and create it if missing
    $sub = Join-Path -Path $path -ChildPath Sysinternals

    if (-Not (Test-Path -Path $sub)) {
        mkdir $sub | Out-Null
    }

    #get the page
    $sysint = invoke-webrequest "http://live.sysinternals.com/Tools" -DisableKeepAlive -UseBasicParsing

    #get the links
    $links = $sysint.links | Select -Skip 1

    #reset counter
    $i=0
    foreach ($item in $links) {
     #download files to subfolder
     $uri = "http://live.sysinternals.com$($item.href)"
     $i++
     $percent = ($i/$links.count) * 100
     Write-Verbose "Downloading $uri"

     $progParam.status ="SysInternals"
     $progParam.currentOperation = $item.innerText
     $progParam.PercentComplete = $percent
     Write-Progress @progParam

     _download -Uri $uri -Path $sub

    } #foreach
} #if SysInternals

#endregion

Write-verbose "Finished $($myinvocation.MyCommand)"


Within the script, there’s a string of CSV data. The data contains a description and direct link for all the tools I want to download. You can add or delete these as you see fit. Just make sure the download link ends in a file name. The download function will parse out the last part of the URI and use it to create the local file name.

$out = Join-path -path $path -child (split-path $uri -Leaf)

All you need to do is specify the path, which will usually be a USB thumb drive.

The script has an optional parameter for downloading utilities from the Live.Sysinternals.com website. If you opt for this, then the script will create a subfolder for the Sysinternals tools. That’s the way I like it. To download the tools, I first use Invoke-WebRequest to get the listing page.

$sysint = invoke-webrequest "http://live.sysinternals.com/Tools" -DisableKeepAlive -UseBasicParsing

Within this object is a property called Links, which will have links to each tool.

$links = $sysint.links | Select -Skip 1

The first link is to the parent directory, which I don’t want, which is why I’m skipping one. Then for each link, I can build the URI from the HREF property.

$uri = "http://live.sysinternals.com$($item.href)"

The only other thing I’ve done that you might not recognize is that I’ve created a function with a non-standard name. I always try to avoid repeating commands or blocks of code, so I created the _download function with the intent that it will never be exposed outside of the script. And this is a script, which means that to run it you need to specify the full path.

PS C:\> c:\scripts\get-mytools.ps1 -path G: -sysinternals

As I mentioned, I included the CSV data within the script which makes it very portable. But you might want to keep the download data separate from the script. In that case you’ll need a CSV file like this:

product,uri
HouseCallx64,http://go.trendmicro.com/housecall8/HousecallLauncher64.exe
HouseCallx32,http://go.trendmicro.com/housecall8/HousecallLauncher.exe
"RootKit Buster x32",http://files.trendmicro.com/products/rootkitbuster/RootkitBusterV5.0-1180.exe
"Rootkit Buster x64",http://files.trendmicro.com/products/rootkitbuster/RootkitBusterV5.0-1180x64.exe
RUBotted,http://files.trendmicro.com/products/rubotted/RUBottedSetup.exe
"Hijack This",http://go.trendmicro.com/free-tools/hijackthis/HiJackThis.exe
WireSharkx64,http://wiresharkdownloads.riverbed.com/wireshark/win64/Wireshark-win64-1.12.1.exe
WireSharkx32,http://wiresharkdownloads.riverbed.com/wireshark/win32/Wireshark-win32-1.12.1.exe
"WireShark Portable",http://wiresharkdownloads.riverbed.com/wireshark/win32/WiresharkPortable-1.12.1.paf.exe
SpyBot,http://spybotupdates.com/files/spybot-2.4.exe
CCleaner,http://download.piriform.com/ccsetup418.exe
"Malware Bytes",http://data-cdn.mbamupdates.com/v2/mbam/consumer/data/mbam-setup-2.0.2.1012.exe
"Emisoft Emergency Kit",http://download11.emsisoft.com/EmsisoftEmergencyKit.zip
"Avast! Free AV",http://files.avast.com/iavs5x/avast_free_antivirus_setup.exe
"McAfee Stinger x32",http://downloadcenter.mcafee.com/products/mcafee-avert/Stinger/stinger32.exe
"McAfee Stinger x64",http://downloadcenter.mcafee.com/products/mcafee-avert/Stinger/stinger64.exe
"Microsoft Fixit Portable",http://download.microsoft.com/download/E/2/3/E237A32D-E0A9-4863-B864-9E820C1C6F9A/MicrosoftFixit-portable.exe
"Cain and Abel",http://www.oxid.it/downloads/ca_setup.exe

And this version of the script.

#requires -version 3.0

#create a USB tool drive

Function Get-MyTool2 {
<#

.Synopsis
Download tools from the Internet.
.Description
This command will download a set of troubleshooting and diagnostic tools from the Internet. The path will typically be a USB thumbdrive. If you use the -Sysinternals parameter, then all of the SysInternals utilities will also be downloaded to a subfolder called Sysinternals under the given path.
.Example
PS C:\> Import-Csv c:\scripts\tools.csv | Get-MyTool2 -path G: -sysinternals

Import a CSV of tool data and pipe to this command. This will download all tools from the web and the Sysinternals utilities. Save to drive G:.
.Notes
Last Updated: September 30, 2014
Version     : 1.0

  ****************************************************************
  * DO NOT USE IN A PRODUCTION ENVIRONMENT UNTIL YOU HAVE TESTED *
  * THOROUGHLY IN A LAB ENVIRONMENT. USE AT YOUR OWN RISK.  IF   *
  * YOU DO NOT UNDERSTAND WHAT THIS SCRIPT DOES OR HOW IT WORKS, *
  * DO NOT USE IT OUTSIDE OF A SECURE, TEST SETTING.             *
  ****************************************************************

.Link
Invoke-WebRequest

#>

[cmdletbinding(SupportsShouldProcess=$True)]
Param(
[Parameter(Position=0,Mandatory=$True,HelpMessage="Enter the download path")]
[ValidateScript({Test-Path $_})]
[string]$Path,
[Parameter(Mandatory=$True,HelpMessage="Enter the tool's direct download URI",
ValueFromPipelineByPropertyName=$True)]
[ValidateNotNullorEmpty()]
[string]$URI,
[Parameter(Mandatory=$True,HelpMessage="Enter the name or tool description",
ValueFromPipelineByPropertyName=$True)]
[ValidateNotNullorEmpty()]
[string]$Product,
[switch]$Sysinternals
)

Begin {

    Write-Verbose "Starting $($myinvocation.MyCommand)"
    #hashtable of parameters to splat to Write-Progress
    $progParam = @{
        Activity = "$($myinvocation.MyCommand)"
        Status = $Null
        CurrentOperation = $Null
        PercentComplete = 0
    }

Function _download {
[cmdletbinding(SupportsShouldProcess=$True)]
Param(
[string]$Uri,
[string]$Path
)

$out = Join-path -path $path -child (split-path $uri -Leaf)

Write-Verbose "Downloading $uri to $out"

#hash table of parameters to splat to Invoke-Webrequest
$paramHash = @{
 UseBasicParsing = $True
 Uri = $uri
 OutFile = $out
 DisableKeepAlive = $True
 ErrorAction = "Stop"
}

if ($PSCmdlet.ShouldProcess($uri)) {
    Try {
        Invoke-WebRequest @paramHash
        get-item -Path $out
        }
    Catch {
        Write-Warning "Failed to download $uri. $($_.exception.message)"
    }

} #should process

} #end download function

} #begin

Process {

    Write-Verbose "Downloading $product"

    $progParam.status = $Product
    $progParam.currentOperation = $uri

    Write-Progress @progParam

    _download -Uri $uri -Path $path


} #process

End {

if ($Sysinternals) {
    #test if subfolder exists and create it if missing
    $sub = Join-Path -Path $path -ChildPath Sysinternals

    if (-Not (Test-Path -Path $sub)) {
        mkdir $sub | Out-Null
    }

    #get the page
    $sysint = Invoke-WebRequest "http://live.sysinternals.com/Tools" -DisableKeepAlive -UseBasicParsing

    #get the links
    $links = $sysint.links | Select -Skip 1

    #reset counter
    $i=0
    foreach ($item in $links) {
     #download files to subfolder
     $uri = "http://live.sysinternals.com$($item.href)"
     $i++
     $percent = ($i/$links.count) * 100
     Write-Verbose "Downloading $uri"

     $progParam.status ="SysInternals"
     $progParam.currentOperation = $item.innerText
     $progParam.PercentComplete = $percent
     Write-Progress @progParam

     _download -Uri $uri -Path $sub

    } #foreach
} #if SysInternals

Write-verbose "Finished $($myinvocation.MyCommand)"

} #end

} #end function

This version has additional parameters that accept pipeline binding by property name, which means you can now run the command like this:

PS C:\> Import-Csv c:\scripts\tools.csv | Get-MyTool2 -path G: -sysinternals

You will need to dot-source this second script to load the function into your session. Otherwise, it works essentially the same. There is one potential drawback to these scripts: the downloads are all sequential, which means it can take 10 minutes or more to download everything. To build a toolkit even faster, take a look at this alternate approach.
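If you want to experiment before reading that article, here is a minimal sketch of one way to parallelize the downloads with background jobs. It assumes a $tools collection with Product and URI properties (for example, from the CSV shown earlier) and a $Path target folder; this is only an illustration, not the approach from the alternate article.

#import the tool list and kick off one background job per download
$tools = Import-Csv c:\scripts\tools.csv
$Path  = 'G:\'

$jobs = foreach ($tool in $tools) {
    Start-Job -ScriptBlock {
        param($uri, $path)
        #derive the local file name from the last segment of the URI
        $out = Join-Path -Path $path -ChildPath (Split-Path -Path $uri -Leaf)
        Invoke-WebRequest -Uri $uri -OutFile $out -UseBasicParsing -DisableKeepAlive
    } -ArgumentList $tool.URI, $Path
}

#wait for all downloads to finish, then clean up the jobs
$jobs | Wait-Job | Receive-Job
$jobs | Remove-Job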

By the way, if you have any favorite troubleshooting or diagnostic tools I hope you’ll let me know. If you can include a direct download link that would be even better.

The post Build a Troubleshooting Toolkit using PowerShell appeared first on Petri.

06 Nov 20:30

Another Night in Bora Bora

by Trey Ratcliff

EXIF Info

Remember, there are two ways to see the EXIF info for each of my shots. You can hover the mouse over and see them, or if you click through to SmugMug, you can click on the little “i” and see this information! BTW, I don’t even know what some of that means… like “Brightness -24026/256”??? Maybe one of you smarties can tell me what that means!

Camera SONY ILCE-7R (Sony A7r)
ISO 800
Exposure Time 25s (25/1)
Name The Bungalows at Night.jpg
Size 7837 x 5884
Date Taken 2014-06-29 19:55:35
Date Modified 2014-07-04 08:47:18

File Size 23.88 MB
Flash flash did not fire, compulsory flash mode
Metering pattern
Exposure Program manual
Exposure Bias 0 EV
Exposure Mode manual
Light Source unknown
White Balance auto
Digital Zoom 1.0x
Contrast 0
Saturation 0
Sharpness 0
Color Space sRGB
Brightness -24026/256

Daily Photo – Another Night in Bora Bora

Was every night in Bora Bora this pretty, or just the ones where I took photos? It was EVERY NIGHT! It was so pretty every night, how could you NOT go out and take photos? With that mountain and the amazing architecture over the water… it was so awesome and a real treat to be here.

Another Night in Bora Bora

Photo Information


  • Date Taken
  • Camera: ILCE-7R
  • Camera Make: Sony
  • Exposure Time: 25
  • Aperture
  • ISO: 800
  • Focal Length
  • Flash: Off, Did not fire
  • Exposure Program: Manual
  • Exposure Bias

05 Nov 22:17

Creating Colorful HTML Reports

by ps1

All PowerShell versions

To turn results into colorful custom HTML reports, simply define three script blocks: one that writes the start of the HTML document, one that writes the end, and one that is processed for each object you want to list in the report.

Then, hand over these script blocks to ForEach-Object. It accepts a begin, a process, and an end script block.

Here is a sample script that illustrates this and creates a colorful service state report:

$path = "$env:temp\report.hta"

$beginning = {
 @'
    <html>
    <head>
    <title>Report</title>
    <STYLE type="text/css">
        h1 {font-family:SegoeUI, sans-serif; font-size:20} 
        th {font-family:SegoeUI, sans-serif; font-size:15} 
        td {font-family:Consolas, sans-serif; font-size:12} 

    </STYLE>

    </head>
    <body>
    <img src="http://www.yourcompany.com/yourlogo.gif" />
    <h1>System Report</h1>
    <table>
    <tr><th>Status</th><th>Name</th></tr>
'@
}

$process = {
    $status = $_.Status
    $name = $_.DisplayName

    if ($status -eq 'Running')
    {
        '<tr>'
        '<td bgcolor="#00FF00">{0}</td>' -f $status
        '<td bgcolor="#00FF00">{0}</td>' -f $name
        '</tr>'
    }
    else
    {
        '<tr>'
        '<td bgcolor="#FF0000">{0}</td>' -f $status
        '<td bgcolor="#FF0000">{0}</td>' -f $name
        '</tr>'
    }
}


$end = { 
@'
    </table>
    </body>
    </html>
'@


}


Get-Service | 
  ForEach-Object -Begin $beginning -Process $process -End $end |
  Out-File -FilePath $path -Encoding utf8

Invoke-Item -Path $path

Twitter This Tip! ReTweet this Tip!

05 Nov 21:24

Download VMware Remote Console 7.0

by nospam@example.com (Eric Sloof)
VMware Remote Console provides console access and client device connection to VMs on a remote host. You will need to download this installer before you can launch the external VMRC application directly from a vSphere web client.

Download VMware Remote Console 7.0
30 Sep 21:39

Shellshock and Arch Linux

by Allan

I’m guessing most people have heard about the security issue that was discovered in bash earlier in the week, which has been nicknamed Shellshock. Most of the details are covered elsewhere, so I thought I would post a little about the handling of the issue in Arch.

I am the Arch Linux contact on the restricted oss-security mailing list. On Monday (at least in my timezone…), there was a message saying that a significant security issue in bash would be announced on Wednesday. I let the Arch bash maintainer know, and he got some details.

This bug was CVE-2014-6271. You can test if you are vulnerable by running

x="() { :; }; echo x" bash -c :

If your terminal prints "x", then you are vulnerable. This is actually simpler to understand than it appears… First we define a function x() which just runs ":", which does nothing. After the function is a semicolon, followed by a simple echo statement – this is a bit strange, but there is nothing stopping us from doing that. Then this whole function/echo bit is exported as an environment variable to the bash shell call. When bash loads, it notices a function in the environment and evaluates it. But we have "hidden" the echo statement in that environment variable, and it gets evaluated too… Oops!

The announcement of CVE-2014-6271 was made at 2014-09-24 14:00 UTC. Two minutes and five seconds later, the fix was committed to the Arch Linux [testing] repository, where it was tested for a solid 25 minutes before releasing into our main repositories.

About seven hours later, it was noticed that the fix was incomplete. The simplified version of this breakage is

X='() { function a a>\' bash -c echo

This creates a file named “echo” in the directory where it was run. To track the incomplete fix, CVE-2014-7169 was assigned. A patch was posted by the upstream bash developer to fix this issue on the Wednesday, but not released on the bash ftp site for over 24 hours.

With a second issue discovered so quickly, it is important not to take an overly reactive approach to updating, as you run the risk of introducing even worse issues (despite repeated bug reports and panic in the forums and IRC channels). While waiting for the dust to settle, another patch was posted by Florian Weimer (from Red Hat). This is not a direct fix for any vulnerability (however, see below), but rather a hardening patch that attempts to limit potential issues with importing functions from the environment. During this time there were also patches posted that disabled importing functions from the environment altogether, but that is probably an overreaction.

About 36 hours after the first bug fix, packages were released for Arch Linux that fixed the second CVE and included the hardening patch (which upstream appears to be adopting with minor changes). There were also two other more minor issues found during all of this that were fixed as well – CVE-2014-7186 and CVE-2014-7187.

And that is the end of the story… Even the mystery CVE-2014-6277 is apparently covered by the unofficial hardening patch that has been applied. You can test your bash install using the bashcheck script. On Arch Linux, make sure you have bash>=4.3.026-1 installed.

And because this has been a fun week, upgrade NSS too!

30 Sep 20:02

PowerTip: Display Hidden Properties from Object

by The Scripting Guys

Summary: Learn how to use Windows PowerShell to display hidden properties from an object.

Hey, Scripting Guy! Question How can I see if there are any hidden properties on an object I have returned from Windows PowerShell?

Hey, Scripting Guy! Answer Use the -Force parameter with either Format-List or Format-Table, for example:

Get-ChildItem C:\ | Format-List * -Force

  

22 Sep 14:17

Complete Sci-Fi Spaceship Size Comparison Chart

by Paul Dixon
Science Fiction Spaceships Size Comparison Chart

Here at Geek Beat, our love of all things sci-fi is no secret. Who else do you know with a life-size Han Solo in Carbonite, or a transporter room? So when we heard about this truly epic sci-fi spaceship size comparison chart, we just had to share it!

Put together by German artist Dirk Löchel, the huge 4268 x 5690 pixel graphic features ships from popular science fiction movies, TV shows and video games from the past 40+ years, including: Star Trek, Star Wars, Battlestar Galactica, Stargate, Doctor Who, Babylon 5, EVE online, Mass Effect, Halo and more. The first version was actually created a year ago, but having updated it with a ton of extra information, Löchel says the chart is now complete.

For the sake of image quality and organization, only spacecraft between 100 meters and 24,000 meters are included, which is why the Death Star and a number of other large ships aren't present. And sorry, Whovians: as Löchel points out on his deviantART page, the TARDIS is both too big and too small to be included.

To help provide a sense of real-world perspective, the diagram also features the International Space Station, which looks tiny in comparison to almost everything else.

Pretty impressive, isn’t it?

[Via: Nerdist]

The post Complete Sci-Fi Spaceship Size Comparison Chart appeared first on Geek Beat.

19 Sep 13:05

Azure for Longer Term Backup

by Thomas Lee

I’ve been teaching the AOTG (Ahead of the Game) partner training around the UK, and shortly in Eire. It’s been very interesting talking to Microsoft’s SMB partners and looking at how they sell and utilise Azure. One of the Azure products this training advocates is Azure Backup. Last week, when I was teaching this in Manchester, one delegate pointed out that the big downside to Azure Backup was that there was not much of a retention period, and as such it was not helpful to the delegate’s customers.

Fast forward a few days, and the wish has come true. I had a conference call this morning with the Azure Backup team, who told me that this request had been heard loud and clear and is now in place. They pointed me to the blog post at: http://azure.microsoft.com/blog/2014/09/11/announcing-long-term-retention-for-azure-backup/

Sure enough, you can get all the backup you need (well all reasonable backups!). And the maximum retention period is 9 years, as the blog post explains.

One word: Awesome!

del.icio.us Tags: Azure,Azure backup,cloud
19 Sep 12:59

PowerCLI in the vSphere Web Client–Announcing PowerActions

by Alan Renouf


You don’t know how excited I am to write this! Around a year ago I presented something we were working on internally as a tech preview for my VMworld session. The response was phenomenal; if you were there, you would remember people standing up, clapping, and asking when this awesomeness would be available. It's taken a while, but it's here and it's worth the wait. So what is this that I am so excited about?

 

PowerActions is a new fling from VMware, which can be downloaded here. It adds the automation power of PowerCLI to the web client, so you can use your scripts and automation right inside the client. Have you ever wanted to right-click an object in the web client and run a custom automation action, maybe return the results and then work with them further, all from the convenience of the web client? Now you can!

This works in two ways:

Console

PowerShell console in the Web Client

Firstly, you can access a PowerCLI console straight from the web interface, even in Safari. This fling allows a dedicated host to be used as a PowerShell host, and this machine is responsible for running the PowerCLI actions. Once it's set up, you access the console from within the web client and all commands run remotely on the PowerShell host. It even uses your currently logged-on credentials to launch the scripts, meaning you don't have to connect the PowerCLI session.

 

You can use tab completion on your cmdlets and even use other PowerShell snapins and modules to control any PowerShell enabled infrastructure to extend your automation needs within the vSphere Web Client.

 

Right-click your objects

Secondly, you can now right-click an object in the Web Client and create a dedicated script that works against that object, extending your web client and taking the object as an input to use inside your script.

This comes with two options: create a script, and execute a script.

 

My Scripts and Shared Scripts

Not only can you create your own scripts to run against objects in the web client, but advanced admins can create scripts and share them with all users of the web client by storing them in the Shared Scripts section of this fling; read the documentation to find out more about how to do this. This gives you the ability to have not just shared scripts but a golden set of scripts that all users of the web client can use, while you keep your own items in a separate "My Scripts" area, enabling each user to have their own custom actions.

 

Download and read more

Download the fling from the VMware Labs site here, also make sure you grab the document from the same site and also check out the great post on the PowerCLI Blog for more information here.

 

Check out the video for a quick introduction

To help with the details, I shot a quick install and usage video that covers the basics. Make sure you read the PDF that comes with the fling, and be active: if you like this, let us know; if you want more, let us know…. basically, give us feedback!

-Alan

This post was originally written and posted by Alan Renouf here: PowerCLI in the vSphere Web Client–Announcing PowerActions

17 Sep 12:11

Latest Fling from VMware Labs - XenApp2Horizon

by nospam@example.com (Eric Sloof)
The XenApp2Horizon Fling helps you migrate published applications and desktops from XenApp to Horizon View. One XenApp farm is migrated to one or more Horizon View farm(s).



The GUI wizard-based tool helps you:
  • Validate the View agent status on RDS hosts (from View connection server, and XenApp server)
  • Create farms
  • Validate application availability on RDS hosts
  • Migrate application/desktop to one or multiple farms (new or existing)
  • Migrate entitlements to new or existing applications/desktops. Combination of application entitlements are supported
  • Check environment
  • Identify incompatible features and configuration
Download this fling from VMware Labs - XenApp2Horizon
16 Sep 13:50

Remove Lingering Objects that cause AD Replication error 8606 and friends

by Justin Turner [MSFT]

Introducing the Lingering Object Liquidator

Hi all, Justin Turner here; it's been a while since my last update. The goal of this post is to discuss what causes lingering objects and to show you how to download, and then use, the new GUI-based Lingering Object Liquidator (LOL) tool to remove them. This is a beta version of the tool, and it is not yet optimized for use in large Active Directory environments.

This is a long article with lots of background and screen shots, so plug-in or connect to a fast connection when viewing the full entry. The bottom of this post contains a link to my AD replication troubleshooting TechNet lab for those that want to get their hands dirty with the joy that comes with finding and fixing AD replication errors.  I’ve also updated the post with a link to my Lingering Objects hands-on lab from TechEd Europe.

Overview of Lingering Objects

Lingering objects are objects in AD that have been created, replicated, deleted, and then garbage collected on at least the DC that originated the deletion, but still exist as live objects on one or more DCs in the same forest. Lingering object removal has traditionally required lengthy cleanup sessions using tools like LDP or repadmin /removelingeringobjects. The removal story improved significantly with the release of repldiag.exe. We now have another tool for our tool belt: Lingering Object Liquidator. There are related topics such as “lingering links” which will not be covered in this post.

Lingering Objects Drilldown

The dominant causes of lingering objects are:

1. Long-term replication failures
While knowledge of creates and modifies is persisted in Active Directory forever, replication partners must inbound replicate knowledge of deleted objects within a rolling Tombstone Lifetime (TSL) # of days (default 60 or 180 days, depending on which OS version created your AD forest). For this reason, it is important to keep your DCs online and replicating all partitions between all partners within a rolling TSL # of days. Tools like REPADMIN /SHOWREPL * /CSV, REPADMIN /REPLSUM and the AD Replication Status tool should be used to continually identify and resolve replication errors in your AD forest, as sketched below.
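As a rough sketch of what that ongoing check can look like from a PowerShell prompt (the column names below are the ones repadmin /showrepl * /csv emits; treat this as a starting point rather than a finished monitor):

#quick summary of replication health across all DCs
repadmin /replsum

#surface only the failing partners from the per-link detail
repadmin /showrepl * /csv | ConvertFrom-Csv |
    Where-Object { [int]$_.'Number of Failures' -gt 0 } |
    Format-Table 'Source DSA', 'Naming Context', 'Last Failure Status' -AutoSize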

2. Time jumps
A system time jump of more than TSL # of days in the past or future can cause deleted objects to be prematurely garbage collected before all DCs have inbound replicated knowledge of all deletes. The protection against this is to ensure that (see the sketch after this list):

    1. Your forest root PDC is continually configured with a reference time source (including following FSMO transfers)
    2. All other DCs in the forest are configured to use the NT5DS hierarchy
    3. Time rollback and roll-forward protection has been enabled via the maxnegphasecorrection and maxposphasecorrection registry settings or their policy-based equivalents
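A minimal sketch of those safeguards, to be run on the forest root PDC emulator; time.windows.com and the 48-hour cap (expressed in seconds) are placeholders, not recommendations for your environment:

#point the PDC emulator at an external reference time source
w32tm /config /manualpeerlist:"time.windows.com,0x8" /syncfromflags:manual /reliable:yes /update

#cap how far the time service may correct the clock in either direction
$w32time = 'HKLM:\SYSTEM\CurrentControlSet\Services\W32Time\Config'
Set-ItemProperty -Path $w32time -Name MaxNegPhaseCorrection -Value 172800
Set-ItemProperty -Path $w32time -Name MaxPosPhaseCorrection -Value 172800
Restart-Service -Name w32time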

The importance of configuring safeguards can't be stressed enough. Look at this post to see what happens when time gets out of whack.

3. USN Rollbacks

USN rollbacks are caused when the contents of an Active Directory database move back in time via an unsupported restore. Root causes for USN Rollbacks include:

  • Manually copying previous version of the database into place when the DC is offline
  • P2V conversions in multi-domain forests
  • Snapshot restores of physical and especially virtual DCs. For virtual environments, both the virtual host environment AND the underlying guest DCs should be Virtual Machine Generation ID capable (Windows Server 2012 or later guests). Both Microsoft and VMware ship VM-Generation ID aware virtualization hosts.

Events, errors and symptoms that indicate you have lingering objects
Active Directory logs an array of events and replication status codes when lingering objects are detected. It is important to note that while errors appear on the destination DC, it is the source DC being replicated from that contains the lingering object that is blocking replication. A summary of events and replication status codes follows:

  • AD Replication status 8606: "Insufficient attributes were given to create an object. This object may not exist because it may have been deleted." Implication: lingering objects are present on the source DC (the destination DC is operating in Strict Replication Consistency mode).

  • AD Replication status 8614: "The directory service cannot replicate with this server because the time since the last replication with this server has exceeded the tombstone lifetime." Implication: lingering objects likely exist in the environment.

  • AD Replication status 8240: "There is no such object on the server." Implication: a lingering object may exist on the source DC.

  • Directory Service event ID 1988: "Active Directory Domain Services Replication encountered the existence of objects in the following partition that have been deleted from the local domain controllers (DCs) Active Directory Domain Services database." Implication: lingering objects exist on the source DC specified in the event (the destination DC is running with Strict Replication Consistency).

  • Directory Service event ID 1388: "This destination system received an update for an object that should have been present locally but was not." Implication: lingering objects were reanimated on the DC logging the event (the destination DC is running with Loose Replication Consistency).

  • Directory Service event ID 2042: "It has been too long since this server last replicated with the named source server." Implication: a lingering object may exist on the source DC.

A comparison of Tools to remove Lingering Objects

The list below compares the Lingering Object Liquidator with the other currently available tools that can remove lingering objects.

Lingering Object Liquidator (per-object and per-partition removal)

  • Leverages the RemoveLingeringObjects LDAP rootDSE modification and the DRSReplicaVerifyObjects method
  • GUI-based
  • Quickly displays all lingering objects in the forest to which the executing computer is joined
  • Built-in discovery via the DRSReplicaVerifyObjects method
  • Automated method to remove lingering objects from all partitions
  • Removes lingering objects from all DCs (including RODCs) but not lingering links
  • Works against Windows Server 2008 and later DCs (will not work against Windows Server 2003 DCs)

Repldiag /removelingeringobjects (per-partition removal)

  • Leverages the DRSReplicaVerifyObjects method
  • Command line only
  • Automated method to remove lingering objects from all partitions
  • Built-in discovery via DRSReplicaVerifyObjects
  • Displays discovered objects in events on DCs
  • Does not remove lingering links; does not remove lingering objects from RODCs (yet)

LDAP RemoveLingeringObjects rootDSE primitive, most commonly executed using LDP.EXE or an LDIFDE import script (per-object removal)

  • Requires a separate discovery method
  • Removes a single object per execution unless scripted

Repadmin /removelingeringobjects (per-partition removal)

  • Leverages the DRSReplicaVerifyObjects method
  • Command line only
  • Built-in discovery via DRSReplicaVerifyObjects
  • Displays discovered objects in events on DCs
  • Requires many executions if a comprehensive (n * (n-1)) pairwise cleanup is required. Note: repldiag and the Lingering Object Liquidator tool automate this task.

The Repldiag and Lingering Object Liquidator tools are preferred for lingering object removal because of their ease of use and holistic approach to lingering object removal.

Why you should care about lingering object removal

Lingering objects are widely known as the gift that keeps on giving, and it is important to remove them for the following reasons:

  • Lingering objects can result in a long term divergence for objects and attributes residing on different DCs in your Active Directory forest
  • The presence of lingering objects prevents the replication of newer creates, deletes and modifications to destination DCs configured to use strict replication consistency. These un-replicated changes may apply to objects or attributes on users, computers, groups, group membership or ACLs.
  • Objects intentionally deleted by admins or applications continue to exist as live objects on DCs that have yet to inbound replicate knowledge of the deletes.

Once present, lingering objects rarely go away until you implement a comprehensive removal solution. Lingering objects are the unwanted houseguests in AD that you just can't get rid of.

Mother in law jokes… a timeless classic.

We commonly find these little buggers to be the root cause of an array of symptoms ranging from logon failures to Exchange, Lync and AD DS service outages. Some outages are resolved after lengthy troubleshooting, only for the issue to return weeks later.
In the remainder of this post, we give you everything needed to eradicate lingering objects from your environment using the Lingering Object Liquidator.

Repldiag.exe is another tool that will automate lingering object removal. It is good for most environments, but it does not provide an interface to see the objects, clean up RODCs (yet) or remove abandoned objects.

Introducing Lingering Object Liquidator


Lingering Object Liquidator automates the discovery and removal of lingering objects by using the DRSReplicaVerifyObjects method used by repadmin /removelingeringobjects and repldiag combined with the removeLingeringObject rootDSE primitive used by LDP.EXE. Tool features include:

  • Combines both discovery and removal of lingering objects in one interface
  • Is available via the Microsoft Connect site
  • The version of the tool at the Microsoft Connect site is an early beta build and does not have the fit and finish of a finished product
  • Feature improvements beyond what you see in this version are under consideration

How to obtain Lingering Object Liquidator

1. Log on to the Microsoft Connect site (using the Sign in link) with a Microsoft account:

http://connect.microsoft.com

Note: You may have to create a profile on the site if you have never participated in Connect.

2. Open the Non-feedback Product Directory:

https://connect.microsoft.com/directory/non-feedback

3. Join the following program:

AD Health

Product: Azure Active Directory Connection (use the Join link)

4. Click the Downloads link to see a list of downloads or this link to go directly to the Lingering Objects Liquidator download. (Note: the direct link may become invalid as the tool gets updated.)

5. Download all associated files

6. Double click on the downloaded executable to open the tool.

Tool Requirements

1. Install Lingering Object Liquidator on a DC or member computer in the forest you want to remove lingering objects from.

2. .NET 4.5 must be installed on the computer that is executing the tool.

3. Permissions: The user account running the tool must have Domain Admin credentials for each domain in the forest that the executing computer resides in. Members of the Enterprise Admins group have domain admin credentials in all domains within a forest by default. Domain Admin credentials are sufficient in a single domain or single domain forest.

4. The admin workstation must have connectivity over the same ports and protocols required of a domain-joined member computer or domain controller against any DC in the forest. Protocols of interest include DNS, Kerberos, RPC and LDAP, plus the ephemeral port range used by the targeted DC; see TechNet for more detail. Of specific concern: pre-W2K8 DCs communicate over the "low" ephemeral port range between 1024 and 5000, while W2K8 and later DCs use the "high" ephemeral port range between 49152 and 65535. Environments containing both OS version families will need to enable connectivity over both port ranges.

5. You must enable the Remote Event Log Management (RPC) firewall rule on any DC that needs scanning. Otherwise, the tool displays a window stating, "Exception: The RPC server is unavailable"

6. The liquidation of lingering objects in AD Lightweight Directory Services (AD LDS / ADAM) environments is not supported.

7. You cannot use the tool to cleanup lingering objects on DCs running Windows Server 2003.  The tool leverages the event subscriptions feature which wasn’t added until Windows Server 2008.

Lingering Object Discovery

To see all lingering objects in the forest:

1. Launch Lingering Objects.exe.

2. Take a quick walk through the UI:

Naming Context:

Reference DC: the DC you will compare to the target DC. The reference DC hosts a writeable copy of the partition.

Note: ChildDC2 should not be listed here since it is an RODC, and RODCs are not valid reference DCs for lingering object removal.


This version of the tool is still in development and does not represent the finished product. In other words, expect crashes, quirks and everything else normally encountered when working with beta software.

Target DC: the DC that lingering objects are to be removed from

3. In smaller AD environments, you can leave all fields blank to have the entire environment scanned, and then click Detect. The tool does a comparison amongst all DCs for all partitions in a pairwise fashion when all fields are left blank. In a large environment, this comparison will take a great deal of time as the operation targets (n * (n-1)) number of DCs in the forest for all locally held partitions. For shorter, targeted operations, select a naming context, reference DC and target DC. The reference DC must hold a writable copy of the selected naming context.

During the scan, several buttons are disabled. The current count of lingering objects is displayed in the status bar at the bottom of the screen along with the current tool status. During this execution phase, the tool is running in an advisory mode and reading the event log data reported on each target DC.

Note: The Directory Service event log may completely fill up if the environment contains large numbers of lingering objects and the Directory Services event log is using its default maximum log size. The tool leverages the same lingering object discovery method as repadmin and repldiag, logging one event per lingering object found.

When the scan is complete, the status bar updates, buttons are re-enabled and total count of lingering objects is displayed. The log pane at the bottom of the window updates with any errors encountered during the scan.
Error 1396 is logged if the tool incorrectly uses an RODC as a reference DC.
Error 8440 is logged when the targeted reference DC doesn't host a writable copy of the partition.


Lingering Object Liquidator discovery method

  • Leverages DRSReplicaVerifyObjects method in Advisory Mode
  • Runs for all DCs and all Partitions
  • Collects lingering object event ID 1946s and displays objects in main content pane
  • List can be exported to CSV for offline analysis (or modification for import)
  • Supports import and removal of objects from CSV import (leverage for objects not discoverable using DRSReplicaVerifyObjects)
  • Supports removal of objects by DRSReplicaVerifyObjects and LDAP rootDSE removeLingeringobjects modification

The tool leverages the Advisory Mode method exposed by DRSReplicaVerifyObjects that both repadmin /removelingeringobjects /Advisory_Mode and repldiag /removelingeringobjects /advisorymode use. In addition to the normal Advisory Mode related events logged on each DC, it displays each of the lingering objects within the main content pane.

Details of the scan operation are logged in the linger.log.txt file in the same directory as the tool's executable.

The Export button allows you to export a list of all lingering objects listed in the main pane into a CSV file. View the file in Excel, modify if necessary and use the Import button later to view the objects without having to do a new scan. The Import feature is also useful if you discover abandoned objects (not discoverable with DRSReplicaVerifyObjects) that you need to remove. We briefly discuss abandoned objects later in this post.

Removal of individual objects

The tool allows you to remove objects a handful at a time, if desired, using the Remove button:

1. Here I select three objects (hold down the Ctrl key to select multiple objects, or the SHIFT key to select a range of objects) and then select Remove.

The status bar updates with the new count of lingering objects and the status of the removal operation:

Logging for removed objects

The tool dumps a list of attributes for each object before removal, and logs this along with the results of the object removal in the removedLingeringObjects.log.txt log file. This log file is in the same location as the tool's executable.

C:\tools\LingeringObjects\removedLingeringObjects.log.txt

the obj DN: <GUID=0bb376aa1c82a348997e5187ff012f4a>;<SID=010500000000000515000000609701d7b0ce8f6a3e529d669f040000>;CN=Dick Schenk,OU=R&D,DC=root,DC=contoso,DC=com

objectClass:top, person, organizationalPerson, user;
sn:Schenk ;
whenCreated:20121126224220.0Z;
name:Dick Schenk;
objectSid:S-1-5-21-3607205728-1787809456-1721586238-1183;primaryGroupID:513;
sAMAccountType:805306368;
uSNChanged:32958;
objectCategory:<GUID=11ba1167b1b0af429187547c7d089c61>;CN=Person,CN=Schema,CN=Configuration,DC=root,DC=contoso,DC=com;
whenChanged:20121126224322.0Z;
cn:Dick Schenk;
uSNCreated:32958;
l:Boulder;
distinguishedName:<GUID=0bb376aa1c82a348997e5187ff012f4a>;<SID=010500000000000515000000609701d7b0ce8f6a3e529d669f040000>;CN=Dick Schenk,OU=R&D,DC=root,DC=contoso,DC=com;
displayName:Dick Schenk ;
st:Colorado;
dSCorePropagationData:16010101000000.0Z;
userPrincipalName:Dick@root.contoso.com;
givenName:Dick;
instanceType:0;
sAMAccountName:Dick;
userAccountControl:650;
objectGUID:aa76b30b-821c-48a3-997e-5187ff012f4a;
value is :<GUID=70ff33ce-2f41-4bf4-b7ca-7fa71d4ca13e>:<GUID=aa76b30b-821c-48a3-997e-5187ff012f4a>
Lingering Obj CN=Dick Schenk,OU=R&D,DC=root,DC=contoso,DC=com is removed from the directory, mod response result code = Success
----------------------------------------------
RemoveLingeringObject returned Success

Removal of all objects

The Remove All button, removes all lingering objects from all DCs in the environment.

To remove all lingering objects from the environment:

1. Click the Remove All button. The status bar updates with the count of lingering objects removed. (The count may differ from the discovered amount due to a bug in the tool; this is a display issue only and the objects are actually removed.)

2. Close the tool and reopen it so that the main content pane clears.

3. Click the Detect button and verify no lingering objects are found.

Abandoned object removal using the new tool

None of the currently available lingering object removal tools will identify a special sub-class of lingering objects referred to internally as, "Abandoned objects".

An abandoned object is an object created on one DC that never got replicated to other DCs hosting a writable copy of the NC but does get replicated to DCs/GCs hosting a read-only copy of the NC. The originating DC goes offline prior to replicating the originating write to other DCs that contain a writable copy of the partition.

The lingering object liquidator tool does not currently discover abandoned objects automatically so a manual method is required.

1. Identify abandoned objects based on Oabvalidate and replication metadata output.

Abandoned objects can be removed with the LDAP RemoveLingeringObject rootDSE modify procedure, and so Lingering Objects Liquidator is able to remove these objects.

2. Build a CSV file for import into the tool. Once the objects are visible in the tool, simply click the Remove button to get rid of them.

a. To create a Lingering Objects Liquidator tool importable CSV file:

Collect the data in a comma separated value (CSV) file with the following fields (an illustrative row follows the list):

  • FQDN of the RWDC
  • CNAME of the RWDC
  • FQDN of the DC to remove the object from
  • DN of the object
  • Object GUID of the object
  • DN of the object's partition
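For illustration only, a single row built from the sample object in the log excerpt above might look like this; the DC names are hypothetical and the column order is assumed to match the list:

dc1.root.contoso.com,dc1,dc2.root.contoso.com,"CN=Dick Schenk,OU=R&D,DC=root,DC=contoso,DC=com",aa76b30b-821c-48a3-997e-5187ff012f4a,"DC=root,DC=contoso,DC=com"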

3. Once you have the file, open the Lingering Objects tool and select the Import button, browse to the file and choose Open.

4. Select all objects and then choose Remove.

Review replication metadata to verify the objects were removed.

Resources

For those that want even more detail on lingering object troubleshooting, check out the following:

To prevent lingering objects:

  • Actively monitor for AD replication failures using a tool like the AD Replication Status tool.
  • Resolve AD replication errors within tombstone lifetime number of days.
  • Ensure your DCs are operating in Strict Replication Consistency mode
  • Protect against large jumps in system time
  • Use only supported methods or procedures to restore DCs. Do not:
    • Restore backups older than TSL
    • Perform snapshot restores on pre-Windows Server 2012 virtualized DCs on any virtualization platform
    • Perform snapshot restores on a Windows Server 2012 or later virtualized DC on a virtualization host that doesn't support VMGenerationID

If you want hands-on practice troubleshooting AD replication errors, check out my lab on TechNet Virtual labs. Alternatively, come to an instructor-led lab at TechEd Europe 2014. "EM-IL307 Troubleshooting Active Directory Replication Errors"

For hands-on practice troubleshooting AD lingering objects: check out my lab from TechEd Europe 2014. "EM-IL400 Troubleshooting Active Directory Lingering Objects"

12/8/2015 Update: This lab is now available from TechNet Virtual labs here.

Finally, if you would like access to a hands-on lab for in-depth lingering object troubleshooting; let us know in the comments.

Thank you,

Justin Turner and A. Conner

Update 2014/11/20 – Added link to TechEd Lingering objects hands-on lab
Update 2014/12/17 – Added text to indicate the lack of support in LOL for cleanup of Windows Server 2003 DCs
Update 2015/12/08 – Added link to new location of Lingering Object hands-on lab

14 Sep 18:15

Were Your Google Credentials Leaked?

by Erin Styles




Early on Tuesday, Google announced that a potential 5 million usernames and passwords associated with Gmail accounts have been leaked. It is unclear how many of them are current vs. outdated credentials. According to Google’s blog post, “less than 2 percent of the username and password combinations might have worked.”

Visit our email look-up tool to see if your account was part of the leaked data.  

We strongly suggest that you take this opportunity to change your Gmail account password and generate a new, strong password using LastPass. To protect our users, those who have reused their LastPass master password as their Gmail account password have been temporarily deactivated. For your security, note that it is very important to never use your LastPass master password for other logins.

If you’ve experienced trouble with your account, please contact LastPass Support so we may assist you in reactivating your account and creating a new, stronger master password.

Be Secure,
LastPass
09 Sep 19:34

Book: Announcing Windows PowerShell Desired State Configuration Revealed

by Ravikanth C
08 Sep 18:49

Useful Path Manipulation Shortcuts

by ps1

All PowerShell Versions

Here are a bunch of useful (and simple to use) system functions for dealing with file paths:

[System.IO.Path]::GetFileNameWithoutExtension('file.ps1')
[System.IO.Path]::GetExtension('file.ps1')
[System.IO.Path]::ChangeExtension('file.ps1', '.copy.ps1')

[System.IO.Path]::GetFileNameWithoutExtension('c:\test\file.ps1')
[System.IO.Path]::GetExtension('c:\test\file.ps1')
[System.IO.Path]::ChangeExtension('c:\test\file.ps1', '.bak')

All of these methods accept either file names or full paths, and return different aspects of the path, or change things like the extension.
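The same class offers a few more helpers in this vein, for example:

[System.IO.Path]::GetFileName('c:\test\file.ps1')        # file.ps1
[System.IO.Path]::GetDirectoryName('c:\test\file.ps1')   # c:\test
[System.IO.Path]::Combine('c:\test', 'file.ps1')         # c:\test\file.ps1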

Twitter This Tip! ReTweet this Tip!

08 Sep 12:50

IS IT POSSIBLE TO CALCULATE THE PRESSURE IN A VOLCANO SIMPLY FROM A VIDEO OF THE...

by Smarter Every Day
IS IT POSSIBLE TO CALCULATE THE PRESSURE IN A VOLCANO SIMPLY FROM A VIDEO OF THE EXPLOSION?

Wow... an actual volcano explosion shock wave. Insane. Hey! We can do math on this!

The explosion happens at 0:12 seconds in, and the shock wave hits at 0:25 seconds. 13 seconds for the shock wave to hit the boat.


The speed of sound at sea level is 331.5 m/s (741.5 mph)

Velocity = Distance / Time

Therefore

Distance = Velocity x Time

Distance = 331.5 m/s x 13 seconds = 4,309.5 meters

Ok.. so now we know the distance to the volcano... we can assume:



Density of the air at sea level = 1.224 kg/m^3

Speed of sound = 331.5 m/s as above

r = 4,309.5 meters from math....

If we had a calibrated microphone on the camera we could use the following equation from the paper "Generation and propagation of infrasonic airwaves from volcanic explosions" to calculate the pressure in the volcano at the moment of explosion.

The equation:
http://i.imgur.com/JIA36cj.png

My handwritten notes for posterity.
http://i.imgur.com/S6NjBgK.jpg


Volcano Eruption in Papua New Guinea

The eruption of Mount Tavurvur volcano on August 29th, 2014. Captured by Phil McNamara.
08 Sep 12:43

A private Island

by Trey Ratcliff

The Video

You’ll notice this island from above in at least two scenes in the video below. I made this with my quadcopter — see my DJI Phantom Review for more!

Daily Photo – A private Island

This was our favorite little island on the trip. You might remember a kayak photo from last month where I made my daughter paddle the kayak to get to this little island. It was hard work, but the whole time I told her it was character building!

A private Island

Photo Information


  • Date Taken: June 29, 2014 at 4:38pm
  • Camera: ILCE-7R
  • Camera Make: Sony
  • Exposure Time: 1/100
  • Aperture: 18
  • ISO: 125
  • Focal Length: 26.0 mm
  • Flash: Off, Did not fire
  • Exposure Program: Aperture-priority AE
  • Exposure Bias: -1

03 Sep 13:34

System Uptime

by ps1

All PowerShell Versions

Windows starts a high definition counter each time it boots, and this counter returns the number of milliseconds the system has been running:

$millisecondsUptime = [Environment]::TickCount
"I am up for $millisecondsUptime milliseconds!"

Since you will hardly be interested in the milliseconds, use New-Timespan to turn the milliseconds (or any other time interval for that matter) into a meaningful unit:

$millisecondsUptime = [Environment]::TickCount
"I am up for $millisecondsUptime milliseconds!"

$timespan = New-TimeSpan -Seconds ($millisecondsUptime/1000)
$timespan

So now, you can use the timespan object in $timespan to report the uptime in any unit you want:

$millisecondsUptime = [Environment]::TickCount
"I am up for $millisecondsUptime milliseconds!"

$timespan = New-TimeSpan -Seconds ($millisecondsUptime/1000)
$hours = $timespan.TotalHours

"System is up for {0:n0} hours now." -f $hours

One caveat: New-TimeSpan cannot take milliseconds directly, so the script had to divide the milliseconds by 1,000, introducing a small inaccuracy.

To turn milliseconds in a timespan object without truncating anything, try this:

$timespan = [Timespan]::FromMilliseconds($millisecondsUptime)

It won't make a difference in this example, but can be useful elsewhere. For example, you also have a FromTicks() method available that can turn ticks (the smallest unit of time intervals on Windows systems) into intervals.
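A tick is 100 nanoseconds, so ten million ticks make one second:

[Timespan]::FromTicks(10000000)   # 00:00:01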

Twitter This Tip! ReTweet this Tip!

19 Aug 11:47

Mathematica 10 – now available for your Pi!

by Liz Upton

Liz: If you use Raspbian, you’ll have noticed that Mathematica and the Wolfram Language come bundled for free with your Raspberry Pi. (A little boast here: we were only the second computer ever on which Mathematica has been included for free use as standard. The first? Steve Jobs’s NeXT, back in 1988.) 

Earlier in July, Wolfram Research announced a big update to Mathematica, with the introduction of Mathematica 10. Here’s a guest post announcement from Arnoud Buzing at Wolfram about what the new Mathematica will offer those of you who use it on your Raspberry Pi. Over to you, Arnoud!

In July, we released Mathematica 10, a major update to Wolfram’s flagship desktop product. It contains over 700 new functions, and improvements to just about every part of the system.


Today I am happy to announce an update to Mathematica and the Wolfram Language for the Raspberry Pi, which brings many of those features to the Raspberry Pi.

To get this new version of the Wolfram Language, simply run this command in a terminal on your Raspberry Pi:

sudo apt-get update && sudo apt-get install wolfram-engine

This new version will also come pre-installed in the next release of NOOBS, the easy set-up system for the Raspberry Pi.

If you have never used the Wolfram Language on the Raspberry Pi, then you should try our new fast introduction for programmers, which is a quick and easy way to learn to program in this language. This introduction covers everything from using the interactive user interface, basic evaluations and expressions, to more advanced topics such as natural language processing and cloud computation. You’ll also find a great introduction to the Wolfram Language in the Raspberry Pi Learning Resources.

This release of the Wolfram Language also includes integration with the newly released Wolfram Cloud. This technology allows you to do sophisticated computations on a remote server, using all of the knowledge from Wolfram|Alpha and the Wolfram Knowledgebase. It lets you define custom computations and deploy them as an “instant API” in the cloud. The Wolfram Cloud is available with a free starter account, and additional paid accounts enable additional functionality.

Check the Wolfram Community in the next couple of weeks for new examples which show you how to use the Wolfram Language with your Raspberry Pi.

31 Jul 13:13

Creating a Secure Environment using PowerShell Desired State Configuration

by PowerShell Team

Introduction:

Traditionally, IT environments have secured their business critical information against external threats by adding additional layers of security to the org’s network (e.g. firewalls, DMZs, etc.). 
However, many of today’s attacks come from inside the network, so a new “assume breach” approach must be adopted.

In this blog, we show how to create a secure environment in which to run a particular application or service inside an assumed-breached network. This substantially reduces the attack surface of the application or service by configuring a highly customized, application-specific environment, by limiting user access, and by having “Just Enough” administrative control with full auditing.

Below is a sample environment called Safeharbor. Safeharbor is an isolated environment for critical information that limits access to the resources. This is accomplished by:

• Policies that clearly define user access and actions on the resources
• A separate, isolated domain constraining access to the resources
• Limited and relevant access for users
• Auditing access to protected data and changes to user permissions, and setting up alerts on access

We will walk you through an implementation of the Safeharbor environment using PowerShell Desired State Configuration (DSC) and PowerShell Constrained Endpoints.

The key elements of creating a secure environment are covered in the sections below.

Lab Configuration:

This blog is focused on creating a lab environment to explore the creation and operation of secure environments using PowerShell DSC. We first use DSC to create a “Corporate” domain for the lab.
In the real world, you’ll skip this step and use your existing domain. We then assume that this environment has been breached. Of course, you would root out and address the breach and secure the environment to avoid further breaches.
But in an “assume-breach” approach, you recognize that you need to invest and put your most valuable assets, in this case the corporate data stored on file servers, into a secure environment.

The first step for this lab is to set up a “Corporate” domain with a Domain Controller, Domain Administrator, Domain Users, and a user to perform admin tasks in the domain (Person Authorized to perform Administrative tasks – PAPA).

 

Corporate Environment:

  

Below is the configuration to provision the lab’s Corporate domain controller using DSC.


Highlights of the configuration:
• Set up and promote the machine to be a DC using the xADDomain resource (a sketch follows the list)
• Credentials are securely handled using certificates and SecureString in DSC
• DNS zone transfer is configured to allow replication of DNS databases across other DNS servers (this will be explained later when the secure domain is stood up)
• Domain users are added using the xADUser resource
• The config uses a component to synchronize the execution of operations between the DSC managed nodes. The synchronization component is primarily used by the configuration agent on the local machine to capture the state of the remote DSC supported machine and to sequence the execution of its configuration resources locally
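Here is a minimal sketch of what a DC-promotion configuration along these lines might look like, assuming the xActiveDirectory resource module; the domain name, user name and credential handling are illustrative (the post itself secures credentials with certificates):

Configuration CorporateDC
{
    param(
        [Parameter(Mandatory)]
        [pscredential]$DomainAdminCred
    )

    Import-DscResource -ModuleName xActiveDirectory

    Node 'CorporateDC'
    {
        WindowsFeature ADDSInstall
        {
            Name   = 'AD-Domain-Services'
            Ensure = 'Present'
        }

        #promote the node to a domain controller for a new forest
        xADDomain CorporateDomain
        {
            DomainName                    = 'corporate.contoso.com'
            DomainAdministratorCredential = $DomainAdminCred
            SafemodeAdministratorPassword = $DomainAdminCred
            DependsOn                     = '[WindowsFeature]ADDSInstall'
        }

        #add the user who performs admin tasks (PAPA)
        xADUser PAPA
        {
            DomainName                    = 'corporate.contoso.com'
            UserName                      = 'PAPA'
            Password                      = $DomainAdminCred
            DomainAdministratorCredential = $DomainAdminCred
            Ensure                        = 'Present'
            DependsOn                     = '[xADDomain]CorporateDomain'
        }
    }
}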

  

 

The configuration data for Corporate DC contains User configuration such as credentials, in a secure file. 

 

 

A Client machine “CorpClient” is provisioned in the Corporate domain and the user to perform administrative tasks is added to the Administrator group. 

 

 

This lab takes an “assume-breach” attitude, so we are going to assume that the Corporate domain we just created is compromised and that the IT department needs to create a secure environment for the critical data on the file servers. A new Safeharbor domain is quickly stood up, with a domain controller, a management head server constraining access to critical resources, and a DSC Pull Server containing the configuration for the workload-specific nodes such as file servers. The file servers are then provisioned using the boot-to-pull-server mechanism.

By locking down the access to the resources using the isolated Safeharbor domain, we can mitigate threats originating internally from the domain.

   

Safeharbor Environment:

The plan to secure and lock down access to the workload servers is as follows. We will explain each step of the process and go over the associated configuration.

 

 

There are three users across the domains that are of interest:
1) Corporate\PAPA – a user in the Corporate domain authorized to perform admin tasks. This is the only user from the Corporate domain allowed access to the JEA box.
2) Corporate\User – a general domain user of Corporate for whom we grant specific fileshare access (explained later)
3) Safeharbor\MATA – a non-admin domain user in Safeharbor (“management account for trusted action”) used as the RunAs account on the endpoint. This user has no other access in either domain.




Safeharbor Domain Controller and Pull Server:

The first step is to bring up the Safeharbor domain controller.

 

 

  

The same configuration that was used to set up the Corporate DC (see previous section) is used here; however, the manifest data is different. Apart from using a secure way of managing credentials, the key takeaway here is that we create a new domain user, MATA (Management Account for Trusted Action). This is a non-admin user which is restricted to be used only on the workload file servers to perform specific actions. There is also a one-way trust established to the Corporate domain to enable authenticating users from the Corporate domain.

 

 

Next, a DSC Pull Server is provisioned with all local admin accounts disabled, and the Pull Server is joined to the Safeharbor domain using the xComputer resource. This is an HTTPS-based Pull Server containing configuration for the workload File Servers. The workload servers, upon boot, will pull their state from this server for configuration.

 

 

The Pull Server configuration data contains the Certificate information for SSL binding and the path for the config and modules for the workload servers.
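A rough sketch of an HTTPS pull server configuration of this kind, assuming the xPSDesiredStateConfiguration module; the port, paths and certificate thumbprint are illustrative:

Configuration SafeharborPullServer
{
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node 'PullServer'
    {
        WindowsFeature DSCServiceFeature
        {
            Name   = 'DSC-Service'
            Ensure = 'Present'
        }

        #the OData endpoint the workload servers pull from
        xDscWebService PSDSCPullServer
        {
            EndpointName          = 'PSDSCPullServer'
            Port                  = 8080
            CertificateThumbPrint = 'PASTE-SSL-CERT-THUMBPRINT-HERE'
            PhysicalPath          = "$env:SystemDrive\inetpub\wwwroot\PSDSCPullServer"
            ModulePath            = "$env:ProgramFiles\WindowsPowerShell\DscService\Modules"
            ConfigurationPath     = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration"
            State                 = 'Started'
            Ensure                = 'Present'
            DependsOn             = '[WindowsFeature]DSCServiceFeature'
        }
    }
}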

 

   

JEA Management Server:

A JEA (Just Enough Admin) enabled Management Server is set up with a constrained PowerShell endpoint. This endpoint allows access only to the user from the Corporate domain that can perform admin actions (PAPA). This is done to restrict access to the workload file servers. For security reasons, the Administrator role is disabled in the Safeharbor domain.
The Safeharbor domain user MATA is configured as the RunAs user on the constrained endpoint.


Here is the configuration for the Management Server. The xPSEndpoint resource is used to set up the constrained PowerShell endpoint. All local admins are disabled to restrict access to the machine.
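The post drives this through the xPSEndpoint DSC resource; registering such an endpoint by hand might look roughly like the following, where the endpoint name, startup script path and SDDL (with a shortened SID) are illustrative:

#SDDL granting interactive access to Corporate\PAPA only
$sddl = 'O:NSG:BAD:P(A;;GA;;;S-1-5-21-XXXXXXXXXX-XXXXXXXXXX-XXXXXXXXXX-1104)S:P(AU;FA;GA;;;WD)(AU;SA;GXGW;;;WD)'

Register-PSSessionConfiguration -Name SafeharborJEA `
    -StartupScript C:\JEA\Startup.ps1 `
    -SecurityDescriptorSddl $sddl `
    -RunAsCredential (Get-Credential -Credential 'SAFEHARBOR\MATA') `
    -Force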

 

 

The configuration data is as follows. The SDDL config for the endpoint grants access to Corporate\PAPA only. Also, all credentials are handled securely.

$ADUserSid is the SID of the user in the Corporate domain that is designated to perform admin tasks (Corporate\PAPA). A user-to-SID lookup is performed and the SDDL is updated prior to configuring the constrained endpoint.

 

A startup script on the endpoint exposes only a relevant set of functionality to the incoming user. In this case, Corporate\PAPA is allowed access to the proxy equivalent of smbshare cmdlets to Create/Retrieve/Remove shares on the workload file servers.
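A simplified sketch of what such a startup script can do. A real endpoint must keep the handful of commands PowerShell remoting itself needs visible, and the wrapper below is illustrative rather than the post's actual proxy function:

#define the constrained wrapper the connecting user is allowed to call
function New-SafeharborShare {
    param(
        [Parameter(Mandatory)][string]$Name,
        [Parameter(Mandatory)][string]$Path
    )
    #only create shares that policy (Permission.csv) knows about
    $allowed = Import-Csv -Path C:\JEA\Permission.csv | Where-Object { $_.Share -eq $Name }
    if (-not $allowed) { throw "Share '$Name' is not permitted by policy." }
    New-SmbShare -Name $Name -Path $Path -ReadAccess $allowed.User
}

#hide everything except the wrapper and the commands remoting requires
$keep = 'New-SafeharborShare','Get-Command','Get-FormatData','Out-Default',
        'Select-Object','Measure-Object','Exit-PSSession'
Get-Command | Where-Object { $keep -notcontains $_.Name } |
    ForEach-Object { $_.Visibility = 'Private' }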

 

  

Proxy functions for the smbshare cmdlets restrict the functionality:

 

The proxy cmdlets use a Permission.csv file to map users, resources, and access permissions. In this sample, Corporate\User1 will be allowed to access the named shares on the file server. This is configured during the creation of a new smbshare on the fileserver, when Corporate\PAPA connects to the constrained endpoint.
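A hypothetical Permission.csv along those lines; the column names in the actual sample may differ:

User,Share,Access
Corporate\User1,HRData,Read
Corporate\User1,ProjectDocs,Read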

 

 

   

Workload Servers:
In the final step, the workload file servers are added securely. The DSC meta-configuration on these servers is set so that they pull their configuration from the DSC Pull Server. Also, the file servers are locked down by removing built-in firewall rules and allowing only specific traffic.

All local admins are disabled and the Safeharbor domain account MATA (Management Account for Trusted Action) is granted admin rights on the machine to perform creation/removal of smbshares.
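A minimal sketch of the kind of meta-configuration that points a file server at the pull server, using the PowerShell 4.0 pull-mode syntax; the GUID, URL and node name are illustrative:

Configuration FileServerMeta
{
    Node 'FileServer1'
    {
        LocalConfigurationManager
        {
            ConfigurationID           = '11111111-2222-3333-4444-555555555555'
            RefreshMode               = 'Pull'
            DownloadManagerName       = 'WebDownloadManager'
            DownloadManagerCustomData = @{ ServerUrl = 'https://pullserver:8080/PSDSCPullServer.svc' }
            RefreshFrequencyMins      = 30
        }
    }
}

#generate the meta.mof and apply it to the node
FileServerMeta -OutputPath C:\DSC\Meta
Set-DscLocalConfigurationManager -Path C:\DSC\Meta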

 

 

   

Here is a snippet of the configuration data used on the fileserver. Only SMB and PowerShell remoting traffic is allowed; all other ports and rules are locked down. This ensures that the workload servers are completely secured.

 

...

...

 

 

  

Here is the final topology. The new Safeharbor domain protects and secures the corporate data and allows access to users on shares configured as per policy.

  

Final Topology:



  

The Safeharbor demo can be deployed on a Hyper-V capable machine using the Assert-SafeharborScenario.ps1 script.

 

  

Validation:
Once the Safeharbor environment is set up, we can validate the configuration by:
• Creating a new smbshare on the File Server
• Accessing the Share



Create a new share on the File Server:
• Create a new session to the JEA jump box at the constrained PSSession endpoint
        • Validate that only Corporate\PAPA can connect to this endpoint
• Enumerate the commands available to Corporate\PAPA
• Create a new SMB share on the File Server (a rough example follows below)


Note that the smbshare names and permissions are limited by the configuration supplied in Permission.csv (previous section)
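Roughly what that validation can look like from the CorpClient machine; the computer name, endpoint name and wrapper function are the illustrative ones sketched earlier:

#connect to the constrained endpoint as Corporate\PAPA
$session = New-PSSession -ComputerName JEAServer -ConfigurationName SafeharborJEA `
    -Credential (Get-Credential -Credential 'CORPORATE\PAPA')

#see which (few) commands the endpoint exposes
Invoke-Command -Session $session { Get-Command }

#create a share permitted by Permission.csv
Invoke-Command -Session $session { New-SafeharborShare -Name HRData -Path C:\Shares\HRData }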

 

   

Accessing files on the File Server as Corporate\User1:
• Only the Share authorized in the Permission.CSV file is accessible to this user
• User can only “Read” share contents

 

  

Further, any user (other than PAPA) in the Corporate domain cannot create new SMB shares:

 

   

Updates/Enhancements:
The concept of JEA and constraining access to resources using Safeharbor in this sample can be further improved in your environment by:

• Removing the domain from the isolated Safeharbor environment
• Removing the trust between the two domains
• Limiting all access to the Safeharbor environment to go through the Jump Box
• Adding audits and alerts for changes to the environment, resources and user permissions

  

Download the Safeharbor environment sample code and PowerPoint deck from the TechNet gallery

 

 

Raghu Shantha [MSFT]
PowerShell Desired State Configuration Team

 

 

 

16 Jun 13:15

SCCM 2012: Diskspace Report sorted by Freespace Percentage

by opsmgrtipps

System Center admins often get asked for disk space reports.
Depending on the discovery settings, the data can be more current in SCOM or in SCCM, so you need to decide which data source to use.

I have created a report for System Center Configuration Manager 2012, which lists Total Disk Space (MB), Total Free Space (MB), Total Used Space (MB), Total Free Space Percent and Total Used Space Percent.

It sorts by Total Free Space Percent and colour codes the output with this rule:

< 20 %: red
< 40 %: orange
Rest: green

You can select any device collection.

diskspacereport

The report can be downloaded here.