Wednesday, April 15, 2015

Why I am investing in DSC

In order to get a good grasp on something new, like a technology, it is always important to find a use case.

Once you have a use case, I can assure you that the learning process becomes much more interesting, fun – and perhaps easier too.

That is what I did when I went deep into Desired State Configuration. I found a use case.
My use case was to leverage DSC as part of VM Roles in Azure Pack in a way that would be valid for the future too.

Here comes some reasons for my decision. 

Powershell has been around for some time now, and one of the biggest benefits of learning and using the shell is the amount of work you can get done by combining modules, components, technologies and much more through the same API. Considering that everything Microsoft builds – cloud or not – will be accessible and manageable through Powershell in addition to other options, investing in it is a real no-brainer.

With Windows Management Framework 4.0, we also got Powershell Desired State Configuration added to our table.
Powershell Desired State Configuration is Microsoft’s implementation of idempotent configuration management: applying the entire configuration brings the system to the “desired state”, regardless of its current state.

-          But, what does this really mean? Aren’t we able to do everything using native Powershell scripts already?

That is correct. There are no “limits” to using Powershell natively today.
However, with native Powershell scripts you are responsible for building all the error handling and logic into your scripts yourself. And as you probably know, that can be both time consuming and challenging.

Desired State Configuration handles this for you automatically, letting you make and deploy incremental changes to your configuration over time without risking putting the system in a bad state.
What about configuration drift? Depending on how the Local Configuration Manager – the engine responsible for applying the configuration and following its instructions – is configured, the system can heal itself by enforcing the desired state.
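To illustrate, here is a minimal sketch of how the Local Configuration Manager could be set to self-heal. The node name and refresh interval are just examples, not taken from a real environment:

```powershell
# Meta-configuration for the LCM. "ApplyAndAutoCorrect" makes the node
# re-apply the desired state whenever drift is detected.
configuration LCMSettings
{
    node "kndsc006"
    {
        LocalConfigurationManager
        {
            ConfigurationMode              = "ApplyAndAutoCorrect"
            ConfigurationModeFrequencyMins = 30
            RefreshMode                    = "Push"
        }
    }
}

# Compile the meta-MOF and apply it to the node
LCMSettings
Set-DscLocalConfigurationManager -Path .\LCMSettings -Verbose
```

With "ApplyAndMonitor" instead, the LCM would only report drift in the event log rather than correct it.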

Think of Powershell Desired State Configuration as a contract between you and your nodes (manageable objects).

In order to create and deliver this “contract”, Desired State Configuration is based on CIM – and uses WinRM for communication. CIM uses a language called Managed Object Format – often referred to as “MOF”. Powershell Desired State Configuration is a way to create and distribute MOF files that can be applied to systems supporting this standard.
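To give you an idea of what MOF looks like, here is a hand-written fragment roughly resembling what a WindowsFeature resource compiles to. Treat the instance and property names as illustrative, not exact compiler output:

```
instance of MSFT_RoleResource as $MSFT_RoleResource1ref
{
    ResourceID = "[WindowsFeature]DNS";
    Name = "DNS";
    Ensure = "Present";
    IncludeAllSubFeature = True;
    ModuleName = "PSDesiredStateConfiguration";
};
```

You rarely write MOF by hand – the Powershell configuration keyword generates it for you.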

The way it’s applied to the node(s) is either through “Push” or “Pull”.

(The difference between Push and Pull is out of scope right now and deserves a dedicated blog post later on. I promise).

In short, the Pull mechanism requires some infrastructure in order to work, where the node(s) talk to the Pull server – either through SMB, HTTP or HTTPS.

The Push method is pretty straightforward and something you can start using right out of the box. DSC requires that WinRM listeners are configured so that CIM can push the configuration to the remote systems.
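Before pushing, it is worth verifying that WinRM is actually reachable on the target. A quick sketch (the node name is an example):

```powershell
# On the target node: enable the WinRM service and create the default listeners
Set-WSManQuickConfig -Force

# From the authoring machine: verify we can reach the node's WinRM endpoint
Test-WSMan -ComputerName kndsc006
```

If Test-WSMan fails, Start-DscConfiguration will fail too, so this is a useful first troubleshooting step.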

Here’s an example of what a Powershell DSC configuration looks like:

configuration DNS
{
    node kndsc006
    {
        WindowsFeature DNS
        {
            Name = "DNS"
            Ensure = "Present"
            IncludeAllSubFeature = $true
        }
    }
}

# Compile the configuration into a MOF file (creates .\DNS\kndsc006.mof)
DNS

# Push the configuration to the node
Start-DscConfiguration -Path .\DNS -Wait -Force -Verbose

As you can see, the format here is quite easy to read.
We can easily see that we will install (Ensure = "Present") DNS (Name = "DNS") on the target node (kndsc006). 

Actually, it is so easy to read that Powershell newbies like me are able to manage it :)

Hopefully this gave you some more context about the “why”, but we are not done yet.

In Azure today, we are able to leverage DSC as part of the VM extension, meaning we can create – upload – and apply our DSC configuration to Azure IaaS virtual machines. The method of applying the config for these VMs is “Push”.

As you probably know, we don’t have the exact same capabilities on-prem to leverage DSC as part of Azure Pack. However, we are able to simulate much of the same experience by using the combination of DSC, SMA and VM Roles.

Moving forward, we know that consistency across clouds will come close to 1:1 with the introduction of Azure Resource Manager, which will give us a completely new way to interact with our cloud services – regardless of location. Also worth noting: the Azure Resource Manager itself will be idempotent.

What about your existing DSC scripts?
Exactly, that is the main point here. These configurations will be valid using Azure Resource Manager too :)

So in essence, you invest in DSC now and use it both for Azure Pack (VM Roles + SMA) and Azure (VM Extension), and later on you can reuse the investment you’ve made into the era of Azure Resource Manager.

Hopefully this gave you some inspiration to start learning Desired State Configuration, available in Windows Management Framework 4.0 – and also in 5.0 (which is in Preview).
Please note that everything you do in Azure with the DSC VM Extension is based on the 5.0 version.

Monday, March 16, 2015

Application Modeling with VM Roles, DSC and SMA

Earlier this year, I started to go deep into DSC to learn more about the concept, the possibilities and, most importantly, how we can improve what we already have and know using this new approach to modeling.

For more information and as an introduction to this blog post, you can read my former blog post on the subject:

Desired State Configuration is very interesting indeed – and to fully embrace it you need to be comfortable with Powershell. Having that said, Desired State Configuration can give you some of what you are requiring today, but not everything.

Let me spend some minutes trying to explain what I am actually saying here.

If you want to use DSC as your primary engine, the standard solution to configure and deploy applications and services across clouds throughout the life cycle, there is nothing there to stop you from doing so.
However, given that in many situations you won’t be the individual ordering the application, server and dependencies, it is important that we can make this available in a world full of tenants with a demand for self-service.

Looking back at how we used to handle the life-cycle management of applications and infrastructure, I think it is fair to say it went something like this (in the context of System Center):

1)      We deployed a virtual machine based on a VM template using SCVMM
We then either:
a)      Manually installed and configured applications and services within the guest post-deployment
b)      Used SCCM to install agents, letting the admin interact with the OS to install and configure applications using a central management solution
2)      If we wanted monitoring, we used SCOM to roll out agents to our servers and configured them to report to their management group
3)      Last but not least, we also wanted to be secure and have a reliable set of data. That’s why we added backup agents to our servers using SCDPM

In total, we are talking about 4 agents here (SCVMM, SCCM, SCOM and SCDPM).
That is a lot.

Also note that I didn’t specify any version of System Center, so this was probably even before we started to talk about Private Clouds (introduced with System Center 2012).

And that’s the next topic, all of this in the context of cloud computing.

If we take a walk down memory lane, we can see some of Microsoft’s less proud moments: all the attempts to bring the private cloud a fully functional self-service portal.

-        We’ve had several self-service portals for VMM that were later replaced by different solutions, such as Cloud Service Process Pack and App Controller
-        Cloud Service Process Pack – introduced with SC 2012, where all the components were merged into a single license – gave you out-of-the-box functionality related to IaaS.
The solution was one of the worst we have seen, and the complexity of implementing it was beyond anything we have seen since.
-        App Controller was based on Silverlight and gave us the “single pane of glass” vision for cloud management. With a connector to Azure subscriptions (IaaS) and to private and service provider clouds (using SPF), you could deploy and control your services and virtual machines from this console

Although it is common knowledge that App Controller will be removed in vNext of System Center, App Controller introduced us to a very interesting thing: self-service of service templates.

The concept of service templates was introduced in System Center 2012 – Virtual Machine Manager, and if we go back to my list of actions, we could say that service templates would to some extent replace the need for SCCM.
Service templates were an extension of the VM template. They gave us the possibility to design, configure and deploy multi-tier applications – and deploy them to our private clouds.
However, I have to admit that back then, we did not see much adoption of service templates. We did not see any serious adoption before Microsoft started to push some pre-configured service templates of their own, and that happened last year – at the same time as their Gallery Items for Azure Pack were released.

To summarize, the service template concept (which was based on XML) gave application owners and fabric administrators a chance to collaborate to standardize and deploy complex applications into the private clouds, using App Controller. So in the same sentence we find App Controller (Silverlight) and XML.

If we quickly turn to our “final destination”, Microsoft Azure, we can see that those technologies aren’t the big bet in any circumstances.

VM Roles are replacing service templates in the private cloud through Windows Azure Pack.

A VM Role is based on JSON – and defines a virtual machine resource that tenants can instantiate and scale according to their requirements.

We have in essence two JSON files: one for the resource definition (RESDEF) and one for the resource extension (RESEXT).
The resource definition describes the virtual machine hardware and instantiation restrictions, while the resource extension describes how a resource should be provisioned.

In order to support user input in a user-friendly way, we also have a third JSON file – the view definition (VIEWDEF), which provides Azure Pack with details about how to let the user customize the creation of a VM Role.

These files are contained in a package, along with other files (custom resources, logo’s etc) that describe the entire VM Role.
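As a hedged illustration of the format – field names abbreviated and values invented, not a complete or authoritative schema – a resource definition fragment could look something like this:

```json
{
  "Name": "MyAppVMRole",
  "Publisher": "Contoso",
  "Version": "1.0.0.0",
  "Type": "Microsoft.Compute/VMRole/1.0",
  "IntrinsicSettings": {
    "HardwareProfile": { "VMSize": "Medium" },
    "ScaleOutSettings": {
      "InitialInstanceCount": "1",
      "MinimumInstanceCount": "1",
      "MaximumInstanceCount": "5"
    }
  }
}
```

The VM Role Authoring Tool generates and validates these files for you, so you rarely have to write them from scratch.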

You might ask yourself why I am introducing you to something you already know very well, or why I am starting to endorse JSON. The answer lies in the clouds.

If you have ever played around with the Azure preview portal, you have access to the Azure Resource Manager.
ARM introduces an entirely new way of thinking about your resources. Instead of creating and managing individual resources, you define a resource model of your service – a resource group with different resources that are logically managed throughout the entire life cycle.

-        And guess what?

Azure Resource Manager templates are based on JSON, which describes the resources and associated deployment parameters.
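For comparison with the VM Role files, this is the skeleton of an ARM template – just the top-level sections, with the actual resource content omitted:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
```

The "resources" array is where the virtual machines, networks and so on are declared, and "parameters" plays much the same role as the view definition does for a VM Role.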

So to give you a short summary so far:

Service templates were great when they came with SCVMM 2012. However, based on XML and relying on App Controller for self-service, they weren’t flexible enough, nor designed for the cloud.

Because of the huge focus on consistency as part of Microsoft’s Cloud OS vision, Windows Azure Pack was brought on-premises to help organizations adopt the cloud at a faster cadence. We then got VM Roles, which are more aligned with the public cloud (Microsoft Azure) compared to service templates.

So we might (so far) conclude that VM Roles are here to stay, and if you are focusing too much on service templates today, you need to reconsider that investment.

The good, the bad and the ugly

So far, the blog post has been describing something similar to a journey. Nevertheless, we have not reached the final destination yet.

I promised you a blog post about DSC, SMA and VM Roles, but so far, you have only heard about the VM Roles.
Before we proceed, we need to be completely honest about the VM Roles to understand the requirement of engineering here. To better understand what I am talking about, I am comparing a VM Role with a stand-alone VM based on a VM Template:

As you can see, the VM Role gives us much more compared to a stand-alone VM from a VM template. A VM Role is our preferred choice when we want to deploy applications in a similar way as a service template, but only as single tiers. We can also service the VM Role and scale it on demand.

A VM, on the other hand, lacks all these fancy features. We can only base a stand-alone VM on a VM template, giving us a pre-defined hardware template in VMM with some limited settings at the OS level.
However, please note that the VM supports probably the most important things for any production scenario: backup and DR.
That is correct. If you use backup and DR together with a VM Role, you will end up in a scenario where you have orphaned objects in Azure Pack. This effectively breaks the relationship between the VM Role (CloudService in VMM) and its members. There is currently no way to recover from that scenario.

This got me thinking.

How can we leverage the best of both worlds, using the VM Role as the engine that drives and creates the complexity, supplemented by SMA and Desired State Configuration to perform the in-guest operations on normal VM templates?

I ran through the scenario with a fellow MVP, Stanislav Zhelyazkov, and he nodded and agreed. “This seems to be the right thing to do moving forward, you have my blessing,” he said.

The workflow

This is where it all makes sense: combining the beauty of VM Roles, DSC and SMA to achieve the following scenario:

1)      A tenant logs on to the tenant portal. The subscription includes the VM Cloud resource provider where the cloud administrator has added one or more VM Roles.
2)      The VM Role Gallery shows these VM Roles and provides the tenant with instructions on how to model and deploy the application.
3)      The tenant provides some input during the VM Role wizard and the VM Role deployment starts
4)      In the background, a parent runbook (SMA) that is linked to the event in the portal kicks in and, based on the VM Role the tenant chose, invokes the correct child runbook.
5)      The child runbook deploys the (stand-alone) VMs necessary for the application specified in the VM Role, joins them to the proper domain (if specified) and automatically adds them to the tenant subscription.
6)      Once the stand-alone VMs are started, the VM Role resource extension (which is the DSC configuration, using push) kicks in and, based on the parameters and input from the tenant, deploys and models the application entirely.
7)      Once the entire operation has completed, the child runbook cleans up the VM Role and removes it from the subscription
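A heavily simplified sketch of what the parent runbook in step 4 could look like. SMA runbooks are Powershell workflows; the runbook names, child runbooks and the shape of the $ResourceObject parameter here are made up for illustration only:

```powershell
workflow Invoke-VMRoleDeployment
{
    # Triggered by the VM Role creation event in WAP; the object carries
    # the tenant's choices from the VM Role wizard.
    param (
        [object]$ResourceObject
    )

    # Pick the child runbook based on which VM Role the tenant deployed
    switch ($ResourceObject.Name)
    {
        "WAPStamp"   { Deploy-WAPStamp   -ResourceObject $ResourceObject }
        "SQLCluster" { Deploy-SQLCluster -ResourceObject $ResourceObject }
        default      { Write-Warning "No child runbook mapped for $($ResourceObject.Name)" }
    }
}
```

The real dispatching logic depends on how you tag your VM Roles and link runbooks to portal events, which I will cover when we look at the actual code.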

In a nutshell, we have achieved the following with this example:

1)      We have successfully been able to deploy and model our applications using the extension available in VM Roles, where we are using Desired State Configuration to handle everything within the guests (instead of normal powershell scripts).
2)      We are combining the process in WAP with SMA Runbooks to handle everything outside of the VM Role and the VMs.
3)      We are guaranteed a supported life-cycle management of our tenant workloads

Here you can see some screenshots from a VM Role that deploys Windows Azure Pack onto 6 stand-alone VMs, combining DSC and SMA.

In an upcoming blog post, we will start to have a look at the actual code being used, the challenges and workarounds.

I hope that this blog post showed you some interesting things about application modeling with VM Roles, SMA and DSC, and that the times are a-changing compared to what we used to do in this space.

Monday, March 2, 2015

DSC with Azure and Azure Pack

Every now and then, there comes a time when I really need to ramp up on certain things.
It can be a new technology, a new product, or a new way of doing things.

This kind of journey is never easy, and I am that kind of person who doesn’t stop before I have a certain level of satisfaction. I expect a lot from myself and have a crazy self-discipline.

Starting early this year, I went deep into DSC to learn more about something that will be impossible to avoid in the next couple of months.

Before continuing, I just want you to know that this will not be yet another blog post that explains the importance of Powershell, which you need to learn ASAP or else you will "flip burgers in the future".

Working with Azure Pack and Azure for the last few years has made me much more creative.
Instead of having out-of-the-box products where we were limited to the actions provided by the GUI, we can now easily create our own custom solutions, integrating several APIs, modules and so on to create new opportunities for our business.

Let us stop for a second on Azure. Microsoft Azure.
We have been talking about the Cloud OS and cloud consistency for over a year now, and we should all be very familiar with Microsoft’s vision and strategy around this topic.
Especially “Mobile first, Cloud first” gives us a hint that whatever comes will appear in Microsoft Azure first.

In the context of DSC, we can see that we can leverage some Azure VM Extensions and Features in our IaaS VMs today.
And that is really the background of this blog post.

Microsoft Azure provides us with several VM extensions, either directly from Microsoft or from third parties, to enable security, runtime, debugging, management and other features that will boost your productivity when working with IaaS VMs in Azure.

When you deploy a virtual machine in the Azure portal, you can decide whether or not the VM Extension should be enabled.

We have several extensions available, all depending on what we are trying to achieve.
The extensions I find most interesting belong to the category of “Deployment and Configuration Management”.

First, let us talk about the “MSEnterpriseApplication” VM extension.
This extension effectively implements the features that support VM Role resource extensions – the same ones we can leverage on-premises with Azure Pack and Service Provider Foundation.
To add this extension, the VM must already exist in Azure and have the Azure Guest Agent pre-installed.

Running the following cmdlet using the Azure module gives us more details about the extension:

With this extension enabled in the VM, we can use the VM Role Authoring Tool to author our resource extension (the package we normally import into VMM, containing the application payload). The latest version lets us deploy directly to Azure.
If you would rather use Powershell, you should view the Powershell functionality of the tool and save only the portion of the script that assigns a value to $plainSettings in a text file.

From here, you can store the text file in a variable ($plainSettings) and update your VM with the following cmdlet:

$VM = Set-AzureVMExtension -ExtensionName "MSEnterpriseApplication" -Publisher "Microsoft.SystemCenter" -Version "1.0" -PrivateConfiguration $plainSettings -VM $vmcontext.VM

Next, update your VM directly using the following cmdlet:

Update-AzureVM -ServiceName "ServiceName" -VM $VM -Name "VMName"

So, the fact that we now have a single tool where we can author and deploy our resource extensions (application payload) to IaaS VMs in both WAP and Azure is good news. However, it is not idempotent.

This is where Desired State Configuration comes into the picture.
Built on the Common Information Model (CIM) and using Windows Remote Management (WinRM) as the communication mechanism, DSC is like putting steroids into your Powershell scripts.

I know I will get a lot of Powershell experts on my neck here, but that is at least one way to visualize what DSC is.
Let us say you create a script, deploy it to a node and then you are done.
If someone makes changes to that configuration afterwards, the Powershell script would neither care nor notice.
A Desired State Configuration can ensure that there won’t be any configuration drift by applying and monitoring (for example) the configuration.
This is handled by the Local Configuration Manager (LCM), which you can consider an “agent”, although it is not an agent by definition.

So, looking at the capabilities of DSC, we can quickly understand how important this will be for any in-guest management solution moving forward.

The requirement for using the Azure Powershell DSC VM extension is that you have the Azure Powershell module installed. The DSC extension handler has a dependency on Windows Management Framework (WMF) version 5 – which is currently in preview and only supported on Windows Server 2012 R2. WMF 5.0 will automatically be installed in your IaaS VM as a Windows Update once enabled, and requires a reboot.

The following cmdlets are specific to DSC:

Publish-AzureVMDscConfiguration – uploads a DSC script to Azure blob storage, which can later be applied to your IaaS VMs using the Set-AzureVMDscExtension cmdlet

Get-AzureVMDscExtension – Gets the settings of the DSC extension on a particular VM
Remove-AzureVMDscExtension – Will remove the DSC extension from a VM

Set-AzureVMDscExtension – Configures the DSC extension on a VM

Here’s a very easy example of how to apply a DSC script to your VM in Azure, assuming you have already created the script.

Publish-AzureVMDscConfiguration -ConfigurationPath "c:\folder\DSCscript.ps1"

That will create a ZIP package, which is uploaded to blob storage in Azure.

Next, we will add the config to the VM (which we assume is already stored in the variable named $VM):

$VM = Set-AzureVMDscExtension -VM $VM -ConfigurationArchive "" -ConfigurationName "DSCscript"

Once this cmdlet is executed, the following will happen within the VM:

1)      WMF 5.0 (the latest version) is downloaded and installed on the server
2)      The extension handler looks in the specified Azure container (defined when you connect with your subscription) for the .zip file
3)      The archive is then unpacked, any dependent modules are moved into the PS Module path, and the specified configuration function is run

The fact that this also accepts parameters gives you an understanding of how flexible, dynamic and powerful the DSC VM extension will be.

Now, this was all about Microsoft Azure.
What about the things that are taking place in Azure Pack?

I briefly mentioned the VM Role Authoring Tool in this blog post; it will play an important role in this setting.
The research I have been doing this year isn’t easy to fit within a single blog post, especially not if I were to describe all the errors and mistakes I have made along the way :)

I have been trying to simulate the Azure experience in Windows Azure Pack, but unfortunately, that is an impossible challenge as we don’t have the same possibilities when it comes to interaction through the API. I am only able to achieve some of the good parts, but that again will qualify for some blog posts in the near future.

Before you start thinking “no, it is not that hard to simulate the exact experience”, I would like to remind you that everything I do in this context will always use Network Virtualization with NVGRE, so there is no data channel from the datacenter into the tenant environment whatsoever.

If you find this interesting and want to learn more about DSC with Azure and Azure Pack, I have to point out the spectacular blog post series by Ben Gelens, where he has done a very good job explaining the complete setup of an entire DSC environment (using Pull), including the authoring of the required VM Role.

I will focus on the Push method in my examples, given that the tenants are isolated and should be able to perform certain actions through self-service.

See you soon!

Monday, February 23, 2015

When your WAPack tenants are using VLANs instead of SDN


Ever since the release of Windows Azure Pack, I’ve been a strong believer of software-defined datacenters powered by Microsoft technologies. Especially the story around NVGRE has been interesting and something that Windows Server, System Center and Azure Pack are really embracing.

If you want to learn and read more about NVGRE in this context, I recommend having a look at our whitepaper:

Also, if you want to learn how to design a scalable management stamp and turn SCVMM into a fabric controller for your multi-tenant cloud, where NVGRE is essential, have a look at this session:

The objective of this blog post is to:

·        Show how you should design VMM to deliver – and use – dedicated VLANs for your tenants
·        Show how to structure and design your hosting plans in Azure Pack
·        Customize the plan settings to avoid confusion

How to design VMM to deliver – and use – dedicated VLANs for your tenants

Designing and implementing a solid networking structure in VMM can be quite a challenging task.
We normally see that during the setup and installation of VMM, people don’t have all the information they need. As a result, they have already deployed a couple of hosts before they actually pay attention to:
1)      Host groups
2)      Logical networks
3)      Storage classifications
Needless to say, it is very difficult to change this afterwards, when you have several objects in VMM with dependencies and deep relationships.

So let us just assume that we are able to follow the guidelines and pattern I’ve been using in this script:

The fabric controller script will create host groups based on physical locations, with child host groups that contain different functions.
For all the logical networks in that script, I am using “one connected network” as the network type.

This will create a 1:Many mapping of the VM network to each logical network and simplify scalability and management.

For the VLAN networks though, I will not use the network type “one connected network”, but rather “VLAN-based independent networks”.

This will effectively let me create a 1:1 mapping of a VM network to a specific VLAN/subnet within this logical network.
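Sketched with the VMM cmdlets, the setup could look roughly like this. The names of the logical network, host group, subnet and VLAN ID are examples of my own; treat this as a hedged outline rather than a finished script:

```powershell
# Logical network of type "VLAN-based independent networks"
$ln = New-SCLogicalNetwork -Name "Tenants VLAN" -LogicalNetworkDefinitionIsolation $true

# A network site carrying one tenant's VLAN/subnet
$subnetVlan = New-SCSubnetVLan -Subnet "10.10.50.0/24" -VLanID 50
$hostGroup  = Get-SCVMHostGroup -Name "Production"
New-SCLogicalNetworkDefinition -Name "Tenant01_Site" -LogicalNetwork $ln `
    -VMHostGroup $hostGroup -SubnetVLan $subnetVlan

# The 1:1 mapping: one VM network bound to that specific VLAN/subnet
New-SCVMNetwork -Name "Tenant01_VMNetwork" -LogicalNetwork $ln `
    -IsolationType "VLANNetwork"
```

Repeat the site/VM network pair per tenant VLAN, and you get the one-VM-network-per-VLAN layout shown in the screenshot below.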

The following screenshot shows the mapping and the design in our fabric.

Now the big question: why VLAN-based independent network with a 1:1 mapping of VM network and VLAN?

As I will show you really soon, the type of logical network we use for our tenant VLANs gives us more flexibility due to isolation.

When we add the newly created logical network to a VMM cloud, we simply have to select the entire logical network.
But when we create hosting plans in the Azure Pack admin portal/API, we can then select the single, preferred VM network (based on a VLAN) for our tenants.

The following screenshot from VMM shows our Cloud that is using both the Cloud Network (PA network space for NVGRE) and Tenants VLAN.

So once we have the logical network enabled at the cloud level in VMM, we can move into the Azure Pack section of this blog post.

Azure Pack is multi-tenant by definition and lets you – together with VMM and the VM Cloud resource provider – scale and modify the environment to fit your needs.

When using NVGRE as the foundation for our tenants, we are able to use Azure Pack “out of the box” with a single hosting plan – based on the VMM cloud where we added our logical network for NVGRE – and tenants can create and manage their own software-defined networks. For this, we only need a single hosting plan, as every tenant is isolated on their own virtualized network.
Of course – there might be other valid reasons to have different hosting plans, such as SLAs, VM Roles and other service offerings. But for NVGRE, everyone can live in the same plan.

This changes once you are using VLANs. If you have a dedicated VLAN per customer, you must add that dedicated VLAN to the hosting plan in Azure Pack. This effectively forces you to create a hosting plan per tenant, so that tenants are not able to see/share the same VLAN configuration.

The following architecture shows how this scales.

In the hosting plan in Azure Pack, you simply add the dedicated VLAN to the plan, and it will be available once the tenant subscribes to the plan.

Bonus info:

With Update Rollup 5 for Azure Pack, we now have a new setting that simplifies life for all the VLAN tenants out there!

I’ve always said that “if you give people too much information, they’ll ask too many questions”.
It seems like the Azure Pack product group agrees, and we now have a new setting at the plan level in WAP called “disable built-in network extension for tenants”.

So let us see how this looks in the tenant portal when we access a hosting plan that:

a)      Provides VM Clouds
b)      Has the option “disable built-in network extension for tenants” enabled

This will ease the confusion for these tenants, as they were previously not able to manage any network artefacts in Azure Pack when VLANs were used. However, they will of course be able to deploy virtual machines/roles into the VLAN(s) that are available in their hosting plan.

Sunday, February 15, 2015

SCVMM Fabric Controller - Update: No more differential disks for your VM Roles

I just assume that you have read Marc van Eijk's well-described blog post about the new enhancement in Update Rollup 5 for SCVMM, where we can now effectively turn off differential disks for all new VM Role deployments with Azure Pack.

If not, follow this link to get all the details:

As a result of this going public, I have uploaded a new version of my SCVMM Fabric Controller script, which now adds another custom property to all the IaaS clouds in SCVMM, assuming you want static disks as the default.

You can grab the new version from here:

Next, I will make this script a bit more user friendly and add some more functionality to it in the next couple of weeks.



Monday, February 2, 2015

Sharing VNet between subscriptions in Azure Pack


From time to time, I get into discussions with customers on how to be more flexible around networking in Azure Pack.

Today each subscription is a boundary. Meaning, a co-admin can have access to multiple subscriptions, but you are not allowed to “share” anything between those subscriptions, such as virtual networks.

So here’s the scenario.

A tenant subscribes to multiple subscriptions in Azure Pack. Each subscription is based on its associated Hosting Plan, which is something that is defined and exposed by the service administrator (the backend side of Azure Pack). A Hosting Plan can contain several offerings, such as VM Clouds, web site Clouds and more. The context as we move forward is the VM Cloud.

Let us say that a customer has two subscriptions today. Each subscription has its own tenant administrator.

Subscription 1 is associated with Hosting Plan 1, which offers standard virtual machines based on VM templates.

Subscription 2 is associated with Hosting Plan 2, which offers VM Roles through Gallery Items.

The service provider has divided these offerings into two different plans.

Tenant admin 1 has created his VNet on subscription 1 and connected the virtual machines.
However, when tenant admin 2 creates a new VNet on subscription 2 and connects his VM Roles, they are not able to communicate with the VMs on subscription 1.

So what do we do?

As this isn't something we expose through the GUI or the API, we have to get in touch with the service admin side.

You have already noticed that we are dealing with two tenant administrators here, so that should give you an indication of what we are about to do: we are going to share some resources in the backend.

If we head over to SCVMM and look at the VM network in PowerShell, a few interesting properties surface.

·       UserRole – the UserRole in SCVMM this VM network is associated with, which can be generated by Azure Pack and aggregated through SPF.
·       Owner – the owner of the VM network in SCVMM.
·       GrantedToList – where we can allow other UserRoles to have access to this object.
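A quick way to surface these properties for all VM networks (a sketch using the SCVMM cmdlets):

```powershell
# List each VM network with its ownership and sharing properties
Get-SCVMNetwork | Select-Object Name, UserRole, Owner, GrantedToList
```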


This means that the service admin can help their tenants with the following.

Grant tenant admin 2 access to the VNet that was created on subscription 1 by tenant admin 1.

Powershell cmdlets:

### Find the VM network you want to share between subscriptions

$VNet = Get-SCVMNetwork | Where-Object {$_.Name -eq "TechEd" -and $_.Owner -eq ""}

### Find the tenant admin for that subscription

$tenant = Get-SCUserRole | Where-Object {$_.Name -like "*kristine*"}

### Grant access

Grant-SCResource -Resource $VNet -UserRoleID $tenant.ID -RunAsynchronously

We have now enabled the following scenario:

Ok, so what is next?

You can now access the tenant portal and deploy your workloads.

In the portal, you will never be able to manage the VNet from this subscription, only deploy workloads that are connected to it.
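If you want to verify the grant from the service admin side, a sketch like this should show the second tenant's user role on the VM network (names follow the cmdlets earlier in this post):

```powershell
# Re-read the VM network and check who it has been granted to
$VNet = Get-SCVMNetwork -Name 'TechEd'
$VNet.GrantedToList | Select-Object Name, ID
```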

Monday, January 26, 2015

Reconfigure your Resource Providers for Azure Pack


While deploying Windows Azure Pack, several factors play a part when it comes to the design and layout of the solution. As you may be aware, Windows Azure Pack contains a lot of different sites, APIs and resource providers – just so that you can enable and realize Azure technologies within your own datacenter.
It's more than a glorified self-service portal, so the requirements around design, load and scale can be overwhelming for some customers.

Before I get to the big point of this blog post, I would like to put it into some context first.

Normally at customer sites, we see the following different designs when it comes to Windows Azure Pack.


Organizations that just want to test and play around deploy the single-server, express setup of Windows Azure Pack. This installs all the sites and APIs onto a single virtual machine, and the organization can easily add resource providers to start testing this powerful cloud-enablement tool.

Although I have seen some examples where the Express setup has been used in production, it is far from what we recommend. The public-facing parts of Azure Pack, such as the Tenant Public API, Tenant Site and eventually the Tenant Authentication Site, are directly exposed to the internet. Having everything on the same virtual machine increases the attack surface and leads to performance, HA and scale issues.

Configuration requirements using this design:

There aren't any hard requirements for the Express solution, as we like to think people only use it in labs and for testing. However, if you want to make it available and actually use it across firewalls, you will have to perform the following:

·       Reconfigure tenant site (FQDN, certificate and port)
·       Reconfigure tenant authentication site (FQDN, certificate and port)
·       Reconfigure tenant public API (FQDN, certificate and port)


·       Reconfigure admin site (FQDN, certificate and port)
·       Reconfigure admin authentication site (FQDN, certificate and port)
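As a sketch of what such a reconfiguration looks like, the MgmtSvcConfig PowerShell module exposes cmdlets for this. The FQDN, port and SQL server below are placeholders, and you should verify the parameters against your WAP version:

```powershell
# Sketch: point the tenant site at a new public FQDN/port.
# 'manage.contoso.com' and 'SQLWAP' are placeholder values.
Import-Module MgmtSvcConfig

Set-MgmtSvcFqdn -Namespace 'TenantSite' `
                -FullyQualifiedDomainName 'manage.contoso.com' `
                -Port 443 `
                -Server 'SQLWAP'
```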


For some of the smaller customers where HA is not the most important thing, we often see a basic implementation of Windows Azure Pack. This means that we have a single virtual machine running the high-privileged services – such as the Admin API, Admin Site, Tenant API and eventually Admin Authentication site together with the default resource providers. This virtual machine is located behind the firewall and in most cases within the same Active Directory Domain with its resource providers (SCVMM+SPF, SQL, ServiceBus, WebSites etc).

For the public facing part (the parts mentioned before, directly exposed on the internet) they use another – dedicated virtual machine which might be located in DMZ and available on the internet.
Of course, both the high-privileged VM and the internet facing VM are running on a Hyper-V cluster so that the VMs themselves are highly available.

Configuration requirements using this design:

I strongly recommend using a highly available WAP design whenever you plan to put it into production. In this design, however, the only HA present is at the hypervisor level.
You will have to perform the following using this design:

·       Reconfigure tenant site (FQDN, certificate and port)
·       Reconfigure tenant public API (FQDN, certificate and port)
·       Reconfigure tenant authentication site (FQDN, certificate and port)
o   Or
·       Integrate with Active Directory Federation Services and remove tenant authentication site


·       Reconfigure admin site (FQDN, certificate and port)
·       Reconfigure admin authentication site (FQDN, certificate and port)
o   Or
·       Integrate with Active Directory Federation Services and remove admin authentication site

Minimal Distribution

The most common design of Windows Azure Pack – and what I normally recommend as a minimum – is where we have at least two virtual machines for the high-privileged servers, configured as highly available behind a load balancer, and the same for the internet-facing part.
This will indeed require load balancers and VIPs, but also some additional reconfiguration when it comes to the Azure Pack environment.

Configuration requirements using this design:

Having the high-privileged services as well as the internet facing parts scaled across several virtual machines, helps us to address performance, availability and scale issues.
You will have to perform the following reconfiguration to make this work:

·       Reconfigure tenant site (FQDN, certificate and port)
·       Reconfigure tenant public API (FQDN, certificate and port)
·       Reconfigure tenant authentication site (FQDN, certificate and port)
o   Or
·       Integrate with Active Directory Federation Services and remove tenant authentication site
·       Reconfigure admin site (FQDN, certificate and port)
·       Reconfigure admin authentication site (FQDN, certificate and port)
o   Or
·       Integrate with Active Directory Federation Services and remove admin authentication site
·       Reconfigure resource providers (FQDN and certificate):
o   Sqlserver
o   MySQL
o   Monitoring
o   Marketplace
o   Usageservice
o   Systemcenter
o   Webspaces
o   Servicebus


So whenever you plan to scale out and ensure HA across all sites and APIs, you have to reconfigure the components as mentioned with the Minimal Distribution design. The same rules apply if you intend to be more drastic around this, having dedicated VMs for each and every site and API. The reconfiguration is still mandatory.

Windows Azure Pack has been available for over a year now, and the majority of organizations are adopting the VM Cloud resource provider. The good thing here is that even if you have scaled out the SPF endpoint, you simply add the endpoint to the admin API and everything is handled.
There's really not much reconfiguration required if you have configured SPF correctly with FQDN and certificates upfront.

What’s more of a concern is when you want to add resource providers such as SQL server(s) and/or MySQL server(s).

By default, when you install the first high-privileged server with the admin API, admin site and so on, you also get the default resource providers added, such as SQL, MySQL, Usage, Monitoring, Servicebus and Marketplace. The FQDNs are bound to the computer name of this machine.
Once you add the second – or even a third – VM that should be located behind a load balancer together with the first VM, these resource providers must also be reconfigured so that they are not pointing at an individual virtual machine, but at an FQDN that is associated with a VIP behind the load balancer.

Reconfiguring the default Resource Providers – and why that can be a pain

In order to reconfigure the Windows Azure Pack portals, APIs and resource providers, we have to instrument the databases in a supported way. The supported way is through PowerShell, and together with my good friend Flemming Riis, I have covered how to reconfigure the high-privileged services – as well as the internet-facing parts – in some earlier blog posts.

As a result, I won't cover it again, but rather refer to those posts, hoping you will notice them, read them and then continue reading this blog post as I am about to reach my point.

Alright, let us continue with the resource providers.

You are probably familiar with the reconfiguration of the tenant and admin components by now, and understand that we have several sets of APIs and portals involved. At the end of the day, everything should interact nicely, with each component able to reach the others and expose the right set of information to both administrators and tenants.

If we look at the resource providers directly in the database, we can see that each resource provider has several endpoints.
There is one endpoint used when coming from the admin API, and another used when coming from the tenant site and API.
In addition, each resource provider has endpoints for usage and notification too.


The SQLserver resource provider will have the following endpoints:

AdminEndPoint.ForwardingAddress: https://FQDN:30010/
TenantEndPoint.ForwardingAddress: https://FQDN:30010/Subscriptions/
UsageEndPoint.ForwardingAddress: https://FQDN:30010/
NotificationEndPoint.ForwardingAddress: https://FQDN:30010/

So, when only reconfiguring the FQDN, certificates and ports for the high-privileged services and internet facing parts, these endpoints are left behind.

The same is applicable for the other resource providers as well, and in order to turn them into highly available resource providers, you must perform this through PowerShell:


Since AD FS is added to the mix, we need to create a function that will get us the token required for accessing the admin URI.

function Get-AdfsToken([string]$adfsAddress, [PSCredential]$credential)
{
    $clientRealm = 'http://azureservices/AdminSite'
    $allowSelfSignCertificates = $true

    # Load the WCF and identity assemblies (partial names resolve the installed versions)
    Add-Type -AssemblyName 'System.ServiceModel'
    Add-Type -AssemblyName 'System.IdentityModel'

    $identityProviderEndpoint = New-Object -TypeName System.ServiceModel.EndpointAddress -ArgumentList ($adfsAddress + '/adfs/services/trust/13/usernamemixed')
    $identityProviderBinding = New-Object -TypeName System.ServiceModel.WS2007HttpBinding -ArgumentList ([System.ServiceModel.SecurityMode]::TransportWithMessageCredential)
    $identityProviderBinding.Security.Message.EstablishSecurityContext = $false
    $identityProviderBinding.Security.Message.ClientCredentialType = 'UserName'
    $identityProviderBinding.Security.Transport.ClientCredentialType = 'None'

    $trustChannelFactory = New-Object -TypeName System.ServiceModel.Security.WSTrustChannelFactory -ArgumentList $identityProviderBinding, $identityProviderEndpoint
    $trustChannelFactory.TrustVersion = [System.ServiceModel.Security.TrustVersion]::WSTrust13

    if ($allowSelfSignCertificates)
    {
        $certificateAuthentication = New-Object -TypeName System.ServiceModel.Security.X509ServiceCertificateAuthentication
        $certificateAuthentication.CertificateValidationMode = 'None'
        $trustChannelFactory.Credentials.ServiceCertificate.SslCertificateAuthentication = $certificateAuthentication
    }

    $ptr = [System.Runtime.InteropServices.Marshal]::SecureStringToCoTaskMemUnicode($credential.Password)
    $password = [System.Runtime.InteropServices.Marshal]::PtrToStringUni($ptr)

    $trustChannelFactory.Credentials.SupportInteractive = $false
    $trustChannelFactory.Credentials.UserName.UserName = $credential.UserName
    $trustChannelFactory.Credentials.UserName.Password = $password

    $rst = New-Object -TypeName System.IdentityModel.Protocols.WSTrust.RequestSecurityToken -ArgumentList ([System.IdentityModel.Protocols.WSTrust.RequestTypes]::Issue)
    $rst.AppliesTo = New-Object -TypeName System.IdentityModel.Protocols.WSTrust.EndpointReference -ArgumentList $clientRealm
    $rst.TokenType = 'urn:ietf:params:oauth:token-type:jwt'
    $rst.KeyType = [System.IdentityModel.Protocols.WSTrust.KeyTypes]::Bearer

    $rstr = New-Object -TypeName System.IdentityModel.Protocols.WSTrust.RequestSecurityTokenResponse

    $channel = $trustChannelFactory.CreateChannel()
    $token = $channel.Issue($rst, [ref] $rstr)

    $tokenString = ([System.IdentityModel.Tokens.GenericXmlSecurityToken]$token).TokenXml.InnerText
    $result = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($tokenString))
    return $result
}

Next, we will list the resource providers we have and query for the endpoints:

### Change the variables to fit your environment

$adfsAddress = ''    # your AD FS address
$username = 'domain\username'
$password = 'P@@$Word'
$adminuri = ''       # your admin API address
$securePassword = ConvertTo-SecureString -String $password -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username,$securePassword

$token = Get-AdfsToken -adfsAddress $adfsAddress -credential $credential


$FQDN = ''           # the load-balanced FQDN the resource providers should point to

### Get a list of all your resource providers

Get-MgmtSvcResourceProvider -IncludeSystemResourceProviders -AdminUri $adminUri -Token $token -DisableCertificateValidation | Format-List -Property "Name"


# Get a list of resource providers with the current configured endpoint values
$rp = Get-MgmtSvcResourceProvider -IncludeSystemResourceProviders -AdminUri $adminUri -Token $token -DisableCertificateValidation
$rp | Select-Object Name, @{n='Admin';e={$_.AdminEndPoint.ForwardingAddress}}, @{n='Tenant';e={$_.TenantEndpoint.ForwardingAddress}}, @{n='Usage';e={$_.UsageEndpoint.ForwardingAddress}}, @{n='HealthCheck';e={$_.HealthCheckEndpoint.ForwardingAddress}}, @{n='Notification';e={$_.NotificationEndpoint.ForwardingAddress}}

# STEP 1 - Configure new FQDN for the SQLserver resource provider

$resourceProviderName = "sqlservers"
$adminEndpoint = "https://$FQDN:30010/"
$tenantEndpoint = "https://$FQDN:30010/subscriptions"
$usageEndpoint = "https://$FQDN:30010/"
$healthCheckEndpoint = $null
$notificationEndpoint = "https://$FQDN:30010/"

$rp = Get-MgmtSvcResourceProvider -Name $resourceProviderName -IncludeSystemResourceProviders -AdminUri $adminUri -Token $token -DisableCertificateValidation

# Update all the endpoints using the new FQDN:
if ($rp.AdminEndpoint -and $adminEndpoint) {
    $rp.AdminEndpoint.ForwardingAddress = New-Object System.Uri($adminEndpoint)
}
if ($rp.TenantEndpoint -and $tenantEndpoint) {
    $rp.TenantEndpoint.ForwardingAddress = New-Object System.Uri($tenantEndpoint)
}
if ($rp.UsageEndpoint -and $usageEndpoint) {
    $rp.UsageEndpoint.ForwardingAddress = New-Object System.Uri($usageEndpoint)
}
if ($rp.HealthCheckEndpoint -and $healthCheckEndpoint) {
    $rp.HealthCheckEndpoint.ForwardingAddress = New-Object System.Uri($healthCheckEndpoint)
}
if ($rp.NotificationEndpoint -and $notificationEndpoint) {
    $rp.NotificationEndpoint.ForwardingAddress = New-Object System.Uri($notificationEndpoint)
}

# STEP 2 - Commit the changes
Set-MgmtSvcResourceProvider -ResourceProvider $rp -AdminUri $adminUri -Token $token -DisableCertificateValidation -Force

# Repeat STEP 1 and STEP 2 on the remaining resource providers
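To avoid repeating the steps by hand, the same pattern can be wrapped in a loop. This is a hedged sketch: the provider names and ports in the table are assumptions, so list yours with Get-MgmtSvcResourceProvider first and adjust accordingly. It reuses $FQDN, $adminUri and $token from above:

```powershell
# Assumed resource-provider name/port pairs - verify against your own endpoint listing
$providers = @{ 'sqlservers' = 30010; 'mysqlservers' = 30012 }

foreach ($name in $providers.Keys) {
    $rp = Get-MgmtSvcResourceProvider -Name $name -IncludeSystemResourceProviders `
        -AdminUri $adminUri -Token $token -DisableCertificateValidation

    foreach ($ep in 'AdminEndpoint','TenantEndpoint','UsageEndpoint','NotificationEndpoint') {
        if ($rp.$ep -and $rp.$ep.ForwardingAddress) {
            # Keep the original scheme and path, swap only host and port
            $builder = New-Object System.UriBuilder($rp.$ep.ForwardingAddress)
            $builder.Host = $FQDN
            $builder.Port = $providers[$name]
            $rp.$ep.ForwardingAddress = $builder.Uri
        }
    }

    Set-MgmtSvcResourceProvider -ResourceProvider $rp -AdminUri $adminUri -Token $token `
        -DisableCertificateValidation -Force
}
```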

By following the steps in this blog post, you shouldn't have any warnings or errors in your WAP portals.