Citrix Cloud Connector 101

What is the Citrix Cloud Connector?

Citrix Cloud Connectors are components that provide a communications link between your AD environments and the services provided by Citrix in the Cloud. The official Citrix statement is here:

The Citrix Cloud Connector is a Citrix component that serves as a channel for communication between Citrix Cloud and your resource locations, enabling cloud management without requiring any complex networking or infrastructure configuration. 


I will assume you know what the Cloud Plane is and what Resource Locations are; if not, please read here:
Outbound Connection
The link between the Citrix Cloud (services provided by Citrix for you to control and administer) and Resource Locations (your AD environment, VDAs, Group Policy, applications) requires nothing more than TLS 1.2 with strong cipher suites, with outbound port 443 open to the internet. The Cloud Connector should have no inbound ports accessible from the internet. The advised placement of Connectors is on the LAN, not in the DMZ, and the 2012/2016 OS machine must be domain-joined. This is not optional! (And no Server Core!)
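As a quick sanity check before installing (a sketch, not an official Citrix tool; the host name in the usage comment is a placeholder), you can verify from the candidate machine that an outbound TLS connection can be negotiated with TLS 1.2 as the floor:

```python
import socket
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# mirroring the Connector's outbound requirement.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

def can_probe(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Attempt a TLS 1.2+ handshake to host:port; True on success."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except OSError:
        return False

# Usage (placeholder host, substitute a real Citrix Cloud endpoint):
# can_probe("example.cloud.com")
```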

A Connector can be behind a web proxy that works with SSL/TLS encrypted communication.
Internal ‘Inbound/Outbound’ connections
Source (Connector)      | Destination     | Service
49152-65535/UDP         | 123/UDP         | W32Time
49152-65535/TCP         | 135/TCP         | RPC Endpoint Mapper
49152-65535/TCP         | 464/TCP/UDP     | Kerberos password change
49152-65535/TCP         | 49152-65535/TCP | RPC for LSA, SAM, Netlogon (*)
49152-65535/TCP/UDP     | 389/TCP/UDP     | LDAP
49152-65535/TCP         | 636/TCP         | LDAP SSL
49152-65535/TCP         | 3268/TCP        | LDAP GC
49152-65535/TCP         | 3269/TCP        | LDAP GC SSL
53, 49152-65535/TCP/UDP | 53/TCP/UDP      | DNS
49152-65535/TCP         | 49152-65535/TCP | FRS RPC (*)
49152-65535/TCP/UDP     | 88/TCP/UDP      | Kerberos
49152-65535/TCP/UDP     | 445/TCP         | SMB
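If you script firewall audits, the table above can be encoded as data and checked against an allow-list. A minimal sketch (hypothetical helper, not a Citrix tool):

```python
# Required AD/DNS service ports for the Cloud Connector, keyed by
# (protocol, destination port), transcribed from the table above.
REQUIRED = {
    ("udp", 123): "W32Time",
    ("tcp", 135): "RPC Endpoint Mapper",
    ("tcp", 464): "Kerberos password change",
    ("udp", 464): "Kerberos password change",
    ("tcp", 389): "LDAP",
    ("udp", 389): "LDAP",
    ("tcp", 636): "LDAP SSL",
    ("tcp", 3268): "LDAP GC",
    ("tcp", 3269): "LDAP GC SSL",
    ("tcp", 53): "DNS",
    ("udp", 53): "DNS",
    ("tcp", 88): "Kerberos",
    ("udp", 88): "Kerberos",
    ("tcp", 445): "SMB",
}

def missing_ports(allowed: set[tuple[str, int]]) -> dict:
    """Return the required (proto, port) entries absent from the allow-list."""
    return {k: v for k, v in REQUIRED.items() if k not in allowed}
```

An empty result from `missing_ports` means the allow-list covers every service the Connector needs to reach internally.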


Cloud Connector updates are managed by Citrix, using a method known as 'canary'. Citrix maintains two distinct environments, Release A and Release B. One environment is updated first, and customers are then migrated over in batches across a 4-5 day process. Once all customers are migrated, work begins on the alternate environment. Every time there is a Cloud Plane update, it is likely there will be a Connector update. It is therefore recommended that more than one Connector is present in a Resource Location (N+1), as Connectors are upgraded in serial. A word of caution: think of the recommendation as N+1 even during an update (always have two working Connectors).
Controlled Updates
Citrix Cloud now allows you, as the administrator, to take control of the update process. You can specify the time you wish the install to be carried out, giving you at least some warning of any update problems. Simply access Resource Locations under the hamburger menu in the Cloud Plane and choose 'Manage Resource Locations' to find this capability.

It is advised to keep your Connectors online so you do not miss updates provided by Citrix. What I would like to see added to the above feature is a per-Connector update schedule: you could then switch off a percentage of your Connectors in public clouds to save on consumption costs, and start them back up as the upgrade slot approaches to cover the Connector scheduled to update. Right now, the schedule appears to target all Connectors in the Resource Location.
Update Issues
If an issue is encountered during the update/migration phase, a hard stop is issued. Citrix Cloud can easily roll back to the previous code/environment within 5 minutes. Cloud Connectors are downgraded in serial.

The Cloud Connector will self-manage. Do not disable reboots or put other restrictions on the Cloud Connector. These actions prevent the Cloud Connector from updating itself when there is a critical update.

Connector Logs

When faced with problems, there are some steps that you can carry out to mitigate the situation.

Know where to look for the log files:
Install Log Locations
%LOCALAPPDATA%\Temp\CitrixLogs\CloudServicesSetup
%windir%\Temp\CitrixLogs\CloudServicesSetup

Codes to look out for:

1603 - An unexpected error occurred

2 - A prerequisite check failed

0 - Installation completed successfully
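If you script bulk installs, a trivial helper (hypothetical, not part of the installer) can translate these exit codes into readable outcomes:

```python
# Map the Cloud Connector installer exit codes listed above to
# human-readable outcomes; anything else is treated as unknown.
EXIT_CODES = {
    0: "Installation completed successfully",
    2: "A prerequisite check failed",
    1603: "An unexpected error occurred",
}

def describe_exit(code: int) -> str:
    """Return the meaning of an installer exit code."""
    return EXIT_CODES.get(code, f"Unknown exit code {code}")
```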

Connector to Cloud Logs
Location = %ProgramData%\Citrix\WorkspaceCloud\Logs

The logs here can be deleted, and the space they consume is controlled by a registry key (HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\CloudServices\AgentAdministration\MaximumLogSpaceMegabytes).

There are also useful event logs under the Application log in Event Viewer (the source will be Citrix). By default, event logs are in the C:\ProgramData\Citrix\WorkspaceCloud\Logs directory of the machine hosting the Cloud Connector.

Time is also important. Make sure your time is synced.

Check that Internet Explorer Enhanced Security Configuration (IE ESC) is turned off.

Also check for any Cloud Plane issues.

Drastic Action

Sometimes the obvious answer does not present itself. In that instance, it is advised to carry out the following:

- Have operational Cloud Connectors – N+1 if possible

- Go to Citrix Cloud and remove the Connector from inside the Resource Location

- Log in to the Cloud Connector server and uninstall the software

- Add a new Connector and run the installation

- Reboot

Install the new Cloud Connector software manually by logging in to Citrix Cloud from the Cloud Connector server and going to the Resource Locations tab in the Citrix Cloud management portal.

Remember, once it is downloaded, to install using 'Run as Administrator'.


By default, the XML and broker traffic to the Cloud Connectors is not secured. I have seen some confusion on blogs about whether securing it is supported. It is. The procedure to add a certificate is as follows:
Adding Certificate
Install the PFX certificate in the local machine context on the Connector server.

Obtain the thumbprint of the certificate: open the certificate's properties and go to the Details tab.

Paste it into Notepad and remove any spaces between the characters. This is the certificate hash.

Then identify the app identifier for the certificate: under HKCR\Installer\Products, search for 'Citrix Broker Service'.

This will take you to a product name key. Copy the key GUID into the same Notepad window and remove the path.

(Format it so there is a hyphen after the first 8 characters, then after the next 4, 4 and 4, with the remaining 12 characters left – i.e. 8-4-4-4-12.)

Example GUID:

Add the details to the following command in the same Notepad window and save in UTF-8 format. Then copy your command into an administrative command prompt.


netsh http add sslcert ipport=0.0.0.0:443 certhash=PASTE_CERT_HASH appid={PASTE_XD_GUID_HERE}
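The two clean-up steps above (stripping spaces from the thumbprint and re-hyphenating the 32-character product key as 8-4-4-4-12) are easy to get wrong by hand. A small sketch (illustrative helpers, not Citrix tooling):

```python
def clean_thumbprint(raw: str) -> str:
    """Remove spaces (and any non-alphanumeric noise) from a thumbprint
    copied out of the certificate Details tab; uppercase for consistency."""
    return "".join(ch for ch in raw if ch.isalnum()).upper()

def hyphenate_guid(raw32: str) -> str:
    """Re-insert hyphens into a 32-character registry product key so it
    reads 8-4-4-4-12, as netsh expects inside the appid braces."""
    s = raw32.strip()
    if len(s) != 32:
        raise ValueError("expected 32 hex characters")
    parts = (s[:8], s[8:12], s[12:16], s[16:20], s[20:])
    return "-".join(parts)

# e.g. hyphenate_guid("0123456789abcdef0123456789abcdef")
#      -> "01234567-89ab-cdef-0123-456789abcdef"
```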

Once the certificate is added, you will need to change the StoreFront XML brokers to 443 and the STAs used on the Gateway to 443 (HTTPS).
Trusted CA
The certificate authority (CA) used by Citrix Cloud SSL/TLS certificates and by Microsoft Azure Service Bus SSL/TLS certificates must be trusted by the Cloud Connector.
Outbound Communication
As stated, all outbound communication is encrypted.

Secure Data
All company data remains in the resource location. The control plane does not store sensitive customer information; administrator passwords are requested on-demand (by prompting only those who can administer). There is no sensitive data at rest on the Connector.


If the Connectors are behind a proxy, the following URLs must be reachable:

Cloud Connector 






https://* (Gateway Service) 


Administration Console




The Connector will use the proxy settings from the installing user's context. All Connector services run as Local Service. To configure the services to use a proxy, run the following command:

netsh winhttp import proxy source=ie

Then restart the Connector. (There is no support for auto-detect, authenticating proxies, or PAC scripts.)


The Rendezvous protocol setting enables an HDX session between the client (Workspace app/Receiver) and server (VDA) to be established through the Citrix Gateway Service. When enabled, HDX traffic no longer flows through the Cloud Connector; the VDA establishes an outbound connection directly to the Citrix Gateway Service in Citrix Cloud, minimizing resource constraints on your Cloud Connectors.

This policy is enabled by default and applies only to HDX sessions established through Citrix Cloud.

Rendezvous conditions:

- VDA 1811 or later

- Machine catalog functional level of 1811 minimum

- CVADS 1811 or later


If the VDA requires a proxy server to access the internet, the proper proxy configuration is required; however, proxy connections are not supported when using the Rendezvous protocol. In that case, you would need to route traffic via the Connector. If the Rendezvous protocol policy is enabled but the ICA traffic cannot reach the Gateway Service directly, the traffic falls back to the Cloud Connector.
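The fallback behaviour described above amounts to a simple decision, sketched here for clarity (illustrative pseudologic, not Citrix code):

```python
# HDX goes direct to the Gateway Service only when Rendezvous is
# enabled and the VDA can actually reach the Gateway Service;
# otherwise traffic falls back to flowing via the Cloud Connector.
def hdx_path(rendezvous_enabled: bool, gateway_reachable: bool) -> str:
    """Return which path the HDX traffic takes."""
    if rendezvous_enabled and gateway_reachable:
        return "vda-direct-to-gateway-service"
    return "via-cloud-connector"
```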


Citrix Cloud is resilient by design. You do need to factor this resiliency into your Connectors, taking into account the Connector upgrade process. Remember: N+1, always!

The diagram below highlights the resilient nature of the Citrix Cloud.

If using a public Cloud, place your connectors into availability sets.

Local Host Cache

Local Host Cache (LHC) also provides a level of resiliency should you lose access to the Cloud Control Plane. It is enabled by default. All connection data is copied into a local database (SQL Express) that resides on the Cloud Connector, so you can still broker internal connections within your Resource Location without access to Citrix Cloud. This obviously means you require StoreFront on-premises.
Other areas to note: this works with XenApp and pooled VDI. LHC v2 brought improvements that allow the use of pooled VDI.

By default, power-managed desktop VDAs in pooled Delivery Groups that have the "ShutdownDesktopsAfterUse" property enabled are placed into maintenance mode when an outage occurs. You can change this default to allow those desktops to be used during an outage. However, you cannot rely on power management during the outage (power management resumes after normal operations resume). Also, those desktops might contain data from the previous user, because they have not been restarted. To override the default behavior, you must enable it site-wide and for each affected Delivery Group, by running the following PowerShell cmdlets:

Set-BrokerSite -ReuseMachinesWithoutShutdownInOutageAllowed $true
Set-BrokerDesktopGroup -Name "<name>" -ReuseMachinesWithoutShutdownInOutage $true

Enabling this feature in the Site and the Delivery Groups does not affect how the configured “ShutdownDesktopsAfterUse” property works during normal operations.

There are sizing recommendations to be noted when using LHC which leads us nicely into the next topic.


When updates are deployed to Citrix Cloud, Connector machines get updated, and you do not want this to affect user service. So, if you always want a fully operational environment, you are looking at a minimum of 3 Connectors; this is where I disagree with the official minimum of 2. Cloud Connectors are also stateless. Connections are automatically load balanced, but not in equal measure.

- 5,000 VDAs and 20,000 sessions can be supported with 2 Cloud Connectors, each with 4 vCPUs and 4 GB RAM.

- CPU tends to be the resource constraint with Connectors.

- You do not need premium SSD storage for Connector workloads.

- Two Cloud Connectors hosted on Azure Standard_A2_v2 VMs are recommended for 1,000 Windows 10 VMs.

- 2-vCPU Cloud Connectors are recommended for sites that host 2,500 VDAs.

- For faster registrations and stability, go for 4 vCPUs.
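Treating the figures above as rough working assumptions (not official Citrix sizing), a back-of-the-envelope N+1 calculator might look like this:

```python
import math

# Assumption taken from the bullets above: roughly 2,500 VDAs per
# 4-vCPU Connector is a comfortable working figure. N+1 keeps one
# spare Connector available through the serial update process.
VDAS_PER_CONNECTOR = 2500

def connectors_needed(vda_count: int) -> int:
    """Working Connectors to cover the load, plus one spare (N+1)."""
    working = max(1, math.ceil(vda_count / VDAS_PER_CONNECTOR))
    return working + 1
```

For 5,000 VDAs this gives 3 Connectors, which lines up with my "minimum of 3" stance above.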

Connectors Love CPU

Based on the above, 3 Connectors might be way too much for your workloads. The 3, in my opinion, is required due to the way Citrix manages updates rather than by specific sizing estimates.

You will have to be careful about VDA registration storms, which are initiated roughly every two weeks by SQL and Delivery Controller updates in the Cloud.

The authentication that traverses the Cloud Connectors also has a direct impact on CPU.

Local Host Cache (which requires on-premises StoreFront) will also eat into your CPU budget; this load comes from the Citrix High Availability Service.

It is for this reason I suggest a 4-CPU minimum (1 socket, 4 cores); you get quicker registration and session launches.

Domain Trusts 

Cloud Connectors cannot traverse domain-level trusts. Install Cloud Connectors for each separate domain; the same is true for resource and user domains.

Trust relationships are only required when launching resources in a different domain or forest (i.e., VDAs in a separate resource domain from your users).

The following domain trust configurations have been tested by Citrix:
Scenario = Single Domain, Single Forest

Deployed Connectors = One Domain

Trust = None

Domains Listed = Single

Workspace = Yes, all users

Storefront = Yes, all users

Scenario = Parent/Child Domain, Single Forest

Deployed Connectors = Resource(Parent) Domain

Trust = Parent-Child

Domains Listed = Both

Workspace = Yes, all users

Storefront = Yes, all users


Scenario = User/Resource Domain, Separate Forests

Deployed Connectors = Resource Domain

Trust = Forest 2-Way

Domains Listed = Resource Domain

Workspace = Yes, only Resource Forest **

Storefront = Yes all users, both Forests

Scenario = User/Resource Domain, Separate Forests

Deployed Connectors = Each Forest

Trust = Forest 2-Way

Domains Listed = Both

Workspace = Yes, all users

Storefront = Yes, all users
** Users in user domain may be nested into Resource domain security groups to mitigate this issue.

Domain Functional Levels

Now, I know this has caught a few people out, so it is worth mentioning.

The Citrix Cloud Connector supports the following forest and domain functional levels in Active Directory.
Forest                 | Domain                 | Supported Domain Controllers
Windows Server 2008 R2 | Windows Server 2008 R2 | Windows Server 2008 R2, 2012, 2012 R2, 2016
Windows Server 2008 R2 | Windows Server 2012    | Windows Server 2012, 2012 R2, 2016
Windows Server 2008 R2 | Windows Server 2012 R2 | Windows Server 2012 R2, 2016
Windows Server 2008 R2 | Windows Server 2016    | Windows Server 2016
Windows Server 2012    | Windows Server 2012    | Windows Server 2012, 2012 R2, 2016
Windows Server 2012    | Windows Server 2012 R2 | Windows Server 2012 R2, 2016
Windows Server 2012    | Windows Server 2016    | Windows Server 2016
Windows Server 2012 R2 | Windows Server 2012 R2 | Windows Server 2012 R2, 2016
Windows Server 2012 R2 | Windows Server 2016    | Windows Server 2016
Windows Server 2016    | Windows Server 2016    | Windows Server 2016
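The functional-level matrix above is easy to encode as a lookup for scripted environment checks. A sketch (values transcribed from the table; the short labels like "2012R2" are my own abbreviations):

```python
# Supported domain controller versions keyed by (forest level, domain
# level). "2008R2" abbreviates "Windows Server 2008 R2", and so on.
SUPPORTED_DCS = {
    ("2008R2", "2008R2"): ["2008R2", "2012", "2012R2", "2016"],
    ("2008R2", "2012"):   ["2012", "2012R2", "2016"],
    ("2008R2", "2012R2"): ["2012R2", "2016"],
    ("2008R2", "2016"):   ["2016"],
    ("2012",   "2012"):   ["2012", "2012R2", "2016"],
    ("2012",   "2012R2"): ["2012R2", "2016"],
    ("2012",   "2016"):   ["2016"],
    ("2012R2", "2012R2"): ["2012R2", "2016"],
    ("2012R2", "2016"):   ["2016"],
    ("2016",   "2016"):   ["2016"],
}

def supported_dcs(forest: str, domain: str) -> list[str]:
    """Return supported DC versions, or an empty list if the
    forest/domain combination is not in the tested matrix."""
    return SUPPORTED_DCS.get((forest, domain), [])
```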


Communication Outbound from Connectors

AD Provider – allows Citrix Cloud to manage resources associated with AD accounts.

Cloud Agent Logger – transmits logs from on-premises agents to the Logger Worker cloud service.

Cloud Agent Watchdog – handles auto-updates of the Connector.

Cloud Credential Provider – a local endpoint that interfaces with the credential wallet in Citrix Cloud.

Web Relay Provider – used by XenMobile/App Layering to forward HTTP requests received from the Web Relay cloud service to on-premises web servers.

Config Sync Service – copies brokering configuration from CVADS to the local system for LHC high availability.

NetScaler Cloud Gateway – provides internet connectivity to on-premises desktops/apps without the need to open inbound firewall rules or deploy components in a DMZ.

Remote Broker Provider – facilitates communications from local VDAs and StoreFront servers to the Remote Broker Service in Citrix Cloud.

Remote HCL Server – proxies communications between Delivery Controllers in Citrix Cloud and the hypervisors hosting your virtual resources.

Session Manager Service – uses the Session Manager proxy to manage anonymous pre-launch sessions and upload session count information to Citrix Cloud.

There are two services that do not communicate with the Cloud Plane:
Cloud Agent System – the provider of privileged services, and the only cloud service running with Local System permissions.

High Availability Service – listens for and processes connection requests during an outage, using broker information gathered by the Config Sync Service.

Internal Communication

AD Provider communicates with Active Directory over various ports.

Web Relay Provider is used by XenMobile so users can add CVADS resources through Secure Hub via the PNAgent services site.

Remote Broker Provider is the Citrix Cloud version of the Broker Service running on a Delivery Controller in traditional deployments, and works in the same way.

The Config Sync/HA service and the Remote Broker Provider work together for the Local Host Cache feature.

The Config Synchroniser service sends broker configuration data to the HA Service, which writes the received data to the local database. The Remote Broker Provider then transfers brokering responsibility to the HA Service.

The NetScaler Gateway Service will send HDX traffic through the Connectors.
The Remote HCL service is used to provision virtual machines via the CVADS service using MCS.

The Session Manager service uses the Session Manager proxy to interact with the Delivery Controller as in a traditional deployment. If not in use, the proxy remains dormant.

Command-line Installation

CWCConnector.exe /q /Customer:<Customer> /ClientId:<ClientId> /ClientSecret:<ClientSecret> /ResourceLocationId:<ResourceLocationId> /AcceptTermsOfService:true

You can retrieve a list of supported parameters by running CWCConnector /?.

/Customer: Required. The customer ID is shown on the API Access page in the Citrix Cloud console (within Identity and Access Management).

/ClientId: Required. The secure client ID an administrator can create, located on the API Access page.

/ClientSecret: Required. The secure client secret that can be downloaded after the secure client is created. Located on the API Access page.

/ResourceLocationId: Required. The unique identifier for an existing resource location. To retrieve the ID, click the ID button for the Resource Location on the Resource Locations page in the Citrix Cloud console. If no value is specified, Citrix Cloud uses the ID of the first resource location in the account.

/AcceptTermsOfService: Required. Set this to true, as in the example above.
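For automation, the silent-install command line above can be assembled from its required parameters. A sketch (hypothetical helper, not a Citrix tool):

```python
def build_install_command(customer: str, client_id: str,
                          client_secret: str, resource_location_id: str) -> str:
    """Compose the CWCConnector.exe silent-install command line from the
    required parameters documented above."""
    return (
        "CWCConnector.exe /q"
        f" /Customer:{customer}"
        f" /ClientId:{client_id}"
        f" /ClientSecret:{client_secret}"
        f" /ResourceLocationId:{resource_location_id}"
        " /AcceptTermsOfService:true"
    )
```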
I hope this serves as a nice one-stop-shop of Citrix Connector facts.

A final word before the article concludes. If you are looking at Citrix Cloud as a solution to get rid of managed servers, great. If you are looking at the Cloud solution to rid yourself of tin and operational costs, with some hard and soft cost savings, even better. If you think it will reduce the number of servers you require, think again; certainly, that might be the case with Enterprise customers, but remember you are adding Connectors, and there are limitations when it comes to traversing domains. One thing I will say is that Connectors are less high-maintenance than Delivery Controllers, SQL and James Kindon, and there are many advantages to the Citrix Cloud approach.

Carpe Diem



Swimming to Success with Citrix Cloud


As a child and young man, I was involved in swimming. Many an hour was spent within the swim lane, every week, early morning and evening. This culminated in some races/galas that I took part in and sometimes I was able to bring home a medal, a smile, and some chlorinated hair.

Eventually, I reached the stage where the effort vs. the reward was not worth it. In hindsight, this was a mistake and partly brought on by misjudging the benefits and rewards I was getting from the experience. The way I was judging the success I achieved from the sport was narrow, limited, and lacked the vision for my future benefit.

Do I base my success criteria on winning, or do I look at every angle and assess all the benefits?

Health, Activity, Team, Interaction, Determination, Hope, Losing, Winning, Learning, Friends, Appearance, Achievement, Pride, Proud, Smiles, Foundation, Handling Nerves, Social, Muscles/Tone, Cardio, Family, Asthma, Depression, Relax, Destress, Energy.

Above are some positive buzzwords that highlight and broaden the different aspects of success criteria that I should have had a view of back in the day. It is important not to limit the success in anything to one goal. After all, it is not the endgame that matters, but what you learn on the way.

(We did have to wear Speedos back in the day, which is a definite negative. Fortunately, I learn from my mistakes.)

Some time back, I wrote an article addressing some negative comments about Citrix Cloud here:

As suggested by the title, I provided 13 very good reasons to embark on the Citrix Cloud transformation journey. This article will blow that out of the water in a manner of ‘writing’. I will introduce the success spectrum spectacular ‘Cloud Success Services’ that will be your stepping stone, helping you on your path to success towards the endgame and beyond!

This broadening of success criteria is something that you must be aware of in Citrix Cloud deployments or any project delivery. When you embark on a project, how are you going to measure that success? There are various criteria that you should think about from the start.

Step 1 – Why Cloud?

Step 1 is crucial and can be further broken down:

The Citrix Cloud ‘Use Case’ requires some understanding. Some, or all of the following might be part of the strategy that is adopted:

Increase productivity
Enhance security
Simplify IT infrastructure and management
Ensure business continuity

Understand the use case. For example, is it to increase productivity, or to implement an effective DR strategy? It could be that the use case is geared towards cost; in turn, this leads to a business outcome such as a BYOD initiative, which leads to certain success criteria: reducing support tickets, improving user experience, and enhancing security and access. Be realistic about setting a time frame, and break it down into steps so you can present the rewards back to the business and stakeholders.

Step 2 – Starting Position

What is your entry into the project? Are you transitioning from an existing on-prem Citrix environment or transitioning from a non-Citrix environment? You could be building a new Citrix Cloud environment from scratch or developing the Citrix Cloud for a specific use case.

You should also identify where the resources will be deployed: on-premises (no kitten died) or a public cloud such as Azure?

Step 3 – User Target

Identify the apps and users that you wish to take on the transformation journey. Start simple and build upon that success. Identifying the correct applications and users to initiate the project and increase the confidence back to the Stakeholders will, in turn, provide you the required backing from the Business.

Step 4 – Success

We have already touched upon setting achievement goals along the journey which are defined through understanding the ‘use cases’ Citrix Cloud can present. The below tables should provide success goals to work towards and achieve, based on the use case.

The advantages listed exceed the 13 reasons I originally wrote about. Plus, did you know Citrix Cloud has a whole host of Enterprise features?

Here are just some of the features that might have slipped by.

Step 5 – Assistance

Do not take the burden of delivering a successful project alone. It is important that we identify the correct personnel to assist and help shape the solution to the desired outcome. Find the person who knows the Citrix Environment and knows its cracks and creaks. Understand the known issues of the day and work to improve on the user annoyances. Have key contacts that can report back the success of the project after each phase. Remember to start small and build upon this success.

Step 6 – Results

This is more about how you present results back to the business. Setting an achievable project timeline and breaking this into goals will help.

Break the project into 6 distinct phases:


From the 6 distinct phases above, you will be able to identify realistic project goals and minimize any disappointment with expectations.


Citrix Cloud CVADS is, more than ever before, a fit-for-purpose Enterprise solution for delivering applications and desktops to users. It is a set of services hosted by Citrix that the customer utilizes and administers. The key to a successful implementation is to understand the use case, business outcome and success criteria, and finally to set a realistic time frame. However, do not take my word for it: Citrix has supplied those embarking on the transformation journey with a tool, part of Citrix Cloud Success Services, which helps you adopt the above methodology.


You will be able to start a ‘Success Plan.’

Gareth Carson

Enterprise Architect Capgemini

Twitter: @Citxen



Carpe Diem!

CVAD Service using 1903 VDA and Azure Resource Location, the Azure MCS Creation Process and 2FA and Secure Browser Service


It has been a while since my last blog and I thought it was time to dust off the cobwebs and get straight back into Citrix Cloud, as there has been some evolution in the services it provides. My professional career path has deviated from the Citrix stack (for now), but I must admit it feels good to be delving into this again.

Not knowing the state of my lab, I have decided to build a new lab with the Citrix Cloud in Azure.

My reasons are simple – It is quick!

The purpose of this article is to provide familiarity with setting up resources accessed via Citrix Cloud. This will be a living article that expands over time; as I learn more, so shall you.

Sign Up Process

The Citrix Cloud sign-up process is detailed here:


If you want to know more on sizing VDA resources here are some useful links:

Useful Graphs on Cost and Sizing in Azure:

Azure Lab

First, I created all that lovely stuff in Azure that is required.


- Subnets (Per machine type)

- Domain Controller (You need Kerberos authentication for your VDA’s)

- Resource Groups X 2 (Infra and MCS)

- Master VDA (2016 VDA)

- Cloud Connector



Cloud Connector Deployment

I will not go into the creation of the above in detail but will start with the Cloud Connector deployment.

Log on to your domain-joined Cloud Connector machine (I am using a 2016 server OS). Browse to your Citrix Cloud URL and log in to the portal.
Navigate to Resource Locations in the Hamburger Menu (Top Left):
Click to add a 'Connector' and download.

Click on 'Run'.
Sign in to the Cloud Connector Prompt.
At this point the install will proceed installing relative components and services.
Some connectivity tests will be run.
Once you have installed your Cloud Connector you should see it in the Cloud Management Portal as Resource Location.
The orange warning above indicates that you have only 1 Cloud Connector. My recommendation is N+1 at all times, including while Cloud Connectors are updated one at a time. Cloud Connector updates are managed by Citrix.

Next, we go to the familiar Citrix Studio via the Hamburger Menu:

Azure Hosting Connection

Click on the 'Manage' tab and then 'Full Configuration'.
You will now see a familiar management console. Your Resource Location is automatically added as a Zone.

More about understanding zones with CVADS can be found here:

First things first: create your hosting connection.
We are creating our Resource connection in Azure. Choose this option.
Select the Azure geographic location and your zone (Resource Location).

We will be using MCS as deployment method.
Next, obtain your Azure Subscription ID and choose an identifiable connection name.
In case you are wondering, the subscription ID can be obtained by looking at any object in Azure. As an example here is my Cloud Connector machine in Azure highlighting the 'Subscription ID'.
You will be asked for your Sign in credentials for your Azure subscription.


The connection to Azure will be authorised.
Click ‘Next’ to proceed.
The 'Region' will be the region that you want the VDA’s to be deployed in to.
Choose the subnets that you wish the virtual machines to use and the appropriate name.
Continue with install.
Confirm the settings and click finish.
Now, you will have a connection from your Citrix Cloud Service to your Azure subscription.
I like to carry out some checks at this stage. We can see our domain listed in Identity and Access Management.
Expand on the above to see the warning details.

VDA Creation

Create your VDA that you will use as a master image.

This involves installing the VDA on a virtual machine designated for this role in Azure.

Boot up your iso and go through the familiar VDA install procedure.
Choose to create an MCS Master Image.
This next step is important. Choose the Cloud Connectors, not a Delivery Controller. It is the Cloud Connectors that relay (proxy) traffic to the Delivery Controllers in the Cloud; traffic is outbound on 443 from the Cloud Connectors.
I have only one Connector in this scenario. That is not enough for production!
Note: If you want to use the MCSIO feature, you will need the MCSIO driver installed on your VDA; if not, the creation of the Machine Catalog will fail. My preference in Azure is based on cost, so I have chosen not to use MCSIO. If you do choose MCSIO, remember that an extra disk will be created at additional expense; you will need to factor this into any cost exercise.
Accept the Firewall ports to open.
Finish the Install.
Reboot machine.

I used the well-known community tool Citrix Optimizer on my VDA as part of my VDA preparation.

Install any applications that are required for users and shut the MASTER VDA down (Deallocated).

Once this is actioned, return to the Studio console in the Citrix Cloud Plane and create a Machine Catalog.

Machine Catalog Creation

Create a Server VDA.
Choose the appropriate hosting connection.
Now, choose the VDA disk. Remember that the VDA used for the MASTER image needs to be deallocated.

Tip: I like to take my own snapshot of the Master vhd in Azure. This way I can choose an appropriate name for the vhd.

Choose the minimal functional level required.
Here is a friendly warning reminding you to deallocate the Master VDA machine.
For production, you will most likely choose Premium storage, and if you have Hybrid Use rights, choose that option; this will provide you with favourable, reduced compute base rate costs.

Use the following tool to see what your cost saving estimations could be:

Managed and unmanaged disks are supported with Citrix Cloud and MCS in Azure. There are differences to note:
  • With Azure managed disks you pay for the entire provisioned size of the disk, versus unmanaged disks where you pay only for the blocks in use.
  • Azure Managed Disks only support VMs.
  • Azure Storage Explorer does not show Azure Managed Disks.
  • Deploy Cloud Connectors on Azure Managed Disks.
  • Managed disks are recommended because Microsoft automatically replicates them to multiple storage arrays.
  • Citrix recommends deploying the Master VM on Azure Managed Disks.
Remember, if you use the MCSIO feature an extra disk will be created. As an example, if you choose the default disk cache size, a 127 GB disk will be provisioned for MCSIO. This is not free, folks.

I know a good Irishman who has written an article explaining this in more depth:


Should you choose the option, remember you need the MCSIO driver installed on your VDA.
Next screen will show your Resource Group that you will deploy the VDA’s in to. A few things to note.

The Resource Group must be empty.

If you want to create more than 240 machines you will need to have more ‘empty’ Catalogs pre-created if you do not have full subscription rights. You cannot add more Resource Groups later to a Machine Catalog!

If you have full subscription rights to Azure, the Resource Groups will be created for you.
Tip: Personally I like to create mine beforehand so I can provide appropriate names.
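The planning arithmetic for that 240-VM limit can be sketched as:

```python
import math

# Azure/MCS constraint discussed above: a resource group can hold at most
# 240 MCS-provisioned VMs, and groups cannot be added to a catalog later.
MAX_VMS_PER_RG = 240

def resource_groups_needed(total_machines, max_per_rg=MAX_VMS_PER_RG):
    """How many (pre-created, empty) resource groups a catalog will need."""
    return math.ceil(total_machines / max_per_rg)

print(resource_groups_needed(240))  # -> 1
print(resource_groups_needed(600))  # -> 3
```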
Choose the appropriate subnets that the network cards for your VDAs will use.
Choose the OU the machines will be deployed to and a useful Naming Scheme.
Tip: I like the OUs to be named after the Machine Catalog and to place the machines from each Catalog in the corresponding OU.
Enter the AD domain credentials. MCS must have permission to create the machine accounts in your domain.
Click ‘Next’.
Review and complete.
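On the naming scheme: a quick sanity check like this hypothetical helper (not part of the Citrix wizard) avoids the underscore problem listed under Error 3 later in this article, and keeps names within the 15-character NetBIOS limit:

```python
import re

# Sanity-check an MCS naming scheme (a sketch, not Citrix's own validation):
# NetBIOS computer names are limited to 15 characters, and underscores in
# the scheme cause provisioning failures (see Error 3 later).
def validate_naming_scheme(scheme):
    issues = []
    if "_" in scheme:
        issues.append("underscore not allowed")
    if len(scheme) > 15:
        issues.append("longer than 15 characters")
    if not re.search(r"#+", scheme):
        issues.append("no '#' wildcard for machine numbering")
    return issues

print(validate_naming_scheme("AZ-VDA-##"))  # -> []
print(validate_naming_scheme("AZ_VDA_##"))  # -> ['underscore not allowed']
```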

MCS Creation Process in Azure

At this stage some funky stuff happens within Azure. The next section will describe the MCS disk creation process.

Navigating back to the Resource Group into which I chose to deploy my virtual machines, I have captured some of this MCS disk creation process. (I have not caught every step, but most, to give a picture of what is happening.)

For the initial step, we created a master VM with an associated disk. If the VM is created using unmanaged disks, the VHD will be placed in a Storage Account. If the VM is created with a managed disk, the disk will not be placed in a Storage Account.

We then start the MCS wizard, which checks via the Azure API that we have the necessary connectivity and capacity.

If you have full-scope permissions, the Resource Groups will be created as previously mentioned; if not, you will have to create the Resource Groups beforehand. Remember, a resource group can only contain 240 VMs.

A Security Group is created to isolate the preparation VM from the rest of the network. This blocks any inbound or outbound traffic to the preparation VM during its lifetime.

MCS then asks the plugin to verify that the service principal has access to the Azure resources. We now begin to see items populate in the empty Resource Group.
On the next refresh, I see a preparation VM has been created.

A preparation virtual machine (VM) is created based on the original VM. As part of the process of creating a machine catalog using MCS, the contents of the shared base disk are updated and manipulated in a process referred to as Image Preparation.
A storage account is created. This is for the preparation identity disk. This is a temporary step.

Inside the Storage account I see the following items:

(For some reason my screen shots turned black)
Within the Citrix locks container are two .lock files.
An identity disk is created for the preparation VM. The process involves a small “instruction” disk, which contains the image preparation steps to run and is attached to that VM. Because the preparation VM is deployed in Azure, it starts automatically; it is then forced to stop so changes can be made. After the preparation VM stops, the identity disk is added to the VM. The preparation VM then starts with the identity disk attached and runs through the preparation sequence, which involves writing the identity to the identity disk and anonymising the master image for use with MCS.

- Preparation VM started. 
- Preparation VM stops after preparation. 
- Preparation VM disk copied to new container and used as base.
The preparation OS disk that appears matches the size (127 GB) of our OS disk.
The other disk is a 1 GB preparation identity disk.
We now see the snapshot of the master disk appear alongside the MCS-created preparation VM's identity and OS (delta) disks.
During the preparation identity disk phase, the following VHD is present in the storage account.
Then the prep disk snapshot disappears.

- Replicate base image to all Storage Accounts.
- Delete Preparation VM and Identity disk.
All created resources are checked before the VM creation process starts.
The snapshot disk reappears with the standard base name of our master disk.
The detail of the base disk shows 127 GB.
The identity disk, OS disk and NIC disappear, but the base disk snapshot remains.

The final creation of the VM's identity and OS disks, plus the NIC assignment, follows.
During the start of a VM, the operating system disk is created.

The VM is subsequently created during the start operation and is bound to the OS disk.

The identity disk created is associated with the VM before the VM starts.

Within the storage account we see the virtual machine's identity disk. This is a temporary creation step, as I do not see it later.
The objects below are shown when the process is complete.
One thing that has surprised me is the creation of the storage account, even when creating managed disks. This appears to be used for the identity disk creation process (instruction disk) in both the preparation and VM creation phases.

The storage account is also present after the whole VM creation process, all but empty.

A Machine Catalog that has deployed machines into the Azure subscription via the Azure API will now be present in the Citrix Studio console for CVADS.

The warning in the next screenshot is just notifying me that I do not have the appropriate RDS licensing. Citrix allows this warning to be dismissed if you wish. It is always great to see your RDS licence issues highlighted, rather than wondering why your applications are not launching!

Delivery Group Creation

Heading back to the familiar Studio console, the next step is to create a Delivery Group and assign it to the Machine Catalog. This step is necessary to give users the ability to access desktops and apps provided by the VDAs in the Catalog.

Again, you should be familiar with the process of Delivery Group creation, but let's highlight the steps anyway.
Select the Catalog.
Assign the appropriate user access.

There are options to use the familiar method of user assignment at this stage, or you could leave user management to Citrix Cloud. This option makes use of the Library, which we will come to later.
Next, it is all about the apps and desktops you wish to publish to your user base.

Detect applications via the Start menu, or browse to specific locations.

Tip: At this point a VDA machine is started in order to read the Start menu programs. This takes some time; you could manually start your VDA in advance.

The machine will turn on and go into a 'creating' state in your Azure subscription when this happens.
Eventually, the applications will appear.
Choose your applications and then assign a desktop.
Complete the Delivery Group Assignment process specifying group name and display name.

Click Finish.
We now have published App and Desktop resources for our user base.

Secure Browser

It is a good idea to publish a Secure Browser to your end-user base, for the primary reason of redirecting risky internet browsing activity to an isolated, cloud-hosted browser. This is a clientless configuration.
The spiel from Citrix is here:
Citrix Secure Browser completely redirects internet browsing activities to a cloud-hosted web browser, adding layers of security. Now all your users' risky internet browsing actions are separated from the corporate network. Citrix Secure Browser is designed to enable users to traverse the internet; however, only screen updates, mouse clicks and keystroke commands associated with navigating the internet cross the network to reach the user's endpoint device on the corporate network, greatly reducing the risk of data exposure or exfiltration. No website data or information resides on the user device or in the local browser cache, and nothing is left behind when the network connection is terminated, aiding security and compliance.

The configuration is straightforward.

Click on 'Manage'.
Then, go through the initial configuration steps, which will take you through a publish, test and distribute procedure.


Two options:

- External Unauthenticated

- External Authenticated

An unauthenticated Secure Browser can be used by anyone who has the URL to launch it. Unauthenticated Secure Browser instances are not managed in the Library. We want our resources to be controlled by the Library, so we will choose the second option.
Next, you can choose the name, browser opening page, region and icon.
Then, we will assign our users via the Library (more on this later).

Click on the Library link.
Click on the 3 dots…
Choose your subscribers.
Once chosen, click the 'X' icon at the top right.
We can see the resource has subscribers (Users).
The Secure Browser also allows you to place some restrictions on the service, which are self-explanatory.
Secure Browser also provides the ability to control access to URL content.
A test URL link is provided so you can see the experience first-hand.
Upon launch the browser opens a secure connection to our chosen web page URL.
I tested watching a new movie trailer. To be fair, the video playback was impressive.
I am also able to track the usage of the Secure Browser Service.
This is an effective, secure way to provision a browser to your end users. When accessing the published resources, the browser will appear as an application. The Citrix Cloud administrator can publish this browser to users via the Library or provide them with a URL they can access.


The Library

As we have just touched upon this concept, let's provide some context.

All resources that you provide to users are shown in the Library portal. You can assign users/groups to published resources. The screenshot below shows a published Secure Browser, a desktop and applications that my user base can access. The Cloud Connector communicates with Active Directory, so you can browse and assign specific users/groups.

The option to ‘Leave user management to Citrix Cloud’ in the Delivery Group wizard lets you assign the relevant access within the Library.


Peeking within the Zones node, I see my Cloud Connector, Hosting Connection, Machine Catalog and user group. In a multi-site scenario (again, more on this in a later blog), my user (in this example) will connect to VDA resources in the Azure location.

Workspace Portal and 2FA

Of course, there must be a way to connect to the resources we have published at this point.

Welcome to the Workspace. The URL users connect to is customisable, and the same can be said of the portal look and feel, although this is somewhat limited currently. The great thing is that you have one less secure certificate to worry about.
With Citrix Cloud Workspace you can also choose to leverage the Gateway Service, with multiple points of presence across the globe backed by a Cedexis backbone that finds the best route for your users.

The traffic flow of the VDA and Gateway Service is explained in the following article:

You can read more about Cedexis and Intelligent Traffic Management here:
The very complicated configuration is shown below 😉
Workspace allows you to configure authentication methods. We now have Active Directory + Token, giving two-factor authentication using applications such as Google Authenticator. This feature is termed Active Directory + Time-Based One-Time Password (TOTP).

First you must enable this authentication method in Identity and Access Management (in the Citrix Cloud management portal); then you assign it to the Workspace as an authentication method.

I have tested this authentication and it is simple and effective. This is a great move and pretty slick! One more reason to go to Citrix Cloud!
The screen below shows the Authentication access I have for my user after enabling 2FA in the Citrix Cloud.

The first step, if not done before, is to click the ‘Don’t have a token?’ link.
Input domain/username details and click Next.
If you come across the error in the next screenshot, you must add an email address to the AD user account.
I populated my user1 email address. Provide the relevant information for your user.
This allows you to progress, and an email will be sent out to complete device registration.

The example shows an email sent to my Gmail account.
Within the email body you will see something like the following:
Take the code provided and add it to the ‘Verification Code’ field along with your user password.

Click ‘Next’.
Next, you are presented with a barcode that your mobile phone can scan using an application like Google Authenticator.
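Under the hood, apps like Google Authenticator implement the standard TOTP algorithm (RFC 6238) once they have scanned the secret in that barcode. A minimal sketch, using the RFC 4226 test key rather than a real Citrix token:

```python
import base64
import hashlib
import hmac
import struct
import time

# Sketch of the TOTP algorithm (RFC 6238) that authenticator apps implement.
# The secret used below is the RFC 4226 test key, not a real Citrix token.
def totp(secret_b32, timestamp=None, digits=6, step=30):
    key = base64.b32decode(secret_b32)
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp // step)            # 30-second time window
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Base32 encoding of the RFC 4226 test key "12345678901234567890"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=0))  # -> 755224
```

Because both sides derive the code from the shared secret and the current 30-second window, the server can verify the six digits without any inbound connection to the phone.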
Once this is done, I am securely authenticated and have access to my applications, desktops and secure browsers.


I hope this article is useful and highlights how you can effectively use the Citrix Cloud Virtual Apps and Desktops Service with an Azure Resource Location, along with secure two-factor authentication, to access your apps, desktops and Secure Browser. With the introduction of additional security features, Cedexis (ITM), an improved Gateway Service and traffic flow, I can easily see why enterprises should start adopting Citrix Cloud. It is a complete no-brainer in a multi-site scenario, and I will have more to say on this in my next blog.
For now, it is time to log off.

Errors Encountered on the Journey

Error 1

Encountered when I was unable to remove an old Delivery Group from Citrix Cloud Studio.

Error 2

Encountered when I installed the Cloud Connector Software without Administrative rights.

Error 3

Encountered when using the _ character in the naming scheme for a VDA.

Error 4

Encountered when carrying out MCS catalog provisioning. My Azure subscription did not have enough cores.

Error 5

Encountered when trying to provision a catalog using MCSIO without the appropriate MCSIO driver being present on the VDA.

Error 6

Encountered due to a subnet mismatch between the Domain Controller and the VDA.

Error 7

Encountered due to the domain firewall being on, and partly due to the subnet mismatch above.


13 Reasons why you should use Citrix Cloud


Considering recently published articles surrounding Citrix Cloud, I think it is important to remind institutions of the benefits. I will (very briefly) highlight 13 advantages of Citrix Cloud (there are many more) and provide a link to a great article by fellow CTP Nicolas Ignoto on feature requests that should be incorporated into the solution.

SQL Backend

This is a big one. With multiple resource locations on premises, you traditionally need multiple SQL servers for your XenApp Site back ends. Moving to Citrix Cloud eliminates this. You can also now use WEM as a fully integrated cloud service, meaning you do not have to worry about costly SQL. Have you checked how much SQL costs in Azure?

High Availability

All infrastructure is highly available (HA): your Desktop Delivery Controllers (brokers), license servers, Studio, Director and SQL. Think of the comparable cost with IaaS or on premises.

Automatic Patching

All infrastructure is automatically upgraded. Citrix takes care of this for you, eliminating the need to plan patch management. Hotfixes and security patches for the infrastructure components are not your worry.

Always Latest Software

The infrastructure components are automatically upgraded to the latest Citrix versions. You are on the latest technology, thoroughly tested before deployment, and you get the latest features and improvements.

License Usage

With the Citrix XenApp and XenDesktop Service you can easily control your license usage. The licences are user licenses; there is no concurrent licensing unless you subscribe to the full Workspace services. However, you do get 2-for-1 trade-up deals and hybrid rights usage, which allows you to continue using your on-premises solution while migrating to (and testing) Citrix Cloud. At the time of writing, I believe you have a 3-year transition period. The other advantage is that you are eligible to release licenses after 30 days, compared to 90 days for on-premises environments.

Unified Management

You can easily manage multiple resource locations from a single unified management plane. This reduces the need for costly infrastructure at multiple site locations.

Smart Scale

You can control costs by using Smart Scale. This helps reduce the cost of your workloads in Azure, AWS or XenServer (on premises). Think of the way public clouds bill per minute. You can now have workloads running only during core operational hours, or reduce workloads as user numbers drop.
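A back-of-envelope comparison shows why this matters with per-minute billing; the machine counts and schedules below are illustrative assumptions:

```python
# Compare always-on compute hours with a core-hours schedule (illustrative
# figures - substitute your own machine counts and operating hours).
def monthly_compute_hours(machines, hours_per_day, days=30):
    return machines * hours_per_day * days

always_on = monthly_compute_hours(20, 24)            # 20 VDAs, 24x7
core_hours = monthly_compute_hours(20, 12, days=22)  # weekdays, 12 hours/day
print(f"{1 - core_hours / always_on:.0%} fewer compute hours")  # -> 63% fewer
```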

Hidden Costs

A hard one to prove, but if you think the Cloud carries an additional cost, consider the hidden savings too: reduced tin, reduced operational costs, time and resource freed up for other initiatives, no upgrade cycles to worry about, no duplicated infrastructure in resource locations, easy central management, easy image management, and monitoring capability included.


WEM (Workspace Environment Management)

This will be a fully integrated cloud service allowing you to improve the workspace experience for your users. Improve logon times by moving GPP to this service. Apply CPU and memory optimisations. The SQL back end is managed by Citrix.

Smart Check

This is an automatic health check for your site. There is no need to deploy agents if you have the XenApp and XenDesktop Cloud Service. You will receive diagnostics on your site's health, such as machines in maintenance mode, stopped services and any back-end communication issues.

Simple Image Management

You have the ability to use MCS and PVS (on premises) via the Citrix Cloud. (Granted, you do have this ability on premises, so maybe this one does not count.)

Cloud Agnostic

You can choose your Cloud of choice. There is no Cloud lock-in. Citrix Cloud is public-cloud agnostic. Managing multiple resource locations in different public Clouds is easy.

Easy Onboarding and the Ability to Run a POC

The time it takes to request a trial is up for debate, but compared to the time it takes to get a traditional POC running, it is not bad at all. This will improve, but it is already easy to transition an on-premises deployment to a running Citrix Cloud XenApp and XenDesktop Service. We are talking hours, not days!

Workspace App

Finally, you can take advantage of the Workspace experience using the Workspace app, an all-in-one place for the resources you need on a daily basis. Whether it is ShareFile, SaaS apps, web apps, XenApp/XenDesktop apps, on premises or Cloud, you can browse and search for your resources through one easy-to-use Workspace app experience when linked to the Workspace in the XenApp and XenDesktop Service.


Citrix Cloud is evolving and improving, and it does have its limitations. I feel it is important to highlight some advantages, though, in the wake of some recent Citrix Cloud bashing. My fellow CTPs provide a constructive article on the limitations being worked on that is worth a read. The aim of this article is to provide some yin and yang to the pros and cons of the solution.

Creating a NetScaler in Azure Resource Location for your Citrix Cloud

Background Information

The old limitations of using a single IP on an interface for a NetScaler Gateway solution in Azure are no more.

You may have heard that the NetScaler can now have one interface with multiple IP addresses, one interface with one IP address, multiple interfaces with single IPs, or multiple interfaces with multiple IPs. What does this mean?

Well, the old method of putting a load balancer in front to NAT 443 to 4443 gateway IPs is no longer required. You can still use it if you wish, and you will need the Azure Load Balancer if doing an HA setup.

I can have multiple IPs assigned to a single NIC on the NetScaler Azure VPX. This is known as the multi-IP architecture.


In Azure, assigning multiple IPs to an interface looks like this:

Above, you can see that I have a single IP address, “ipconfig1”, which is my NetScaler NSIP. (I removed the public IP that was assigned.)

In a multi-NIC, multi-IP Azure NetScaler VPX deployment, the private IP associated with the primary (first) IPConfig of the primary (first) NIC is automatically added as the management NSIP of the appliance. The remaining private IP addresses associated with IPConfigs need to be added to the NetScaler appliance as VIPs or SNIPs.

You can see that I have added a SNIP and my Gateway IP, with an internal and a public IP.

So why am I focusing on having a Gateway in Azure?

Well, something I stress about moving to Citrix Cloud is having a NetScaler and StoreFront in your resource location. Why?

If you lose Cloud access, you lose the normal path for brokering connections to your VDA machines in resource locations. This is about having a way to access your resources in that event.

You will still be able to access and broker connections in your resource location, because the Citrix Cloud Connector servers also act as proxy brokers and contain the Local Host Cache. (Please check out my webinar at the Virtual Expo for more on this.) So even placing one NetScaler Gateway appliance in an Azure resource location provides a degree of resilience for your Citrix Cloud solution.

Typically, it is more likely that a customer's internet connectivity will be interrupted (due to third-party factors such as ISP or power problems) than the highly reliable and redundant Citrix Cloud management plane running in Azure.

There are plenty of articles on creating NetScalers in a traditional environment and I will highlight some here by some other fellow CTPs:


The art of building a NetScaler in Azure is less well known. Hopefully this article provides some enlightenment, and you can start forming your resilient Citrix Cloud solution using an Azure resource location.


Before you create your NetScaler in Azure, here are some prerequisites you should know about.

  • Create a Resource Group for your NetScaler instance. (A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group.)

TIP – In a Citrix Azure world, I put the NetScalers in one resource group, infrastructure in another, and VDAs in a separate resource group.

Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.

Also something to keep in mind, for compliance reasons, is that you may need to ensure that your data is stored in a particular region.

More on Resource Groups can be found here –

  • Pre-create the PIP (public IP) for the Gateway and the NSG (Network Security Group) in your appropriate Resource Group:

NSG inbound/outbound rules can be defined as below. You can make this more restrictive if you like.

This is just an example:

You can then simply add these objects when you create the NetScaler from the Azure Marketplace.
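Conceptually, NSG rules are evaluated in priority order (lowest number first) with the first match winning; this toy sketch illustrates the idea with example rules, not a recommended production policy:

```python
# Minimal sketch of Azure NSG-style rule evaluation: rules are checked in
# priority order (lowest number first) and the first match wins. The rules
# below are example values only, not a recommended policy.
RULES = [
    {"priority": 100, "port": 443, "direction": "Inbound", "action": "Allow"},
    {"priority": 110, "port": 80,  "direction": "Inbound", "action": "Allow"},
    {"priority": 4096, "port": "*", "direction": "Inbound", "action": "Deny"},
]

def evaluate(port, direction="Inbound", rules=RULES):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["direction"] == direction and rule["port"] in (port, "*"):
            return rule["action"]
    return "Deny"  # implicit deny if nothing matched

print(evaluate(443))   # -> Allow
print(evaluate(3389))  # -> Deny
```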

  • Have your Vnet and Subnets in your virtual network pre-created.
  • Have any internal VIP (virtual IP) addresses ready, along with your internal Gateway IP. Know your infrastructure IPs for LDAP, service account passwords, StoreFront URLs, STA IPs and DNS so you can complete the Gateway setup wizard.
  • Have your certificates ready for 443 connections, and do not forget all the intermediates!
  • Have your public DNS configured for your Gateway URL. You can do this in advance, as you can create the PIP for the Gateway in your Resource Group before deploying the VPX in Azure.

The Configuration

So how do you build this Appliance?

First, choose your appliance in the Azure Marketplace.

I have my own license, so I chose the machine below –

Review and click create.

Next simply go through some basic configuration steps.

Name your NetScaler. My preference is NetScaler name – domain – region (example: NS01-CITXEN-UKS).

Choose your password, disk type, resource group and location.

Click OK.

Next, you can choose the machine specification you will use for the NetScaler instance.

VPX virtual appliances can be deployed on any instance type that has two or more cores and more than 4 GB of memory. Remember to size appropriately for your particular environment.
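That minimum is easy to encode; this sketch filters a handful of example instance sizes (the core/memory figures shown are illustrative) against the VPX requirement:

```python
# Example Azure instance sizes as (cores, memory in GB) - illustrative values.
SIZES = {
    "Standard_B1s":   (1, 1),
    "Standard_B2s":   (2, 4),
    "Standard_D2_v3": (2, 8),
    "Standard_D4_v3": (4, 16),
}

def vpx_capable(sizes):
    """Return sizes meeting the VPX minimum: >= 2 cores and > 4 GB memory."""
    return sorted(name for name, (cores, mem_gb) in sizes.items()
                  if cores >= 2 and mem_gb > 4)

print(vpx_capable(SIZES))  # -> ['Standard_D2_v3', 'Standard_D4_v3']
```

Note that a size with exactly 4 GB fails the "more than 4 GB" requirement, which is why the B2s example is filtered out.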

Next you can choose a lot of settings such as Availability Set, Storage account, virtual network, subnet and NSG.

The NSG (Network Security Group) assigned to your resource group acts like a firewall. If it is created before you run the NetScaler Marketplace wizard, you can simply choose the NSG and assign it to the NetScaler's NIC.

Option to choose virtual network below.

Options to choose or create subnet.


Options to create or choose NSG.


Click OK

Click create.

In about five minutes, your NetScaler will be deployed, as you can see below.

Now navigate to your interface via the newly created machine's Networking menu, then click the highlighted network interface in the right pane.

Next go to IP Configurations and click Add.

Now, configure your additional IPs on the interface.

Create your SNIP.

Do the same for the NetScaler Gateway Virtual IP (VIP).

Set the IP to static and choose your Public IP that you created before set up.

In the end you should have three static IPs assigned to your NIC as seen below.

Remember to set up your external DNS so your URL can resolve to the Gateway!

At this point you should be able to access your NetScaler from your internal network, and via your public management IP (if you added an inbound rule for port 80 to your NSG). Proceed with the initial configuration, upload your licensing and SSL certs, and run the Gateway setup wizard. (After configuration, I usually remove the PIP from the management IP.)

There is no requirement for an Azure Load Balancer in this single-appliance deployment. If I were deploying my VDAs in an Azure resource location as part of a Citrix Cloud solution, this setup would suffice for that little extra reassurance against losing your connection to the Cloud management plane.

Look up my blog where I run through the configuration of NetScalers in HA (High Availability) in Azure using the new NetScaler 12 HA template.

Hope this helps somebody!

Carpe Diem!

Follow me on Twitter @CitXen

Office 365/FSLOGIX with Roaming Licences without ADFS in Cached Exchange Mode for VDI

Today we will step into the world of “Why am I not using this solution already!?”

I am going to write about FSLogix, and in particular two features that, when it comes to profile management, provide a win-win: happy administrators and happy users.

The end result of any implementation should be about the user experience. If it is not acceptable, the hard work you put into the solution is ignored.

Profile Containers and Office 365 containers to the rescue!

In a nutshell, all of the user's folders and files are mounted at logon from one single VHD/VHDX file. The system only sees a single VHD/VHDX file attached for the user. There are two container (VHD/VHDX) files:

The Profile Container – no faffing about over what to include or exclude in the profile, and no logon slowness owed to folder redirection.

The Office 365 Container – OST and Office data, and even OneDrive, can be cached in a single VHD/VHDX file, so you get a lightning-fast experience with Office apps, including searching and indexing! (Imagine a chatty OST/PST across the network causing issues for your VDI/RDS users whenever they search or do anything within Outlook: lots of resources hogged, and the network and shared users affected.)

One thing to note – space; you will need space. SAN, storage, file server – you name it, you will require space to store the profiles. This is not so costly anymore and should not be a problem in today's terms. Remember to size appropriately.

From a user perspective, the experience is A+.

I had the fortunate experience of meeting some of the FSLOGIX team and they really know their onions!

So what will I cover?

  • Office 365 Installation
  • Image Checks
  • FSLOGIX Install
  • Image Preparation
  • GPO Setup
  • How FSLOGIX Works
  • Troubleshooting
  • FSLOGIX Container Management
  • Conclusion

Office 365 Installation

Once you have the master image prepared and optimised for the best user experience by your preferred consultant, we can begin the Office 365 install on the image.

The Office install is done using the ODT (Office Deployment Tool).

You simply download this from the website –

Once the executable is downloaded, simply run it on a network share.

You should then see the following files.

Next, you need to create a new configuration.xml file.

This is easily done by going to the wonderful GitHub.

You can exclude the products you do not wish to download. (Historically it has usually been recommended not to redirect OneDrive to the containers; however, this can be done.)

This is pretty intuitive. What you need to be aware of is the following –

  • Disable Updates
  • Do not have any other office products on your image
  • Display level None
  • Accept the EULA
  • Shared Computer Licensing should be YES

Once you have your configuration file sorted, simply download it and replace the one in the share.
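A minimal configuration.xml reflecting those points might look like the following sketch; the product ID, channel and language are example values, so adjust them for your own tenant:

```xml
<Configuration>
  <Add OfficeClientEdition="64" Channel="Monthly">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
      <!-- Example exclusion: drop apps you do not want in the image -->
      <ExcludeApp ID="OneDrive" />
    </Product>
  </Add>
  <!-- Disable updates: the image is updated centrally, not per machine -->
  <Updates Enabled="FALSE" />
  <!-- Silent install: no UI, EULA accepted -->
  <Display Level="None" AcceptEULA="TRUE" />
  <!-- Required for non-persistent VDI/RDS license roaming -->
  <Property Name="SharedComputerLicensing" Value="1" />
</Configuration>
```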

You should end up with something similar to the below.

Next, you need to download the Office files.

This can be done remotely from your Win 10 master image or any other machine.

Open an administrative command prompt and run setup from the location the files are in, downloading the files to the share.

\\Server\Share\setup.exe /download \\Server\Share\configuration.xml

Once this is done you will see extra files in your profile share –

Now, from your master image, you need to install the Office files.

So the first step is to download to a share, and the second step is to install from that share.

\\Server\Share\Setup.exe /configure \\Server\Share\configuration.xml

This process actually installs Office onto your image.

You will have to wait a bit of time, so get some pushups in!

Image Checks

At this stage do not launch your office apps. You do not want to activate office on the Master Image!

We have to go through a few tests to check all is good.

Check 1

Make sure the product is not activated on your image.

Check 2

Add the following registry keys and make sure shared computer activation is set to 1.

The registry keys are required to enable office licensing roaming.

FSLogix Install

We now install the FSLogix agent.

This is easy.

Click Yes

Enter your product key.



The agent install also installs a service.

You will also have four groups created for you.

The ODFC groups control user access for your Office (Outlook) containers.

The Profile groups are for your… yeah, you have it!

Remove the default groups

…and add the users you want to have containers mapped when they log on to your VDI/RDS solution.

Best practice – create two groups.



Put the users into these groups and add the groups within the FSLogix-created groups.

Note – Exclude overrides Include.

The next thing you need to do is create a few registry keys.

FlipFlopProfileDirectoryName – flips the container folder naming from SID_username around to username_SID.

VHDLocations – the location of the share that will contain the containers.

IncludeOfficeActivation – 1 = Office activation data is redirected to the container. 0 = Office activation data is not redirected to the container.

RoamSearch – used to control the FSLogix search roaming feature. Set to ‘1’ or ‘2’ to enable the feature.
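As a sketch only, these values could be captured in a .reg fragment like the one below. The hive paths follow the FSLogix documentation as I understand it, and the UNC path is an example, so verify the key locations and value types against the documentation for your agent version before use:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\FSLogix\Profiles]
"Enabled"=dword:00000001
"VHDLocations"="\\\\FileServer\\Containers"
"FlipFlopProfileDirectoryName"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\FSLogix\ODFC]
"Enabled"=dword:00000001
"VHDLocations"="\\\\FileServer\\Containers"
"IncludeOfficeActivation"=dword:00000001
"RoamSearch"=dword:00000002
```

In .reg syntax each backslash in a string value is doubled, so "\\\\FileServer\\Containers" resolves to the UNC path \\FileServer\Containers.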

Image preparation


This will be your own set of best practices.

What I will say is that there are two great tools to prep your images.

There are lots of other tweaks you can do – I follow this link for the machine template within VMware.

This link has a good vdi.bat file you use at the end of your image prep.

Once you have finished preparing your image, it is over to your Horizon/XenApp solution to deploy it.

I will not go into detail here for this part, but this means using the image with MCS/PVS/linked clones, etc.

GPO Setup

There are some GPOs you can also use to configure settings.

Your VDI machines will need to be within the scope of the GPO that these rules apply to.


Office ADMX template

Setting to roam the Office 365 token.

Set Computer Configuration Preferences.

Here are the keys we mentioned earlier but being set by GPO.

You will just have to put the ADMX and ADML files either in your local PolicyDefinitions location or the Sysvol PolicyDefinitions (central store) location.


When FSLogix is working, it will create VHD/VHDX files on your container share under a username_SID folder (remember the FlipFlop setting) when a user logs in to the VDI.


This can be seen in the VDI's Computer Management console (compmgmt.msc). So, you will get a mounted volume attached that resides on the file share as though it is local to the device.


Agent log files are located at C:\ProgramData\FSLogix\ODFC.

The log will indicate the settings being read/applied from GPO, the VHD being found for the user, and the VHD being mounted to the specific Office365 folder.

If using UPM or other profile management tools in conjunction with FSLogix, don't forget to exclude the highlighted items.

UPDATE: More exclusions added – Thanks to Rene Bigler.

Outlook- \Users\<username>\AppData\Local\Microsoft\Outlook
Search- \Users\<username>\AppData\Roaming\FSLogix\WSearch (Only required when RoamSearch set to 2)
Skype4B- \Users\<username>\AppData\Local\Microsoft\Office\16.0\Lync
Licensing- \Users\<username>\AppData\Local\Microsoft\Office\16.0\Licensing
One Drive- \Users\<Username>\<OneDrive folder name>

The status tray also highlights details and logs.

FSLogix Container Management

Mounting VHDX files so you can look at the contents is done with a tool – frxcontext.exe, located in the following location – C:\Program Files\FSLogix\Apps

You install the tool using the command frxcontext.exe --install

From the machine you installed this tool on you then browse to the VHDX files and right click and mount as shown below.

The container below shows the Office VHDX file.

The container below shows the user's profile within the VHDX folder.

Two screens will pop up when mounted.

Registry and the Profile folders.


The next screen shot simply shows that I can drill in to the users desktop and see the folders that I created.


Simple, easy to implement and quick. Great user experience, and it solves issues with Office 365 cached mode and the Exchange Online experience for corporate enterprises.

This solution will allow your license to roam in a non-persistent environment without ADFS.

More posts on FSLogix will be coming.

Carpe Diem!

Follow me or contact me on Twitter @CitXen

Active/Passive Multi-NIC NetScaler HA in Azure using ALB

I wrote an article in the Citrix User Group Community, where I highlighted deploying a single NetScaler in Azure to provide some resilience for a Citrix Cloud Solution.

I promised a follow up at the end of the article and here it is.

I have decided to highlight how to deploy a NetScaler Gateway in Azure in Active/Passive mode with a multi-NIC configuration using high availability.

The Azure Market Place has a ready to go template to help you deploy this solution.

It will deploy the following –

  • 2 NetScalers
  • 1 Azure Load Balancer
  • 6 Network Security Groups
  • 6 NICs
  • 3 public IPs
  • 1 availability set
  • 1 storage account


Have the Gateway PIP (public IP) already defined. This way you can sort out your external DNS before template deployment and simply add the public IP to the Azure Load Balancer. More on this later.

You can create the Resource Group/subnets beforehand if you wish. You need 3 subnets, or the template will create them for you.

The Build

Search for the NetScaler template on the Azure Marketplace.

The template outlines the following –

Citrix NetScaler 12.0 High Availability (HA) Azure Resource Manager (ARM) template is designed to ensure easy and consistent way of deploying NetScaler pair in Active-Passive mode. This template increases reliability and system availability with built in redundancy. This ARM template supports Bring Your Own License (BYOL) or Hourly based selection. Choice of selection is offered during template deployment.

Citrix NetScaler is an all-in-one web Application Delivery Controller (ADC) that makes applications run faster, reduces web application ownership costs, optimizes the user experience, and makes sure that applications are always available.

Citrix NetScaler offers many tools for application deployment. Some of the primary tools are:

  • Application Acceleration and Application Security
  • HTTP Compression and HTTP Caching
  • Web Application Firewall (WAF)
  • L4-7 Load Balancer
  • Global Server Load Balancing (GSLB)
  • SSL Acceleration
  • Server Offloading
  • Server Consolidation
  • Content Switching and Content Caching
  • High Availability
  • Remote Access and Remote Monitoring
  • Policy Engine with Multi-Tenancy
  • Data Loss Prevention
  • Session Persistence

This template will guide you through deployment of Citrix NetScaler in HA Active-Passive mode, preconfigured to include the components and settings needed to deliver a seamless HA experience. Details of the topology can be found at the link provided with the template. On successful deployment, the pair of NetScalers will be pre-configured in HA-INC mode. The NetScaler 12.0 VPX HA template supports different NetScaler SKUs, both BYOL and hourly licences such as VPX 10, VPX 200, VPX 1000 and VPX 3000.

Next you have the option to choose or create a Resource Group.

Put in your preferred username/password combination, and it is here you can choose the licensing method. I have opted for BYOL (Bring Your Own License), which I will upload to the device and configure with the MAC ID of the VPX system once known.

You can also choose the virtual machine sizes. The recommended size is 2 x Standard DS3 v2 machines.

The next screen gives you the option of choosing or creating the vnet (virtual network) and your subnets. Here you can choose the vnet/subnets you already created or let the template configure a new vnet and 3 subnets.

In the screen shot below I had 3 subnets already created.

One for management, one for infrastructure and one where I placed my VDA (XenApp/XenDesktop) machines.

Once you have defined each subnet the template wizard prompts you to review your subnet configuration.

Click Ok.

I came across the below error. This was because my subscription only allows a certain number of cores. I removed some machines and I was able to run it again successfully.

Second attempt I was able to proceed and create the HA environment.

Azure lets you know the deployment is happening once you click create.

Again, in my deployment there was a failure but the template did deploy.

Then this message pops up informing me the template is deploying.

The error details I received are here.

Once deployed, you should see the following VPX virtual machines in the Resource Group location.

The Azure objects I highlighted already will also be created in your chosen Resource Group.

Now you should connect to the internal management IPs of your NetScaler appliances on your chosen management subnet.

The IPs again can be identified by drilling into the virtual machines and going to the Networking setting –

You will see your IPs and the NSG rules set on the interfaces. If you want to connect to your public IP (PIP) for management, it is here you will need to relax the NSG rules to allow the access.

A public IP and an internal IP are shown at the top of the screenshot.

Once you obtain the management IPs (NSIPs) of both devices, you will be able to connect to them internally.

Here the fun stuff of configuring your two VPX devices will begin!

One thing to note is that when you log in you will see that your HA is already set up, nicely configured and able to fail over successfully.

  • Obtain your licensing and upload.
  • Import your certificates. Don’t forget the complete chain!
  • Upload the CA certs to your NetScaler and then install them.


Choose the Intermediate certificate.

Now import your Server Certificate. Mine is in PFX format.

Upload to the NetScaler device.

Click Install.

Choose the PFX certificate.

Put in your certificate password and click Install.

Your Server certificate should appear as below.


One more time – Do not forget to link the chain!

Bind the Server certificate to the Intermediate.

Click OK.

Now that you have the appropriate licencing on both your HA appliances and certificates are uploaded, you can proceed with the install of your NetScaler Gateway VIPs (virtual IPs).

The XenApp/XenDesktop Gateway wizard process is well documented so I will not outline it here. Please check out my CUGC blog at the top for links to other CTP members' posts on this process.

Additional Configuration after Template Deployment

The following additional configuration needs to be carried out after the template deployment.

First of all, you need to create two NetScaler Gateways.

You can create these using the XenApp and XenDesktop wizard on the NetScaler.

You can easily create a second Gateway after the first one by clicking on the highlighted icon below.

Once again, I will not run through the NetScaler Gateway configuration as this is well documented.

The Gateway IPs must be assigned on the virtual machine interfaces in the Azure portal.



Make sure your NSG (Network Security Group) on both interfaces is allowing port 443 inbound. The NSG acts like a firewall.

This needs to be done for both machines (VPX0 and VPX1) and the NICs the Gateway is assigned to.

The reason you need to create two Gateways in an Active/Passive deployment is that, in Azure, an IP can only be assigned to one interface.

For example, if I were to assign an IP to VPX0/NIC1, I would be unable to assign that same IP to VPX1/NIC1.

Because Azure cannot float the Gateway IP between the two VPX devices, we need two Gateways with identical configuration, each with its own IP on its NIC, so the Azure Load Balancer can direct traffic to whichever Gateway is active.
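If you prefer scripting over the portal, adding a Gateway IP as an extra IP configuration on a NIC can be sketched with the Azure CLI. The resource group, NIC name and address below are placeholders:

```shell
# Sketch - resource group, NIC name and IP address are placeholders.
az network nic ip-config create \
    --resource-group NS-RG \
    --nic-name vpx1-nic1 \
    --name gateway-ipconfig \
    --private-ip-address 10.0.2.11
```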

Azure Load Balancer Configuration

You need to assign the public IP (PIP) for your NetScaler Gateway to a frontend configuration on the load balancer.

In an on-premises environment, if the primary NetScaler goes down, the secondary node floods the subnet with GARP to take over the VIP and SNIP addresses and restore the connection. GARP does not work in Azure, so we place the ALB in front with a single public IP. The ALB health-probes the backend NetScalers; if the primary goes down, the NetScaler HA function (when configured with INC) brings the virtual server up on a separate IP address on the second NetScaler, and the ALB fails traffic over to that second VIP.

More on Azure Load Balancer can be found here –

Access the ALB (Azure Load Balancer) via your Resource Group that contains the deployed resources.

You then need to create your Frontend configuration and assign the PIP (Public IP) to it.

Remember you can create your PIP before deployment so you are able to sort out your SSL certificate and external DNS before implementation day.

You simply click ADD and you get the option to configure a Frontend LB.


The next thing you need to do is create your backend pools. These are the machines and target IPs the frontend load balancer will connect to within your environment.

Simply click ADD and configure as necessary. If you don't see your target IP, you may not have assigned it on your machines' NICs in Azure. Remember, the IPs defined within your NetScaler ADC configuration must exist on the NICs of the virtual machines in Azure.

The backend IPs can be seen below; they point to the individual Gateway IPs assigned to each NIC (remember, two Gateways).

Lastly, we need to configure the load balancing rules. This is the glue, if you like, that makes your frontend load balancer pass 443 connections on to the backend pool you just configured.

Click ADD.

One thing that caught me out when configuring this setup was the health probe. I used the probe that was assigned by default, could never hit the Gateway logon page, and was scratching my head for a while. When I created an additional health probe on 443, I was able to hit the Gateway externally; before that, I could not even get a TELNET response from the Gateway externally.
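Creating an explicit TCP 443 probe and referencing it from the load balancing rule can be sketched with the Azure CLI. All resource names below are placeholders:

```shell
# Sketch - resource names are placeholders.
az network lb probe create \
    --resource-group NS-RG --lb-name ns-alb \
    --name tcp-443-probe --protocol Tcp --port 443

az network lb rule create \
    --resource-group NS-RG --lb-name ns-alb \
    --name https-rule --protocol Tcp \
    --frontend-port 443 --backend-port 443 \
    --frontend-ip-name gateway-frontend \
    --backend-pool-name gateway-backend \
    --probe-name tcp-443-probe
```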


In Azure, SNIPs are not floating. This means your SNIP can be different on each appliance.



Make sure you have a SNIP for each subnet you communicate with. Each SNIP then needs to be assigned to the Azure VM interface. Remember, you cannot assign the same IP to more than one NIC.


Remember to add both your Gateways to StoreFront. You can simply download the configuration file here and import it into your StoreFront.

That is basically it. You should now have a NetScaler HA Active/Passive solution in Azure.

This is front ended by an Azure Load Balancer.

Simply connect to your Public URL and log in and you are good to go.

The next article will highlight the Active/Active approach in Azure.

Follow me on Twitter @CitXen

Carpe Diem!



VDI Like a Pro – Announcing the State of VDI and SBC union survey 2018


The industry’s biggest (almost) annual survey has just started again. For years now one thing is clear: Many discussions in the End User Computing, VDI and SBC space are not just about performance best practices and product comparisons. With so many VDI and SBC deployments out there, the differences are huge. It is only logical to wonder how these real-world VDI and SBC environments are used and how they are built, especially when you consider the rapidly changing VDI/SBC landscape. Since it’s driven as much by innovation as it is by marketing campaigns, there is a clear need to better understand what is out there.

The goal of the survey executed by Ruben Spruijt, Field CTO at Frame, and Mark Plettenberg, Senior Product manager at Login VSI, is to share insights about usage, configuration and trends in the Virtual Desktop Infrastructure and Server Based Computing industry, ‘the State of the VDI and SBC union’. The survey responses will be strictly confidential and data from this research will be reported only in the aggregate. The results will be reported in a whitepaper that is free to download after registration. If that’s not enough we are also giving away 3 Amazon gift cards ($50,-) to randomly selected persons who complete the full survey.

The questions are comprehensive, and relevant to everyone building on-premises VDI and SBC or Desktop-as-a-Service cloud environments. The aim of VDI Like a Pro is to repeat the survey at least once a year. This will allow us to see how our industry is changing in practice. The amount and quality of the responses will determine the success of the survey.

VDI like a Pro will present and publish the findings of the survey publicly, but the full report with all results will be made available only for the survey participants first. So, participate to get first hand access and make this community effort a success.

The survey will be closed on the 31st of March this year; the whitepaper and results will be available at the end of April. We hope you want to participate and become a VDI rock star!

Click here to take the VDI Like a Pro – “State of the VDI and SBC union” survey.


Citrix Policy Basics 101


As a Scouser (Endearing term for those of us born in Liverpool) who moved towards the south of England, every now and then I venture out and have a night out on the tiles. I will approach a bar and order some extravagant dressed up beverage. This is fine until my northern mates come to visit and smack me right back down to Earth and say, “Remember your roots lad!”

It is at this point the rounds turn in to the stuff of legend.

(These days I am a father of two, so this is not commonplace, nor was I ever really the stuff of legend) 😉

Why do I mention this? Well, with anything in life, no matter how far you progress it is important to remember your roots. Most of us like to venture forward but without a solid grounding as a base we will begin to make mistakes.

This is the same with technology. Mistakes are quite often made with common tasks. People tend to concentrate on the latest and greatest and sometimes forget the fundamental basics.

As a support-orientated consultant, I often see this happen in my field. I see very clever implementations done by very clever people, but now and then issues arise, and often it is due to some sort of oversight at those basic levels. Whether it is on-premises or cloud, the fundamental building blocks of any infrastructure will dictate how successful your implementation will be.

This series of “101” articles will serve as a refresher on those basics, starting with Citrix Policy 101.

I will talk about policy processing and precedence, best practices, policy modelling, the locations policies are stored in, tips and tricks, and enough information to provide you with effective troubleshooting skills.

Stored Policy Locations

Citrix policies are stored in the XenApp and XenDesktop Site database. They apply to systems that have the VDA agent installed and are configured within the Studio console.

Site, Domain and OU policies in Active Directory are stored in the Sysvol folder, which replicates amongst Domain Controllers in a domain. They are configured using the Group Policy Management Console (gpmc.msc).

Local policies apply to the local machine and are stored in its registry. They are configured using the Local Group Policy Editor (gpedit.msc).

All of these locations can contain both Microsoft and Citrix settings, apart from Studio policies (Citrix settings only).

As shown, policies can be configured in multiple locations – via the Group Policy engine or the Citrix policy engine.

Tip! It is advised as part of best practice to choose a single location to configure your policies as this streamlines any troubleshooting you may have to carry out.

Best practice would be to configure your policies using the Group Policy engine (Sysvol), but as a Citrix administrator, if you do not have access to AD GPOs, you can use the Citrix policy engine via the Citrix Studio console (Site DB).

Processing and Precedence

Policy Processing

  • Local GPO is processed first
  • Citrix Policies Created with Studio
  • Site GPO
  • Domain GPO
  • OU GPO – Processed last

Policy Precedence

  • OU GPO – Highest/Winner
  • Domain GPO
  • Site GPO
  • Citrix Policies Created with Studio
  • Local GPO
NOTE: Citrix Policies in Studio will be overridden by Citrix Policies at the OU level.

Citrix Policies rank from 1 upwards. The lower the number the higher the priority. So, a Citrix policy setting that has a priority of 1 will take precedence over a setting with the priority of 2.

Some Citrix Policies rely on underlying Microsoft functionality. If this is not enabled the Citrix policy will not apply.

Settings configured only in a lower-priority policy will still take effect when no higher-priority policy configures them.



Assess the security requirements (Session limits, clipboard redirection, client mapped drives, authentication/encryption, removable media)


Assess your end user locations so you can successfully make bandwidth optimisation/restriction decisions. These will be decisions on printing bandwidth limits, dpi, audio, video etc. Also assess your OS optimisations and plan on a way of implementing these.

Virtual Infrastructure

Assess the virtual machine settings (updates, local client or server time, start menu, power button, Icons, shortcuts)

Policy Types

Identify Citrix and Windows policies that are required to achieve your goals.

A list of policy references can be found in the following link to help you assess:

Limit the Number of Policies

There is a school of thought that separating policies out according to function (printing, bandwidth etc.) helps. Although easier for the administrator, the extra policies can cause a performance hit.

In addition, try to avoid duplicate policy settings as this increases logon time.


Filters define the criteria used to apply a policy to a computer or user object.

Apply your base foundational policies by setting them to the broadest filter. This eases administration when applying later policies that should only apply to specific situations.

An unfiltered policy applies to everyone. Also, give your baseline policy the lowest priority so exceptions can take precedence.

A default unfiltered policy is provided to apply your broadest settings. If you do not wish to use this default unfiltered policy, you can disable it. This policy applies to all users and computers in the Farm/Site as no filter is set.

NEVER set a policy setting to its default value unless you are overriding a setting in a lower-precedence policy. Explicitly setting default values requires extra processing, which adds to policy processing time. (Tip from Carl Webster)

An example of filters to apply is shown in this table:

Filtering Mechanisms

There are 3 filtering mechanisms –

GPO link location – This is Site, Domain or OU targeting. Careful planning of OU structure is required to be effective.

Security Filtering – This is used to restrict a policy set on an OU from applying to certain objects. A common example is disabling lockdown settings for admins so they have full functionality when logging on to a VDA. Denying a group a policy via security filtering, or removing the group from security filtering, prevents the GPO from applying to that group.

WMI Filtering – This is used to further restrict how a GPO is applied. At this level you can specify, for example, that the GPO should only apply to 64-bit and not 32-bit computers. Just be aware that WMI filters can cause logon delays.

Understanding the Loopback Policy

Loopback processing allows user settings to be applied based on the computer object the user logs on to.

You have two modes –

Replace – Only the user settings defined in the GPOs applying to the computer object are applied. User policies from outside these GPOs are ignored.

Merge – User settings from the user's own GPOs are combined with those applying to the computer object; the computer side wins on conflict.

I usually go with REPLACE but that is my preference.

Citrix Policy Templates

Citrix provides “best practice” templates to reduce your administrative tasks. There is a set of predefined templates for use cases that apply to certain situations, such as low-bandwidth, WAN and internal connections.

Additional predefined templates can be obtained via the Citrix support web site.

Simply open a template and use the configured settings or apply your own from this best practice point.

The below table highlights how to carry out actions using templates.

Backing up and Importing Policy

You can also import and export policy templates from the Actions tab. So, if you have your own predefined best practices from out in the field you can use these templates and configure as necessary.

Another way to back up your policy settings would be to use PowerShell.

To import the Policy, carry out the following cmd.

Comparison and Modelling

Citrix provides a tool for comparing policies.

Warning signs indicate conflicting settings. You can drill down into the warning signs for more information.

There are also two wizards provided which help you assess how policies are applied to your users connecting to VDAs.

The Citrix Group Policy Modelling Wizard will show you the results of both Citrix and Microsoft GPOs. It is found in the Group Policy Management Console.

You also get the Citrix Modelling Wizard. This will show only the Citrix policies and is found in the Studio console.

How Policies are Applied

Computer policies apply at system start. So, changes made will take effect after a reboot.

User Policies will apply when a user logs on. So, make your change, log the user off and back on for it to apply.

Policies are refreshed every 90 minutes plus a random offset of 0-30 minutes.

Reconnecting a session will cause policies to re-evaluate and using GPUPDATE /FORCE will also trigger a refresh.

The Full Process

  • User logs in and Winlogon process starts up.
  • Client Side Extensions are then loaded. Both Microsoft and Citrix.
  • Citrix CSE will start to process policies (Local GPO first)
  • The Citrix CSE will then process Farm policies
  • Lastly the Citrix CSE will process Active Directory policies.
  • Now the precedence order will take effect.
  • A resultant set of policies file is generated (RSOP.GPF)
  • This file is used to make the actual policy settings in the registry.
Policy Folders

The Citrix Client-Side Extensions are responsible for caching the policies on your VDA system. The CSE is contained within the VDA itself.

  • HDXSite – Indicates it is a Citrix Policy
  • GUID – Indicates it is an AD GPO
  • Local Group Policy – Indicates a local GPO
  • 0 = User Policy
  • 1 = Computer Policy

The other folder of significance is where RSOP files are stored on the machines. These are the active combined policy settings. The numeric folders indicate the session ID.

Tip: Deleting the contents of both folders and performing a GPUPDATE /FORCE will completely refresh the policies.
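As a sketch, that tip could be scripted from an elevated prompt. The two paths below are the usual default cache locations for AD-cached and Studio policies; confirm them on your VDA before deleting anything:

```shell
:: Elevated cmd sketch - confirm these default cache paths on your VDA.
rd /s /q "C:\ProgramData\CitrixCseCache"
rd /s /q "C:\ProgramData\Citrix\GroupPolicy"
gpupdate /force
```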
Registry Policy locations

The below registry location is where policies are stored in the registry.

(Note: User Profile Manager settings too!)

This registry directory requires some explaining.

The numeric keys are the user policy settings stored under a session ID.

In each of the folders you will see the values of the policies set.

Events – This is the timestamp of the last computer update. There is an Events key also located in the User session ID folders. Using this key you could figure out when the user last logged on, as this would be when the user session policies applied.

Evidence – This is where the filter criteria are tracked and stored.

ICAPolicies – Computer settings


The following information will provide you with good troubleshooting knowledge and methodology.

The Full Policy Troubleshooting Process

  • Check  Citrix CSE versions (Client Side Extension folder -CitrixCseEngine.exe)
  • Check the GPMC version (Group Policy – Management Components folder – CitrixGPMCConnector.dll)
  • Compare the versions to your appropriate product version (Support Site)
  • Check AD cached policies in C:\ProgramData\CitrixCseCache
  • Use the GUID to take a closer look in AD
  • Search in GPMC (unique ID) or use PowerShell to find a match
  • Look at the Created and modified dates of the GPO and compare this with the cache folder. This will tell you if the cached folder is up to date.
  • If it did not match you could issue a GPUPDATE /FORCE
  • If this does not work check event viewer for errors
  • Now check the Studio Policies in C:\ProgramData\Citrix\GroupPolicy
  • Computer and User folders will both contain rollback and RSOP .gpf files; if not, something is wrong.
  • Check the registry locations
  • Check how policies are filtered
  • Check HKLM\Software\Citrix\ICA\Session\ (shows connection details – name, client address and full client version). Issues with policies not applying could be due to subnet changes or client rebuilds.

High Level Troubleshooting Methodology

  • Find out the problem
  • Replicate the problem
  • Gather evidence
  • Investigate
  • Test scenarios
  • Resolve

On a shared hosted server system, you may need to RDP to the server as an administrator and drill into the registry and folder locations highlighted in this document to figure out issues.

You may need to set the policy priority to the highest and apply to a test user to see if the policy applies.

Use the tools you have been provided to gather evidence, such as the comparison and modelling tools, resultant set of policy etc.

Corrupt policy cache scenarios could involve deleting the contents of the CSE folder locations and the RSOP folder locations for the policies.

Sometimes you may have a locked down environment and you will need to relax this for troubleshooting if you have no run command or cannot access the registry.

One other method to get around this is to inject a cmd shortcut into the affected user's profile and open it elevated.

I find the following command is also useful to provide you with RSOP information from the GPO whilst logged in to a user’s session.

Command Prompt:

GPResult.exe /H %TEMP%\%USERNAME%.htm





Troubleshooting Policies Using Tools


CtxCseUtil is a tool that can generate a resultant set of policy (RSOP) report (per computer, per user or both) for Citrix policies on a device that has the Group Policy Management Console installed.

It can be run locally or remotely against a server VDA. The tool converts RSOP.GPF into an HTML report. The end user does not have to be actively logged in but does need to have logged in at some point. The tool needs WinRM configured on both the machine you run it from and the target.

Also run the tool with AD privileges.

Example: an elevated cmd prompt targeting a logged-in user using the /u and /c switches (u = user, c = computer).

Retrieved Citrix User Policies

Retrieved Citrix Computer Policies

Citrix Group Policy PowerShell Module

Import-Module C:\Citrix.GroupPolicy.Commands.psm1

The following commands will provide you with help on how to use this module –



This command will back up your policy settings into the specified folder.

Export-CtxGroupPolicy c:\Tools\GroupPolicy
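A full backup/restore round trip might then look like the sketch below. Import-CtxGroupPolicy is assumed to be the import counterpart of the export cmdlet; confirm the cmdlet names with Get-Command against your module version:

```powershell
# Sketch - cmdlet names assumed from the module; verify with Get-Command.
Import-Module C:\Citrix.GroupPolicy.Commands.psm1
Export-CtxGroupPolicy C:\Tools\GroupPolicy      # back up site policies
Import-CtxGroupPolicy C:\Tools\GroupPolicy      # restore them later
```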

Exporting Citrix policies from AD GPOs is outlined in CTX140039, and more on using PowerShell to create policies can be found here –

WMI Issues

WMI filters can cause issues. Logons and reconnects can take a long time to occur.

You should enable Group Policy logging – HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Diagnostics\

"GPSvcDebugLevel"=dword:00030002
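From an elevated prompt, the value can be set like this (reg add creates the Diagnostics key if it does not exist yet):

```shell
:: Elevated cmd - enables verbose Group Policy service logging.
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Diagnostics" ^
    /v GPSvcDebugLevel /t REG_DWORD /d 0x00030002 /f
```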

The log file location is %WINDIR%\debug\usermode\gpsvc.log

If you see FilterCheck: Evaluate returned error. hr=0x80041069 – This means AD is timing out on WMI calls.

Event Viewer will also highlight WMI errors.


Having a fundamental grasp of how policies work is vital. After all the hard work of implementation is done, you are left with the user experience. If this is not good, all that hard work goes down the drain. This is what people hear about: the user gripes, frustrations, session experience etc.

Citrix has supplied you with great tools to troubleshoot HDX environments. Getting your policies right for your devices and connections, whether they are WAN, LAN, remote or virtual/physical, is paramount, and once your users are satisfied, you as a consultant can relax. Remember, policy best practices are always changing, so after the implementation is all said and done, never forget to do the following:

Manage – Maintain – Assess – Re-evaluate- Improve – Manage – Maintain – Assess – Re-evaluate – Improve – Pay Rise!!!



Carpe Diem!


A bit of a hot topic right now is security, and rightly so.

I will keep this simple. You are allowing access into your environment externally using a NetScaler Gateway. Let’s make sure we secure this beast and get an A result in your Qualys SSL Labs tests. Actually no, let’s go for the A+.

I don't need to go into all the security jargon here; I will leave that up to your own research, plus, it is very dry reading. I will supply links at the bottom of this post for those wishing to know more. What I will show you, as the last in my series on the overlooked-but-somewhat-familiar, is a delve into NetScaler Gateway security.

Some environments I have witnessed just have an SSL certificate on the appliance and that is that. If that gets you to sleep at night, that is fine, but you really can do a lot better on security matters, and it will not impress the boss when he looks at the results of those penetration tests.

The Four Steps to Success

My NetScaler was configured with an SSL cert and the bare-bones configuration for it to work so I could log in and launch my applications externally.

I decided to see what grade I would get by using Qualys SSL Labs Checker Tool.

Just your everyday C grade. That for me is not going to set my boss’s expectations on fire but it isn’t bad.


Remove SSL3 from your NetScaler Gateway and add custom ciphers, setting the ECDHE ciphers at the top.

Set ECDHE at top priority within your custom cipher group, as shown below.
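These two changes can also be made from the NetScaler CLI. This is a minimal sketch, assuming a gateway virtual server named gw_vserver and a cipher group named custom-ecdhe (both names are hypothetical, and the exact cipher names available vary by firmware version):

```
> # Disable SSLv3 on the gateway virtual server (hypothetical name gw_vserver)
> set ssl vserver gw_vserver -ssl3 DISABLED
> # Create a custom cipher group with ECDHE suites at the top
> add ssl cipher custom-ecdhe
> bind ssl cipher custom-ecdhe -cipherName TLS1.2-ECDHE-RSA-AES256-GCM-SHA384
> bind ssl cipher custom-ecdhe -cipherName TLS1.2-ECDHE-RSA-AES128-GCM-SHA256
> # Swap the gateway over to the custom group
> unbind ssl vserver gw_vserver -cipherName DEFAULT
> bind ssl vserver gw_vserver -cipherName custom-ecdhe
```

Run show ssl vserver gw_vserver afterwards to confirm the protocol and cipher bindings took effect.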


Surely my grade will improve.

OK, a B grade, and we got rid of that pesky…



The next action was to import the intermediate certificate in my certificate chain onto the NetScaler appliance. I used the intermediate SHA2 certificate obtained from my trusted CA, installed it on the appliance and linked my NetScaler Gateway server certificate to it.

Link server certificate to Intermediate certificate as shown here.


Click OK and now your certificate chain is linked.
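The same linking can be done from the CLI. A minimal sketch, assuming the certificate file has already been uploaded to the appliance and using hypothetical certKey names:

```
> # Install the intermediate certificate (file name is hypothetical)
> add ssl certKey intermediate_cert -cert intermediate.crt
> # Link the gateway server certificate to the intermediate
> link ssl certKey gateway_cert intermediate_cert
> # Confirm the link is in place
> show ssl certKey gateway_cert
```

The output of the final command should show the intermediate listed as the linked certificate.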

Shall we see what the scores are?

So now we all understand we should never break the chain 😉.

We are close but no cigar yet. I want A+, remember.


Open up a PuTTY shell and log on to your NetScaler.

Type in the following command:

set ssl parameter -denySSLReneg FRONTEND_CLIENT

Congratulations on your A!

Remember that teacher in school that was never satisfied with your efforts? I had many such teachers, but now I will follow suit and say we can do better than that! I have one more trick up my sleeve.


Open up PuTTY and SSH to your NetScaler. Once logged in, type the commands below.

add rewrite action insert_STS_header insert_http_header Strict-Transport-Security "\"max-age=157680000\""

add rewrite policy enforce_STS true insert_STS_header

Now bind the rewrite policy to your NetScaler Gateway:

bind vpn vserver Name_of_NetScaler_vServer -policy enforce_STS -priority 100 -gotoPriorityExpression NEXT -type RESPONSE
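To confirm the policy is in place and actually firing, you can check it from the same CLI session (the hit counter should increment with each response the gateway serves):

```
> # Verify the rewrite policy exists and count its hits
> show rewrite policy enforce_STS
> # Verify it is bound to the gateway virtual server
> show vpn vserver Name_of_NetScaler_vServer
```

You can also confirm the Strict-Transport-Security header from the client side in your browser's developer tools, or simply rerun the Qualys SSL Labs test.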

Congratulations, my friend, on achieving a tremendous result in securing your NetScaler appliance!



This article is not meant to go into the technical waffle on security. What I wanted to show you here is how easy it is to secure your NetScaler Gateway appliances in four steps, within a matter of minutes.

Yes, there are other security settings you can add, but to be fair, I am happy with an A+ and your boss will be too when those penetration tests come in.

For those who wish to know more and fill in the blanks check out these resources:

This will be the end of my introductory series on overlooked settings. There is so much more to this topic, but the aim was just to highlight three separate strategies that are simple and effective yet overlooked:

Application Groups

Out of the Box Printing 

NetScaler Gateway Security

I hope you have enjoyed my initial CTA posts and more will follow.

Follow me on Twitter:



Carpe Diem