Troubleshooting VDA Migration from 6.5 to 7.13

Choosing the option to let the 7.13 installation media remove Xenapp 6.5 resulted in a 1603 error.

Error 1603 and the details.


The VDA at this point did not install on the 2008R2 O/S.

I then installed the VDA from the Xenapp 7.13 ISO.

A few install errors appeared, but the install carried on once I clicked OK.

Interestingly, when I checked whether the VDA was registered in the XA7 Studio console, it indeed was.

Problems continued and I was unable to launch any applications.

Checked STA configuration.

Checked Firewall.

Checked install logs in %AppData% on the VDA. (Local folder)

Because I knew there was a problem when installing the VDA on the Server 2008 R2 image, I uninstalled the VDA software and any leftover XA6.5 components.

From this point the VDA installed cleanly along with Receiver.

My apps could now be launched.

I will make another attempt at this to see if I can cleanly upgrade the VDA; otherwise I will resort to a manual uninstall of 6.5.

I will update this post soon.

I know this is not rocket science but hopefully it will help someone.

Citrix Cloud – Part 5

This article will discuss the smart tool known as Smart Migrate.

We have already covered Smart Check and Smart Scale in previous articles, so now it is time to talk about this tool, which helps with your Xenapp 6.x to 7.x migrations.

Log in to the Citrix Cloud and let’s go through the purple window (influenced there by a historical children’s program).

Once you click on Smart Migrate you can see or add additional projects.

If we click Add Project we can get started creating the migration.

Name your project.

We will choose the Fully Automated option.

Next you get clear instructions on the steps involved in carrying out a successful migration.

Once you have digested the above click Next.

Now we are going to connect our Xenapp 7.x Delivery Controller to the Workspace Cloud/Citrix Cloud.

We are unable to see our Delivery Controller so we need to download an agent.

The screenshot below again provides clear instructions on how to do this.

Once the above is carried out you should be able to highlight your controller.

Once again, if you do not see your XA6.x controller, carry out the agent install on it.

You can now choose the XA6.x controller and put in your Administrative Farm credentials.


Once you have agents installed on your XA6.x Controller we can start the Farm analysis.

At this point Smart Migrate will collect all data about your applications and their properties as well as any policies in your environment.


I came across an issue where my analysis of my XA6.x Farm would fail.

The Smart Migrate tool will provide you with logs so you are able to analyze the reason for failure.

In my case I had an invalid server entry with an application.

You can selectively choose the servers and the applications published to them that you wish to migrate to the Xenapp 7.x environment. You are also able to do this with policies when choosing the analyzed policies tab.

In order to fix this issue, use the DSCHECK utility from an elevated command prompt in your XA6.x Farm.

To check any invalid entries in my Xenapp 6.x database:

dscheck /full apps > c:\apps\apps.txt

I investigated the .txt file and I then ran the following to clean the apps.

DSCHECK /full apps /clean
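Put together, the check-and-clean workflow from the elevated prompt looks like this (the output path is just the one used above):

```bat
:: Dump the application entries from the 6.x data store to a file for review
dscheck /full apps > c:\apps\apps.txt

:: After reviewing the file, remove the invalid entries
dscheck /full apps /clean
```

Review the text file carefully before running /clean, as the clean pass removes the flagged entries from the data store.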

This allowed Smart Tools to complete the analysis.

Proceeding on you should now see your XA6.x Farm data for apps and policies.

Once you have chosen your desired apps/policies you can Proceed to Migration.

You are also able to create a new Delivery Group within the XA7.x environment or choose an existing one to migrate the settings in to.


My migration failed and the following error was seen.

The fix for the above was the following:

The Citrix Common Commands module is from the XenApp 6.5 SDK, which was removed in 7.13, but it can be re-installed by downloading and installing the 6.5 SDK.

In my case I downloaded the following to the XA7.x controller:

Once this was done I could successfully continue and migrate my applications from XA6.x to XA7.x using Smart Tools.

Take my word for it: the apps and policies migrated into the chosen delivery groups, and the icons and users were all correct. No need to faff around with permissions.

Nice and easy, and you will agree it saves a lot of time.

Please read about Smart Scale and Smart Check here:


Citrix Cloud – Part 4

Smart Scale

This post will discuss one of the Smart Tools available with Citrix Cloud called Smart Scale.

Basically, the tool lets you connect to your site via an agent that you download on your controllers, and provides an overview of your delivery groups and machines.

The screen shot below shows my site already added along with others.

Agent Install

Clicking Add Site will prompt you to download an agent, with clear instructions.

Once you have installed the agent (nothing complex – just next, next) you will see your site and can enter it by clicking View Site.

Drilling in to Site Details

Within the next screen, you will see the below tabs.

You will also see a variety of graphs.

Estimated Savings

Capacity Utilization

Machines (On)

Machines in Maintenance Mode

Sessions (Capacity)

Load Index

You can drill further into the graphs and view sessions and get an idea of when they launched and finished.

In the graph below, we can see that one machine is switched on in the delivery group, but we did have two. Around 5:40 the machine went down.

This can be explained in my case by the next rather cool feature of Smart Scale: controlling how many Xenapp machines you want powered on during certain time frames.

Clicking the Configure tab shown here, you can manage this.


Schedules and Capacity Management

The following screen shows that I can control session count on my servers.

We can also control schedule based scaling!

Clicking the Create New tab

….. we are presented with this screen.

We have a variety of options to configure such as the minimum number of machines you wish to keep alive.

We also can create a custom time schedule for our machines in the delivery group.

Once you have created your schedule by clicking Create it will be listed under the Schedules title.

We can create multiple schedules that will control how many machines are up.

Heading back to the initial Smart Scale page for your site, you can see under Machine Activity events such as Smart Scale bringing down the servers due to a preconfigured Schedule I had.

This is verified by looking in my Studio console on my Delivery Controller.


The Events tab is self-explanatory.

Site Details gives me an overview of my Delivery Controllers and Delivery Groups. In my case I know one Delivery Controller is switched off and I have not enabled Smart Scale for the other Delivery Group in the screen shot. You are also able to Sync Site Data.

Enabling Smart Scale

Smart Scale is enabled at the Delivery Group level.


The beauty of Smart Scale is you can control multiple sites from any location with internet access.

I can log on to my Citrix Cloud and check how many servers are up, if any machines are in maintenance mode and what my current site configuration is looking like. I can change my Server load easily by changing schedules and capacity management.

I feel this tool is only going to get better and more advanced over time.

The Smart Tools Suite allows you as the Citrix Partner to keep a close eye on your customer environments and provide that proactive touch. In my role as a Citrix Support Consultant it is a welcome addition to my ever growing arsenal of tools.

Citrix Cloud – Part 3

Smart Checks

Smart Check is basically a mechanism to run periodic health checks in your site.

Citrix Partners can utilize scheduled checks to confirm Site Health.

It is part of the Smart Tools suite of products within the Citrix Cloud Services.

This article will only highlight screen shots rather than descriptive actions as this feature is still in preview mode and is subject to change.

What I want you to take away is the proactive ability this provides for your customers.

The screen shot below already has multiple sites added that are running Health Checks.

To link your site to Smart Tools you are required to download an agent and run it on your Delivery Controller.

When you enter the Smart Check feature you are prompted to download the agent.


Once the agent is saved you should copy it and run it on your Delivery Controller.

Now click Next.

You can see in the screen shot below instructions on installing the agent. At this point you run the agent you just downloaded.

On the Controller run the agent:

Accept the terms and run through the setup.

Click Finish.

The Smart Tools agent setup is basically a next, next install on the Delivery Controller.

Once installed, the agent will be detected and you can click Next within the cloud portal.

Add your administrative site credentials.


Now you should see your site linked to the Smart Check utility.

Click the Get Started tab.

After clicking Get Started, the site details are uploaded.


Details about your site start appearing.

Once everything is uploaded you get some pretty good problem reporting on your site.


Navigating within your Smart Check site you can view health of your Delivery Controllers.

You can schedule a health check daily, weekly etc.

The screen shots below highlight some of the options.

You can set daily, weekly tasks and specific times to kick off the checks.


You can choose a Health report or site details.


We can drill down further into the sections for more information.

The next few screen shots show you information on services, controller availability and delivery groups.

Here I can see problematic services.

Smart Check is a pretty impressive addition to the Smart Tools suite and allows Citrix Partners to provide proactive rather than reactive measures to the Citrix environment.

If you want to know more about the Citrix Cloud I hope the articles so far have been informative and if you need help with transitioning and managing your Citrix environment to the Cloud I do have a Citrix Partner in mind who could help you 😉

Please check out my Citrix Cloud – Part 4 post on Smart Scale.

Citrix Cloud – Part 2

In Part 1 we showed how easy it is to connect an on-premises environment to the Citrix Cloud.

In this part, we will show you how to manage your users and images using the Citrix Cloud Xenapp and Xendesktop Service Management. I think you will find it somewhat familiar.

A few of the screenshots already have infrastructure applied so we are adding additional Catalog and Delivery Groups.
Navigate to the Xenapp and Xendesktop Service within the Citrix Cloud subscription.

Click on Manage and Service Creation.

Look familiar?

Now the first thing we should do is create a zone and add your hosting infrastructure to the Cloud environment.

Create a zone and add your connector within the zone.

Next we need to add the hosting infrastructure.

In my example I have added my local Xenserver Resources.

I am choosing local Xenserver storage.

Next screen you choose the network resources you are connecting to.

Click Finish.

My CitXen environment is now shown in the Studio console.

Machine Catalog
Next we need to create a Machine Catalog.

In my example I am choosing a Xenapp O/S deployment.

I have chosen the deployment method as MCS and my resources will be allocated to the CitXen Zone.

I have chosen my MCS snapshot image with my apps installed and selected the minimum functional level.

I am deploying out one machine from this image.

Next I choose the domain and active directory OU location for my computer accounts.


I then choose the naming scheme for the machines I am deploying: CWCXA##

Input your administrative credentials.

Choose Machine Catalog Name and description

Click Finish

I can see my machine being provisioned via MCS on to my local Xenserver host.

My Machine Catalog is now visible in the console.

Next we need to create a Delivery group to assign users to this Catalog.
Delivery Group

I have chosen the Machine Catalog just created.

NOTE: This next screen shot is only an option in Cloud deployments.

The option I have chosen here lets Citrix Cloud manage my Workspaces.

“Leave user management to Citrix Cloud”

Workspace's are now known as Library’s.

A library is an offering that you can assign to users. (My delivery group will be offered up as a resource for users to use)

Continue through the Delivery Group wizard, finish the Delivery Group, and navigate to the Library node.
Library Offerings

You can now see your Offerings in the library (Basically your Delivery Groups with no assigned users…yet!)

Click on the three-dot button and then Manage Subscribers.

Here you can choose the users who will have access to your delivery group resource.

Choose your domain and users to add.

Domain Users is already added in my example.

Now you can see a number next to the Delivery Group offering indicating AD membership has been added.

Once you have added your resources to the cloud, created your Machine Catalog, and created your Delivery Group offerings, you can now get your apps and desktops.

Click the Xenapp and Xendesktop Service.

Navigate to the Manage tab and choose Service Delivery.

It is here you can see the URL for connectivity.

In my example we have Storefront and Netscaler Gateway services in the cloud.

I will explain in a later blog why I prefer the on premise Storefront and Netscaler.

Briefly, the reason is that features like two-factor authentication, and any ADC feature other than Gateway, are not available in the Citrix Cloud.

You also need to think about having Storefront within the resource location for connectivity to the environment, in case your ISP decides not to play nice one day.

Use the URL to access the environment (internal/External).

Log in and reap the rewards of a wonderful, Simple Cloud solution.


My desktop launches with all my applied GPO policies, UPM profile best practices, mapped drives and custom settings.

GPO User restrictions shown limiting control panel visibility.

My active session can be viewed and managed in the Citrix Cloud Xenapp and Xendesktop Service.

Here you can see the initial logon time and subsequent logon time.
So, the familiar management and ease of installation so far allow you as an administrator to really concentrate on your customer’s needs, apply best practices, and proactively maintain and manage the solution.

In Part 3 we will look at one of the Smart Tools, called Smart Check.

Citrix Cloud – Part 1


So, you have heard by now the term Cloud. If you have not your head must be up in one.

So, Citrix Cloud, what is it all about? There are plenty of articles and videos explaining this.
What I will do is list some, but not all, of the advantages of Citrix Cloud and then get right into a superb offering (Xenapp and Xendesktop Service) by those women and men at Citrix, showing the simplicity of migrating to the cloud.

Reduced costs and footprint

No SQL server or licensing cost

Costs of running servers reduced

Power costs reduced

More Floor space

More Storage space

IT Operations simplified

Less network and storage infrastructure required

Server procurement

Always on latest technology

Automatic upgrades

Select services

Easily grow consumption

Easily decrease consumption

Most up to date technology

Familiar administration

Smart Tools

Ongoing health checks
Now that you have considered the advantages and watched the videos, I will show you the simplicity of transitioning your local site into the cloud.

This is the first of many articles I will write on the Cloud.

Part 1 - Hooking up to the Citrix Cloud

Part 2 – Managing the Xenapp and Xendesktop Service

Part 3 – Smart Check

Part 4 – Smart Scale

Part 1 – Hooking up to the Cloud

Once you have your cloud subscription details and you have logged in, you should create a Resource Location.

Now download software called a connector. It does what it says on the tin: it connects your environment (resource location) to the Citrix Cloud.
Connector Installs
You should connect to the Citrix Cloud from a machine running at least Windows Server 2012 R2 that you have designated for this role. The machine needs outbound TCP 443 access and internet connectivity.
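As a quick sanity check before installing the connector, PowerShell's Test-NetConnection (available on 2012 R2) can confirm outbound 443. The hostname below is illustrative; use the actual Citrix Cloud endpoint for your subscription:

```powershell
# Confirm outbound TCP 443 from the prospective connector machine.
# citrix.cloud.com is illustrative; substitute your Citrix Cloud endpoint.
Test-NetConnection -ComputerName citrix.cloud.com -Port 443 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```

If TcpTestSucceeded comes back False, fix the firewall rule before running the connector install.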

Within your Resource location navigate to download your Connector software.

Save the file and run.

Once the install procedure begins you will be asked to sign in to the Citrix Cloud using your credentials.

The connector will continue with the install.
From here you can choose the subscription that you have set up.

NOTE (Partners can manage multiple subscriptions).

The connector will continue with the install.

All being well your connectivity tests should prove successful.


Do a refresh and you should now see your site in your resource location within Citrix Cloud.

You can now check within Identity and Access Management if your domain is present.

I can now see my domain as a resource within the Citrix Cloud.

If you head back to your resource location (click tab in left corner next to the words “Citrix Cloud” and choose Resource Locations), Citrix kindly reminds you of some best practice.

You should proceed with the second connector install on a second machine designated for the role!

Once you have done this, your environment is connected to the Citrix Cloud.
What does this mean?

Well, now you have SQL, Delivery Controllers, Studio, Director, Licensing and provisioning capability moved off your site to the Citrix Cloud with minimal effort.

Yes, there is an immediate reduced footprint in your resource location.

Your AD environment is now accessible so the fun stuff of managing, maintaining and provisioning workloads can begin!

In Part 2 I will move on to managing the Xenapp and Xendesktop Service.



The case of GSLB Failure due to ports not being open

Please note that this Citrix KB states that the public IP address should only be used when there is no VPN connectivity and GSLB has to communicate over the Internet.

“When adding a GSLB site, if the site communicates over the internet only then use the "Public IP" field. For example, when there is no site to site VPN connectivity between the GSLB sites.”
The customer's GSLB configuration was not working, and this was confirmed by a packet capture taken from the NetScaler.

 = NetScaler IP
 = Subnet IP
 = Subnet IP | GSLB site IP
 = Public IP Remote GSLB Site Configuration

Ports = 22, 3008, 3009

Test Lab = 3010 – communicating between two NetScalers.


In the example above we can first see communication from the GSLB site IP to the remote external address.

Then we see communication attempts from the NSIP to the remote external address.

Then we see communication attempts from the SNIP to the remote external VIP.
Netscaler attempts to speak to the external IP of the remote device in this order:

The following trace further proves GSLB uses ports 3010 and 22. In the example trace below, they are not open ports.

Source Address      : : NetScaler IP

Destination Address : :Remote NetScaler GSLB Public IP

Netscaler > System > Network > RPC


GSLB will contact a remote public IP if configured.

It will also try in the following order:




GSLB must have ports 22, 3008, 3009, and 3010 open.

The customer had a remote IP configured, which potentially did not need to be present as this was internal GSLB.

The customer also did not have the relevant ports opened between sites.
  • ADNS IP: An IP that will listen for ADNS queries. For external, create a public IP for the ADNS IP and open UDP 53 so Internet-based DNS servers can access it. This can be an existing SNIP on the appliance.
  • GSLB Site IP / MEP IP: A GSLB Site IP that will be used for NetScaler-to-NetScaler communication, which is called MEP or Metric Exchange Protocol. The IP for ADNS can also be used for MEP / GSLB Site.
    • RPC Source IP: RPC traffic is sourced from a SNIP, even if this is different than the GSLB Site IP. It’s less confusing if you use a SNIP as the GSLB Site IP.
    • Public IP: For external GSLB, create public IPs that are NAT’d to the GSLB Site IPs. The same public IP used for ADNS can also be used for MEP. MEP should be routed across the Internet so NetScaler can determine if the remote datacenter has Internet connectivity or not.
    • MEP Port: Open port TCP 3009 between the two NetScaler GSLB Site IPs. Make sure only the NetScalers can access this port on the other NetScaler. Do not allow any other device on the Internet to access this port. This port is encrypted.
    • GSLB Sync Ports: To use GSLB Configuration Sync, open ports TCP 22 and TCP 3008 from the NSIP (management IP) to the remote public IP that is NAT’d to the GSLB Site IP. The GSLB Sync command runs a script in BSD shell and thus NSIP is always the Source IP.
  • DNS Queries: The purpose of GSLB is to resolve a DNS name to one of several potential IP addresses. These IP addresses are usually public IPs that are NAT’d to existing Load Balancing, SSL Offload, Content Switching, or NetScaler Gateway VIPs in each datacenter.
  • IP Summary: In summary, for external GSLB, you will need a minimum of two public IPs in each datacenter:
    • One public IP that is NAT’d to the IP that is used for ADNS and MEP (GSLB Site IP). You only need one IP for ADNS / MEP no matter how many GSLB names are configured. MEP (GSLB Site IP) can be a different IP, if desired.
    • One public IP that is NAT’d to a Load Balancing, SSL Offload, Content Switching, or NetScaler Gateway VIP.
    • If you GSLB-enable multiple DNS names, each DNS name usually resolves to different IPs. This usually means that you will need additional public IPs NAT’d to additional VIPs.

Bullet points taken from Carl Webster's superb site.

VMWare View 6 – Removing Stale Machines from View DB

This tool must be executed on a View Connection Server.

Navigate to C:\Program Files\VMware\VMware View\Server\Tools\bin

For example, to locate and list erroneous virtual machines, use ViewDbChk --scanMachines.
The ViewDbChk program will connect to the View database and list machines with errors.

Enter ViewDbChk with the required flags, as listed below, into an administrative command prompt to find and clean your database inconsistencies.

In order to remove the machines, the desktop pool they reside in needs to be disabled.

Confirm you wish the faulty machine to be removed.

Once the machine is removed you will need to re-enable the pool.

The command ViewDbChk --scanMachines will remove one machine at a time. In my example I had 4 machines to remove, so I reran the procedure 4 times.

You can use the --limit option to increase the machine removal limit.
Originally I had 4 machines showing errors. This typically happens when machines have been removed at vCenter level and not via the View console; as a result, they remain in the View database.
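The scan-and-remove workflow described above can be sketched as follows from the Tools\bin directory (the --limit value of 10 is illustrative):

```bat
cd "C:\Program Files\VMware\VMware View\Server\Tools\bin"

rem Scan for machines with errors and remove them (prompts for confirmation)
ViewDbChk --scanMachines

rem Or raise the removal limit to clear several stale machines in one pass
ViewDbChk --scanMachines --limit 10
```

Remember the pool must be disabled before removal and re-enabled afterwards.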

There were a couple of machines with odd name syntax that were removed from vCenter (right-click > Remove from Inventory) during this process to help the clearing process.

I also restarted the following service to refresh the View console.


For memory refresh purposes!

Full details in the following link:



Xenapp 7.x SQL Express Single Site to SQL Mirror Multi Site Migration – Part 3

Create 3 Xenapp Databases
You have migrated your Xenapp database and now you want to separate your database into Site, Monitoring and Logging databases and introduce high availability. Let’s get started!

You are logged on to Xenapp Studio with an account that has sysadmin rights on SQL.

Open up Xenapp Studio console.

Highlight Logging and on the right click Change Database.

Enter new database and location.

Click OK so Studio can create the new databases automatically.

Go through the same process for the Monitoring DB.

Voila! You now have 3 separate Databases for your Xenapp environment.
In Xenapp Studio Management under configuration you can see the databases.

SQL Server Management Studio on your primary SQL server will show the 3 databases.

Change recovery Model of all 3 DB’s to Full
Next we get on with introducing HA in your database environment.

In this example we will use SQL mirroring and configure the Delivery controllers to be aware of the primary and Failover SQL partner.
All databases need to be backed up and restored with “No Recovery” option to the mirrored SQL partner. Before this is done the Recovery model should be changed to FULL on all the databases.

Right Click database/Options

Change the Recovery Model to FULL.

Make a full backup of all 3 DB’s
Right click DB/Tasks/Back up.

Select the location for your backup.

Once confirmed click OK to backup the DB.

Make a transaction log backup of all 3 DB’s
Right click DB/Tasks/Back up/Options

Backup Type: Transaction Log

Click OK
Backup the Transaction logs to the existing media set.

Do this for the Site, Monitoring and Logging databases.

Copy all backups to a local drive on the server acting as the SQL mirror.

Create the Controller logins on the SQL server acting as mirror
New Query

CREATE LOGIN [YourDomain\DDCMachineAccount$] FROM WINDOWS

Click Execute.

Restore the databases with the “NO RECOVERY” option
Do this on the SQL server acting as the mirror.

Choose back up DB copied locally on SQL mirror.

You will see the full and transaction logs appear as they were appended to the same backup set.

Before you commit and press OK for the restore make sure you are restoring with the “No Recovery” option.

On the right go to Options and choose RESTORE WITH NO RECOVERY.

Click OK and you will see a message confirming your database restored successfully.
You can now view the database in “Restoring” state in SQL Server Management Studio.

Repeat the restore procedure for the remaining databases.

Create the mirror from the Principal SQL server
Choose database and right click.


Click Configure Security tab

The Mirroring Security Wizard appears. Click Next.

In this example I am not configuring a witness server. You can use SQL express for this role if a witness is required. Using a witness will provide automatic failover should you have issues with your principal SQL server and is best practice.

Choose your Principal SQL.

Next choose the Mirror Server Instance. You must click Connect and authenticate to the server.

Click Connect

Click Next

Enter your credentials (Usually the administrative account you are logged in with).

Review and click Finish.

Once you click Close this pop up should appear. Click Start Mirroring.

The status will confirm successful synchronization.

The Databases on the Principal should now look like the below:

The databases on the Mirror should look like the below:


If you come across the following error when trying to mirror your Xenapp site database:

You will need to set Auto Close to OFF on the database.
This is achieved by running a New Query on the primary SQL server and executing the query:
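The query itself was shown in a screenshot; a minimal sketch, assuming the example site database name used earlier in this series (CitXenSite):

```sql
-- Turn off AUTO_CLOSE so the site database can participate in mirroring.
-- Substitute your own site database name.
ALTER DATABASE [CitXenSite] SET AUTO_CLOSE OFF;
```

Once this is set, retry the mirroring configuration.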


Failover and test permissions
Initiate failover from the principal database and check permissions on the Controller machine account.

Right click database and choose TASKS/MIRROR.

Click Failover.

The database on the original SQL server you initiated FAILOVER on should now show the following status:

Do this on all 3 databases.
Now you should check your permissions on the SQL you failed over to. 
Check permissions on the Controller accounts for the databases. They should match the following:
Logging Database Permissions

Monitoring Database Permissions

Site Database Permissions

If all looks good initiate failover once again from the database that shows the principal role so all the databases are on the original SQL server that Xenapp was connected to.
TEST, NULL and SET Connections on your Delivery Controllers

The following actions will need to be performed on all your Delivery Controllers so they point to the new SQL setup.

Test Connections
We now need to test connections on both SQL servers to check if there are any issues.
This can be achieved by the following .ps1 script.
Remember to put YOUR SQL primary server and YOUR failover SQL partner.
$cs = "Data Source=SQL01; Failover Partner=SQL02; Initial Catalog=CITXENSITE; Integrated Security=True; Network=dbmssocn"

$controllers = Get-BrokerController | %{$_.DNSName}

foreach ($controller in $controllers)
{
Write-Host "Testing controller $controller ..."

Test-ConfigDBConnection -DBConnection $cs -AdminAddress $Controller

Test-AcctDBConnection -DBConnection $cs -AdminAddress $Controller

Test-HypDBConnection -DBConnection $cs -AdminAddress $Controller

Test-ProvDBConnection -DBConnection $cs -AdminAddress $Controller

Test-BrokerDBConnection -DBConnection $cs -AdminAddress $Controller

Test-EnvTestDBConnection -DBConnection $cs -AdminAddress $Controller

Test-SfDBConnection -DBConnection $cs -AdminAddress $Controller

Test-MonitorDBConnection -DBConnection $cs -AdminAddress $Controller

Test-MonitorDBConnection -DataStore Monitor -DBConnection $cs -AdminAddress $Controller

Test-AdminDBConnection -DBConnection $cs -AdminAddress $Controller

Test-LogDBConnection -DBConnection $cs -AdminAddress $Controller

Test-LogDBConnection -DataStore Logging -DBConnection $cs -AdminAddress $Controller
}

Null connections
Connections to the principal SQL server need to be nulled.

This can be achieved by the following ps1 script.
Set-LogSite -State Disabled

Set-LogDBConnection -DataStore Logging -DBConnection $null

Set-MonitorDBConnection -DataStore Monitor -DBConnection $null

Set-MonitorDBConnection -DBConnection $null

Set-AcctDBConnection -DBConnection $null

Set-ProvDBConnection -DBConnection $null

Set-BrokerDBConnection -DBConnection $null

Set-EnvTestDBConnection -DBConnection $null

Set-SfDBConnection -DBConnection $null

Set-HypDBConnection -DBConnection $null

Set-ConfigDBConnection -DBConnection $null -Force

Set-LogDBConnection -DBConnection $null -Force

Set-AdminDBConnection -DBConnection $null -Force
Screen shot highlighting results of script.
Set the connections so the Delivery Controllers are aware of both SQL servers.

Connections to the SQL servers (Principal and Mirror) need to be set.

This can be achieved by the following ps1 script.

Remember to put YOUR SQL primary server and YOUR failover SQL partner.
$cs = "Server=SQL01; Initial Catalog=CitXenSite;Integrated Security=True;Failover Partner=SQL02"

$cl = "Server=SQL01;Initial Catalog=CitXenLogDB;Integrated Security=True;Failover Partner=SQL02"

$cm = "Server=SQL01;Initial Catalog=CitXenMonDB;Integrated Security=True;Failover Partner=SQL02"

Set-ConfigDBConnection -DBConnection $cs

Set-AdminDBConnection -DBConnection $cs

Set-LogDBConnection -DBConnection $cs

Set-AcctDBConnection -DBConnection $cs

Set-BrokerDBConnection -DBConnection $cs

Set-EnvTestDBConnection -DBConnection $cs

Set-HypDBConnection -DBConnection $cs

Set-MonitorDBConnection -DBConnection $cs

Set-ProvDBConnection -DBConnection $cs

Set-SfDBConnection -DBConnection $cs

Set-LogDBConnection -DataStore Logging -DBConnection $cl

Set-MonitorDBConnection -DataStore Monitor -DBConnection $cm

Set-LogSite -State Enabled
Screen shot of results of script.

Confirm and test connections to both SQL servers
Confirm that the Delivery Controller has connections to the principal and mirror SQL servers.



The result within Studio should show connections to the SQL server address and the Mirror server address.

Final test is to initiate failover from principal databases and run these commands again:




Finally open up Xenapp Studio Console.

Final Word
So, in this 3-part series we have shown you the following:

Migrate SQL Express to production SQL.

Create 3 separate databases for Xenapp.

Introduce resiliency by mirroring.

Hope you enjoy!

Remember to do everything in TEST FIRST!


Removing Problematic Delivery Controller – Method 1

This article will show you how to remove a Delivery Controller that is no longer required or functioning from your environment. In this scenario, attempts to re-add a controller with the same machine name fail. You do not have access to SQL yourself, but you can hand eviction scripts to your DBA to clean up your XenApp database.

This procedure worked in my XenApp 7.x environment, with a working Delivery Controller remaining in my Site.


Example 1

Obtain Controller SID

Launch PowerShell as an administrator on your remaining Delivery Controller.

Run Get-BrokerController.

Take note of the SID of the Delivery Controller that is no longer functioning. You will need this SID. The state may still show as Active if connections are still active.
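If you have several controllers, a quick way to capture the SID is to filter on the DNS name. This is a sketch; DDC02 is the example name of the failed controller used in this article.

```powershell
# Load the Citrix snap-ins if not already loaded.
Add-PSSnapin Citrix*

# Filter the broker controller list by DNS name and keep its SID.
$sid = (Get-BrokerController | Where-Object { $_.DNSName -like "DDC02*" }).Sid
$sid
```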
Null Connections

Now run the following commands to null the connections to the controller you wish to remove from your XenApp database. This is carried out on a working Delivery Controller.

Set-ConfigDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-BrokerDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-ProvDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-AcctDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-HypDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-EnvTestDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-MonitorDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-SfDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-LogDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-AdminDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-AnalyticsDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM   # XD 7.6 only
Get-BrokerController will now show the state of the second DDC as Off.

Run Eviction Scripts

Next, we need to run the following PowerShell commands using the SID of the controller that you are going to remove. These commands will generate eviction scripts.

Take care to point the site, monitoring and logging parts to your correct database.

Get-BrokerDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\brokerevict.sql
Get-ConfigDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\configevict.sql
Get-HypDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\hostevict.sql
Get-ProvDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\provevict.sql
Get-AcctDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\adevict.sql
Get-EnvTestDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\envtestevict.sql
Get-LogDBSchema -DatabaseName CITXENLOGDB -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\logevict.sql
Get-MonitorDBSchema -DatabaseName CITXENMONDB -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\monitorevict.sql
Get-SfDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\sfevict.sql
Get-AdminDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\adminevict.sql
Get-AnalyticsDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\analyticsevict.sql   # XD 7.6 only

Execute Scripts on SQL
The commands above generate eviction scripts to run on SQL.
The scripts appear on the local C: drive of your Delivery Controller.

Copy these over to your SQL server acting as Principal.

Execute the eviction scripts on the SQL server in SQLCMD mode.

Open SQL Server Management Studio, click File > Open > File and choose your .sql script.

Your script will be loaded into a query window.

Enable SQLCMD mode (Query > SQLCMD Mode).

Then click ! Execute.
You should get a result similar to the below.

Repeat this procedure for all your eviction scripts that you created.
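As an alternative to SSMS, the scripts can be run from a command prompt with the sqlcmd utility, which runs in SQLCMD mode by default. A hedged sketch, assuming the SQL01 example server name and the script paths generated above:

```powershell
# -S is YOUR principal SQL server, -E uses Windows integrated
# authentication, -i is the eviction script copied from the
# Delivery Controller.
sqlcmd -S SQL01 -E -i C:\brokerevict.sql
sqlcmd -S SQL01 -E -i C:\configevict.sql
# ...repeat for each remaining eviction script.
```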
Run Get-BrokerController. You should only see your remaining Delivery Controllers in your environment.

Clean up Registered Service Instances

Once this is done you need to clean up the registered service instances. You can see the controllers assigned to the services by running Get-ConfigRegisteredServiceInstance.

You will see that the faulty Delivery Controller is still registered to services.

Run the following in your PowerShell window.

Get-ConfigRegisteredServiceInstance | select serviceaccount, serviceinstanceuid | sort-object -property serviceaccount > c:\registeredinstances.txt

This will generate a text file at C:\registeredinstances.txt.

Inside this file you will see something similar to the below:
In this example we can see DDC01 and DDC02 are registered.

Once you have the output, you can use an advanced text editor such as Notepad++ to select the ServiceInstanceUids for the service instances on DDC02 and use that data to build and run a simple unregister script:
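A minimal sketch of such an unregister script is below. The GUIDs are placeholders only; substitute the ServiceInstanceUid values you collected from registeredinstances.txt for the failed controller.

```powershell
# Load the Citrix snap-ins if not already loaded.
Add-PSSnapin Citrix*

# Placeholder UIDs - replace with the values from registeredinstances.txt.
$uids = @(
    "11111111-2222-3333-4444-555555555555",
    "66666666-7777-8888-9999-000000000000"
)

# Remove each service instance registration tied to the failed controller.
foreach ($uid in $uids) {
    Unregister-ConfigRegisteredServiceInstance -ServiceInstanceUid $uid
}
```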

Copy your amended text and create a .ps1 file on your local C drive of the Delivery Controller.

Run the file within your administrative PowerShell window.

Once complete check the registered service instances once again.

You should not see any registered service instances on the delivery controller you have removed.

You should now be able to add your Delivery Controller back in to the environment.