Citrix Cloud – Part 3

Smart Checks

Smart Check is a mechanism for running periodic health checks against your Site.

Citrix Partners can utilize scheduled checks to confirm Site Health.

It is part of the Smart Tools suite of products within the Citrix Cloud Services.

This article will mainly show screenshots rather than descriptive actions, as this feature is still in preview and is subject to change.

What I want you to take away is the proactive capability this provides for your customers.

The screenshot below already has multiple sites added that are running health checks.


To link your site to Smart Tools, you are required to download an agent and run it on your Delivery Controller.

When you enter the Smart Check feature you are prompted to download the agent.



 

Once the agent is saved, copy it to your Delivery Controller and run it there.



Now click Next.



The screenshot below shows the instructions for installing the agent. At this point, run the agent you just downloaded.



On the Controller run the agent:

Accept the terms and run through the setup.



Click Finish.



The Smart Tools agent setup is essentially a next, next install on the Delivery Controller.

Once installed, the agent is detected and you can click Next within the cloud portal.



Add your administrative site credentials.

  

Now you should see your site linked to the Smart Check utility.

Click the Get Started tab.



After clicking Get Started, the site details are uploaded.

 

Details about your site start appearing.



Once everything is uploaded you get some pretty good problem reporting on your site.

 

Navigating within your Smart Check site, you can view the health of your Delivery Controllers.



You can schedule a health check daily, weekly, and so on.

The screenshots below highlight some of the options.




You can set daily or weekly tasks and specific times to kick off the checks.

  

You can choose a Health report or site details.



 

We can drill down further into the sections for more information.

The next few screenshots show information on services, controller availability and Delivery Groups.



Here I can see problematic services.





Smart Check is an impressive addition to the Smart Tools suite and allows Citrix Partners to take proactive rather than reactive measures in the Citrix environment.

If you want to know more about Citrix Cloud, I hope the articles so far have been informative. If you need help with transitioning your Citrix environment to the Cloud and managing it, I do have a Citrix Partner in mind who could help you 😉

Please check out my Citrix Cloud – Part 4 Post on Smart Scale (http://wp.me/p8leEE-89)

Citrix Cloud – Part 2

Part 2 – Managing the XenApp/XenDesktop Service
Introduction
In part 1 (http://wp.me/p8leEE-6d) we showed how easy it is to connect an on-premises environment to the Citrix Cloud.

In this part, we will show you how to manage your users and images using the Citrix Cloud XenApp and XenDesktop Service. I think you will find it somewhat familiar.

A few of the screenshots already have infrastructure applied, so we are adding additional Catalogs and Delivery Groups.
Navigate to the Xenapp and Xendesktop Service within the Citrix Cloud subscription.

Click on Manage and Service Creation.

Look familiar?

Now the first thing we should do is create a zone and add your hosting infrastructure to the Cloud environment.

Create a zone and add your connector within the zone.

Next we need to add the hosting infrastructure.

In my example I have added my local Xenserver Resources.





I am choosing local Xenserver storage.





On the next screen, choose the network resources you are connecting to.



Click Finish.



My CitXen environment is now shown in the Studio console.

Machine Catalog
Next we need to create a Machine Catalog.



In my example I am choosing a Xenapp O/S deployment.



I have chosen the deployment method as MCS and my resources will be allocated to the CitXen Zone.



I have chosen my MCS snapshot image with my apps installed and selected the  minimum functional level.



I am deploying out one machine from this image.



Next I choose the domain and active directory OU location for my computer accounts.



 

I then choose the naming scheme for the machines I am deploying: CWCXA##



Input your administrative credentials.





Choose Machine Catalog Name and description



Click Finish



I can see my machine being provisioned via MCS on to my local Xenserver host.



My Machine Catalog is now visible in the console.



Next we need to create a Delivery group to assign users to this Catalog.
Delivery Group




I have chosen the Machine Catalog just created.



NOTE: This next screen shot is only an option in Cloud deployments.

The option I have chosen here lets Citrix Cloud manage my Workspaces.

“Leave user management to Citrix Cloud”

Workspaces are now known as Libraries.

A library is an offering that you can assign to users. (My Delivery Group will be offered up as a resource for users to use.)



Continue through the Delivery Group wizard, finish creating the Delivery Group, and navigate to the Library node.
Library Offerings


You can now see your Offerings in the library (Basically your Delivery Groups with no assigned users…yet!)



Click on the 3-dot button and then Manage Subscribers.



Here you can choose the users who will have access to your delivery group resource.

Choose your domain and users to add.






Domain Users has already been added in my example.



Now you can see a number next to the Delivery Group offering indicating AD membership has been added.

Connectivity
Once you have added your resources to the cloud, created your Machine Catalog and created your Delivery Group offerings, you can now get your apps and desktops.

Click the Xenapp and Xendesktop Service –



Navigate to the Manage tab and choose Service Delivery.

It is here you can see the URL for connectivity.

In my example we have Storefront and Netscaler Gateway services in the cloud.

I will explain in a later blog why I prefer on-premises StoreFront and NetScaler.

Briefly, the reason is that features like two-factor authentication, and any ADC feature other than Gateway, are not available in the Citrix Cloud.

You also need to think about having StoreFront within the resource location for connectivity to the environment if your ISP decides not to play nice one day.

Use the URL to access the environment (internal/External).



Log in and reap the rewards of a wonderful, simple cloud solution.


 

My desktop launches with all my applied GPO policies, UPM profile best practices, mapped drives and custom settings.
 

GPO user restrictions are shown limiting Control Panel visibility.



My active session can be viewed and managed in the Citrix Cloud Xenapp and Xendesktop Service.



Here you can see the initial logon time and subsequent logon time.
 
So, the familiar management and ease of installation so far allow you as an administrator to really concentrate on your customer’s needs, apply best practices, and proactively maintain and manage the solution.

In part 3 (http://wp.me/p8leEE-7B) we will look at one of the Smart Tools called Smart Check.

Citrix Cloud – Part 1

Introduction

So, by now you have heard the term Cloud. If you have not, your head must be up in one.

So, Citrix Cloud, what is it all about? There are plenty of articles and videos explaining this.

https://youtu.be/QywoWo9fDgY

http://docs.citrix.com/en-us/citrix-cloud/overview/about.html or check out https://www.citrix.com/products/citrix-cloud/ for more information.
What I will do is list some, though not all, of the advantages of Citrix Cloud, then get right into a superb offering (the XenApp and XenDesktop Service) from those women and men at Citrix and show the simplicity of migrating to the cloud.

  • Reduced costs and footprint
  • No SQL server or licensing cost
  • Costs of running servers reduced
  • Power costs reduced
  • More floor space
  • More storage space
  • IT operations simplified
  • Less network and storage infrastructure required
  • Server procurement
  • Always on latest technology
  • Automatic upgrades
  • Select services
  • Easily grow consumption
  • Easily decrease consumption
  • Most up-to-date technology
  • Familiar administration
  • Smart Tools
  • Ongoing health checks
Now that you have considered the advantages, watched the videos, and read the links above, I will show you the simplicity of transitioning your local site into the cloud.

This is the first of many articles I will write on the Cloud.

Part 1 - Hooking up to the Citrix Cloud

Part 2 – Managing the Xenapp and Xendesktop Service

Part 3 – Smart Check

Part 4 – Smart Scale

Part 1 – Hooking up to the Cloud

Once you have your cloud subscription details -

https://onboarding.cloud.com/?utm_medium=referral&utm_source=citrix.com&utm_campaign=cwc-citrix.com%20-%20wwwb0515cwc_testdrive_promo

and you have logged in, you should create a Resource Location.

https://docs.citrix.com/en-us/citrix-cloud/overview/about/what-are-resource-locations.html

Now download the software called a Connector. It does what it says on the tin: it connects your environment (resource location) to the Citrix Cloud.
Connector Installs
You should install the Cloud Connector on a machine running Windows Server 2012 R2 as a minimum that you have designated for this role. The machine needs outbound TCP 443 access and internet connectivity.
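
Before running the installer, a quick way to sanity-check outbound TCP 443 from the designated machine is a short PowerShell sketch like the one below. The hostname used is only an illustrative endpoint; check the Citrix Cloud documentation for the full list of URLs the connector needs to reach.

# Quick outbound connectivity check from the prospective Cloud Connector machine.
# citrix.cloud.com is an illustrative endpoint only - consult Citrix's docs for the full required list.
$targets = @("citrix.cloud.com")
foreach ($t in $targets) {
    $result = Test-NetConnection -ComputerName $t -Port 443
    "{0} : TCP 443 reachable = {1}" -f $t, $result.TcpTestSucceeded
}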

Within your Resource location navigate to download your Connector software.





Save the file and run.









Once the install procedure begins, you will be asked to sign in to Citrix Cloud using your credentials.



The connector will continue with the install.
From here you can choose the subscription that you have set up.

NOTE (Partners can manage multiple subscriptions).

The connector will then continue with the install.



All being well your connectivity tests should prove successful.

 

 
Do a refresh and you should now see your site in your resource location within Citrix Cloud.

 
You can now check within Identity and Access Management whether your domain is present.



I can now see my domain as a resource within the Citrix Cloud.

 
If you head back to your resource location (click the tab in the left corner next to the words “Citrix Cloud” and choose Resource Locations), Citrix kindly reminds you of some best practice.

 
You should proceed with the second connector install on a second machine designated for the role!

Once you have done this, your environment is connected to the Citrix Cloud.
What does this mean?

Well, you now have SQL, Delivery Controllers, Studio, Director, Licensing and provisioning capability moved off your site to the Citrix Cloud with minimal effort.

Yes, there is an immediately reduced footprint in your resource location.

Your AD environment is now accessible so the fun stuff of managing, maintaining and provisioning workloads can begin!

In part 2 (http://wp.me/p8leEE-6z) I will move on to managing the Xenapp and Xendesktop Service.

 

 

The case of GSLB Failure due to ports not being open

Please note that this Citrix KB, https://support.citrix.com/article/CTX110348, states that the public IP address should only be used when there is no VPN connectivity and GSLB has to communicate over the Internet.


“When adding a GSLB site, if the site communicates over the internet only then use the "Public IP" field. For example, when there is no site to site VPN connectivity between the GSLB sites.”
The customer's GSLB configuration was not working, and this was confirmed by a packet capture taken from the NetScaler.

Where:

         192.168.2.31 = NetScaler IP

         192.168.6.22 = Subnet IP

         192.168.6.18 = Subnet IP | GSLB site IP

         80.194.53.11 = Public IP Remote GSLB Site Configuration

Ports = 22, 3008, 3009

Test Lab = 3010 – communicating between two NetScalers.

 

In the example above we can first see communication from the GSLB site IP to the remote external address.

Then we see communication attempts from the NSIP to the remote external address.

Then we see communication attempts from the SNIP to the remote external VIP.
The NetScaler attempts to speak to the external IP of the remote device in this order:

-GSLB SITE IP
-NSIP
-SNIP
The following trace further proves GSLB uses ports 3010 and 22. In the example trace below, these ports are not open.

Source Address      : 172.31.251.120 : NetScaler IP

Destination Address : 138.106.57.131 :Remote NetScaler GSLB Public IP

Netscaler > System > Network > RPC



Conclusion:

GSLB will contact a remote public IP if configured.

It will also try in the following order:

-GSLB SITE IP

-NSIP

-SNIP

GSLB must have ports 22, 3008, 3009 and 3010 open.

The customer had a remote public IP configured, which potentially did not need to be present as this was internal GSLB.

The customer also did not have the relevant ports opened between sites.
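
If you want to rule out firewall issues before digging into the GSLB configuration itself, a quick port sweep from a Windows host in each site towards the remote GSLB site IP (or its NAT'd public address) is a useful sanity check. This is only a sketch; the address below is a placeholder.

# Check the GSLB sync (22, 3008), MEP (3009) and, per the trace above, 3010 towards the remote site.
# 203.0.113.10 is a placeholder - substitute the remote GSLB site IP or its public NAT address.
$remoteSite = "203.0.113.10"
foreach ($port in 22, 3008, 3009, 3010) {
    $ok = (Test-NetConnection -ComputerName $remoteSite -Port $port).TcpTestSucceeded
    "Port {0} to {1} open: {2}" -f $port, $remoteSite, $ok
}
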
  • ADNS IP: An IP that will listen for ADNS queries. For external, create a public IP for the ADNS IP and open UDP 53 so Internet-based DNS servers can access it. This can be an existing SNIP on the appliance.
  • GSLB Site IP / MEP IP: A GSLB Site IP that will be used for NetScaler-to-NetScaler communication, which is called MEP or Metric Exchange Protocol. The IP for ADNS can also be used for MEP / GSLB Site.
    • RPC Source IP: RPC traffic is sourced from a SNIP, even if this is different than the GSLB Site IP. It’s less confusing if you use a SNIP as the GSLB Site IP.
    • Public IP: For external GSLB, create public IPs that are NAT’d to the GSLB Site IPs. The same public IP used for ADNS can also be used for MEP. MEP should be routed across the Internet so NetScaler can determine if the remote datacenter has Internet connectivity or not.
    • MEP Port: Open port TCP 3009 between the two NetScaler GSLB Site IPs. Make sure only the NetScalers can access this port on the other NetScaler. Do not allow any other device on the Internet to access this port. This port is encrypted.
    • GSLB Sync Ports: To use GSLB Configuration Sync, open ports TCP 22 and TCP 3008 from the NSIP (management IP) to the remote public IP that is NAT’d to the GSLB Site IP. The GSLB Sync command runs a script in BSD shell and thus NSIP is always the Source IP.
  • DNS Queries: The purpose of GSLB is to resolve a DNS name to one of several potential IP addresses. These IP addresses are usually public IPs that are NAT’d to existing Load Balancing, SSL Offload, Content Switching, or NetScaler Gateway VIPs in each datacenter.
  • IP Summary: In summary, for external GSLB, you will need a minimum of two public IPs in each datacenter:
    • One public IP that is NAT’d to the IP that is used for ADNS and MEP (GSLB Site IP). You only need one IP for ADNS / MEP no matter how many GSLB names are configured. MEP (GSLB Site IP) can be a different IP, if desired.
    • One public IP that is NAT’d to a Load Balancing, SSL Offload, Content Switching, or NetScaler Gateway VIP.
    • If you GSLB-enable multiple DNS names, each DNS name usually resolves to different IPs. This usually means that you will need additional public IPs NAT’d to additional VIPs.

Bullet points taken from Carl Stalhood's superb site:

http://www.carlstalhood.com/global-server-load-balancing/

VMware View 6 – Removing Stale Machines from the View DB

This tool must be executed on a View Connection Server.

Navigate to C:\Program Files\VMware\VMware View\Server\Tools\bin

For example, to locate and list erroneous virtual machines, use ViewDbChk --scanMachines.
 
The ViewDbChk program will connect to the View database and list machines with errors.

Enter ViewDbChk with the required flags, as listed below, into an administrative command prompt to find and clean up your database inconsistencies.

In order to remove the machines, the desktop pool they reside in needs to be disabled.

Confirm you wish the faulty machine to be removed.



Once the machine is removed, you will need to re-enable the pool.

The command ViewDbChk --scanMachines will remove one machine at a time. In my example I had 4 machines to remove, so this was sufficient and I reran the procedure 4 times.

You can use the --limit option to increase the machine removal limit, as shown below.
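
For example, to handle up to four machines in one pass rather than re-running the tool each time, something along these lines (run from the Tools\bin directory; the limit of 4 is just my machine count) should work:

ViewDbChk --scanMachines --limit 4
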
Originally I had 4 machines showing errors. This typically happens when machines have been removed at the vCenter level and not via the View console. As a result, they remain in the View database.




There were a couple of machines with odd name syntax that were removed from vCenter (right-click > Remove from Inventory) during this process to help the clean-up.

I also restarted the following service to refresh the View console.

 

For memory refresh purposes!

Full details in the following link:

https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2118050

 

 

Xenapp 7.x SQL Express Single DB to SQL Mirror Multi DB Migration – Part 3

Create 3 Xenapp Databases
You have migrated your Xenapp database and now you want to separate it into Site, Monitoring and Logging databases and introduce high availability. Let’s get started!

You should be logged on to Xenapp Studio with an account that has sysadmin rights on SQL.

Open up Xenapp Studio console.

Highlight Logging and, on the right, click Change Database.

Enter the new database name and location.

Click OK so Studio can create the new databases automatically.

Go through the same process for the Monitoring DB.

Voila! You now have 3 separate Databases for your Xenapp environment.
In Xenapp Studio, under Configuration, you can see the databases.

SQL Server Management Studio on your primary SQL server will show the 3 databases.

Change the Recovery Model of all 3 DBs to Full
Next we get on with introducing HA in your database environment.

In this example we will use SQL mirroring and configure the Delivery Controllers to be aware of the primary and failover SQL partners.
All databases need to be backed up and restored with the “No Recovery” option on the mirrored SQL partner. Before this is done, the recovery model should be changed to FULL on all the databases.

Right Click database/Options

Change the Recovery Model to FULL.
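
If you prefer to script this step rather than click through the GUI, here is a minimal PowerShell sketch. It assumes the SQL Server PowerShell module (SQLPS/SqlServer, installed with the management tools) is available and uses the example server and database names from this series; substitute your own.

# Set the recovery model to FULL on all three databases (run against the principal SQL server).
$databases = "CitXenSite", "CitXenLogDB", "CitXenMonDB"
foreach ($db in $databases) {
    Invoke-Sqlcmd -ServerInstance "SQL01" -Query "ALTER DATABASE [$db] SET RECOVERY FULL;"
}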

Make a full backup of all 3 DBs
Right click DB/Tasks/Back up.

Select the location for your backup.



Once confirmed, click OK to back up the DB.

Make a transaction log backup of all 3 DBs
Right click DB/Tasks/Back up/Options

Backup Type: Transaction Log



Click OK
Back up the transaction logs to the existing media set.


Do this for the Site, Monitoring and Logging databases.
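
If you would rather script these backups, the following sketch (same assumptions and example names as the recovery model sketch above) takes a full backup of each database and then appends a transaction log backup to the same media set.

# Full backup followed by a transaction log backup, appended to the same .bak media set.
# Server name, database names and backup path are examples only.
$databases = "CitXenSite", "CitXenLogDB", "CitXenMonDB"
foreach ($db in $databases) {
    $bak = "C:\Backups\$db.bak"
    Invoke-Sqlcmd -ServerInstance "SQL01" -Query "BACKUP DATABASE [$db] TO DISK = N'$bak' WITH INIT;"
    Invoke-Sqlcmd -ServerInstance "SQL01" -Query "BACKUP LOG [$db] TO DISK = N'$bak' WITH NOINIT;"
}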

Copy all backups to a local drive on the server acting as the SQL mirror.

Create the Controller logins on the SQL server acting as mirror
New Query


Create login [Your Domain\DDC Machine account$] from windows



Click !Execute

Restore the databases with the “NO RECOVERY” option
Do this on the SQL server acting as the mirror.

Choose the backup DB file copied locally to the SQL mirror.



You will see the full and transaction log backups appear, as they were appended to the same backup set.

Before you commit and press OK for the restore make sure you are restoring with the “No Recovery” option.

On the right go to Options and choose RESTORE WITH NO RECOVERY.



Click OK and you will see a message confirming your database restored successfully.
You can now view the database in the “Restoring” state in SQL Management Studio.

Repeat the restore procedure for the remaining databases.

Create the mirror from the Principal SQL server
Choose the database and right-click it.

Tasks/Mirror/

Click Configure Security.

The Mirroring Security Wizard appears. Click Next.

In this example I am not configuring a witness server. You can use SQL Express for this role if a witness is required. Using a witness will provide automatic failover should you have issues with your principal SQL server, and is best practice.

Choose your Principal SQL.

Next choose the Mirror Server Instance. You must click Connect and authenticate to the server.

Click Connect

Click Next

Enter your credentials (Usually the administrative account you are logged in with).

Review and click Finish.



Once you click Close, this pop-up should appear. Click Start Mirroring.



The status will confirm successful synchronization.

The Databases on the Principal should now look like the below:

The databases on the Mirror should look like the below:

Note:

If you come across the following error when trying to mirror your Xenapp site database:


You will need to set Auto Close to OFF on the database.
This is achieved by running a New Query on the primary SQL server and executing the query:

ALTER DATABASE YourXenappDB SET AUTO_CLOSE OFF

Failover and test permissions
Initiate failover from the principal database and check permissions on the Controller machine account.

Right click database and choose TASKS/MIRROR.

Click Failover.

The database on the original SQL server you initiated FAILOVER on should now show the following status:

Do this on all 3 databases.
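
For reference, the same manual failover can be scripted from the current principal. This is a sketch with the same assumptions and example names as the earlier SQL snippets.

# Manual database mirroring failover, executed against the current principal for each database.
$databases = "CitXenSite", "CitXenLogDB", "CitXenMonDB"
foreach ($db in $databases) {
    Invoke-Sqlcmd -ServerInstance "SQL01" -Query "ALTER DATABASE [$db] SET PARTNER FAILOVER;"
}
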
Now you should check your permissions on the SQL server you failed over to.
Check the permissions on the Controller accounts for the databases. They should match the following:
Logging Database Permissions

Monitoring Database Permissions

Site Database Permissions

If all looks good, initiate failover once again from the databases showing the principal role, so that all the databases are back on the original SQL server that Xenapp was connected to.
TEST, NULL and SET Connections on your Delivery Controllers

The following actions will need to be performed on all your Delivery Controllers so they point to the new SQL setup.

Test Connections
We now need to test connections on both SQL servers to check if there are any issues.
This can be achieved by the following .ps1 script.
Remember to put YOUR SQL primary server and YOUR failover SQL partner.
$cs = "Data Source=SQL01; Failover Partner=SQL02; Initial Catalog=CITXENSITE; Integrated Security=True; Network=dbmssocn"

$controllers = Get-BrokerController | %{$_.DNSName}

foreach ($controller in $controllers)

{

Write-Host "Testing controller $controller ..."

Test-ConfigDBConnection -DBConnection $cs -AdminAddress $Controller

Test-AcctDBConnection -DBConnection $cs -AdminAddress $Controller

Test-HypDBConnection -DBConnection $cs -AdminAddress $Controller

Test-ProvDBConnection -DBConnection $cs -AdminAddress $Controller

Test-BrokerDBConnection -DBConnection $cs -AdminAddress $Controller

Test-EnvTestDBConnection -DBConnection $cs -AdminAddress $Controller

Test-SfDBConnection -DBConnection $cs -AdminAddress $Controller

Test-MonitorDBConnection -DBConnection $cs -AdminAddress $Controller

Test-MonitorDBConnection -DataStore Monitor -DBConnection $cs -AdminAddress $Controller

Test-AdminDBConnection -DBConnection $cs -AdminAddress $Controller

Test-LogDBConnection -DBConnection $cs -AdminAddress $Controller

Test-LogDBConnection -Datastore Logging -DBConnection $cs -AdminAddress $Controller

}
Null connections
Connections to the principal SQL server need to be nulled.

This can be achieved by the following ps1 script.
Set-LogSite -State Disabled

Set-LogDBConnection -DataStore Logging -DBConnection $null

Set-MonitorDBConnection -DataStore Monitor -DBConnection $null

Set-MonitorDBConnection -DBConnection $null

Set-AcctDBConnection -DBConnection $null

Set-ProvDBConnection -DBConnection $null

Set-BrokerDBConnection -DBConnection $null

Set-EnvTestDBConnection -DBConnection $null

Set-SfDBConnection -DBConnection $null

Set-HypDBConnection -DBConnection $null

Set-ConfigDBConnection -DBConnection $null -Force

Set-LogDBConnection -DBConnection $null -Force

Set-AdminDBConnection -DBConnection $null -Force
Screenshot highlighting the results of the script.
 
Set Connections
Set the connections so the Delivery Controllers are aware of both SQL servers.

Connections to the SQL servers (Principal and Mirror) need to be set.

This can be achieved by the following ps1 script.

Remember to put YOUR SQL primary server and YOUR failover SQL partner.
$cs = "Server=SQL01; Initial Catalog=CitXenSite;Integrated Security=True;Failover Partner=SQL02"

$cl = "Server=SQL01;Initial Catalog=CitXenLogDB;Integrated Security=True;Failover Partner=SQL02"

$cm = "Server=SQL01;Initial Catalog=CitXenMonDB;Integrated Security=True;Failover Partner=SQL02"

Set-ConfigDBConnection -DBConnection $cs

Set-AdminDBConnection -DBConnection $cs

Set-LogDBConnection -DBConnection $cs

Set-AcctDBConnection -DBConnection $cs

Set-BrokerDBConnection -DBConnection $cs

Set-EnvTestDBConnection -DBConnection $cs

Set-HypDBConnection -DBConnection $cs

Set-MonitorDBConnection -DBConnection $cs

Set-ProvDBConnection -DBConnection $cs

Set-SfDBConnection -DBConnection $cs

Set-LogDBConnection -DataStore Logging -DBConnection $cl

Set-MonitorDBConnection -DataStore Monitor -DBConnection $cm

Set-LogSite -State Enabled
Screenshot of the results of the script.

Confirm and test connections to both SQL servers
Confirm that the Delivery Controller has connections to the principal and mirror SQL servers.


Get-BrokerDBConnection

Get-LogDBConnection

Get-MonitorDBConnection
The result within Studio should show connections to the SQL server address and the Mirror server address.

The final test is to initiate failover from the principal databases and run these commands again:

Get-BrokerDBConnection

Get-LogDBConnection

Get-MonitorDBConnection

Finally, open up the Xenapp Studio console.

Final Word
So, in this 3-part series we have shown you how to:

Migrate SQL Express to production SQL.

Create 3 separate databases for Xenapp.

Introduce resiliency by mirroring.

Hope you enjoy!

Remember to do everything in TEST FIRST!

 

Removing Problematic Delivery Controller – Method 1

This article will show you how to remove a Delivery Controller from your environment that is no longer required or functioning, where attempts to re-add a controller with the same machine name fail. You do not have access to SQL, but you can hand over eviction scripts to your DBA to clean up your Xenapp database.

This procedure worked in my Xenapp 7.x environment with a working Delivery Controller left in my Site.

OBTAIN CONTROLLER SID
NULL CONNECTIONS
RUN EVICTION SCRIPTS
EXECUTE SCRIPTS ON SQL
CLEAN UP REGISTERED SERVICE INSTANCES
RE-ADD DELIVERY CONTROLLER

Example 1

Obtain Controller SID

Launch Powershell as an administrator on your remaining Delivery Controller.

Run Get-BrokerController



Take note of the SID of the Delivery Controller that is no longer functioning. You will need this SID. The state may still show as Active if connections are still active.
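
If you would rather not copy the SID by hand, you can filter the Get-BrokerController output instead. A small sketch, assuming the dead controller is DDC02.TSCLAB.COM (the example name used later in this article) and that the SID is exposed on the controller object as shown in the Get-BrokerController output:

# Pull the SID of the dead controller by its DNS name (example name from this article).
asnp Citrix*
(Get-BrokerController | Where-Object { $_.DNSName -eq "DDC02.TSCLAB.COM" }).Sid
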
Null Connections

Now run the following to null connections to the controller you wish to remove from your Xenapp database. This is carried out on a working Delivery Controller.

Set-ConfigDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-BrokerDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-ProvDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-AcctDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-HypDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-EnvTestDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-MonitorDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-SfDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-LogDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-AdminDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-AnalyticsDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM (XD 7.6 ONLY)
Get-BrokerController will now show the state of the second DDC as Off.

Run Eviction Scripts

Next we need to run the following PowerShell commands, using the SID identified for the controller that you are going to remove. These commands will generate eviction scripts.

Take care to point the site, monitoring and logging parts to your correct database.

Get-BrokerDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\brokerevict.sql
 Get-ConfigDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\configevict.sql
 Get-HypDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\hostevict.sql
 Get-ProvDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\provevict.sql
 Get-AcctDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\adevict.sql
 Get-EnvtestDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\envtestevict.sql
 Get-LogDBSchema -DatabaseName CITXENLOGDB -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\logevict.sql
 Get-MonitorDBSchema -DatabaseName CITXENMONDB -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\monitorevict.sql
 Get-sfDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\Sfevict.sql
 Get-AdminDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\adminevict.sql
 Get-AnalyticsDBSchema -DatabaseName CITXENSITE -ScriptType evict -sid S-1-5-21-40836310-432886117-331853842-1171 > c:\analyticsevict.sql (XD 7.6 ONLY)

Execute Scripts on SQL
The above script will generate eviction scripts to run on SQL.
The scripts appear locally on the C: drive of your Delivery Controller.

Copy these over to your SQL server acting as Principal.

Execute the eviction scripts on the SQL server in SQLCMD mode.

Open SQL Studio, click Open/File and choose your .sql script.
 

Your script will be imported into SQL.

Run your query in SQLCMD mode.



Then click !Execute
You should get a result similar to the below.



Repeat this procedure for all your eviction scripts that you created.
Run Get-BrokerController. You should only see your remaining Delivery Controllers in your environment.

Clean up Registered Service Instances

Once this is done you need to clean up the registered service instances. You can see the controllers assigned to the services by running the below command.

Get-ConfigRegisteredServiceInstance

You will see that the faulty delivery controller is still registered to services.

Run the following in your powershell window.

Get-ConfigRegisteredServiceInstance | select serviceaccount, serviceinstanceuid | sort-object -property serviceaccount > c:\registeredinstances.txt

This will generate a text file at c:\registeredinstances.txt.

Inside this file you will see something similar to the below:
In this example we can see DDC01 and DDC02 are registered.

Once you have the output, you can use an advanced text editor like Notepad++ to select the ServiceInstanceUids for the service instances on DDC02 and use the data to build and run a simple unregister script:
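
As a rough sketch, the resulting script is just one Unregister-ConfigRegisteredServiceInstance call per Uid taken from the text file. The Uids below are placeholders, not real values.

# Unregister every service instance belonging to the dead controller (DDC02).
# Replace the placeholder Uids with the ServiceInstanceUid values copied from registeredinstances.txt.
asnp Citrix*
$uids = "11111111-1111-1111-1111-111111111111",
        "22222222-2222-2222-2222-222222222222"
foreach ($uid in $uids) {
    Unregister-ConfigRegisteredServiceInstance -ServiceInstanceUid $uid
}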

Copy your amended text and create a .ps1 file on the local C: drive of the Delivery Controller.



Run the file within your administrative PowerShell window.

Once complete, check the registered service instances once again.

You should not see any registered service instances on the delivery controller you have removed.

You should now be able to add your Delivery Controller back in to the environment.

Voila!

Xenapp 7.x SQL Express Single DB to SQL Mirror Multi DB Migration – Part 2

STAGE 1
As you can see below, we are using SQL Express. That just will not do for production!

To check which database is being used for your Site, Logging and Monitoring data, use PowerShell.

Run asnp Citrix* in an administrative PowerShell session.

Run the following highlighted commands:


Get-BrokerDBConnection


Get-LogDBConnection


Get-MonitorDBConnection

Backup your SQL DB
Now we need to install SQL Management Studio onto the Delivery Controller to manage and back up the Xenapp database.

Once SQL Management Studio is downloaded, launch the executable and run through the wizard.

 

The screenshot below highlights the incorrect choice. You should choose “Perform a new installation of SQL Server 2012”.

If you continue as above, you will eventually come across this error.

So let's resume, choosing the correct option, and whizz through this bit.

Choose Management Tools – Basic (That is all you need).

Now launch the installed SQL Management Studio.

Connect

Now you can see your Xenapp 7.x database.

Right click your DB/Tasks/Backup.

Choose backup type FULL.

Choose location for .bak backup media set.

Click OK.

Copy the backup file to a local drive on your SQL primary.
Create Delivery Controller machine account Login within SQL Management Studio
Within SQL Management Studio on the Primary SQL server, highlight New Query and type the following to create your Delivery Controller Login:

Create login [DOMAIN\DDCNAME$] from windows

Highlight the text and click !Execute.

A message will appear stating the command completed.

You may need to refresh SQL Studio to see the Delivery Controller machine account.

Restore your Xenapp Single Site DB to the new SQL server
Now we are ready to restore the backed-up database from the local SQL drive.

Right click Database/Restore Database

Choose the Device radio button, click Add, and browse to your .bak (SQL backup) file.

Click OK.

Check permissions on the DB
Next, we will check the permissions on the Delivery Controller machine account.

Right-click the Delivery Controller account within Security/Logins and go to Properties.

Make sure the machine account is mapped to the Xenapp database.

The database role membership for the Xenapp site should match the below screenshots.

There are also updates in article CTX140319 for the role memberships.

ADIdentitySchema_ROLE = 7.0 Onwards

Analytics_ROLE = 7.8 Onwards

AppLibrarySchema_ROLE = 7.8 Onwards

chr_Broker = 7.0 Onwards

chr_Controller = 7.0 Onwards

ConfigLoggingSchema_ROLE = 7.0 Onwards

ConfigLoggingSiteSchema_ROLE = 7.0 Onwards

ConfigurationSchema_ROLE = 7.0 Onwards

DAS_ROLE = 7.0 Onwards

DesktopUpdateManagerSchema_ROLE = 7.0 Onwards

EnvTestServiceSchema_ROLE = 7.0 Onwards

HostingUnitServiceSchema_ROLE = 7.0 Onwards

Monitor_ROLE = 7.0 Onwards

MonitorData_ROLE = 7.0 Onwards

OrchestrationSchema_ROLE = 7.11 Onwards

StorefrontSchema_ROLE = 7.8 Onwards

TrustSchema_ROLE = 7.11 Onwards

Test, Null and Set Connections on Delivery Controller

Now we get to the part where we TEST, NULL and SET connections on the Delivery Controller.
In terms of which connections to TEST, NULL and SET, depending on your Xenapp version, there is the following table of reference.
AcctServiceStatus

AdminServiceStatus

AnalyticsServiceStatus     # 7.6 and newer

AppLibServiceStatus        # 7.8 and newer

BrokerServiceStatus

ConfigServiceStatus

EnvTestServiceStatus

LogServiceStatus

MonitorServiceStatus

OrchServiceStatus           #  7.11 and newer

TrustServiceStatus          #  7.11 and newer

ProvServiceStatus

SfServiceStatus
Amend the scripts accordingly, or include all of the above, but you may get error responses within your .ps1 for services your version does not have.
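
One way to confirm which of these services your version actually exposes, before amending the scripts, is to query the service statuses from an administrative PowerShell session. This is a sketch; cmdlets for services your version does not have simply will not be found.

# Query the status of each service group on this Delivery Controller.
asnp Citrix*
Get-AcctServiceStatus
Get-AdminServiceStatus
Get-BrokerServiceStatus
Get-ConfigServiceStatus
Get-EnvTestServiceStatus
Get-LogServiceStatus
Get-MonitorServiceStatus
Get-ProvServiceStatus
Get-SfServiceStatus
Get-AnalyticsServiceStatus   # 7.6 and newer
Get-AppLibServiceStatus      # 7.8 and newer
Get-OrchServiceStatus        # 7.11 and newer
Get-TrustServiceStatus       # 7.11 and newer
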
The following .ps1 scripts were used for a Xenapp 7.6 environment.



Testconnection.ps1


NullConnection.ps1


SetConnection.ps1

Remember to amend ServerName and SiteDBName to your environment in the scripts!
Test Connections from your Delivery Controller
We need to test connections to the migrated SQL DB using a .ps1 script.
This is done within PowerShell from the Delivery Controller. Create a TESTCONNECTION.ps1 script using the below information.

$ServerName = "YourSQLServer"

$SiteDBName = "YourXenappSite"
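
The script itself is shown in the screenshot earlier; as a minimal sketch, it builds a connection string from the two variables above and runs the same Test-*DBConnection cmdlets used in Part 3 of this series.

# Sketch of TESTCONNECTION.ps1 - substitute your own SQL server and site database name.
asnp Citrix*
$ServerName = "YourSQLServer"
$SiteDBName = "YourXenappSite"
$cs = "Server=$ServerName; Initial Catalog=$SiteDBName; Integrated Security=True"

Test-ConfigDBConnection -DBConnection $cs
Test-AdminDBConnection -DBConnection $cs
Test-AcctDBConnection -DBConnection $cs
Test-BrokerDBConnection -DBConnection $cs
Test-EnvTestDBConnection -DBConnection $cs
Test-HypDBConnection -DBConnection $cs
Test-LogDBConnection -DBConnection $cs
Test-MonitorDBConnection -DBConnection $cs
Test-ProvDBConnection -DBConnection $cs
Test-SfDBConnection -DBConnection $cs
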
Open PowerShell and type asnp Citrix*

Navigate to the test connection script in PowerShell and run the .ps1.

All looks good below!
Null SQL connections from your delivery Controller
Once confirmed, we need to null the connections on the Delivery Controller using a .ps1 script.

Create a NULLCONNECTION.ps1 script using the below.



Navigate to your .ps1 script within PowerShell to null DB connections from your delivery controller.

All looks good once more.

Set the connections on your delivery Controller to the new SQL Server
Now we need to set connections so they point at the new SQL server.

Create a SETCONNECTION.ps1 script using the information below.

We get prompts confirming all the connections have been SET. DBUnconfigured is shown because some of the commands in the .ps1 script first NULL the connections and then SET them.

Still looking good!

Restart the Citrix Broker Service within services.msc on the delivery controller.
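
If you prefer PowerShell to services.msc for this step, a one-line equivalent (a sketch) is:

# Restart the Citrix Broker Service from PowerShell instead of services.msc.
Get-Service -DisplayName "Citrix Broker Service" | Restart-Service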

Open up Studio.

Run the following commands to confirm.

Get-BrokerDBConnection

Get-LogDBConnection

Get-MonitorDBConnection
BOOM!! A successful SQL Express to SQL 2012 R2 migration.

Now let’s crack on with SQL HA and creating separate databases (Site, Logging and Monitoring) in PART 3 of this series.

 

 

Xenapp 7.x SQL Express Single DB to SQL Mirror Multi DB Migration – Part 1

So, for one reason or another you are using SQL Express and wish to introduce some best practice into your production-ready Xenapp environment.
Your goals are:

Xenapp should use full-blown SQL.

Xenapp should have 3 databases, not one, for Site, Monitoring and Logging.

Xenapp should have resiliency at the DB level.
Remember, folks, the 3 most important rules before any actions are carried out:

Backup!

Backup!!

Go out in the evening without any worries.

  • Part 1 will provide the overview.
  • Part 2 will provide detailed steps for STAGE 1.
  • Part 3 will provide detailed steps for STAGE 2.
STAGE 1
Backup your SQL DB.

Create Delivery Controller machine account Login within SQL Management Studio.

Restore your Xenapp Single Site DB to the new SQL server.

Check permissions on the DB

Test, Null and Set Connections

Test permissions from your Delivery Controller.

Null SQL connections from your delivery Controller

Set the connections on your delivery Controller to the new SQL server.
STAGE 2
Create 3 Xenapp Databases

Change the Recovery Model of all 3 DBs

Make a full backup of all 3 DBs

Make a transaction log backup of all 3 DBs

Create the Controller Logins on the SQL server acting as mirror

Failover and test permissions

Test, null and set connections on your delivery controllers

Test connections

Null Connections

Set Connections

Confirm and test connections to both SQL servers

Final word

In the next article we will go into more detail on the initial stage.