How To Set Your Environment as Active/Active or Active/Passive per Application Level

Once upon a time….

A customer I was dealing with decided on an active/active solution.

Shortly after implementation, users started complaining of slowness with a specific application. Investigation identified that the DB backend for this application existed in only one site.

To cut a long story short, the backend location was not going to change. To add complication, other applications had backend DBs in the other location.

Now, you may have heard of Application Groups, introduced in XenApp 7.9. I had read about them but never put them into practice; in this case they certainly came to the rescue, especially combined with Tagging, which was introduced in version 7.8.

With these new features, I could configure the XenApp solution to either load balance applications between different locations within the same XenApp site or set a preferred site for session connections.

In addition, Tagging let me control which machines users could connect to, which also aids in troubleshooting support issues.

Let me walk you through the solution.

The below diagram shows the principal design of this solution.

  • You have your individual applications.
  • You create application groups.
  • You assign applications of a similar nature to the Application Group.
  • You connect the Application Group to a Delivery Group or Delivery Groups.
  • You create Tags for individual XenApp servers, or groups of servers, within your Delivery Groups.
Advantages

The following all happens at the Application Group level:

  • Using the above method, you only set permissions at the Application Group level, not the Delivery Group or individual application.
  • You can set the priority of the assigned Delivery Groups. Giving them the same priority will load balance applications between them.
  • Setting different priorities will result in one of the Delivery Groups being favored for application connections.
  • You can control which servers the applications in your Application Group go to by restricting launch to a specific Tag.
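For those who prefer the Broker PowerShell SDK over Studio, the steps above can be sketched roughly as follows. This is a hedged outline rather than a tested script: the tag, group and Delivery Group names are my own examples, and exact parameter names may vary between XenApp versions.

```powershell
# Run on a Delivery Controller with the Citrix snap-ins available.
Add-PSSnapin Citrix.Broker.Admin.V2

# Tag the London servers so application launches can be restricted to them.
New-BrokerTag -Name 'LondonServers'
Get-BrokerMachine -DesktopGroupName 'London DG' |
    ForEach-Object { Add-BrokerTag -Name 'LondonServers' -Machine $_ }

# Create an Application Group restricted to that tag.
New-BrokerApplicationGroup -Name 'Finance Apps' -RestrictToTag 'LondonServers'

# Associate Delivery Groups; a lower priority value is favoured,
# and equal values load balance between the groups.
Add-BrokerApplicationGroup -Name 'Finance Apps' -DesktopGroup 'London DG' -Priority 0
Add-BrokerApplicationGroup -Name 'Finance Apps' -DesktopGroup 'Slough DG' -Priority 1

# Move an existing published application into the group.
Get-BrokerApplication -Name 'Excel 2010' |
    Add-BrokerApplication -ApplicationGroup 'Finance Apps'
```

In Studio the tag restriction corresponds to the "restrict launches to machines with tag" option on the Application Group.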

So now I have a controlled solution for application site launches and server session launches, which will greatly help with troubleshooting.

Plus, if you have XenApp 7.9 or above you can do this too!

Now to show you what this looks like in the real world:

Within the individual application, you assign the Application Group.

Permissions are not set at the individual application level.

The Application Group is where you set the permission restrictions.

Within the Application Group you assign the Delivery Groups. The picture below shows a priority setting favouring the London Delivery Group. If both priorities were 0, the applications in the group would be evenly load balanced.

At the Delivery Group level in the “edit Delivery Group” settings we see that no permissions are set.

Now, the thing to remember is that you can set permissions at three levels: Application, Application Group and Delivery Group. Best practice when using Application Groups is to set the permissions at the Application Group level. However, if for one reason or another the apps in your Application Group require different users, then set the permissions at the application level instead, not at the Application Group or Delivery Group level.

Basically, try to set them only in one location!

Conclusion

I find this a very overlooked yet incredibly useful feature. It is very simple to change your solution from active/active to active/passive at the per-application level, or to target specific servers when launching applications, all from within the core XenApp software without adding more kit.

So, if you have an issue where various apps perform better in certain site locations, this solution will come in handy. If profiles exist only in one location and are causing problems loading in one site, a simple change using Application Groups will be your knight in shining armour!

I hope you enjoyed this tale from the land of Citrix…and just so you know, they all lived happily ever after.

 

A CTA’s Personal Note of Thanks to the Citrix Community

I made a conscious decision to improve my life not so long back. This involved improving my health through fitness, organising my time better and getting involved in general.
I am not an astronaut, actor or rock star (only in my head), but rather than be disgruntled about this I decided to like what I do. To this end my relationship with my work improved and I started developing an interest in Citrix technologies.
I have been really influenced by the shift from knowledge hoarders to knowledge sharers and the contributions these people make to help their fellow peers. The game here is not about knowing it all but about learning, listening and sharing.
With some encouragement from fellow CTP Dave Brett (@dbretty), whom I had the pleasure of working with, the influence of that fountain of knowledge Mr Lyndon Jon Martin (@lyndonjonmartin), and some extra effort of my own, I have achieved the goal of becoming a Citrix Technology Advocate (CTA).
I also have a personal reason I decided to push myself: I did not want to be constrained or to stop myself from achieving my goals. I will not let my fears govern me, nor should anyone. After all, didn't someone once say we have nothing to fear but fear itself?
The Citrix user community has taken off and there are so many of you who have provided answers and helped relieve the stress of our day to day problem solving. I will get the chance to work with peers whose work I deem exceptional and the bonus is it will be shared.
So, on a closing note I just want to say a big thank you to Citrix and the CTA program for recognising the community and lastly, I wish to salute all the sharers, helpers and contributors out there.
Further blog posts coming soon!

Login PI and Xenapp Optimisation – Part 4

Environment
My environment is a XenApp 7.13 test lab using a Windows Server 2016 image delivered via PVS.

I am using UPM best practices and folder redirection. The image is fully patched as of the time of writing (16/07/2017).
Citrix Optimiser
The optimised image used the new Citrix Optimiser, available here:

https://support.citrix.com/article/CTX224676
When you run the Optimiser executable you are presented with a choice of predefined templates.
I chose the 2016 template.
 

As you can see, it comes with predefined best-practice settings, such as services to disable.

 
It also allows you to disable scheduled tasks with ease.
There is also an Analyse option, which tells you whether the best-practice settings have been applied or not.


When you choose Execute mode the settings are applied.


A lot easier than carrying out each optimisation manually!
Initiating LOGIN PI Workload
Once configured, the Login PI process will start to initiate a launched desktop session.

The launcher will verify connectivity.


You will see a desktop session launch.


Within the session a workload will initialise.




In my example, I chose native apps to launch such as Paint, Calculator, Notepad and Wordpad.





The apps will close.
We will then see the session log off.
Results of Non-Optimised and Optimised Images
I ran the workload for a good few hours in each scenario.

The Login PI dashboard provided the following insights:

NON OPTIMISED 



I could see the applications were all within their action response times for the last 15 minutes. Login times were as follows:


OPTIMISED


Non- optimised

OPTIMISED


Director Console Results

NON OPTIMISED





OPTIMISED

 

 

 

 

NON OPTIMISED




OPTIMISED

 

 

 

Conclusion

The results are not what I was expecting. There was not much difference between my optimised 2016 image and my standard one. I would therefore suggest moving to a 2016 O/S if you have not already done so, as it appears to be a very good base O/S for your workloads.

There were 137 login sessions in a two-hour window, as shown by the optimised screenshots, compared to around 70 in an hour on the non-optimised image: almost the same rate.

The optimised image was slightly better but there was not much in it.

These results are based on a single-user workload. I would like to run the tests using multiple workloads, and I will post the results later.

I would also like to try my own set of customisations to see if I can improve on the results found here.

What I hope to highlight is the usefulness of the insights provided by Director and Login PI in assessing login issues.
Moving forward, we are entering a world of automation, and having the capability to simulate XenApp workloads is very much welcome.

There is more work to be done, so expect future posts on optimisation.

Login PI and Xenapp Optimisation – Part 3

Login PI Dashboard

Let me take you through the Dashboard for Login PI.

Once you have configured your information in the initial setup, as described in part 2 of this series (http://wp.me/p8leEE-9M), and have run your workloads, you get a nice graphical console showing useful insights.

You can see your login success rate, performance and Application performance.

We can see the response times of our configured workloads.

Scrolling down the console we can see the following collected statistics.

- Avg latency for the last 24 hours

- Alerts for the last 24 hours

- Avg login for last 8 hours

- Alerts for the last hour

The time period for each can be adjusted to 1 hr, 2 hr, 8 hr, 24 hr or 1 week.

If we highlight some data in the GUI we will be presented with further information.

The picture below shows the details in the Avg latency section.

This highlights that memory usage was at 86%.

The next example breaks down the login times over a 24-hour period. Remember, the time period is adjustable.

In the initial setup you set threshold values. Any time these threshold values are exceeded, you can have email notifications go out to your IT team.



Explanation on threshold values:

For certain applications, you might want to know how long it takes to respond to or perform a workload action. This section lets you customise thresholds for each action, so you receive an alert regarding any overrun. The thresholds for non-customised actions are calculated as Median * (100 + Auto Threshold)%.

To set the default threshold value that applies to all workload actions, simply adjust the Auto Thresholds slider.

To set a specific threshold for each workload action, enter the appropriate value (in seconds) in the relevant Threshold value field and turn on the Actions switch at the end of each row.
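To make the auto-threshold arithmetic concrete, here is a small illustrative sketch of my own (not part of Login PI), assuming the formula means the median response time scaled by (100 + Auto Threshold)%:

```powershell
function Get-AutoThreshold {
    param([double]$MedianSeconds, [double]$AutoThresholdPercent)
    # Median * (100 + Auto Threshold)%
    $MedianSeconds * (100 + $AutoThresholdPercent) / 100
}

# A 2.0 s median with the slider at 50% gives an alert threshold of
# 2.0 * 150 / 100 = 3.0 seconds.
Get-AutoThreshold -MedianSeconds 2.0 -AutoThresholdPercent 50
```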
The email configuration settings are shown below. The settings are self-explanatory.



We can see all alerts exceeding thresholds highlighted in the console.

So, now we are familiar with the Login PI Dashboard, which provides very useful stats and alerts for proactive support.

The next set of posts will provide results for a 2016 image, first non-optimised and then optimised using the Citrix Optimiser tool.

We will take a look at the comparison of the 2016 images via the dashboard insights of Login PI and Citrix Director.

Let's get optimising!

The Case of EXCEL/WORD 2010 docs not opening on Network shares on XA7 Farm

Issue
I had an interesting case where no Office documents would open from network shares, and if you went to Save As within the Office applications (Excel/Word), the apps would crash.

So, the steps to reproduce the issue were:

Launch Citrix published Excel





and click Save As 

EXCEL CRASH



Launch published Excel, click Open and browse to the network share location. Click an Excel document on the network share -

Now this message

First Diagnosis
At first, I believed the issue was to do with trusted network share locations and that the fix was to go into the Trust Center, as shown below. I set some settings manually.










Once I exited the Trust Center with the boxes highlighted above ticked, I repeated the steps and no longer had the issue.
Then I thought this is a GPO setting. So I set the following in a GPO -



However, what I soon found was that no matter what GPO settings I made, and even if I disabled UPM and roaming profiles or unlinked the GPO, the problem continued on a fresh relaunch of Excel or Word as a seamless application.

Then I figured out that all I had to do was simply go into the Trust Center, without changing a thing, and everything would work. At this point I started to scratch my head.
Thinking about this, I decided to use a tool called ProcMon (Process Monitor, good for seeing what is written to files or the registry when an action is performed) to capture any registry values written when I entered the Trust Center.

Interestingly, with the filters below applied, the following keys were captured.

Process name is Excel.exe

Operation is RegSetValue



I saw some IE Cache keys being written when I captured the trace and performed the user action of clicking on the Trust Center within published Excel.

That got me thinking.

All these registry keys were IE Cache.

I extracted these keys from the registry using the Jump To option.



I exported the keys to my desktop on the XenApp server.



Now, when I opened published Excel as my test user and imported the registry keys into the user's HKEY_USERS hive whilst they were logged on to my XenApp server, I witnessed no issues with Excel.

I then traced it down to one exported registry key.



No crash on Save As within Excel and I could open office documents on network shares.
For the next step, I launched published IE as the test user and deleted the IE cache as shown below.



I launched a new published Excel whilst IE was open and there was no issue. This confirmed my belief that it was to do with the IE cache.

Further to this, I also checked the following registry key, where the cache location is stored:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders
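If you want to check this from a published session, a quick way is the sketch below; to the best of my knowledge the value name "Cache" is where Windows keeps the INetCache path, but verify against your own build.

```powershell
# Read where the current user's IE/INet cache folder points.
$usf = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders'
(Get-ItemProperty -Path $usf).Cache
```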

Reason For The Issue
Looking at UPM and Folder redirection I could see no obvious issues.

Looking deeper at what was happening when the user logged in to a session, I could see that the user did not have an INetCache folder in their profile:

%USERPROFILE%\AppData\Local\Microsoft\Windows\INetCache

Solution
The solution is quite simple.

I added the following GPO preference to create the folder when users log in:

%USERPROFILE%\AppData\Local\Microsoft\Windows\INetCache
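If you would rather script it than use a GPO preference, a logon-script equivalent might look like this sketch, assuming the same path as above:

```powershell
# Recreate the IE cache folder at logon if the profile is missing it.
$inetCache = Join-Path $env:LOCALAPPDATA 'Microsoft\Windows\INetCache'
if (-not (Test-Path -Path $inetCache)) {
    New-Item -ItemType Directory -Path $inetCache -Force | Out-Null
}
```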







I set some specific Item Level Targeting to the newly created policy so the new GPO would apply only to my test users for verification.



Once I repeated the tests, I no longer had the issue.

This can be seen by checking the folder location whilst logged in as the user via a published cmd prompt. Now you can see the INetCache folder!



Interestingly, when I went to show this to my colleague, he knew the issue, as they had come across it before, but he was not sure of the reason.

I could have saved hours! Always ask your co-workers to save time ;-)

Although that was time I definitely was not getting back, I am hoping my troubleshooting analysis will help some of you who have similar issues.

Happy troubleshooting!

 

Login PI and Xenapp Optimisation – Part 2

What is Login PI
Login PI is a new tool from those clever people who gave you Login VSI. I decided to use it to test some of the optimisations in my XenApp environment.

Login PI is an advanced VDI performance measuring system designed to help you deliver the best possible digital experience for your virtual desktop users—maximising worker productivity while minimising downtime and costly business interruptions. Login PI provides a new level of actionable, in-depth insights into the quality of your VDI’s digital experience that other solutions cannot match.

https://www.loginvsi.com/products/login-pi

This article will show you the installation steps and how to set the software up to simulate real world actions such as launching desktops and applications.

Following on from this article, we will carry out these real-world tasks using the various optimisation settings highlighted in part 1 of this series.
Installation pre-requisites
I am using a Windows 2016 server.

Install .NET Framework 3.5.

Once you have installed the prerequisites and downloaded the Login PI software, click the .exe and run it.
Run Setup
This will install IIS and a few other binaries. You will be prompted to reboot.

After this, connect to the web console (Google Chrome is recommended):

Connect to http://localhost:8080

You will then add the SQL Server details.



The screenshot below highlights that I am using SQL1 as the server, along with my administrative credentials.

 

Now we are at the stage where we are ready to configure LOGIN PI.
Login PI Configuration
We can see a license error stating that no license is installed. So, first things first upload your license.

I have a trial license to demo this software.


Browse to your License file and upload.

 
Create Logon Accounts
Next, we need to create some logon accounts that LOGIN PI will use to generate session workloads.

Hit the cog wheel icon and put in the details of your Base OU, Username and desired password, Domain and number of users to create.

Then click GENERATE.



This will generate a PowerShell script for you to run on your Domain Controller.


Copy script to DC and run.

The script should generate a new OU (LoginPI), with a sub-OU and some target users, as shown below.



Next, we return to our LOGIN PI configuration console.
Create Profile
We will create a profile for LOGIN PI to use.

Click the + icon.

Enter Name, Type (of connection) and Description.



The various types of connections you can do are highlighted here:

 

Now configure your environment settings.


Choose your workload



You have two options.

Default workload – native Windows apps: uses applications already native to your O/S, like Notepad, Calculator, etc.

Default workload – office apps: uses Word, Outlook, Excel, etc.

The supported Office versions can be seen under the Office version tab within Environment Settings.

Next, scroll down to configure your connections.



Click the + icon, input a username and password (previously generated via the script, or any other account that can launch sessions) and click CREATE. You can add as many accounts as you wish to test session launches.



To edit these settings, you can click the area highlighted in yellow above.

Next highlight the yellow edit area shown below and fill in your connection settings.

The example I have below is using a Storefront connection.

For the Storefront URL use the Store URL.

Put in your domain and the resource name is the name of your Published Desktop Resource.


You should not have to change the advanced settings.

Launcher
Next you configure your launcher.

This can be the same machine as your Login PI server, but the important thing to remember is that it should be in the XenApp site you are testing. If you have multiple sites, you can configure multiple launchers.



Download the launcher setup file that is appropriate for your machine (32 or 64 bit).



Run the launcher.

  

In the next screenshot, it is best to put in the name of the Login PI server you are connecting to if this machine is not the Login PI server itself. Remember, launchers can be put in multiple sites to test connectivity.

   

You will now have a new application icon



The above reminds me: your launcher machine must also have Citrix Receiver installed (try to use the latest version).

Now, when you launch this, it will not work straight away. We still have some actions to carry out, and then we need to approve the launcher machine.
Set Schedule
The next thing we need to set is the Daily Schedule.



We can choose the hours we want the launcher tasks to run, using the accounts we set up previously. To start this, we need to tick the Enable scheduling box and choose an interval of time between session launches.
Thresholds
Finally, we have the threshold settings. These define thresholds for all actions, or for specific actions, so that you receive alerts after a set overrun.

 
Final Actions
One more thing, we need to approve the launcher server.

To do this we hit the icon highlighted below.



Highlight your launcher by selecting the tick box and then hit ACCEPT.



Now, when we click the Login PI launcher, it will initiate a connection to your desired published desktop resource and launch the native apps. This will be logged and recorded as part of your defined schedule for you to analyse in the Login PI dashboard.


You should now see a desktop launch and initiate applications and then close.

If the session connects but no applications launch, the following needs to be installed on your XenApp image.

More Prerequisites
Here are some prerequisites for your target image:

Target Environment Software

Windows-based operating system.

Microsoft .NET Framework 3.5

Connections

The test user(s) need to be able to:

Logon to the target environment.

Run the logon script.

Have connectivity to the Login PI server over a dedicated port. (Default is port 8080)

Access the %temp% directory.

Make sure all these are in place and you should not have any issues.
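The connection requirements above can be sanity-checked from the target image with a few standard commands. This is a sketch: the Login PI server name "LOGINPI01" is a placeholder for your own.

```powershell
# Connectivity to the Login PI server over the default port.
Test-NetConnection -ComputerName 'LOGINPI01' -Port 8080

# .NET Framework 3.5 present (on a server O/S).
Get-WindowsFeature -Name NET-Framework-Core

# The %temp% location is accessible to the test user.
Test-Path -Path $env:TEMP
```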

Further requirements for a login PI environment can be found here:

https://www.loginvsi.com/documentation/index.php?title=Login_PI_Requirements#Target_Environment
Conclusion
Even before completing the testing of the various optimisations with a 2012/2016 image, I view this tool as quite a useful proactive reporting mechanism for the session health of your RDS/XenApp environments.

You can set up profiles direct to XenApp/RDS servers, via StoreFront and via NetScaler Gateway.

One thing that grabbed my attention was whether this tool could be multi-tenanted. I spoke to the chaps at Login VSI, who said that it could be used in such a manner.

If that is the case, I would be able to analyse the different profiles created for different environments, using different launchers in multiple sites, and receive proactive information should there be any issue with session or application launches. Remember, the launchers must be able to see the Login PI server on port 8080!

In part 3 we will delve in to the Dashboard and Insights supplied by Login PI.

 

 

 

 

 

Login PI and Xenapp Optimisation – Part 1

There are a lot of optimisation tips and best practices for Citrix environments to be found on the internet. This article will collate some of these suggestions, and then I would like to run some tests to see the improvements that can be made. I will use a new tool called Login PI, made by those clever people at Login VSI, which can log the speed of your XenApp connections and session initialisation.

First things first – I would like to thank the amazing people out there who have already tested and provided optimisations. To this end I will provide the following links, all of which are worth a good read. I have no doubt more recommendations will be added to this post over time.
http://benpiper.com/2011/12/7-ways-speed-citrix-xenapp-logons/

https://support.citrix.com/article/CTX101705

https://xenappblog.com/2016/optimize-logon-times/

https://lalmohan.co.nz/2015/10/07/citrix-xenapp-long-logon-times-and-potential-fixes/

https://wilkyit.com/2017/04/28/citrix-xenapp-and-windows-server-2016-optimisation-script/

https://virtualfeller.com/2016/04/18/microsoft-windows-10-citrix-xendesktop-and-logon-time/

https://msdn.microsoft.com/en-us/library/windows/hardware/dn567648(v=vs.85).aspx

https://support.microsoft.com/en-us/help/3147099/recommended-hotfixes-and-updates-for-remote-desktop-services-in-windows-server-2012-r2

https://support.citrix.com/article/CTX142357

http://www.carlstalhood.com/citrix-profile-management/#exclusions

https://www.loginvsi.com/blog/732-windows-server-2016-performance-tuning

My generic recommendations below are drawn from all of the above.

Generic Recommendations

Install all the recommended Microsoft security patches:

https://support.microsoft.com/en-us/help/3147099/recommended-hotfixes-and-updates-for-remote-desktop-services-in-windows-server-2012-r2

https://support.citrix.com/article/CTX142357
  • Set logon-time expectations with users. Without session pre-launch or linger, the logon time runs from the point of the application click after logon. Setting expectations is paramount: why would you expect a sub-10-second logon if your normal workstation cannot achieve this?
  • Design your profiles with folder redirection (User Configuration > Policies > Windows Settings > Folder Redirection).
  • Streamline your profile and use UPM exclusions - http://www.carlstalhood.com/citrix-profile-management/#exclusions. Check the recommended exclusions after every UPM release.
  • Do not map every printer! Use the default printer only if possible.
    You can also start an application without waiting for printers to be created: "Set-BrokerApplication APPNAME -WaitForPrinterCreation:0"

    https://support.citrix.com/article/CTX218333
  • Consolidate your GPOs and enable Block Policy Inheritance. The fewer GPO objects, the faster the logon will be.
  • Use Load throttling.
  • Use latest Receiver Client.
  • Use Director to provide valuable insight into which parts of the logon process are causing issues.
  • Check logon scripts. Check for old mapped drives and printers that no longer exist.
  • Check for old, stale user profiles (not deleted after logoff). Configure profiles to be deleted after logoff (this does not enhance logon but is best practice).
  • Make sure users have full permission on the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSLicensing registry key.
  • Disable virtual channels not in use (client drives, audio, printing, com ports, USB redirection) in the Citrix policies.
Disable unused parts of your GPO (Computer or User).

https://technet.microsoft.com/en-us/library/cc733163(v=ws.11).aspx
  • Use asynchronous GPO processing (this should be enabled by default). It lets the system display the Windows desktop before it finishes updating user Group Policy. The setting can be found here:

    Computer Configuration\Administrative Templates\System\Group Policy
Disable or prevent apps from running once the shell initialises. Use msconfig, or right-click the app in Task Manager > Start-up and set it to Disabled.

Use Autoruns. This tool highlights what runs when a user logs in to a Windows server. Run it and disable everything not required for your environment.

Disable (do not delete) everything that is not required under the following keys:

HKLM\SOFTWARE\Microsoft\Active Setup\Installed Components and HKLM\SOFTWARE\Wow6432Node\Microsoft\Active Setup\Installed Components.

  • Remove the startup delay. For VDAs based on Windows 8.x, Server 2012 and Server 2016, Microsoft introduced a delay of 5-10 seconds in operating systems starting from Windows 8. To remove the delay, add the registry value StartupDelayInMSec (REG_DWORD) with data 0 under HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Serialize (you can add the key “Serialize” if not already present). This will greatly reduce “interactive logon” delays.
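Scripted, the same change might look like this sketch; it needs to run in the user context (e.g. via a logon script or GPO preference), and the registry path is the one given above.

```powershell
# Set the per-user Explorer startup delay to 0 (key created if missing).
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Serialize'
if (-not (Test-Path -Path $key)) { New-Item -Path $key -Force | Out-Null }
Set-ItemProperty -Path $key -Name 'StartupDelayInMSec' -Value 0 -Type DWord
```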
Exclude the whole of \AppData\Local\Google\Chrome, then include the following back as a start:

AppData\Local\Google\Chrome\User Data\First Run
AppData\Local\Google\Chrome\User Data\Local State
AppData\Local\Google\Chrome\User Data\Default\Bookmarks
AppData\Local\Google\Chrome\User Data\Default\Favicons
AppData\Local\Google\Chrome\User Data\Default\History
AppData\Local\Google\Chrome\User Data\Default\Preferences
Slow Initial Login When Using Folder Redirection

Modify the following registry entry, which controls the wait time.

HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer

FolderRedirectionWait (REG_DWORD) in milliseconds

The default value is 5000 milliseconds (5 seconds) for each folder.

Valid values range from 0 up to the DWORD maximum.
AntiVirus

I recommend turning OFF real-time scanning for MCS/PVS created images, as they are read-only.

Run real-time scanning on the network shares that host the profiles/home folders, and also on the write cache location in the case of PVS images. Run a full scan on writable images only.
  • Enable the Microsoft policy “Set maximum wait time for the network if a user has a roaming user profile or remote home directory” and set the value to 0. The policy can be found under Computer Configuration – Policies – Administrative Templates – System – User Profiles - https://support.citrix.com/article/CTX133595/
In the System Control Panel, click Environment. In the System Variables section, click the variable Path and add the following to the end of the string in the Value field at the bottom of the panel:

 ;%SystemRoot%\Fonts

Click Set. The changes take effect immediately.
Turn IPv6 off if not in use. Slow boots can occur due to IPv6. See also this TechNet article.

To disable IPv6, I would recommend using the registry key rather than unselecting it in the network adapter settings, since there is a known issue with the latter approach.
Black screen – Might not be relevant after 7.9

https://support.citrix.com/article/CTX205179

Remove the full path from the AppInit_DLLs key.

Key Location: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows

Entry Name: AppInit_DLLs

Entry Type: String

New Entry Value: mfaphook64.dll

Old Entry Value: C:\Program Files\Citrix\System32\mfaphook64.dll

Key Location: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows NT\CurrentVersion\Windows

Entry Name: AppInit_DLLs

Entry Type: String

New Entry Value: mfaphook.dll

Old Entry Value: C:\Program Files (x86)\Citrix\System32\mfaphook.dll
  • Active Setup. Remove the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Active Setup\Installed Components\{2C7339CF-2B09-4501-B3F3-F3508C9228ED}. Make sure that the key is removed for the user profile as well, under HKCU. This key is 2C7339CF-2B09-4501-B3F3-F3508C9228ED - Theme Setup Program (non-critical).
Delete the entry HKCU\Software\Microsoft\Windows\CurrentVersion\UFH\SHC. This can be achieved with a login script:

REG DELETE HKCU\Software\Microsoft\Windows\CurrentVersion\UFH\SHC /va /f
This reduced logon time from 55 seconds to 16-17 seconds. (KB 3161390)

OR

…add the location to the registry exclusion list in Citrix Profile Management.

For memory consumption, you should consider the following:

Verify that DLLs loaded by an app are not relocated.

Relocated DLLs can be identified in Process Explorer by selecting the process DLL view, as shown in the following figure.

Here we can see that y.dll was relocated because x.dll already occupied its default base address and ASLR was not enabled



If DLLs are relocated, it is impossible to share their code across sessions, which significantly increases the footprint of a session. This is one of the most common memory-related performance issues on an RD Session Host server.
Disable NTFS Last Access Timestamps

By default, Windows keeps track of the last time a file was accessed through the “last access” time stamp. If you use this time stamp for backup purposes, or you make frequent use of the Windows search function based on time stamps, then you may actually have a use for it.

In other cases, you can disable the update, which will speed up Windows by avoiding having to write that time stamp every time a file is read.

fsutil behavior set disablelastaccess 1

OR

Navigate to the following registry location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem

Right-click the right-side panel and select New > DWORD Value. Call it NtfsDisableLastAccessUpdate and give it a value of 1.
Here are some other optimisations you can add into GPO preferences, taken from Eric's XenApp blog:

CtxStartMenuTaskbarUser – Windows 7 look on WS08R2 & XenApp 6.5
StatusTray – Provisioning Services
vDesk VDI – Personal vDisk
DisableStatus – Slow logon with black screen (Citrix XenApp 7.6 Slow Logon)

  • Hardcore option – use Citrix universal printer and disallow printer mappings
  • Is the file server optimised? – Check the IOPS on the file server!
Virtual environments

Remove CD-ROM drives from your virtual Citrix servers.

Hide VMware Tools Systray Icon –
HKLM\SOFTWARE\VMware, Inc.\VMware Tools
“ShowTray”=dword:00000000
Note all your optimisations that are not out of the box!
Be careful when fully optimising an image, as it might inadvertently break other things. I would go through my generic recommendations and, if they deliver a suitable logon time, leave it there.

It might be better to manage user expectations with session pre-launch or session lingering than to run a completely optimised image, because if something does break, troubleshooting might be difficult.

As with everything, the proof is in the pudding.
Login PI Tests
In a future post I will reveal the results of testing 3 scenarios with a tool called Login PI.

1) Out-of-the-box Xenapp Server 2016 image.

2) My rule-of-thumb recommendations applied.

3) Server 2016 optimisation using the Citrix Optimizer tool.
Let’s see what we get!

 

 

Troubleshooting VDA Migration from 6.5 to 7.13

Choosing the option to let the 7.13 installation media remove Xenapp 6.5 resulted in a 1603 error.

Error 1603 and the details.

 

The VDA at this point did not install on the 2008 R2 OS.

I then installed the VDA from the Xenapp 7.13 ISO.

A few install errors appeared, but the install carried through once I hit OK.

Interestingly, when I checked the XA7 Studio console, the VDA showed as registered.

Problems continued and I was unable to launch any applications.

Checked STA configuration.

Checked Firewall.

Checked install logs in %AppData% on the VDA. (Local folder)

Because I knew there was a problem installing the VDA on the Server 2008 R2 image, I uninstalled the VDA software and any leftover XA6.5 components.

From this point the VDA installed cleanly along with Receiver.

My apps could now be launched.

I will make another attempt at this to see if I can cleanly upgrade the VDA; otherwise I will resort to a manual uninstall of 6.5.

I will update this post soon.

I know this is not rocket science but hopefully it will help someone.

Xenapp 7.x SQL Express Single Site to SQL Mirror Multi Site Migration – Part 3

Create 3 Xenapp Databases
You have migrated your Xenapp database and now you want to separate it into Site, Monitoring and Logging databases and introduce high availability. Let’s get started!

You are logged on to Xenapp Studio with an account that has sysadmin rights on SQL Server.

Open up Xenapp Studio console.

Highlight Logging and on the right click Change Database.

Enter new database and location.

Click OK so Studio can create the new databases automatically.

Go through the same process for the Monitoring DB.

Voila! You now have 3 separate Databases for your Xenapp environment.
In Xenapp Studio Management under configuration you can see the databases.

SQL Studio Management on your primary SQL will show 3 databases.

Change the Recovery Model of all 3 DBs to Full
Next we get on with introducing HA in your database environment.

In this example we will use SQL mirroring and configure the Delivery controllers to be aware of the primary and Failover SQL partner.
All databases need to be backed up and restored to the mirrored SQL partner with the “No Recovery” option. Before this is done, the Recovery Model should be changed to FULL on all the databases.

Right Click database/Options

Change the Recovery Model to FULL.

Make a full backup of all 3 DBs
Right click DB/Tasks/Back up.

Select the location for your backup.



Once confirmed, click OK to back up the DB.

Make a transaction log backup of all 3 DBs
Right click DB/Tasks/Back up/Options

Backup Type: Transaction Log



Click OK
Back up the transaction logs to the existing media set.


Do this for the Site, Monitoring and Logging databases.

Copy all backups to a local drive on the server acting as the SQL mirror.

Create the Controller logins on the SQL server acting as mirror
New Query


CREATE LOGIN [Your Domain\DDC Machine account$] FROM WINDOWS



Click Execute.
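As a concrete sketch, assuming a domain called TSCLAB with Delivery Controllers DDC01 and DDC02 (the names used later in this series; substitute your own), the query would look like this:

```sql
-- Create a Windows login for the machine account of each Delivery Controller.
-- The trailing $ marks a computer account rather than a user account.
CREATE LOGIN [TSCLAB\DDC01$] FROM WINDOWS;
CREATE LOGIN [TSCLAB\DDC02$] FROM WINDOWS;
```

Run one CREATE LOGIN per Delivery Controller in your Site.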

Restore the databases with the “NO RECOVERY” option
Do this on the SQL server acting as the mirror.

Choose back up DB copied locally on SQL mirror.



You will see the full and transaction logs appear as they were appended to the same backup set.

Before you commit and press OK for the restore make sure you are restoring with the “No Recovery” option.

On the right go to Options and choose RESTORE WITH NO RECOVERY.



Click OK and you will see a message confirming your database restored successfully.
You can now view the database in “Restoring” state in SQL studio.

Repeat the restore procedure for the remaining databases.

Create the mirror from the Principal SQL server
Choose database and right click.

Tasks/Mirror/

Click Configure Security tab

The Mirroring Security Wizard appears. Click Next.

In this example I am not configuring a witness server. You can use SQL Express for this role if a witness is required. Using a witness provides automatic failover should you have issues with your principal SQL server, and is best practice.

Choose your Principal SQL.

Next choose the Mirror Server Instance. You must click Connect and authenticate to the server.

Click Connect

Click Next

Enter your credentials (Usually the administrative account you are logged in with).

Review and click Finish.



Once you click Close this pop up should appear. Click Start Mirroring.



The status will confirm successful synchronization.

The Databases on the Principal should now look like the below:

The databases on the Mirror should look like the below:

Note:

If you come across the following error when trying to mirror your Xenapp site database:


You will need to set Auto Close to OFF on the database.
This is achieved by running a New Query on the primary SQL server and executing the query:

ALTER DATABASE YourXenappDB SET AUTO_CLOSE OFF
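If you want to confirm the setting before and after the change, a quick query shows the AUTO_CLOSE state of every database (a sketch; sys.databases is standard on modern SQL Server versions):

```sql
-- A value of 1 in is_auto_close_on means AUTO_CLOSE is enabled for that database.
SELECT name, is_auto_close_on FROM sys.databases;
```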

Failover and test permissions
Initiate failover from the principal database and check permissions on the Controller machine account.

Right click database and choose TASKS/MIRROR.

Click Failover.

The database on the original SQL server you initiated FAILOVER on should now show the following status:

Do this on all 3 databases.
Now you should check your permissions on the SQL you failed over to. 
Check permissions on the Controller accounts for the databases. They should match the following:
Logging Database Permissions

Monitoring Database Permissions

Site Database Permissions

If all looks good, initiate failover once again from the database that shows the Principal role, so that all the databases are back on the original SQL server that Xenapp was connected to.
TEST, NULL and SET Connections on your Delivery Controllers

The following actions will need to be performed on all your Delivery Controllers so they point to the new SQL setup.

Test Connections
We now need to test connections on both SQL servers to check if there are any issues.
This can be achieved by the following .ps1 script.
Remember to put YOUR SQL primary server and YOUR failover SQL partner.
$cs = "Data Source=SQL01; Failover Partner=SQL02; Initial Catalog=CITXENSITE; Integrated Security=True; Network=dbmssocn"

$controllers = Get-BrokerController | %{$_.DNSName}

foreach ($controller in $controllers)

{

Write-Host "Testing controller $controller ..."

Test-ConfigDBConnection -DBConnection $cs -AdminAddress $Controller

Test-AcctDBConnection -DBConnection $cs -AdminAddress $Controller

Test-HypDBConnection -DBConnection $cs -AdminAddress $Controller

Test-ProvDBConnection -DBConnection $cs -AdminAddress $Controller

Test-BrokerDBConnection -DBConnection $cs -AdminAddress $Controller

Test-EnvTestDBConnection -DBConnection $cs -AdminAddress $Controller

Test-SfDBConnection -DBConnection $cs -AdminAddress $Controller

Test-MonitorDBConnection -DBConnection $cs -AdminAddress $Controller

Test-MonitorDBConnection -DataStore Monitor -DBConnection $cs -AdminAddress $Controller

Test-AdminDBConnection -DBConnection $cs -AdminAddress $Controller

Test-LogDBConnection -DBConnection $cs -AdminAddress $Controller

Test-LogDBConnection -Datastore Logging -DBConnection $cs -AdminAddress $Controller

}
Null connections
Connections to the principal SQL server need to be nulled.

This can be achieved by the following ps1 script.
Set-LogSite -State Disabled

Set-LogDBConnection -DataStore Logging -DBConnection $null

Set-MonitorDBConnection -DataStore Monitor -DBConnection $null

Set-MonitorDBConnection -DBConnection $null

Set-AcctDBConnection -DBConnection $null

Set-ProvDBConnection -DBConnection $null

Set-BrokerDBConnection -DBConnection $null

Set-EnvTestDBConnection -DBConnection $null

Set-SfDBConnection -DBConnection $null

Set-HypDBConnection -DBConnection $null

Set-ConfigDBConnection -DBConnection $null -Force

Set-LogDBConnection -DBConnection $null -Force

Set-AdminDBConnection -DBConnection $null -Force
Screen shot highlighting results of script.
 
SET CONNECTIONS
Set the connections so the Delivery Controllers are aware of both SQL servers.

Connections to the SQL servers (Principal and Mirror) need to be set.

This can be achieved by the following ps1 script.

Remember to put YOUR SQL primary server and YOUR failover SQL partner.
$cs = "Server=SQL01; Initial Catalog=CitXenSite;Integrated Security=True;Failover Partner=SQL02"

$cl = "Server=SQL01;Initial Catalog=CitXenLogDB;Integrated Security=True;Failover Partner=SQL02"

$cm = "Server=SQL01;Initial Catalog=CitXenMonDB;Integrated Security=True;Failover Partner=SQL02"

Set-ConfigDBConnection -DBConnection $cs

Set-AdminDBConnection -DBConnection $cs

Set-LogDBConnection -DBConnection $cs

Set-AcctDBConnection -DBConnection $cs

Set-BrokerDBConnection -DBConnection $cs

Set-EnvTestDBConnection -DBConnection $cs

Set-HypDBConnection -DBConnection $cs

Set-MonitorDBConnection -DBConnection $cs

Set-ProvDBConnection -DBConnection $cs

Set-SfDBConnection -DBConnection $cs

Set-LogDBConnection -DataStore Logging -DBConnection $cl

Set-MonitorDBConnection -DataStore Monitor -DBConnection $cm

Set-LogSite -State Enabled
Screen shot of results of script.

Confirm and test connections to both SQL servers
Confirm that the Delivery Controller has connections to the principal and mirror SQL servers.


Get-BrokerDBConnection

Get-LogDBConnection

Get-MonitorDBConnection
The result within Studio should show connections to the SQL server address and the Mirror server address.

Final test is to initiate failover from principal databases and run these commands again:

Get-BrokerDBConnection

Get-LogDBConnection

Get-MonitorDBConnection

Finally open up Xenapp Studio Console.

Final Word
So in this 3-part series we have shown you how to:

Migrate SQL Express to a production SQL Server.

Create 3 separate databases for Xenapp.

Introduce resiliency through mirroring.

Hope you enjoy!

Remember to do everything in TEST FIRST!

 

Removing Problematic Delivery Controller – Method 1

This article will show you how to remove a Delivery Controller that is no longer required or functioning from your environment, where attempts to re-add a controller with the same machine name fail. You do not have access to SQL, but you can hand eviction scripts to your DBA to clean up your Xenapp database.

This procedure worked in my Xenapp 7.x environment with a working Delivery Controller left in my Site.

OBTAIN CONTROLLER SID
NULL CONNECTIONS
RUN EVICTION SCRIPTS
EXECUTE SCRIPTS ON SQL
CLEAN UP REGISTERED SERVICE INSTANCES
RE-ADD DELIVERY CONTROLLER

Example 1

Obtain Controller SID

Launch Powershell as an administrator on your remaining Delivery Controller.

Run Get-BrokerController



Take note of the SID of the Delivery Controller that is no longer functioning; you will need this SID. The state may still show as Active if connections are still active.
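Rather than copying the SID by hand, you can capture it into a variable for reuse in the eviction commands later (a sketch, assuming the failed controller is DDC02.TSCLAB.COM as in the examples that follow):

```powershell
# Store the SID of the dead controller for use in the Get-*DBSchema eviction commands.
$deadSid = (Get-BrokerController -DNSName 'DDC02.TSCLAB.COM').Sid
Write-Host "SID to evict: $deadSid"
```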
Null Connections

Now run the following to null connections to the controller you wish to remove from your Xenapp database. This is carried out on a working Delivery Controller.

Set-ConfigDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-BrokerDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-ProvDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-AcctDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-HypDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-EnvTestDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-MonitorDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-SfDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-LogDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-AdminDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM

Set-AnalyticsDBConnection -DBConnection $null -AdminAddress DDC02.TSCLAB.COM (XD 7.6 ONLY)
Get-BrokerController will now show the state of the second DDC as Off.

Run Eviction Scripts

Next we need to run the following PowerShell commands using the SID of the controller that you are going to remove. These commands will generate eviction scripts.

Take care to point the site, monitoring and logging parts to your correct database.

Get-BrokerDBSchema -DatabaseName CITXENSITE -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\brokerevict.sql
Get-ConfigDBSchema -DatabaseName CITXENSITE -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\configevict.sql
Get-HypDBSchema -DatabaseName CITXENSITE -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\hostevict.sql
Get-ProvDBSchema -DatabaseName CITXENSITE -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\provevict.sql
Get-AcctDBSchema -DatabaseName CITXENSITE -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\adevict.sql
Get-EnvTestDBSchema -DatabaseName CITXENSITE -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\envtestevict.sql
Get-LogDBSchema -DatabaseName CITXENLOGDB -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\logevict.sql
Get-MonitorDBSchema -DatabaseName CITXENMONDB -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\monitorevict.sql
Get-SfDBSchema -DatabaseName CITXENSITE -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\sfevict.sql
Get-AdminDBSchema -DatabaseName CITXENSITE -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\adminevict.sql
Get-AnalyticsDBSchema -DatabaseName CITXENSITE -ScriptType Evict -Sid S-1-5-21-40836310-432886117-331853842-1171 > c:\analyticsevict.sql (XD 7.6 ONLY)
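Because each command differs only in the cmdlet noun and, for Logging and Monitoring, the database name, the list above can also be generated in a loop (a sketch, assuming the same SID and database names as the commands above):

```powershell
# Map each FMA service prefix to the database that stores its schema.
$services = @{
    'Broker'='CITXENSITE'; 'Config'='CITXENSITE'; 'Hyp'='CITXENSITE';
    'Prov'='CITXENSITE';   'Acct'='CITXENSITE';   'EnvTest'='CITXENSITE';
    'Sf'='CITXENSITE';     'Admin'='CITXENSITE';
    'Log'='CITXENLOGDB';   'Monitor'='CITXENMONDB'
}
$sid = 'S-1-5-21-40836310-432886117-331853842-1171'
foreach ($svc in $services.Keys) {
    # Invoke e.g. Get-BrokerDBSchema and write the eviction script to C:\.
    & "Get-${svc}DBSchema" -DatabaseName $services[$svc] -ScriptType Evict -Sid $sid |
        Out-File "C:\${svc}evict.sql"
}
```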

Execute Scripts on SQL
The commands above generate eviction scripts to run on SQL.
The scripts appear on the local C: drive of your Delivery Controller.

Copy these over to your SQL server acting as Principal.

Execute the eviction scripts on the SQL server in SQLCMD mode.

Open SQL Studio, click File/Open and choose your .sql script.
 

Your script will be imported into SQL.

Run your query in SQLCMD mode.



Then click Execute.
You should get a result similar to the below.



Repeat this procedure for all your eviction scripts that you created.
Run Get-BrokerController. You should only see your remaining Delivery Controllers in your environment.

Clean up Registered Service Instances

Once this is done you need to clean up the registered service instances. You can see the controllers assigned to the services by running the below command.

Get-ConfigRegisteredServiceInstance

You will see that the faulty delivery controller is still registered to services.

Run the following in your powershell window.

Get-ConfigRegisteredServiceInstance | select serviceaccount, serviceinstanceuid | sort-object -property serviceaccount > c:\registeredinstances.txt

This will generate a text file on c:\registeredinstances.txt.

Inside this file you will see something similar to the below:
In this example we can see DDC01 and DDC02 are registered.

Once you have the output, you can use an advanced text editor such as Notepad++ to select the ServiceInstanceUids for the service instances on DDC02 and use the data to build and run a simple unregister script:
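If you would rather not hand-edit the text file, the same clean-up can be scripted directly (a sketch, assuming the controller being removed is DDC02 and that its name appears in the ServiceAccount field, as in the output above):

```powershell
# Unregister every service instance whose service account belongs to DDC02.
Get-ConfigRegisteredServiceInstance |
    Where-Object { $_.ServiceAccount -like '*DDC02*' } |
    ForEach-Object { Unregister-ConfigRegisteredServiceInstance -ServiceInstanceUid $_.ServiceInstanceUid }
```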

Copy your amended text and create a .ps1 file on your local C drive of the Delivery Controller.



Run the file within your administrative powershell cmd window.

Once complete check the registered service instances once again.

You should not see any registered service instances on the delivery controller you have removed.

You should now be able to add your Delivery Controller back in to the environment.

Voila!