Category Archives: Desktop Virtualization

What You Need to Know About GPUs for Windows 10

Dedicated GPUs aren’t just for gamers and designers anymore. The modern workspace is experiencing increasingly vivid and interactive software that is challenging entrenched beliefs about the nature of corporate work. Back in the day, IT supplied users with hardware and software that far exceeded anything employees interacted with in their off-time. The field has changed, and now users are the ones setting the pace for technology needs and adoption. Virtual assistants like Cortana have piqued user interest in AI and intuitive software experiences, which users now expect to follow them across locations and devices. Business leaders are looking to harness this evolving demand to accelerate the implementation of technology with the aim of enhancing employee engagement and performance.

We see growing awareness of this shift in conversations with our clients, who are looking for smarter ways to manage hardware and software transformations. One of the most discussed projects in this space is Windows 10 adoption. Many CIOs have yet to upgrade their users to Windows 10, but are gearing up for a transition in hopes of improving end-user experience and productivity. While we’ve been talking to IT professionals about the differences between Windows 7 and Windows 10 since the Windows 10 launch in 2015, recently we’ve noticed an uptick in questions specific to graphics requirements. “How will my Windows 7 users be affected by Windows 10 graphics demands?” is a fair question, as is “What can I do to prepare my VDI environment for Windows 10?” We knew that the user-focused features available in Windows 10 would demand increased GPU usage, but to answer the question of degree, we turned to data supplied by our customers to achieve an accurate view of graphics needs in Windows 10. Our analysis of customer data focused on GPU and CPU consumption as well as user experience, which we quantify as the percentage of time a user’s experience is not being degraded by performance issues.

Key findings from our assessment include:

  • The amount of time users spend consuming GPU resources increases 32% from Windows 7 to Windows 10
  • Systems without dedicated GPUs show higher average active CPU usage
  • Windows 10 virtual systems with GPUs consume 30% less CPU than those without
  • Presence of a dedicated GPU improves user experience with either OS on both physical and virtual machines

Overall, we found sufficient evidence to recommend implementation of discrete GPUs in both physical and virtual environments, especially for Windows 10 virtual users. Shared resources make the increased graphics requirements in Windows 10 potentially damaging for VDI because high CPU consumption by one user could degrade performance for everyone; however, we found that implementation of virtual GPU could allow IT to not only avoid CPU-load issues, but actually increase density on a server by 30%. Scaled, increased density means fewer servers to purchase and maintain, potentially freeing up resources to direct towards other IT projects.
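
To put that density figure in concrete terms, here is a minimal back-of-the-envelope sketch. All per-server numbers below are hypothetical assumptions for illustration, not figures from our dataset:

```python
# Back-of-the-envelope sketch of what a ~30% density gain means at scale.
# The per-server densities here are hypothetical, not customer data.

def servers_needed(total_users, users_per_server):
    # Ceiling division: a partially filled server still has to be bought and run.
    return -(-total_users // users_per_server)

baseline_density = 100                            # assumed users per server, no vGPU
improved_density = int(baseline_density * 1.30)   # ~30% more users per server with vGPU

users = 2000
print(servers_needed(users, baseline_density))    # 20 servers before
print(servers_needed(users, improved_density))    # 16 servers after
```

Four fewer servers for 2,000 users is exactly the kind of consolidation that frees up budget for other IT projects.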

Whatever stage you’re at in your Windows 10 transformation or other software projects, SysTrack can help you anticipate your users’ graphical needs. As developers continue to release software that enables users to have greater flexibility and creativity in the way they work, IT teams will need to ensure that users have adequate tools at their disposal to power a tech-charged workforce.

Try Out the Citrix Digital Workspace Transformation Assessment

A constantly changing landscape in the modern workplace has led to a constantly changing landscape in the technology that serves that workplace. Over the last few years, we’ve seen a clear shift away from local storage of data and apps. This is due, in part, to workers becoming more mobile and requiring access to their data outside of the office. Keeping it all saved on a laptop can be a security risk, and leads to workers being tethered to their systems, which defeats the purpose of being a truly mobile worker. The real solution to providing secure, anytime, anywhere access to apps and data is for IT to retain control. This means IT needs technology to deliver and manage remote apps, virtual desktops, and storage, all while protecting corporate data. Citrix has recently introduced the Citrix Workspace – a complete digital workspace offering enterprise-grade delivery of apps, desktops, and data to solve this problem. Flagship products XenApp and XenDesktop are included along with XenMobile and ShareFile, creating a full solution for IT and users alike.

Understanding how all the technology included in the Citrix Workspace can benefit your organization can be a little unclear without supporting data. That’s why we’ve teamed up with Citrix to create the Digital Workspace Transformation Assessment – a free, cloud-based assessment that uses SysTrack to evaluate the scope of the environment and provide relevant datasets around user experience, mobility, cloud-storage use, application usage and complexity, and XenApp usage, among other things. Let’s take a closer look at a few of the items included in the free assessment.

SysTrack Visualizer

Desktop and Server Visualizer are included in the assessment. These web apps contain dashboards and tables that provide useful insight around user experience, software usage, and demand and performance in the environment. Understanding these kinds of metrics helps with identifying issues causing poor user experience, quantifying computing demand (particularly useful for shared-resource environments), and identifying which applications can be virtualized. The user experience score, in particular, is a great metric for quickly quantifying the quality of service users are enjoying and what might be causing problems. It's a 0-100 score measuring the percent of a user's active time that was not impacted by any of 13 different KPIs, including disk issues, latency, virtual machine problems, app faults, and more.

So how does all this apply to Citrix Workspace? Workspace offers multiple methods of delivering applications, and making data-driven decisions on which apps should be published, which should be installed on a fully featured virtual desktop, and which should be removed from the portfolio as unused is critically important to maintaining a well-functioning, efficient environment. Aside from software, having data around concurrent usage and computing demand helps to properly size virtual environments. Knowing which users require more resources and which are lighter users is a big help when establishing XenApp server densities, for example.
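
As a rough illustration of how a score like that can be computed, consider the hypothetical sketch below. The interval data and scoring mechanics are invented for the example and are not SysTrack's actual implementation:

```python
# Hypothetical 0-100 user experience score: the share of a user's active
# time during which no KPI (disk issues, latency, app faults, ...) was
# degrading the session. Higher is better.

def ux_score(active_minutes, impacted_intervals):
    """impacted_intervals: non-overlapping (start_min, end_min) spans in
    which at least one KPI was out of bounds during the active time."""
    impacted = sum(end - start for start, end in impacted_intervals)
    return round(100 * (1 - impacted / active_minutes), 1)

# A user active for 480 minutes with two degraded spans (12 + 24 minutes):
print(ux_score(480, [(60, 72), (200, 224)]))  # 92.5
```

The useful property of a time-based score like this is that it is directly interpretable: a 92.5 means the user spent 7.5% of their working time fighting the system.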

Citrix Specific Reports

SysTrack Visualizer is an open platform that lets you browse through a large amount of data. While that's incredibly helpful, it can also be helpful to have more focused datasets directly related to Citrix technologies. The assessment provides four in-depth reports covering areas such as XenDesktop fit, published app health, user concurrency, browser usage, and Microsoft collaboration tools usage, plus an overall assessment report detailing how the organization could benefit from Citrix Workspace from the perspective of security, application management, and mobility.

XenApp Dashboard

A very detailed dashboard around usage and health of existing XenApp published apps. This is useful to understand which published apps are having health issues, what might be causing those health issues, time of peak concurrencies, average latency, and more. Having this kind of data is critical to maintaining a healthy environment. Investing more into Citrix technologies means you’ll need to have insight like this to ensure you’re getting the most out of those investments.

The assessment is a great way to scope out and plan for adopting Citrix Workspace. Of course, after the Workspace technologies have been integrated into your environment, you need to make sure they’re properly maintained and continue to deliver value to the users as well as IT. That continuous, proactive monitoring allows you to identify problems before they become too pervasive, maintain an efficient software portfolio, and keep your users happy and productive. Try out the assessment today and learn how you can benefit from Citrix Workspace.

Director Integration for Ask SysTrack

One of the unintuitive results of the progression of technology is the massive proliferation of different sources for different pieces of information that are critical to managing an environment. So many tools provide detailed data that the sheer number of them makes it difficult to figure out which one to use and where the answer lives within its interface. Information-seeking behavior then takes users across multiple tools with multiple methods of interaction; the net result can be confusion and lost time. This is where cognitive analytics and the ability to ask simple questions can make the difference between solving a problem and bouncing between reporting tools.

The popularity of Ask SysTrack's recent set of advanced integrations has been eye-opening: the need for a single, easy-to-use interface that delivers contextually relevant answers to questions is remarkably pervasive. Because of this, we've worked with our partners to provide a single source for answering IT questions, one that provides what's needed when it's needed.

At Citrix Summit we’re showcasing one of our most recent examples: plugin integration with Citrix Director. This plugin not only displays SysTrack information in the Director interface, but also provides Director and Citrix related answers to questions that are found in the interface through Ask SysTrack.

The key is providing the Ask SysTrack plugin interface directly on the Director interface home page. Now any IT administrator that makes use of Director has a Watson-powered cognitive helper to answer questions like “What is the user experience of my environment?”

Clicking the link takes them directly into the relevant data in SysTrack. Alternatively, they can also just ask questions about Director.

We’ve also added a User Experience Trend for delivery groups that are discovered in association with the instance that allows administrators to view what kind of user experience their end users have been getting alongside the other data presented in Director.

This makes it much easier for administrators to now get the key details they need when they need it without having to spend time working through multiple interfaces.

For more details check out a quick video run through.

Introducing the NVIDIA Graphics Assessment

When NVIDIA first announced their groundbreaking approach to bringing accelerated graphics to virtualization, they began bridging one of the last gaps in making sure that users get the best possible experience with virtual desktops and applications. Building on that success, NVIDIA has introduced newer, Maxwell™-based GRID cards that go even further to create a rich graphical experience for users and decrease complexity for IT administrators looking to optimize the visual experience of their users.

The evolution of the GRID solution set coupled with their new software means that more users than ever can take advantage of graphical acceleration. This couldn't come at a better time given the rise of advanced media usage in the enterprise. As an increasing number of organizations look to make their virtualized environments even better, many want to understand their current graphical needs and plan for the future with NVIDIA.

This is why Lakeside Software has partnered with NVIDIA to develop a totally free graphical assessment hosted on Azure to deliver a detailed series of reports leveraging the SysTrack workplace analytics platform. Available for a 30-day period for up to 500 systems at no cost, this allows any interested organization to deploy SysTrack and review their current environment. At the end of the data collection period a customized report can be generated to provide key insights on how users are using graphically accelerated applications and what GRID profiles may work best for them.

NVIDIA has also introduced vGPU monitoring as a key component of their GRID technology. As an early access partner, Lakeside has been able to leverage this to build out new management and monitoring components that help ensure critical end-user experience metrics deliver the immersive experience users expect. This is showcased in some of the updated NVIDIA Kit contents that IT administrators can now use to monitor their vGPU implementations.

So, get started today. Check out https://nvidia.lakesidesoftware.com to learn more about how you can use vGPU and SysTrack to help give your users the best possible experience.

Citrix Secure Browser Assessment with SysTrack Cloud Edition

The web browser has come into its own as an indispensable part of the enterprise software portfolio. With web-based apps an amazing amount of flexibility can be achieved, but paradoxically one of the most ubiquitous and useful applications can also be the most frustrating. An end user who has to interact with potentially dozens of web portals with numerous plugin dependencies can often find themselves experiencing browser hangs or crashes with great frequency, or having to switch between different browsers just to complete different business functions. More critically, IT has to support any number of different browser types with any combination of required plugins, user-added extra components, and possibly dozens (or hundreds) of different versions. The reality of the situation is that the component that's supposed to be “platform independent” or give a uniform user experience can create headaches for everyone involved.

There’s another side to this problem, too: how do you make sure that users get the most straightforward pathway to their applications? With internal web apps especially there’s frequently a need for a user to either be connected directly to the network or use a VPN to broker a connection. This introduces yet another component that can fail or make basic user interaction a hassle. Worse, in some scenarios, especially when users are highly mobile, this also potentially exposes data to loss.

That means there are really two problems: making sure that users get access to a browser that always works, and making sure they can connect securely and minimize breach potential. This is where Citrix Secure Browser introduces a really interesting resolution. By publishing a known-good browser that can be embedded seamlessly into any modern browser, existing XenApp customers can provide their end users with a great experience and minimize their support needs.

The SysTrack Citrix Secure Browser Analysis assesses the current state of browser usage in the environment to try and articulate the net advantage of moving to Secure Browser. How many different web applications are currently used? How many internal web applications are interacted with? How frequently do browsers hang or crash? What plugins are the most common in the enterprise?

The lead-in summary from the Citrix Secure Browser Analysis establishes the massive number of different browsers in active usage in most environments, and the numerous plugins that are employed. From there we break out more interesting pieces of information, like the monumental number of application faults associated with browsing apps that users interact with daily. Throughout the report we expand on all of the details that are critical in determining what kind of impact implementation of Secure Browser may have in an enterprise. Brett Waldman covers some of the key details on Secure Browser in a blog entry, but essentially imagine taking all of the aforementioned concerns and eliminating them by publishing a browser that always works with business-critical web applications and is, by nature, secure.

The SysTrack Cloud Edition for Citrix is a free service that allows you to assess your environment for Citrix solution fit, including another report focused on Skype for Business and how Citrix can optimize delivery of Microsoft collaboration tools. Check it out here.

The Citrix Lifecycle Management official launch enables a hands-free installation of Lakeside SysTrack to any Citrix environment

Back in August, Citrix announced the long-awaited Citrix Workspace Cloud technology along with the associated Lifecycle Management tools. The blogs by my friends Kailas Jawadekar and Joe Vaccaro explain these stacks in detail, but here's the gist the way I see it:

Workspace Cloud adds the ability to manage Citrix environments (XenApp, XenDesktop, XenMobile, etc.) from a cloud-hosted control plane. Gunnar Berger has a few videos out that explain the concept. The key to this technology is a new cloud connector that allows your environment to communicate with the Citrix-hosted consoles. Why would you care, you might ask? Because at some point, you might wish to have multiple Citrix deployments in disjointed networks, or have a portion of your infrastructure or session hosts in a private, public, or hybrid cloud. Rather than introducing more complexity in configuration and management, Workspace Cloud gives you the ability to manage all these otherwise independent environments centrally.

So far so good.

Many of you who have been managing dynamic datacenters for a while are pretty familiar with the concepts of virtual machine templates, Provisioning Services golden images, and similar tools that help you “build once, and deploy many times”. These approaches, however, are not entirely without challenges, as the templates are often closely bound to the specific hypervisors you wish to use and are not easily reusable across all instances of hybrid clouds. That's where Lifecycle Management comes in. Think of it as an automation/scripting engine that allows you to define all of the software installation and configuration steps that have to be performed to turn a plain OS image into the workload you desire. This is called a Blueprint in LCM parlance, and I have written about the concept in a recent blog.

Well, today is the day that we're ready to announce that we have developed and published a Citrix Lifecycle Management Blueprint for the SysTrack master server, which is the central component of any SysTrack deployment. It is available by logging into manage.citrix.com and selecting the Blueprint Catalog. You will see the Lakeside SysTrack Blueprint in the partner section, and you can simply add it to your library by clicking the little ‘+’ symbol on the bottom right.

The SysTrack Blueprint takes your standard Windows Server 2012 base image and then automatically downloads and installs the SysTrack master server along with all of its technical prerequisites. After the Blueprint has been deployed successfully, all that is left for you to do is request a SysTrack license from us and deploy the agents to your workloads. These can of course be other Citrix infrastructure or session hosts in your hybrid cloud, but also general servers and desktops, including the physical machines that you already have.

The SysTrack Blueprint for LCM simply allows you to add the award-winning success platform for end-user computing to your environment without having to manually install and configure an additional server.

 

Announcing SysTrack Cloud Edition for VMware vCloud Air

One of the greatest advantages of the growing influx of cloud based solutions is the opportunity to move IT to be service oriented. The ability to take advantage of consumption based models for everything from infrastructure all the way through software subscriptions frees up substantial time that would otherwise be spent with complex and expensive provisioning and management tasks. More and more IT organizations are taking advantage of various cloud services providers to make their lives easier, and this has created a wide set of different, potentially disconnected data sources that can be difficult to unify and report across. Lakeside sees this as a fantastic opportunity not only to tie together all of the various data feeds and tracking areas necessary to understand all of the service provider performance and usage, but to also expand into offering SysTrack as a service as well. This provides a simple way to generate consistent reporting using a system of record that’s been proven across millions of endpoints with the ease of a simple subscription. This is why the SysTrack Desktop Assessment (SDA) has been expanded to provide continuing collection after the assessment, allowing the use of SysTrack through the lifespan of a transformation and beyond.

The introduction of the SDA service in conjunction with VMware has been a resounding success, and so far through that offering we've helped with analysis and reporting across many thousands of endpoints. With all of this activity, a request that we've received time and time again is to provide access to this data collection throughout the lifespan of a project, and I'm happy to announce that this is now available through a subscription to SysTrack Cloud Edition for vCloud Air. Basically, if you want to keep your collection going to show how your end-user experience evolves with the solutions you implement, and if you want meaningful, quantitative methods to resolve any end-user problems that may arise, it's as simple as continuing your SDA project indefinitely. There's no need to provision heavy on-premises infrastructure, you'll continue to use the interface you're familiar with, and the solution will ramp up with you as your needs evolve. We're also introducing some newer features that expand the value of SysTrack in the cloud and bridge the gap between a full on-premises deployment and cloud-based SysTrack.

You’ll now have access to tools like Resolve through the use of our simple SysTrack Forwarder (an easily deployed proxy service), as well as our Image Planner. That means that the full life cycle of a project can now be assessed, planned, and tracked continuously using the same toolset for fair and accurate comparison and analysis. With that it’s simple to prove that you’ve succeeded in optimizing your environment with real insight into the end user experience improvements you get with your transformation.

To see how easy it is to get started (if you haven’t already), why not register today? Just head to the registration site and start a new project. It’s simple and straightforward to begin the data collection, and, if you like the depth and quality of environmental visibility you gain, now it’s simple to keep that collection going as long as you need it.

IT Assessments and Flying Airplanes

What do these two topics have in common? More than you might think…

I spent the majority of my career in professional services and product management in the software industry. Every product lifecycle follows the established pattern of “Assess, Design, Deploy, and Manage” or something along those lines. The focus is often brought on the assessment phase where we gather technical and business requirements, see what our users have and use today, add future requirements and then use that collected wisdom to design and build the “new” solution – whatever the new thing is. In my past at Citrix Systems and now at Lakeside Software, our customers are mostly concerned about assessing the existing physical desktop estate and translating the data into future virtual desktop and application delivery architectures. As a matter of fact, since joining Lakeside Software in the Summer of 2013, I have heard numerous times from customers, partners, and even competitors, that we’re known as the Assessment Company in the desktop virtualization and application delivery space.

Let me take a step back for a second and talk to you about my other career – that of a passionate pilot and flight instructor.

While I was in grad school back in 1999, my cousin gave me a ride in a 2-seat Cessna 152 over the Dallas/Fort Worth area at night. I was instantly hooked and started taking flying lessons about a week later. After a couple of months of hard work, plenty of tutoring sessions for high school students, building websites for the local flight school, and other activities that would earn me some time in a 30-year-old prop plane, I finally was the proud holder of a private pilot certificate. The flying bug had bitten me big time. I continued to earn my instrument rating, commercial pilot's license, multi-engine rating, and glider rating in the ensuing months and years. One night, I was invited to a barbeque with a couple of flight instructors and professional pilots, and there was talk about how difficult it was to obtain a flight instructor license and how high the failure rate for the practical test was, especially compared to the practical tests for other certificates and ratings. “I can do it!” I blurted out (this was a few beers into the night) and found myself having to defend my personal honor. I studied hard and became a flight instructor in early 2005. (Yes, I did pass that beast of a practical test on the first try, but it wasn't as easy as I thought it would be.)

While I never attempted to earn a living in the aviation community, I have been teaching aviation safety and flying quite a bit. First as a weekend instructor at the local flight school and then conducting mostly checkouts and flight reviews as the chief pilot of my local aero club here in Florida. I also did a stint as a mission pilot with the civil air patrol.

Now – what does that have to do with assessments?

Let’s talk about how a typical flight is conducted. It all starts with the pre-flight planning probably a couple of days or hours before we go somewhere. This is all about where to go, what airports and facilities are available, what the weather might be like, if an airplane is available in the club or at the local flight school / rental place, etc. This is the general and initial assessment of the situation.

As we get to the day of the flight, we assess a bunch of additional things. The man (or woman) – in terms of physical fitness to fly. I assess how I feel and if I have taken any medication or gotten enough sleep. Next, I’d assess the overall environment. Weather, Air Traffic Control delays, best route of flight, best altitude, runway closures, etc. Then the machine (the aircraft): Does it have all the required documentation on board? Have the required inspections been performed? Do we have enough fuel and oil? Is the total weight and balance within the envelope? And is the airplane airworthy and fit to fly? All these items are assessed with the help of a checklist and by physically walking around the aircraft and asking ourselves every step of the way “Is this still good to go?” after checking the requisite items.

Then I’ll do an assessment of my passengers – are they good to go, comfortable, prone to motion sickness?

Finally, we get into the airplane and I again grab the checklist and follow the procedures to start the engine, taxi to the runway, talk to air traffic control and watch for other aircraft, people, and equipment on the airport. You can call that an on-going assessment of the situation.

After taxiing to the runway, there is a pre-flight run-up check where we verify that the engine is producing power and all instruments show airplane parameters within the prescribed limits, and then we can finally be ready for takeoff.

My radio call is promptly answered by the tower “CHEROKEE FOUR SEVEN LIMA HOTEL, CLEARED FOR TAKEOFF RUNWAY ONE ZERO, LEFT TURN OUT APPROVED, CLIMB AND MAINTAIN TWO THOUSAND FEET”.

Ready to go. As I advance the throttle, I quickly check my engine instruments and we roll down the runway. At about 60 knots indicated airspeed, I gently pull back on the yoke which causes the nose wheel to just lift off the tarmac and we’re in the air a moment later.

I again constantly check for birds, other traffic, radio calls, changing weather, fuel status, passenger well-being, and so forth. I absolutely love the feeling of being in the air and controlling the aircraft, but I have to be constantly assessing the situation (again, man (or woman), machine, environment, external factors, etc.)

After landing, I taxi the plane to the ramp or back to the hangar, conduct my post-flight checklist, turn all systems off and basically conclude this final assessment before I begin to enjoy the destination.

Did you notice what I did NOT do? I did NOT unplug the GPS, the fuel gauges, engine monitor, volt and ammeter, oil pressure gauge, etc. the second I got into the air. Why not? Because I need those things to constantly assess the situation and bring the flight to a successful and safe conclusion. It would be insane to turn off my instrumentation the second the nose wheel leaves the ground, the air grabs the wings, and the ground vanishes beneath me.

Having said this – WHY then do we in the IT world simply stop the assessment the minute the first user is live on our new system? Why do we think that once we have assessed the current environment, the users' needs, the system status, and other factors will remain stable and constant? Sure, you might argue that nobody's life is at risk if a server goes down, a service crashes, or the user experience starts to degrade. But come on – if I try to be as diligent and professional on the ground as I am in the air, I have to be in the habit of constantly assessing and reassessing the situation, recognizing patterns, learning how to remediate adverse situations, and basically keeping the IT environment in perfect shape so that all users can successfully complete their flight, I mean, their work or project.

Some people call this “monitoring” or “IT operations” but what we’re really doing is continuously assessing very large and very complex IT systems and trying to control and manage them in the safest, most stable, and most flexible way.

As an example, this is particularly important for organizations who run Citrix XenApp in their environment and are looking to upgrade from the IMA architecture of XenApp 6.5 (and prior versions) to the FMA architecture of XenApp 7.0 and higher. This whitepaper describes the process in detail.

Another interesting read is our solution brief for end user success.

Thoughts? Ideas? Please comment or contact me on twitter:

@florianbecker

 

 

What does “End-to-end” monitoring really mean?

The old saying goes that if all you have is a hammer, every problem looks like a nail. This is certainly true in the IT world. A broad range of vendors and technologies claim to provide “end-to-end” monitoring for systems, applications, users, the user experience, and so forth.

When one starts to peel back the proverbial onion on this topic, it becomes clear that each of these technologies only provides “end-to-end” visibility if you're really flexible with the definition of the word “end”. Let's elaborate.

If I am interested in the end user experience of a given system or IT service, I would certainly start with what the end user is seeing. Is the system responsive to inputs? Are the systems free of crashes or annoying application hangs? Do the systems function for office locations as well as for remote access scenarios? Do complex tasks and processes complete in a reasonable amount of time? Is the user experience consistent?

These are the questions that the business users care about. In the world of IT, however, the topic of user experience is often discussed in rather technical terms. On top of that, there is no such thing as a single point of contact in any larger IT organization for all the systems that make up the service that users have to interact with. Case in point – there is a network team (maybe even split in between local area networks, wide area networks, and wireless technologies), there is a server virtualization team, there is a storage team, there is a PC support team, various application teams, and we can think of many other silos.

So, the monitoring tools available in the marketplace map into these silos as well. Broadly speaking, there are tools that are really good at network monitoring, which means they look at the network infrastructure (routers, switches, and so forth) as well as the packets flowing through it. Thanks to the seven-layer OSI model, data is available not only on connections, TCP ports, IP addresses, and network latency, but also on the payload of the packets themselves. The latter means being able to tell whether a connection carries the HTTP protocol for web browsing, PCoIP or ICA/HDX for application and desktop virtualization, SQL database queries, and so on. Because that type of protocol information lives in the top layer of the model, also called the application layer, vendors often position this type of monitoring as "application monitoring," although it has little to do with looking into the applications and their behavior on the system.

Despite this kind of application-layer detail in the networking stack, the data is not at all sufficient to figure out the end user experience. We may be able to see that a web server takes longer than expected to return a requested object of a web page, but we have no idea WHY that might be. Network monitoring only sees packets: it watches them leave one system, arrive at another, and carry a response back the other way, with no idea what is happening inside the systems that are communicating with each other.

The story repeats itself in other silos as well. The hypervisor teams are pretty good at determining that a specific virtual machine is consuming more than its "fair share" of resources on the physical server and is therefore forcing other workloads to wait for CPU cycles or memory allocation. The key is that they won't know what activity in which workload is causing a spike in resources. The storage teams can get really detailed about the sizing and allocation of LUNs, the IOPS load on the storage system, and the request volumes, but they won't know WHY the storage system is spiking at a given point in time.

The desktop or PC support teams... oh, wait. Many of them don't have a monitoring system, so they are basically guessing: asking users to reboot the system, resetting the Windows profile, or blaming the network. Before I get a ton of hate mail on the subject, it's really hard to provide end-user support without tools that show what the user is actually doing (and users are notoriously bad at accurately describing the symptoms they are seeing.)

Then there's application monitoring, which is the art and science of baselining and measuring specific transaction execution times on complex applications such as ERP systems or electronic medical records applications. This is very useful for seeing whether a configuration change or systems upgrade has a systemic impact, but beyond the actual timing of events, there is little visibility into root cause (is it the server, the efficiency of the code itself, the load on the database, etc.?)

What all this leads to is that a user may experience performance degradation that impacts their quality of work (or worse, their ability to do any meaningful work at all), while each silo looks at its own dashboards and monitoring tools just to raise its hands and shout "it's not me!" That is hardly end-to-end; it's just a justification to carry on and leave the users to fend for themselves.

Most well-run IT organizations actually have a pretty good handle on their operational areas and can quickly spot and remediate infrastructure problems. However, the vast majority of challenges that impact users directly, without causing a flat-out outage, come down to users and their systems competing for resources. This is especially true in the age of server-based computing and VDI. One user runs a demanding task, and all other users whose applications or desktops happen to be hosted on the same physical device suffer as a result. This is exacerbated by the desire to keep costs in check: many VDI and application hosting environments are sized with very little room to spare for flare-ups in user demand.

This is exactly why it is so important to have a monitoring solution with deep insight into the operating system of the server, virtual server, desktop, VDI image, endpoint, PC, laptop, etc., one that can actually discern what is going on: which applications are running, crashing, misbehaving, consuming resources, and so forth.

Only that type of technology (combined with the infrastructure pieces mentioned above) can provide true end-to-end visibility into the user experience.

It is one thing to notice that there is a problem or “slowness” on the network and it is something else entirely to be able to pinpoint the possible root causes, establish patterns, and then proactively alarm, alert, and remediate the issues.

Speaking with IT organizations, system integrators, and customers over the years reveals one common theme: IT administrators would like to have ALL of the pertinent data available AND have it presented in a single dashboard or single pane of glass. Vendors are simply responding to that desire by talking about their products as "end-to-end," even though most of the monitoring aspects are not end-to-end at all, as we have seen.

If you have the same requirement, have a look at SysTrack. It's the leading tool for collecting thousands of data points from PCs, desktops, laptops, servers, virtual servers, and virtual desktops, and it can seamlessly integrate with third-party data sources to present actionable data the way you want to see it. We're not networking experts in the packet analysis business, but we can tap into data from network monitors and present it alongside user behavior and system performance. That combination of granular data provides truly end-to-end capabilities as a system of record and an end user success platform.

 

Check out our latest solution brief on end user computing success.

Let me know what you think and follow me on Twitter: @florianbecker

Citrix Licensing – Deciding between concurrent and user/device licenses

Citrix XenApp and XenDesktop are available in two general licensing models:

  1. Concurrent licensing. This model covers one connection to a virtual desktop, or unlimited apps, for any user on any device; a license is only consumed for the duration of an active session. If the session disconnects or is terminated, the license is checked back into the pool.
  2. User/Device licensing. Under this model, the license is assigned either to a unique user or to a shared device. If assigned to a user, it allows that single user unlimited connections from unlimited devices. If assigned to a device, it allows unlimited users and unlimited connections from that single device.

The user/device license is typically half the price of a concurrent license and can be an attractive model for organizations that follow a "traditional" work schedule (as opposed to shift workers in manufacturing or healthcare, where there may be a large number of individuals, but only a fraction of them are concurrently using the XenApp or XenDesktop environment.)

Internally, and this is the topic of this article, if Citrix XenApp/XenDesktop is configured for the user/device license model, the Citrix license server has to decide whether to assign each license to a user OR to a device. These are two different things, even though customers purchase a user/device (as in user SLASH device) license. So, how does this work?

Assume I, florianb, log into my organization’s environment and launch a session. At that time, a user license is consumed. I can run as many sessions from as many XenDesktop sites that share the license server as I like and use as many devices as I care to – it’s still one user license.

Assume that one of the devices I use is a shared thin client in the office. An hour after I leave, my co-worker Alex uses the same client to access his virtual desktop. Citrix internally marks that particular thin client as a shared device, and it consumes a device license. Theoretically, I could have 100 employees each use the same thin client and only consume a single user/device license.

It becomes apparent that the recognition of shared devices is an automated way for organizations to minimize the number of licenses they need.

Most of us, however, have a mix of environments, so Citrix calculates the total number of user/device licenses as follows:

# User/Device licenses = (# of total users) + (# of shared devices) – (# of users who only access from a shared device)

Make sense?

Here’s a simple example:

User/Device | Devices Used | User License Consumed? | Device License Consumed?
Paul | Client01 | No (Paul only uses a shared device, Client01, which is also used by Florian, Alex, and Amanda) | N/A
Florian | Client01, Florian's PC, Florian's iPad, Florian's Laptop | Yes (he uses one or more non-shared devices) | N/A
Alex | Client01 | No (Alex only uses a shared device) | N/A
Amanda | Amanda's iPad, Client01 | Yes (Amanda uses a non-shared device, her iPad) | N/A
Client01 | Used by Paul, Florian, Alex, and Amanda | N/A | Yes (Client01 is used by more than one user)
Florian's PC | Used by Florian | N/A | No (Florian consumes a user license, so he can use an unlimited number of devices)
Florian's iPad | Used by Florian | N/A | No (covered by Florian's user license)
Florian's Laptop | Used by Florian | N/A | No (covered by Florian's user license)
Amanda's iPad | Used by Amanda | N/A | No (Amanda consumes a user license, so she can use an unlimited number of devices)

 

So, in this example, we would need a total of 3 user/device licenses, even though we have 4 individual users and 5 individual devices in the mix. Given that the price point for a concurrent license is 2x that of a user/device license, this small sample organization would clearly benefit from user/device licensing, as it might otherwise need as many as 4 concurrent licenses.
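The formula above is easy to sanity-check in code. Here is a minimal Python sketch that applies it to the example data from the table; the function name and the usage mapping are illustrative, not Citrix's actual licensing implementation:

```python
def user_device_licenses(usage):
    """Estimate user/device licenses needed.

    usage maps each user to the set of devices they connect from.
    Formula: total users + shared devices - users who only use shared devices.
    """
    # Invert the mapping to find which users connect from each device.
    device_users = {}
    for user, devices in usage.items():
        for device in devices:
            device_users.setdefault(device, set()).add(user)

    # A device is "shared" when more than one user connects from it.
    shared = {d for d, users in device_users.items() if len(users) > 1}

    total_users = len(usage)
    shared_only = sum(1 for devices in usage.values() if devices <= shared)
    return total_users + len(shared) - shared_only

# The blog's example scenario: four users, five devices, one shared thin client.
usage = {
    "Paul":    {"Client01"},
    "Florian": {"Client01", "Florian's PC", "Florian's iPad", "Florian's Laptop"},
    "Alex":    {"Client01"},
    "Amanda":  {"Client01", "Amanda's iPad"},
}

print(user_device_licenses(usage))  # prints 3
```

Swapping in real session data, for instance exported from a monitoring tool, makes it straightforward to estimate license needs before committing to a model.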

The Citrix license optimization definitely works in the customer's favor, and license allocation happens on a 90-day schedule for user/device licenses (i.e., the license of a user who has left the organization, or of a device that is no longer in use, is automatically released after 90 days, or can be released immediately with a license management tool under the terms of the Citrix EULA).

However, it can be a little difficult to predict what an organization might need. Lakeside SysTrack is a great tool for looking at all sessions (say, in an existing XenApp concurrent environment) to determine whether a trade-up to user/device licensing would make sense. To illustrate the point, I've mocked up a quick and easy dashboard in SysTrack's dashboard builder to look at one of the many environments we're running internally.

[Screenshot: SysTrack licensing dashboard]

 

In this particular example, our peak user concurrency was 11, while we would have needed 29 user/device licenses. We're better off staying with concurrent licensing in this case.

Equally, if a traditional desktop environment is being assessed, SysTrack can make the choice between concurrent and user/device licensing very easy.

 

Florian

Twitter: @florianbecker and @lakesidesoft

Email: florian.becker@lakesidesoftware.com

On the web: www.lakesidesoftware.com

References/Notes:

  • While Citrix has reviewed this blog for accuracy at the time of this writing, Lakeside Software cannot make any representations on behalf of Citrix. Please always check with your authorized reseller, Citrix account manager and on citrix.com for the latest updates in product and licensing functionality.