Category Archives: vGPU

What You Need to Know About GPUs for Windows 10

Dedicated GPUs aren’t just for gamers and designers anymore. The modern workspace is experiencing increasingly vivid and interactive software that is challenging entrenched beliefs about the nature of corporate work. Back in the day, IT supplied users with hardware and software that far exceeded anything employees interacted with in their off-time. The field has changed, and now users are the ones setting the pace for technology needs and adoption. Virtual assistants like Cortana have piqued user interest in AI and intuitive software experiences, which users now expect to follow them across locations and devices. Business leaders are looking to harness this evolving demand to accelerate the implementation of technology with the aim of enhancing employee engagement and performance.

We see growing awareness of this shift in conversations with our clients, who are looking for smarter ways to manage hardware and software transformations. One of the most discussed projects in this space is Windows 10 adoption. Many CIOs have yet to upgrade their users to Windows 10, but are gearing up for a transition in hopes of improving end-user experience and productivity. While we’ve been talking to IT professionals about the differences between Windows 7 and Windows 10 since the Windows 10 launch in 2015, recently we’ve noticed an uptick in questions specific to graphics requirements. “How will my Windows 7 users be affected by Windows 10 graphics demands?” is a fair question, as is “What can I do to prepare my VDI environment for Windows 10?” We knew that the user-focused features available in Windows 10 would demand increased GPU usage, but to answer the question of degree, we turned to data supplied by our customers to achieve an accurate view of graphics needs in Windows 10. Our analysis of customer data focused on GPU and CPU consumption as well as user experience, which we quantify as the percentage of time a user’s experience is not being degraded by performance issues.
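The user-experience metric described above can be sketched as a simple roll-up of sampled telemetry: the share of sampled intervals in which nothing degraded the session. The thresholds and field names below are purely illustrative assumptions, not SysTrack's actual scoring model.

```python
# Hypothetical sketch of a "percent of time not degraded" metric.
# Thresholds and field names are invented for illustration only.
def experience_score(samples):
    """Percent of samples free of any degradation flag."""
    if not samples:
        return 100.0
    clean = sum(1 for s in samples
                if s["cpu_wait_ms"] < 50 and not s["app_hang"])
    return 100.0 * clean / len(samples)

samples = [
    {"cpu_wait_ms": 10, "app_hang": False},
    {"cpu_wait_ms": 120, "app_hang": False},  # CPU contention
    {"cpu_wait_ms": 5, "app_hang": True},     # application hang
    {"cpu_wait_ms": 8, "app_hang": False},
]
print(experience_score(samples))  # 50.0
```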

Key findings from our assessment include:

  • The amount of time users are consuming GPU increases 32% from Windows 7 to Windows 10
  • Systems without dedicated GPUs show higher average active CPU usage
  • Windows 10 virtual systems with GPUs consume 30% less CPU than those without
  • Presence of a dedicated GPU improves user experience with either OS on both physical and virtual machines

Overall, we found sufficient evidence to recommend implementation of discrete GPUs in both physical and virtual environments, especially for Windows 10 virtual users. Shared resources make the increased graphics requirements in Windows 10 potentially damaging for VDI because high CPU consumption by one user could degrade performance for everyone; however, we found that implementing virtual GPU could allow IT not only to avoid CPU-load issues, but actually to increase density on a server by 30%. At scale, increased density means fewer servers to purchase and maintain, potentially freeing up resources to direct towards other IT projects.
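As a back-of-the-envelope illustration of how lower per-user CPU consumption translates into higher per-server density, consider a sketch like the following. The core counts, per-user averages, and headroom figure are hypothetical; the actual gain depends on memory, storage, and other constraints, which is why a measured assessment matters.

```python
# Back-of-the-envelope density estimate. All numbers are hypothetical,
# not measured values from the assessment discussed above.
def vdi_density(server_cpu_cores, avg_cores_per_user, headroom=0.2):
    """Users a host can support, reserving `headroom` fraction of CPU."""
    usable = server_cpu_cores * (1.0 - headroom)
    return int(usable // avg_cores_per_user)

# Hypothetical 32-core host; a Windows 10 VDI user averaging 0.5 cores
# without a GPU vs. 0.35 cores (30% less) with a vGPU.
without_gpu = vdi_density(32, 0.50)  # 51 users per host
with_gpu = vdi_density(32, 0.35)     # 73 users per host
print(without_gpu, with_gpu)
```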

Whatever stage you’re at in your Windows 10 transformation or other software projects, SysTrack can help you anticipate your users’ graphical needs. As developers continue to release software that enables users to have greater flexibility and creativity in the way they work, IT teams will need to ensure that users have adequate tools at their disposal to power a tech-charged workforce.

Answering GPU Questions with SysTrack

Ask SysTrack has become one of the most popular topics we’ve ever discussed in the industry, and the top question is always what we’re adding next. The benefits of using our Natural Language Processing (NLP) tool for common IT questions have appealed to a massive number of our partners in the industry as well as customers. Basically, our goal is to provide an analytical system that takes any question you may have about IT and connects you to the best source of information available to help.

Zach mentioned our first integration in a previous post, and in the new year we’ll be launching a series of new integrations that cover different areas. One I’m personally excited about is our added GPU-based monitoring and reporting from NVIDIA GRID.

GPU utilization in general has been a hot topic for a while, and with the great progress NVIDIA has made with the creation of the first vGPU profiles for VDI, the potential to bring a great graphical experience to anyone has exploded. We’ve provided support for NVIDIA GRID from the very beginning, offering a cloud-based assessment tool that can help plan which profiles would work for a currently physical environment making the move to vGPU and VDI. As a natural progression, we’ve implemented new monitoring with NVIDIA to help understand workload and usage in VDI systems.

This kind of insight is especially critical when first undertaking a project to start transforming an environment using new technologies. John Fanelli, vice president of NVIDIA GRID, agrees: “With Lakeside Software’s Ask SysTrack workspace analytics insight engine, administrators can make natural language queries to gain contextually relevant NVIDIA vGPU insights and help continuously assess and align vGPU benefits to user personas.”

Of course, the key point is connecting users to all that great content. This is where the expansion to Ask SysTrack comes in. Specifically, we’ve now integrated our additional collection and planning data into Ask SysTrack so it can help answer basic questions like “What kind of NVIDIA vGPU profiles do my users need?”

We can also answer other post-migration questions that are critical to maintaining user experience. Things like, “What’s the total GPU usage on Ben’s system?”

Basically, if you can think of a question relating to GPU utilization, we have the answer available.
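As a rough illustration of how collected telemetry might roll up into an answer for a question like the one above, here is a hypothetical aggregation over per-system GPU utilization samples. The field names and data are invented for the sketch, not SysTrack's actual schema.

```python
# Illustrative only: averaging per-system GPU utilization samples to
# answer "what's the total GPU usage on this system?" Field names
# and values are hypothetical.
from collections import defaultdict

def gpu_usage_by_system(samples):
    """Average GPU utilization percentage per system."""
    totals, counts = defaultdict(float), defaultdict(int)
    for s in samples:
        totals[s["system"]] += s["gpu_pct"]
        counts[s["system"]] += 1
    return {name: totals[name] / counts[name] for name in totals}

samples = [
    {"system": "ben-pc", "gpu_pct": 40.0},
    {"system": "ben-pc", "gpu_pct": 20.0},
    {"system": "amy-pc", "gpu_pct": 5.0},
]
print(gpu_usage_by_system(samples)["ben-pc"])  # 30.0
```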

To get started, check out our assessment site to size out a new environment or just get an introduction to SysTrack.

Introducing the NVIDIA Graphics Assessment

When NVIDIA first announced their groundbreaking approach to introducing accelerated graphics to virtualization, they began bridging one of the last gaps in making sure that users get the best possible experience with virtual desktops and applications. Building on that success, NVIDIA has introduced newer, Maxwell™-based GRID cards that go even further to create a rich graphical experience for users and decrease complexity for IT administrators looking to optimize the visual experience of their users.

The evolution of the GRID solution set, coupled with their new software, means that more users than ever can take advantage of graphical acceleration. This couldn’t come at a better time given the rise of advanced media usage in the enterprise. As an increasing number of organizations begin to explore making their virtualized environments even better, many want to understand what their current graphical needs are and plan for the future with NVIDIA.

This is why Lakeside Software has partnered with NVIDIA to develop a totally free graphics assessment hosted on Azure that delivers a detailed series of reports leveraging the SysTrack workspace analytics platform. Available for a 30-day period for up to 500 systems at no cost, the assessment allows any interested organization to deploy SysTrack and review their current environment. At the end of the data collection period, a customized report can be generated to provide key insights into how users are using graphically accelerated applications and which GRID profiles may work best for them.


Additionally, NVIDIA has introduced vGPU monitoring as a key component of its GRID technology. As an early access partner, Lakeside has been able to leverage this to build out new management and monitoring components that help ensure critical end-user experience components deliver the immersive experience users expect. This is showcased in some of the updated NVIDIA Kit contents that IT administrators can now use to monitor their vGPU implementations.


So, get started today and learn more about how you can use vGPU and SysTrack to give your users the best possible experience.

Citrix Guest Blog: SysTrack and XenApp/XenDesktop

My name is Mayunk Jain and I manage Technical Marketing for HDX technologies at Citrix Systems, Inc. I am excited to author my first guest article on the Lakeside Software blog today and would like to talk about the Citrix–Lakeside partnership, and specifically about our joint value in the XenApp and XenDesktop space.

Citrix and Lakeside Software have been working together for almost two decades now, and our relationship goes back to the roots of Resource Manager Services in MetaFrame, and maybe even to the WinFrame days.

Today, Lakeside Software markets and sells its SysTrack product, which is certified by the Citrix Ready partner program.

Here at Citrix on the Windows App Delivery side, we’re focused on the development of the XenApp and XenDesktop product lines, and many of our customers are thinking about upgrading their IMA-based XenApp farms to the latest and greatest FMA-based architecture. In order to make this process as easy as possible, we developed Project Serenity (try the tech preview here), which helps customers automate the replication of published applications, policies, and settings from XenApp 6.5 environments to a new deployment running XenApp 7.6 or XenDesktop 7.6.

SysTrack can actually add tremendous value to this process and Lakeside Software describes the process and the methodology in their latest XenApp migration whitepaper.

In a nutshell, organizations can leverage SysTrack to determine detailed statistics about the existing XenApp deployment. Think of it as deep insights into the daily operational items. What applications are being launched? What other processes and applications are executed as part of that initial session? Which backend application resources does the XenApp server connect to? What resources does each user consume over the course of a session, a week, a month? What is the user experience like? The answers to all these questions can help optimize the environment when implementing XenApp 7.6.

More importantly, SysTrack adds value as an on-going assessment tool. Many organizations think of IT assessment as a one-time event. However, the truth is that visibility into every operational parameter, alarm, alert, dashboard, or trend is an on-going assessment of the day-to-day situation that triggers certain actions. One example of this is the integration Lakeside has been working on with the Citrix HDX protocol and NVIDIA GRID cards, which enable virtual delivery of complex and high-fidelity visual computing and 3D graphics use cases when GPUs are used. SysTrack can uncover the applications that benefit from this technology, providing guidance on the sizing, segmentation, and health monitoring of user sessions and their GPU-specific parameters.

Another example is SysTrack’s ability to consume, process, and analyze additional data from Citrix Director and provide single-pane-of-glass reporting across all end user computing systems.

While I can’t give away the details quite yet, I fully expect us to deepen the relationship and SysTrack to add more functionality and value for our joint customers. Watch this space, and follow me at @mayunkj for more information as it becomes available.

We recently released a joint solution brief for Citrix and Lakeside technologies. If you are interested, download a 90-day free trial for XenApp and contact Lakeside Software for more information on SysTrack.

I am excited about the possibilities for our customers and hope to be back on these pages soon.


What does “End-to-end” monitoring really mean?

The old saying goes that if all you have is a hammer, every problem looks like a nail. This is certainly true in the IT world. There are a broad number of vendors and technologies that claim to provide “end-to-end” monitoring for systems, applications, users, the user experience, and so forth.

When one starts to peel back the proverbial onion on this topic, it becomes clear that any of these technologies is only providing “end-to-end” visibility if you’re really flexible with the definition of the word “end”. Let’s elaborate.

If I am interested in the end user experience of a given system or IT service, I would certainly start with what the end user is seeing. Is the system responsive to inputs? Are the systems free of crashes or annoying application hangs? Do the systems function for office locations as well as for remote access scenarios? Do complex tasks and processes complete in a reasonable amount of time? Is the user experience consistent?

These are the questions that the business users care about. In the world of IT, however, the topic of user experience is often discussed in rather technical terms. On top of that, there is no such thing as a single point of contact in any larger IT organization for all the systems that make up the service that users interact with. Case in point – there is a network team (maybe even split between local area networks, wide area networks, and wireless technologies), there is a server virtualization team, there is a storage team, there is a PC support team, various application teams, and we can think of many other silos.

So, the monitoring tools that are available in the marketplace basically map onto these silos as well. Broadly speaking, there are tools that are really good at network monitoring, which means they look at the network infrastructure (routers, switches, and so forth) as well as the packets that are flowing through it. Thanks to the seven-layer OSI model, there is data available not only around connections, TCP ports, IP addresses, and network latency, but also the ability to look into the payload of the packets themselves. The latter means being able to understand whether a network connection carries the HTTP protocol for web browsing, PCoIP or ICA/HDX for application and desktop virtualization, SQL database queries, etc. Because that type of protocol information is in the top layer of the model, also called the application layer, vendors often position this type of monitoring as “application monitoring”, although it really has little to do with looking into the applications and their behavior on the system.

Despite this kind of application-layer detail in the networking stack, the data is not at all sufficient to figure out the end user experience. We may be able to see that a web server takes longer than expected to return the requested object of a web page, but we have no idea WHY that might be so. This is because network monitoring only sees network packets – from the point when they leave one system, are received by another system, and then have a corresponding response go the other way – back and forth, but with no idea what is happening on the inside of the systems that are communicating with each other.
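To make the application-layer point concrete, here is a toy classifier that infers a protocol from the first bytes of a packet payload rather than from ports or addresses. Real network monitors use far richer signatures; this sketch only illustrates the idea of payload-based classification.

```python
# Toy "application layer" classification: inspect payload bytes instead
# of ports/IPs. Illustrative only; production tools use deep signatures.
def classify_payload(payload):
    if payload.startswith((b"GET ", b"POST ", b"HTTP/")):
        return "HTTP"
    if payload.startswith(b"\x16\x03"):  # TLS record: handshake, version 3.x
        return "TLS"
    return "unknown"

print(classify_payload(b"GET /index.html HTTP/1.1\r\n"))  # HTTP
```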

The story repeats itself in other silos as well. The hypervisor teams are pretty good at determining that a specific virtual machine is consuming more than its “fair share” of resources on the physical server and is therefore forcing other workloads to wait for CPU cycles or memory allocation. The key is that they won’t know what activity in which workload is causing a spike in resources. The storage teams can get really detailed about the sizing and allocation of LUNs, the IOPS load on the storage system, and the request volumes, but they won’t know WHY the storage system is spiking at a given point in time.

The desktop or PC support teams… oh, wait – many of them don’t have a monitoring system, so they are basically guessing: asking users to reboot the system, reset the Windows profile, or blaming it on the network. Before I get a ton of hate mail on the subject – it’s really hard to provide end-user support because we typically don’t have the tools to see what the user is really doing (and users are notoriously bad at accurately describing the symptoms they are seeing).

Then there’s application monitoring, which is the art and science of baselining and measuring specific transaction execution times in complex applications such as ERP systems or electronic medical records applications. This is very useful for seeing whether a configuration change or systems upgrade has a systemic impact, but beyond the actual timing of events, there is little visibility into the root cause of things (is it the server, the efficiency of the code itself, the load on the database, etc.?)

What all this leads to is that a user may experience performance degradation that impacts their quality of work (or worse, their ability to do any meaningful work), and each silo is then looking at their specific dashboards and monitoring tools just to raise their hands and shout “it’s not me!” That is hardly end-to-end, but just a justification to carry on and leave the users to fend for themselves.

Most well-run IT organizations actually have a pretty good handle on their operational areas and can quickly spot and remediate any infrastructure problems. However, the vast majority of challenges that impact users directly and that don’t lead to a flat-out system outage come down to users and their systems competing with each other for resources. This is especially true in the age of server-based computing and VDI: one user runs something resource-intensive, and all other users who happen to have their applications or desktops hosted on the same physical device suffer as a result. This is exacerbated by the desire to keep costs in check, as many VDI and application hosting environments are sized with very little room to spare for flare-ups in user demand.

This is exactly why it is so important to have a monitoring solution that has deep insights into the operating system of the server, virtual server, desktop, VDI image, endpoint, PC, laptop, etc. and can actually discern what is going on – which applications are running, crashing, misbehaving, consuming resources, and so on.

Only that type of technology (combined with the infrastructure pieces mentioned above) can provide true end-to-end visibility into the user experience.

It is one thing to notice that there is a problem or “slowness” on the network and it is something else entirely to be able to pinpoint the possible root causes, establish patterns, and then proactively alarm, alert, and remediate the issues.

Speaking to IT organizations, system integrators, and customers over the years reveals one common theme: IT administrators would like to have ALL of the pertinent data available AND have it all presented in a single dashboard or single pane of glass. Vendors are simply responding to that desire by talking about their products as “end-to-end”, even though most of the monitoring aspects are not end-to-end at all, as we have seen. If you have the same requirement, have a look at SysTrack. It’s the leading tool for collecting thousands of data points from PCs, desktops, laptops, servers, virtual servers, and virtual desktops, and it can seamlessly integrate with third-party data sources to provide actionable data in the way you would like to see it. We’re not networking experts in the packet analysis business, but we can tap into data from network monitors and present it along with user behavior and system performance. That is a powerful combination of granular data, providing truly end-to-end capabilities as a system of record and an end user success platform.


Check out our latest solution brief on end user computing success.

Let me know what you think and follow me on Twitter at @florianbecker.

Lakeside Software at GTC – vGPU Planning and Optimization

With GTC coming up shortly, it seems like an ideal time to discuss some of the key concepts we’ll be covering in our conference talk about designing and optimizing a virtual environment with complex graphical needs using NVIDIA’s innovative GRID technology. A recurring theme for us here at Lakeside is a focus on characterizing end-user demand, and planning for successful vGPU provisioning is likewise contingent on taking actual user usage and applying solid mathematical analysis for use case development. This is where SysTrack comes in.

The overall strategy is covered in more detail in a white paper and a MarketPlace report, but to summarize: the key to assessing and delivering a usable environment is understanding the usage habits and needs of the users, including the GPU demand of the applications they currently interact with. By continually collecting these details and providing quantitative analysis of the different types of graphical profiles people may require, SysTrack offers an in-depth, accurate way to architect a solution that will provide the best possible experience for end users.
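As a simplified sketch of what recommending a profile from observed demand might look like, consider the following. The profile names, framebuffer sizes, and thresholds are hypothetical and do not reflect NVIDIA's actual GRID catalog or SysTrack's methodology; the point is only that measured per-user demand can be binned into provisioning tiers.

```python
# Hypothetical profile binning from observed per-user GPU demand.
# Names and thresholds are invented for illustration.
def recommend_profile(peak_video_mem_mb, avg_gpu_pct):
    """Map a user's observed GPU demand to a provisioning tier."""
    if peak_video_mem_mb <= 512 and avg_gpu_pct < 10:
        return "light (512MB framebuffer)"
    if peak_video_mem_mb <= 1024 and avg_gpu_pct < 40:
        return "medium (1GB framebuffer)"
    return "heavy (2GB+ framebuffer)"

print(recommend_profile(900, 25))  # medium (1GB framebuffer)
```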

Once this plan has been developed, the next step is delivering and ensuring steady-state performance through observation and optimization. Through the use of NVIDIA-specific driver details, SysTrack can provide vGPU utilization metrics in the virtual environment to ensure that as usage evolves, the profiles and provisioning can keep up. Ultimately this improves the adoption rate by providing advanced users with more demanding needs a seamless, well-performing experience.

For more details, check out the talk that Florian and I will be giving at GTC on Tuesday, “S4686 – NVIDIA GRID for VDI: How To Design And Monitor Your Implementation”, or visit our website for details about solutions for your VDI or application delivery implementation.