It’s an exciting day here in Bloomfield Hills, Michigan (not because Spring has come – that hasn’t happened yet). Today sees the culmination of many months of work and years of experience.
You may have noticed that today VMware announced a new cloud-based desktop assessment service called “SysTrack Desktop Assessment.” Many of you have long known us as “the assessment company” – but this offering takes our capabilities to a new level by removing the operational requirement of standing up a SysTrack server in your infrastructure, and putting it all in the cloud.
We believe continuous assessment is the most important component of ensuring EUC success – first, assessing the needs of your users in order to provide them with desktops and endpoints that keep them productive, and then ensuring that those desktops continue to meet their needs as operating systems are patched or upgraded, and applications are added or updated.
SysTrack end user computing assessment technology running on top of VMware vCloud® Air™ helps optimize VMware Horizon® installations, and continuous assessment keeps Horizon desktops running smoothly.
Better Plan, Optimize and Deploy
Available directly from VMware, SysTrack Desktop Assessment is our free cloud-based assessment solution that enables customers and partners to capture detailed metrics and user behavior data from diverse and evolving environments. Built upon nearly two decades of end user computing assessment experience, this assessment tool provides you with a self-service platform to assess and quantify user, application and infrastructure requirements to better plan, optimize and deploy virtual desktops and applications. Our goal is to help you accelerate time to value and ensure that your environments are right-sized to best meet the requirements of your end users, and keep them productive.
Use Before Your Horizon Migration
Moving to virtual desktops and applications such as VMware Horizon lets our customers manage their desktops from a central location while allowing users to access applications from any location and any device. But developing a strategy for deciding which users fit where requires information-driven insights that are difficult to obtain without the right tools for continuous user and system data collection. We want more people to be successful, so we made initial desktop assessment easy and free.
The SysTrack Desktop Assessment gathers baseline performance data to fully understand all end user computing requirements and help you determine which platform is best for your corporate environment and users: Desktop as a service (DaaS), a hybrid managed physical deployment, or on-premises desktops and applications. We’re already seeing approximately 20 new assessments kick off every day.
Dynamic Reporting and Metrics
The custom report you’ll receive will contain:
System level build and configuration reports
Managed physical vs. virtual
Horizon product selection (on-Prem vs. Flex vs. Air)
Horizon Air Desktop (Standard vs. Advanced vs. Enterprise)
It’s easy (and free).
Visit the SysTrack Desktop Assessment launch page at assessment.vmware.com to learn more and get started with our solution – and keep coming back. In the coming months we’ll be adding additional features to help you gain visibility, understand deployment options, manage costs, and make better EUC decisions.
In just one short week from today many of us will be descending on the beautiful city of San Jose in the heart of Silicon Valley. The occasion is the start of NVIDIA’s GPU Tech Conference GTC 2015, which promises to be jam-packed with interesting technologies, even more interesting people, and so many interesting sessions that it’s hard to choose which ones to attend.
Oh, and did I mention the exhibit floor which will feature a lot of cool stuff to look at, play with, experience and touch?
I will have to divide my attention between sessions (mainly on the virtualization track) and the exhibit floor.
I hope you come see us at Lakeside Software’s booth #510 to chat or mark your calendar for Thursday for one or even both of my sessions in the afternoon:
There are few companies that I truly admire. Most tech companies start with a really cool idea and then tend to lose some of their focus on innovation as they grow up. Not so NVIDIA. Other than recognizing NVIDIA as the maker of most of the graphics cards I’ve put into my PCs over the years, I didn’t know much more about the company until I attended and spoke at last year’s GTC conference. What a phenomenally nerdy crowd! Tons of attendees (and employees) hold PhDs and do some really cool stuff with parallel computing on GPUs that goes far beyond graphics. The applications in science are endless and remind me of the days in the mid-90s when I programmed a DEC VAX in a mix of FORTRAN 77 and FORTRAN 90 to solve non-linear differential equations as part of a physics project. Hey, energy bands in solid state physics don’t come easy!
I digress – the introduction of the Shield cloud gaming console was a cool milestone. The NVIDIA team set out to solve a very hard problem and targeted it at the most critical user group. The problem can be described as remoting 3D graphics, and the user group targeted with the Shield was hard-core gamers who wish they could take their water-cooled $5,000+ gaming PC to bed with them and keep playing while their significant other is looking for a bit of physical proximity. Who would not want to keep roaming around Kyrat without having to abandon their girlfriend or boyfriend? Any lag or reduction in the all-important frames-per-second metric would cause those guys or girls to toss the device, take their blanket and Red Bull, and move back to their desks. So, this simply had to work, and NVIDIA attempted to solve this problem not because it was easy, but because it was hard.
This technology appeared to be the precursor to remote professional graphics and builds on NVIDIA’s success with the Quadro product line. The new offering is called GRID and allows customers to add GPU boards to servers in the datacenter and then pass a full or partial GPU through to a virtualized workload running on the server on top of a hypervisor.
The results are nothing short of spectacular and allow organizations to bring the core benefits of application and desktop virtualization to a very demanding user group – engineers. Both VMware and Citrix support virtual GPUs now. I am going to describe a use case with Citrix XenApp, but it’s equally applicable to VMware implementations and even physical desktops.
Over the years, organizations have used Citrix XenApp and its predecessors to centrally execute applications and let users access them remotely. This had two primary benefits: IT could centrally manage and update those applications without having to worry much about the capabilities or physical location of the users’ endpoints, and it could centralize the backend data in the datacenter without having that important intellectual property float around on hundreds or even thousands of laptops and PCs. That approach worked very well for applications that played nicely on a server operating system, did not require user admin rights, and did not rely on GPUs – servers back then didn’t really have those.
Then, the industry tried to tackle the problem of remote developers and software engineers that need their integrated development environments and other little tools to do their work. This gave rise to the development of VDI, which now allowed an entire desktop Windows operating system instance (as opposed to a server OS) to be made accessible remotely. That was another major milestone that also gave rise to a trend in the industry of remoting entire desktops for broader purposes. But still, the entire 3D graphics aspect of applications had to be done by the CPU without the benefit of having GPUs around.
NVIDIA fixed that by introducing the GRID product line and vendors like Citrix and VMware have introduced GPU virtualization support in their hypervisors. Now, organizations can centralize the intellectual property behind core engineering applications and be comfortable that their very demanding and very expensive users have a good user experience. Did I say comfortable? Well, most IT shops want to know and want to be proactive about the user experience and that’s where Lakeside SysTrack comes in.
About a year ago, we started shipping NVIDIA GPU support in our award-winning SysTrack product and have been working with customers and partners since then to do two important things:
First, look at existing physical workstations and measure GPU consumption in order to decide which specific virtual GPU a user might need; and second (and more importantly), allow customers to directly monitor resource consumption on their GRID-enabled servers and virtual desktops in order to maintain a great user experience.
By the way, I am honored to have the opportunity at this year’s GTC conference to speak about both of those topics, but I would like to provide a quick sneak preview of the technology.
In this particular example, an organization has built a Citrix XenApp farm on a number of physical servers. Each physical server is running Citrix XenServer and has a single GRID K2 card, which contains two separate GPUs on the same board. Each physical server runs two virtual Citrix XenApp servers and each of those has access to a full GPU via the GPU pass through configuration.
This particular environment was designed, sized, and built based on – well, let’s say – intuition, experience, gut, and some known best practices. In other words, this organization had little initial data to predict how these servers would be utilized and whether the allocated resources would be enough to satisfy the users. This is not that unusual, as the tools to assess user needs are not nearly as widely known or used, and organizations often over-provision environments to be on the safe side. In the end, nobody complained, which is arguably the first success criterion for any IT organization, but nobody in IT knew the facts about the user experience either. This is where SysTrack was introduced. The environment is running the latest SysTrack version, 7.1, and we provided a very specific dashboard to the organization to show all of the important key metrics in a single pane of glass. Let’s have a look:
First, I select a date or time range of interest. In this example, it’s simply the last seven days. I see a list of all my XenApp servers and, for each one, the high water marks: max CPU utilization, max memory, max GPU utilization, max GPU memory, max number of GPU-enabled apps, max disk time, max disk queue length, max IOPS, max number of total XenApp sessions, and max number of active sessions.
One thing that becomes immediately apparent from the data is that I have two servers where the CPU spiked to 100%, but memory utilization never went beyond 30%. This means that I had more memory than I needed (better that than too little!), and I also see that no GPU went above 84% and GPU memory utilization never exceeded 50%. This basically means that the K2 GPU is adequate for the workload and has plenty of memory for the applications running. I do see disk times and disk queue lengths climbing, which may indicate that I could have used faster or higher-performing storage. The peak utilization was 15 users on the very first server in the list.
Again, these are just the high water marks. Let’s now have a look at the data over time for this particular server:
The following chart shows the maximum number of sessions (and active sessions) for each hour long interval:
We’ll look at CPU peak values and averages:
Again, it looks like these servers have plenty of memory to meet the demands of this environment.
Now, let’s look at the GPU:
One thing that becomes clear is that GPU utilization is very “bursty” – so much so that it doesn’t make much sense to report on average GPU consumption, because the averages will always be relatively low. Unlike in PC gaming, where GPU utilization is consistently high, engineering applications are relatively easy on the GPU until a user rotates a 3D model or kicks off a complex calculation.
Again, in this case we don’t see spikes going all the way up to 100% GPU utilization which leads to the conclusion that the K2 GRID GPU that is being passed through to this XenApp server is adequate for the use case.
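A tiny sketch (with made-up utilization samples) illustrates why the peak, not the average, is the right sizing metric for bursty loads like this:

```python
# Made-up GPU utilization samples (percent), one per collection interval,
# for a workload that idles until a user rotates a model or runs a job.
samples = [5, 4, 6, 5, 84, 80, 5, 6, 4, 5, 78, 5]

peak = max(samples)
average = sum(samples) / len(samples)

print(f"peak: {peak}%")            # what sizing decisions should use
print(f"average: {average:.1f}%")  # misleadingly low for bursty loads
```

The average here sits under 25% even though the GPU is nearly saturated during the bursts, which is exactly when the user experience is on the line.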
It may also be interesting to look at the number of applications that are calling on the GPU:
Other performance indicators include the storage subsystem, and we can look at % Disk Time, Disk Queue Length and IOPS:
The storage performance counters just show one peak IOPS value that is related to read activity. A drill down into the SysTrack Black Box Data Recorder reveals more:
We have focused on the time of the read IOPS peak (February 7 around 6:00 AM) and see that a particular system process is consuming a high number of IOPS. This correlates well with a high number of page reads per second and is overall consistent with the observations. In this particular case, it’s good that this system process runs at a time when the user density on the server is very low – a great validation of good IT practices.
In summary, this example shows a properly sized, well-managed Citrix XenApp environment with NVIDIA GRID-enabled HDX 3D Pro. The SysTrack toolset is the preferred management and monitoring platform for many XenApp and XenDesktop environments. Data gathered from existing physical desktops and workstations can be leveraged by SysTrack to properly size and plan XenApp and XenDesktop farms, helping with the transition and right-sizing the future environment.
Our SysTrack product collects a lot of invaluable data points across a potentially very large and diverse IT environment. Each individual system provides up to 10,000 data points every 15 seconds and the IT landscape can include everything from physical desktops and endpoints to a myriad of servers with various functions to the Citrix XenApp and XenDesktop environments.
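To put that collection rate in perspective, a quick back-of-the-envelope calculation (assuming the stated maximum of 10,000 data points per 15-second interval) shows the scale per system:

```python
# Back-of-the-envelope: how much data can one system generate per day?
points_per_interval = 10_000               # stated maximum per 15 seconds
intervals_per_day = (24 * 60 * 60) // 15   # 5,760 fifteen-second intervals

per_system_per_day = points_per_interval * intervals_per_day
print(f"{per_system_per_day:,} data points per system per day")  # 57,600,000
```

Multiply that by thousands of endpoints and servers, and the need for a purpose-built approach to organizing the data becomes obvious.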
Because of that broad diversity, it is sometimes important and desirable to boil down the data to the specific use case and area of interest for a key class of IT stakeholders.
The dashboard builder functionality that was first introduced in SysTrack 7.0 provides just such a facility. SysTrack users can create their own dashboards and even include data from a variety of non-SysTrack data sources (such as Citrix Director, but also other items like HR systems or software license management tools). In many cases, the construction of a meaningful dashboard requires good knowledge of the underlying data structures. Therefore, we at Lakeside develop dashboards and make them available at no additional cost to any of our customers through the dashboard builder functionality.
Today, I would like to introduce a series of very useful dashboards specifically targeted at the XenApp and XenDesktop administrators and stakeholders.
Here’s how they work:
First, as many of you know, SysTrack assigns a health score to each system or user session. It is expressed as a value between 0 and 100 and indicates the percent of time (measured in clock minutes) that the system is operating without resource limitations. Internally, we have a secret-sauce algorithm that weighs the various factors depending on their severity; e.g., an application crash may weigh more heavily than a temporary spike in CPU or disk utilization.
I took the health score as a starting point and provided a mechanism to group the relevant systems. For example, I would look at all the servers and images in each of the following categories:
Each group can contain one or more systems, depending on my environment. First, I pick a time frame in the dashboard and it shows the lowest observed health score in each group. That gives me an idea of which area I might want to look at more closely.
In this case, it looks like my infrastructure servers are doing mostly fine, but at least one of my XenApp hosts experienced a health score of 50, which I am trying to investigate now.
The expanded node shows the server with the potential issue and a double-click on the system name takes me into the next dashboard:
This view shows all the health alerts that indicate a reduced health score over the past 24 hours and I highlight the one with the score of 50. By doing so, the rest of the dashboard refreshes and shows only data that is time correlated to the time frame around that diminished health score – give or take a few minutes in each direction. The next pane in the dashboard now gives me a pretty good indication on where I need to focus my attention:
50% of the diminished health score was related to disk, 10% to event logs, and 40% to application faults.
The rest of the dashboard has a number of detailed panes that I can use to get a better idea of what’s going on here. Let’s start with the Disk:
Application Disk details show all the running applications along with their disk related performance indicators:
The one application towards the bottom sports almost 700 read IOPS, 29 write IOPS, and a large number of total IO operations and a large volume of data read from disk. Now, let’s have a look at the disk volume metrics:
The C: drive (which happens to be the only drive on this server) shows 32% disk time, indicating that the disk is not fast enough to deliver the IO load demanded by the applications.
Before we dig deeper into the disk topic, let’s have a quick look at application faults and the events:
It looks like this example shows a single application that is faulting, which in this particular case also shows up in the event log. This may or may not be related to the disk issues we investigated earlier, and we can now focus our investigation. The application memory list may show more relevant information:
I may also wish to look at additional panes on this dashboard that show virtualization impacts like CPU Ready Time or the effects of memory ballooning, network details, latency to user sessions or backend systems, and a slew of other metrics.
Alternatively, I can simply drill down into the black box data recorder by double clicking on the alarm that was shown at the very top. This brings me right to the specific server and the specific time frame:
From here I can see the general state of the XenApp server, the applications that were in focus at the selected point in time, and a slew of other data to help me in my IT efforts. In this particular case, the disk state, application focus, and application faults all point to the same application, which I can now investigate further and work with the vendor or the in-house development team to address.
SysTrack provides a wealth of data about the infrastructure and about what is happening within the XenApp or XenDesktop workload, as well as within the physical endpoint. It can sometimes be daunting to focus on the pieces of information that are helpful for my specific role in the IT organization. I hear from Citrix administrators over and over again that their primary objective is either to show that “it is not Citrix,” or to resolve the problem quickly and efficiently and take steps to prevent a recurrence.
The dashboards provide customers and partners the opportunity to create detailed visualizations that can be very specifically targeted at a job role, team, or function within the organization. SysTrack dashboards also integrate very easily with non-SysTrack data sources such as the Citrix Director database, ERP systems, HR systems, etc.
This specific pair of Citrix XenApp/XenDesktop health dashboards is available to all SysTrack customers and partners via the download function in the Dashboard Builder.
All health alarms described here can be disseminated to the right target audience in the organization via SNMP or email alerts in real time.
What ideas do you have? Please provide your feedback and comments!
Citrix XenApp and XenDesktop are available in two general licensing models:
Concurrent licensing. This model is intended for one connection to a virtual desktop or unlimited apps for any user and any device – a license is only consumed for the duration of an active session. If the session disconnects or is terminated, the license will be checked back into the pool.
User/Device licensing. Under this model, the license is either assigned to a unique user or shared device. If assigned to a user, it allows that single user unlimited connections from unlimited devices. If assigned to a device, it allows unlimited users, unlimited connections from that single device.
The user/device license is typically half the price of a concurrent license and can be an attractive model for organizations that follow a “traditional” work schedule (as opposed to shift workers in manufacturing or healthcare, where there may be a large number of individuals but only a fraction of them concurrently using the XenApp or XenDesktop environment).
Internally, and this is the topic of this article, if Citrix XenApp / XenDesktop is configured for the user/device license model, the Citrix license server has to decide whether to assign the license to a user OR to a device. These are two different things, although customers purchase a user/device (as in user SLASH device) license. So, how does this work?
Assume I, florianb, log into my organization’s environment and launch a session. At that time, a user license is consumed. I can run as many sessions from as many XenDesktop sites that share the license server as I like and use as many devices as I care to – it’s still one user license.
Assume that one of the devices I use is a shared thin client in the office. An hour after I leave, my co-worker Alex uses the same client to access his virtual desktop. Citrix internally then marks that particular thin client as a shared device and it consumes a device license. Theoretically, I could have 100 employees each use the same thin client and only consume a single user/device license.
It becomes apparent that the recognition of shared devices is an automated way for organizations to minimize the number of licenses they need.
Most of us, however, have a mix of environments, so Citrix is calculating the total number of user/device licenses as follows:
# User/Device licenses = (# of total users) + (# of shared devices) – (# of users who only access from a shared device)
Here’s a simple example:
User license consumed?
Paul: No, because Paul only uses a shared device (Client01, which is also used by Florian, Alex, and Amanda)
Florian: Yes, because he uses one or more non-shared devices
Alex: No, because Alex only uses a shared device
Amanda: Yes, because Amanda uses a non-shared device (her iPad)

Device license consumed?
Client01 (used by Paul, Florian, Alex, and Amanda): Yes, because Client01 is used by more than one user
Florian’s three other devices: No, because Florian is consuming a user license, so he can use an unlimited number of devices
Amanda’s iPad: No, because Amanda is consuming a user license, so she can use an unlimited number of devices
So, in this example, we would need a total of 3 user/device licenses, even though we have 4 individual users and 6 individual devices in the mix. Given that the price point for a concurrent license is 2x that of a user/device license, this small sample organization would absolutely benefit from user/device licensing, as it might need as many as 4 concurrent licenses.
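To make the formula concrete, here is a minimal sketch that derives the license count from raw (user, device) session records; the non-shared device names (Laptop1, iPad1, etc.) are hypothetical stand-ins for the devices in the example:

```python
# (user, device) pairs observed in the example; device names other than
# Client01 are hypothetical placeholders.
sessions = {
    ("Paul", "Client01"), ("Florian", "Client01"),
    ("Alex", "Client01"), ("Amanda", "Client01"),
    ("Florian", "Laptop1"), ("Florian", "Laptop2"), ("Florian", "Tablet1"),
    ("Amanda", "iPad1"),
}

users = {u for u, _ in sessions}

# A device counts as "shared" once more than one user has used it.
device_users = {}
for u, d in sessions:
    device_users.setdefault(d, set()).add(u)
shared = {d for d, us in device_users.items() if len(us) > 1}

# Users who only ever connect from shared devices.
shared_only = {
    u for u in users
    if all(d in shared for (u2, d) in sessions if u2 == u)
}

# The formula from the article:
# licenses = total users + shared devices - users who only use shared devices
licenses = len(users) + len(shared) - len(shared_only)
print(licenses)  # 4 + 1 - 2 = 3
```

Running this reproduces the result above: Florian and Amanda consume user licenses, Client01 consumes a device license, and Paul and Alex ride along on the shared device.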
The Citrix license optimization definitely works in the customer’s favor, and license allocation happens on a 90-day schedule for user/device licenses (i.e., the license of a user who is no longer in the organization, or of a device that is no longer in use, is automatically released after 90 days, or can be released immediately with a license management tool under the terms of the Citrix EULA).
However, it can be a little difficult to predict what an organization might need. Lakeside SysTrack is a great tool to look at all sessions (say in an existing XenApp concurrent environment) to determine if a trade-up to user/device licensing would make sense. To illustrate the point, I’ve mocked up a quick and easy dashboard in SysTrack’s dashboard builder to look at one of the many environments we’re running internally.
In this particular example, our peak user concurrency was 11 and we would have needed 29 user/device licenses. We’re better off staying with concurrent licensing in this example.
Equally, if a traditional desktop environment is being assessed, SysTrack can make the choice between concurrent and user/device licensing very easy.
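As a rough sketch of that trade-off (the prices below are normalized units, assuming only what is stated above: a user/device license costs about half of a concurrent one):

```python
# Normalized prices under the stated 2:1 assumption.
concurrent_price = 2.0
user_device_price = 1.0

peak_concurrency = 11     # peak simultaneous sessions observed in the data
user_device_needed = 29   # result of the user/device formula for this site

concurrent_cost = peak_concurrency * concurrent_price      # 22.0
user_device_cost = user_device_needed * user_device_price  # 29.0

cheaper = "concurrent" if concurrent_cost < user_device_cost else "user/device"
print(cheaper)  # concurrent
```

With many distinct users and devices but low concurrency, the concurrent model wins here; flip the numbers toward a traditional 9-to-5 population and the comparison can easily go the other way.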
While Citrix has reviewed this blog for accuracy at the time of this writing, Lakeside Software cannot make any representations on behalf of Citrix. Please always check with your authorized reseller, Citrix account manager and on citrix.com for the latest updates in product and licensing functionality.
My good friend and former colleague Chris Fleck (@chrisfleck) is a well-known enthusiast for mobile work styles and the latest gadgets. His recent blog post asks his readers why they are not using their iPad or other tablet for all of their work. His embedded poll lists all the right reasons, but the first thought that came to my mind was “Why would anybody want to do that?” There are, of course, iPad enthusiasts who would like to use their device for absolutely everything and show that it’s possible to do so, but I personally don’t think that this is realistic for most of us.
I believe that many of us are most productive in our jobs when we have the best resources available for the job at hand. If I am mostly attending meetings, taking notes, or firing off short emails, I will be in heaven with just a tablet. In my personal case, I have a pretty powerful workstation with dual monitors, plenty of memory and CPU and fast SSDs. Most of the time I don’t need all of that power, but it allows me to run complex data queries, run separate VMs with server images, etc. A big factor in my productivity is the full keyboard/mouse and the large screen real estate. So, that’s my ideal rig. But, it’s not mobile. So, the more mobile I need to be, the more capability I will need to forfeit. The following chart, which is neither scientific nor necessarily representative of your environment, may illustrate the point a little better:
I think that each task or application falls somewhere on my arbitrary capability scale. For example, writing a blog or creating a presentation would be in the 30-40 range, meaning that a laptop is good enough for the job, but the task becomes tedious on a tablet and close to impossible on a smart phone, even if I have the ability to access a virtual desktop or application from it. In comparison, clicking on an approval link sent to you in an email by your ERP system requires very little, and the smart phone is the preferred way of handling that task for many of us today.
Why, then, do we use these mobile devices if productivity can drop so much? It’s obvious: because the alternative is not being able to do anything at all while on the road, on an airplane, in the car, at the bus stop, at a bar, etc. There is clearly a “good enough” factor at play, and people accept inferior user experiences if they are offset by another advantage such as mobility. A great book by Clayton Christensen describes this in great detail and with many examples unrelated to the IT industry. (http://www.claytonchristensen.com/books/the-innovators-dilemma/)
A different way to look at the same problem is to determine how much capability I may need on average and what my peak demand may be and then plot that against what I have available. Again, the scaling here is somewhat arbitrary.
In the example above, I have plenty of extra resources available on the desktop, and it appears that this particular example of thin clients with VDI provides just about enough of what I need. And just in case you wonder: I consider the thin client/VDI solution to be somewhat mobile because it allows me to roam around in the office and between offices and still connect to the same desktop and applications. But what if it didn’t live up to the capability I need? What if my IT department were to provide me with something like this:
I could do most of my work, but I would have to spend plenty of time waiting for resources on the machine, be limited to only a few concurrent applications, and even simple tasks might require significantly more time and patience than necessary. For some applications, the user experience might be poor enough that they would not be usable at all. I would probably get frustrated sooner rather than later and seek alternative ways to boost my productivity – maybe by bringing in my own laptop or PC. To paraphrase my initial question, “Why would anybody (IT?) do that to me?” There are a number of reasons, but the primary one is that today’s IT departments are flying blind when it comes to end user computing. It’s not IT’s fault: traditionally, there have been very few tools and processes available to determine the end user experience or gather the data that would establish a baseline ahead of any BYO or virtualization initiative. It’s been very difficult as a practical matter to do comprehensive surveys of the end user community, and to make things worse, things are constantly changing, so a static IT has little chance to keep up.
That’s where SysTrack comes in. It’s the only true Big Data for End User Computing toolset to let IT get real insights into the user experience, design the technology behind transformational initiatives, and stay on top of changing requirements.
The following is just one of many data visualizers that clearly show – in hours per week – how much and in what areas the user experience is impacted:
The system at the bottom of the graph has more than 10 impacted hours per week – mostly due to CPU constraints (blue) – while the one right above it experiences mostly disk/IO/storage limitations (red). You can see all the different limiting factors in the chart legend, and SysTrack allows the administrator to drill deeper into the specific system, session, or user and get down to the application and process level along with their resource consumption.
This is just one small example of how SysTrack can turn the feeling of flying blind into 100% visibility.
Lakeside will be at VMworld 2014 San Francisco, August 23-28 in Booth 821. Stop on by!
100% of your users’ experience
50% of your IT budget
0% visibility
That’s the problem. Most organizations spend about half of their IT budget and effort on delivering quality experience to 100% of their people, but they have 0% visibility into their user experience. That’s not a recipe for success.
You need a way to make your IT decisions in an evidence-based way. Turn 0% visibility into 100% knowledge.
Most organizations spend about half of their IT budget and effort on end user computing (EUC): on endpoint hardware, applications, support, and more. It’s a huge cost, and literally all of their users are touched by the experience they get from that investment. Unlike some more intangible expenses, there’s a direct effect on productivity, yet a recent survey of Fortune 500 enterprises found that nearly all companies are flying blind. They don’t know what their users do with their computers, how their users use the tools provided to them by IT, or what kind of experience their users receive every day.
How are you supposed to figure out what’s really going on?
One thing is certain: to get visibility into your end user computing investment, you need a lot of information. You need to know what people have, what they actually use, how their applications and devices perform, how they’re configured, and more. That’s a monumental amount of data, which is why it’s essential to have a Big Data for End User Computing solution. When you have complete visibility from the endpoint to the datacenter, you can deliver a better user experience more cost effectively, and this is especially true if you’re using VDI or hosted applications. All of this depends on having the information you need to make great decisions instead of guessing.
THE SOLUTION IS EVIDENCE-BASED DECISION-MAKING
This opens the door to something we call “evidence-based decision-making”. Most forward-looking enterprises have started the shift to an evidence-based approach because they’re finding that there are many myths mixed into their current “best practices.” Sometimes they’ve been doing the same thing for a long time, and nobody knows why. They are shifting to large, fact-based collections of knowledge as the basis for their decisions, eliminating the guesswork and the high cost of making the wrong calls.
This is exactly what Lakeside’s SysTrack software provides. In a nutshell, it builds a data repository that describes what’s happening in your EUC environment. It observes and records at an extraordinary level of detail to capture all of the granular data necessary to get real understanding. It’s architected from the ground up for low overhead and extreme scalability, meaning it’s a unique and patented solution suitable to run on every endpoint all the time. That means that your knowledge can finally be complete. It also means that when something happens, you’ll have a record of it and what caused it when you analyze the situation later.
But SysTrack is more than data: it is information. Turning raw data into useful information is not trivial, and when that process fails the data isn’t very helpful by itself. SysTrack avoids this by organizing, efficiently storing, and preparing information for access as it is collected. That’s important, because it’s almost impossible to build the context for data after the fact. If you don’t do that organization up front, you’ll be looking for a needle in a stack of needles. This overwhelming amount of post-event analysis is one of the biggest problems with solutions that depend on mass centralization of raw data and leave you to put the pieces together. SysTrack isn’t a search engine; that kind of thing is best left to folks like Google. Instead, SysTrack is designed to process and organize environmental data to make it immediately accessible. So, after collection, how do we start to get real value from this complete set of information?
The next step is applying analytics – doing something with the information. SysTrack supports both built-in and third-party analytics. The analytics for many of today’s business problems – virtualization planning, software rationalization, hardware right-sizing, root cause analysis, and lots more – are in the built-in variety: they come with the SysTrack product. But EUC is a hot market, and there are lots of creative companies who would like to do more. SysTrack supplies easy-to-use APIs that industry partners and our customers build on; they can concentrate on their value-add, the analytics, and rely on SysTrack to provide well-organized data. So, while some great analytics come from Lakeside, we realize that there are smart people out there with great ideas for analytics and specific business cases we haven’t thought of yet. We want to provide the foundation to power those next-generation solutions. Working with the information in SysTrack helps deliver answers to some of the most complex questions anyone may ask about an environment.
The third step is visualization. With step one, the information collection, and step two, the analytics, in place, you now need to get the results into the hands of the people who need them. This isn’t easy, because different information consumers have different needs. Some need dashboards, some need reports, and some need detailed, granular data. To flexibly meet all of these needs, SysTrack includes a wide variety of visualization tools: Excel-style “pure data” reports, full-scale documents that look like manuals, out-of-the-box and user-configurable dashboards, and programmatic access through APIs and PowerShell. The key point is that you can drive visualization in whatever way is convenient for you. Most organizations use more than one.
Maybe it will be helpful to be a little more specific, and some examples may help to make this more concrete. Let’s start with a look at the lifecycle for a virtualized desktop.
The very first thing that everybody needs to do in order to be successful is a proper assessment. In fact, it doesn’t matter what project you’re looking at: physical to virtual, Windows 7 and 8 migrations, virtualization adoption, whatever. You first need to understand what your people are using today and what resources they need.
IT typically supports a list of applications and a handful of PC and laptop models. But what applications do your users need or want? What apps are purchased and self-installed by certain departments or teams? How resource-hungry are those apps? Answers to these questions matter, because missing just one or two critical items here can break your project before you rack the first server or install the first image.
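As a rough illustration of how usage data drives that decision, here is a sketch that buckets apps by what share of assessed users actually ran them. The inventory, thresholds, and function names are invented for illustration; they are not SysTrack output:

```python
# Hypothetical per-app counts: how many of 500 assessed users ran each app.
inventory = {
    "Office Suite":    480,
    "CAD Package":      35,
    "Dept. BI Tool":   120,
    "Legacy Recorder":   4,
}

def classify_apps(usage, total_users, core_pct=0.8, niche_pct=0.05):
    """Core apps belong in the base image; niche apps are candidates for
    per-user delivery; everything in between suggests departmental layers."""
    core, dept, niche = [], [], []
    for app, users in usage.items():
        share = users / total_users
        if share >= core_pct:
            core.append(app)
        elif share <= niche_pct:
            niche.append(app)
        else:
            dept.append(app)
    return core, dept, niche

core, dept, niche = classify_apps(inventory, 500)
# "Office Suite" lands in the base image; "Legacy Recorder" is a niche app.
```

The point is not the thresholds themselves but that the split is driven by observed usage rather than by a survey or a guess.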
So, assessments are important. You’d expect a good assessment methodology to have a high degree of automation, to be transparent to users, and to be accurate. Lakeside SysTrack does all of that, and a number of highly successful system integrators and consultants have long standardized on SysTrack for all of their IT project assessments.
Next is the actual Transformation. There’s always transformation, because if everything were static, you probably wouldn’t have much of a project. A serious complication is that things keep changing while you are doing the planning. You can’t stop the user and app churn; that is part of the business. SysTrack can help you deal with that.
One of the most crucial parts of the transformation is the image design. IT has to make sure to get all of the applications users depend on without bloating the images, by striking that fine balance between providing what’s really needed and not much more. You also want to see if your users are ready to make the switch once you finalize a new solution. You need to determine if all the applications are actually ready to go and identify the best order in which to migrate your users to the new platform. Lastly, you will want to have direct visibility into how things are going and adapt your project if your early adopters have a poor user experience.
With assessment and transformation completed, you still have to operate the constantly changing environment. Having a user explain an issue to the help desk is time-consuming, subjective, and problematic; frequently users only see the symptom (e.g. “my computer’s slow”) and have no understanding of or visibility into the underlying problem. Wouldn’t it be great to have a black box that records what has been going on, so that you can have a look without having to try to replicate the problem? How about looking at system dependencies? Did you know your proxy server was 200 milliseconds away? What about raising some alarms when something goes wrong and proactively alerting your IT team about it? How do you know that the user isn’t just overly sensitive? Wouldn’t it make sense if you could see how their system performance stacks up against everybody else in their peer group, so you can determine whether there is an isolated problem or a more widespread issue? That’s what IT operations is all about.
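The peer-group idea boils down to simple statistics. Here’s a minimal sketch, with invented system names and numbers, that flags systems whose metric sits far from the peer-group mean:

```python
import statistics

def peer_outliers(metrics, threshold=2.0):
    """Return {system: z-score} for systems whose metric deviates from the
    peer-group mean by more than `threshold` standard deviations."""
    mean = statistics.mean(metrics.values())
    stdev = statistics.stdev(metrics.values())
    return {name: round((value - mean) / stdev, 2)
            for name, value in metrics.items()
            if abs(value - mean) > threshold * stdev}

# Hypothetical app-launch times (seconds) across a peer group.
group = {"PC-01": 2.1, "PC-02": 2.3, "PC-03": 2.0, "PC-04": 2.2,
         "PC-05": 2.4, "PC-06": 2.1, "PC-07": 9.5}

outliers = peer_outliers(group)
# Only PC-07 stands out from its peers – an isolated problem, not a systemic one.
```

One slow system among healthy peers points at that machine; a whole peer group drifting together points at shared infrastructure.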
Finally, how about measuring what’s going on? Not only purely on the IT side, but measuring business metrics so that you can really have an impact on your people and your business. The new SysTrack dashboard builder not only visualizes the SysTrack data, but actually pulls data from a lot of other sources and silos as well – that’s real data integration you can actually use. How about correlating help desk ticket volumes to system health? How about tracking application usage against owned application licenses? How about correlating user experience with productivity of your sales force or manufacturing flow? SysTrack does all that.
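Correlating two of those series is straightforward once both live in the same repository. A minimal sketch with made-up weekly numbers, using a plain Pearson correlation rather than SysTrack’s actual analytics:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly numbers: average system health score (0-100)
# and help desk tickets opened that week.
health  = [92, 88, 75, 70, 85, 60]
tickets = [11, 14, 25, 31, 16, 40]

r = pearson(health, tickets)
# Strongly negative: as system health drops, ticket volume climbs.
```

A correlation like this doesn’t prove causation, but it tells you which weeks are worth drilling into.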
All right, so let’s recap – you need to have a way to make your IT decisions in an evidence-based way. To turn 0% visibility into 100% knowledge. The evidence comes from SysTrack as a Big Data for End User Computing solution that provides everything you need through the entire system lifecycle – Assess, Transform, Operate, and Measure.
SysTrack MarketPlace is a dynamic library of customized, vendor-specific reports that IT administrators run anytime to evaluate, measure and tune their IT solutions. SysTrack MarketPlace leverages vendor-specific algorithms and methodologies to deliver quantitative, actionable data on infrastructure performance and provide deep insight on how to optimize solution deployment.
Each customized report provides detailed solution performance metrics, enabling customers to assess, optimize, measure and validate their solution implementation. Customers can utilize this data to tune a provider’s solution(s) to extract maximum benefit and value.
HP: The HP Thin Client Device Analysis report recommends which HP Thin Client is best based on detailed environmental and user information such as how many users there are, what kind of devices and software they’re currently using, how many of them are using graphically intensive applications, and how mobile they are.
VMware: The VMware Horizon™ 6 report uses SysTrack’s dynamic collection of end-user and system characteristics to develop a plan for how VMware Horizon 6 can be implemented more fully in an environment, including insight into image management with VMware Horizon Mirage™, workflow and security optimization with Workspace, and desktop delivery with VMware Horizon View™ and VMware Horizon DaaS®.
MarketPlace reports are now available for products from AppSense, Atlantis Computing, Cisco, Citrix, Dell Wyse, EMC, HP, IBM, Login VSI, NComputing, Nexenta, Nutanix, NVIDIA, RES Software, Samsung, Teradici, Trend Micro, Unidesk, VMware, World Wide Technology and X-IO Technologies. Learn more about these integrations.
Back in March, Ben Murphy and I had the pleasure of speaking at the NVIDIA GPU Tech Conference in San Jose about best practices for implementing virtual GPU loads with virtual desktops and apps. Since then, I have spoken with numerous customers, partners, and analysts about the power of NVIDIA GRID and Lakeside SysTrack.
At Lakeside, we incorporated a number of NVIDIA APIs into the SysTrack product to allow us to monitor physical (and virtual) GPUs. By capturing granular utilization data, we allow customers and partners to model observed GPU usage into appropriate vGPU profiles in a virtual world. Many customers who have adopted virtual desktops and applications have been reluctant to do so with graphically intensive workloads. The introduction of NVIDIA GRID and the K1 and K2 boards is changing that rationale, though: corporations can now easily virtualize even the most intensive graphical use cases – and therefore reap the benefits of protecting their intellectual property in the datacenter – while still presenting a full or fractional GPU to each user. (If you have any doubts about what’s possible, try it for yourself at http://nvidia.com/tryGRID – it’ll blow you away!)
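To make the profile-mapping idea concrete, here’s a simplified sketch: pick the smallest GRID K2 vGPU profile whose framebuffer covers a user’s observed peak, plus some headroom. The profile names and framebuffer sizes match NVIDIA’s published K2 lineup, but the selection logic and headroom factor are purely illustrative, not how SysTrack actually models this:

```python
# GRID K2 vGPU profiles and their framebuffer sizes in MB.
K2_PROFILES = [
    ("K200",   256),
    ("K220Q",  512),
    ("K240Q", 1024),
    ("K260Q", 2048),
    ("K280Q", 4096),
]

def recommend_profile(peak_fb_mb, headroom=1.25):
    """Pick the smallest profile whose framebuffer covers the observed peak
    usage plus headroom; None means the workload wants a full passthrough GPU."""
    needed = peak_fb_mb * headroom
    for profile, fb in K2_PROFILES:
        if fb >= needed:
            return profile
    return None

profile = recommend_profile(700)   # needs 875 MB of framebuffer -> "K240Q"
```

A real planner would also weigh 3D engine utilization, concurrency, and host density; framebuffer fit is just the easiest dimension to show.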
Of course, as we’re implementing this technology in our product, I had to test it myself. With a fresh (and slightly more powerful) GeForce card now in my arsenal (purely for professional testing and evaluation purposes, of course – thank you, NVIDIA!), I ventured out to put it into my system (which also serves as my primary work computer in my home office) and give it a spin.
Not so fast – I realized that the card’s physical profile would not fit into my 7-year-old mid-ATX tower case, so I had to order an appropriate power supply and a full ATX case. Here’s a picture of the work in progress:
After two hours of open-heart surgery and tons of parts on my living room floor, the deed was done and the GeForce card was safely installed in the bigger case along with the rest of the stuff that makes up a computer.
Next, I ran a quick test, got some food, and then took advantage of being home alone while my wife was visiting her former college roommate a couple of hundred miles away. The graph below shows the key metrics collected by SysTrack over the rest of the night. It didn’t take long to find a suitable app that would tax the GPU close to its limit 🙂
All the gaming fun aside, there is a professional angle to this:
When deciding whether or not to virtualize a graphically intensive use case, IT architects must decide what virtual desktop spec and what virtual GPU spec to provide to each user. SysTrack is the only solution in the industry today that collects and analyzes the pertinent data from existing physical desktop systems:
All applications running on the system along with their resource consumption across compute, memory, storage, graphical protocols etc.
Time correlated information across a large user base to determine concurrency, potential boot storms, and peak utilization
Correlation of existing GPU utilization relative to capacity, and mapping of that usage to the vGPU profiles available with NVIDIA GRID
Industry-proven and award winning virtual machine planner technology to design VMware, Citrix, and Microsoft virtualization scenarios.
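The concurrency analysis mentioned above can be illustrated with a classic sweep over session intervals. The timestamps are invented, and SysTrack’s own implementation is, of course, far more involved:

```python
def peak_concurrency(sessions):
    """Given (start, end) timestamps for user sessions, return the peak
    number of simultaneously active sessions via a simple sweep line."""
    events = []
    for start, end in sessions:
        events.append((start, 1))   # session begins
        events.append((end, -1))    # session ends
    # Sort by time; process ends before starts at the same instant.
    events.sort(key=lambda e: (e[0], e[1]))
    peak = active = 0
    for _, delta in events:
        active += delta
        peak = max(peak, active)
    return peak

# Hypothetical logon/logoff times (hours since midnight).
sessions = [(8.0, 12.0), (8.5, 17.0), (9.0, 11.0), (13.0, 18.0)]
peak_concurrency(sessions)  # -> 3 concurrent sessions at 9:00
```

Peak concurrency, rather than the total user count, is what actually sizes the hosts and flags potential boot storms.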
Assessing and designing a virtual desktop solution for professional graphical use cases is just the beginning of the fun, though. SysTrack provides all the in-depth data collection, tools, visualizers, dashboards, etc. to equip organizations with ongoing monitoring and management that enables them to deliver the best end-user experience.
Curious? Contact us or send us a note with your questions and comments!