
Explaining and Expanding the SLA Conversation

Service Level Agreements (SLAs) come in many forms in everyday life, each promising a basic level of acceptable experience. Typically, an SLA has some measurable component: a metric or performance indicator.

Take, for example, the minimum speed limit for interstates. Usually, the sign reads “Minimum Speed 45 mph”. I always thought the signs existed to keep drivers who mistook the posted 70 mph maximum for a minimum from running over drivers who mistook the 45 mph minimum for a maximum.

It turns out the “minimum speed” concept is enforced in some U.S. states to prevent anyone from impeding traffic flow. For those who recall the very old “Beverly Hillbillies” TV show, I’ve often wondered if Granny sitting in a rocking chair atop a pile of junk in an open-bed truck driving across the country might be a good example of “impeding the flow of traffic” at any speed. Although, from the looks of the old truck, it probably couldn’t manage the 45 mph minimum either.

In the world of IT, there are all sorts of things that can “impede the flow” of data transfer, data processing, and/or data storage. While there’s nothing as obvious as Granny atop an old truck, there are frequently Key Performance Indicators (KPIs) that could indicate when things aren’t going according to plan.

Historically, IT SLAs have focused on a Reliability, Availability, and Serviceability (RAS) model. While not directly related to specific events/obstacles to optimum IT performance, RAS has become the norm:

  • Reliability – This “thing of interest” shouldn’t break, but it will. Let’s put a number on it.
  • Availability – This “thing of interest” needs to be available for use all the time. That’s not really practical. Let’s put a number on it.
  • Serviceability – When this “thing of interest” breaks or is not available, it must be put back into service instantly. In the real world, that’s not going to happen. Let’s put a number on it.

In the IT industry, there exist many creative variations on the basic theme described above, but RAS is at the heart of this thing called SLA performance. The problem with this approach from an end-user standpoint is that it misses the intent of the SLA, which is to ensure the productivity/usefulness of “the thing of interest”.  In the case of a desktop, that means ensuring that the desktop is performing properly to support the needs of the end user. Thus, the end user’s productivity/usefulness is optimized if the desktop is reliable, available, and serviceable… but is it really?

Consider the following commonplace scenarios:

  • The desktop is available 100% of the time, but 50% of the time it doesn’t meet the needs of the end user, e.g. it has insufficient memory to run an Excel spreadsheet with huge, memory-eating macros.
  • A critical application keeps crashing, but every call to the service desk results in, “Is it doing it now?” After the inevitable “No” is heard, the service desk responds, “Please call back when the application is failing.” This kind of behavior frequently results in end users becoming discouraged and simply continuing to use the faulty application, restarting it whenever it fails. It also creates a false sense of “reliability”: the user simply quits calling the service desk, so fewer incidents are recorded.
  • A system’s performance slows to a crawl for no apparent reason at various times of the day. When the caller gets through to the service desk, the system may or may not be behaving poorly. Regardless, the caller can only report, “My system has been running slowly.” The service desk may ask, “Is it doing it now?” If the answer is “Yes”, they may be able to log into the system and have a look around using basic tools, only to find none of the system KPIs are being challenged (CPU, memory, IOPS, and storage all look fine). In this scenario, the underlying problem may have nothing to do with the desktop or application. Let’s assume it to be network latency to the user’s home drive, and further complicate it by making the high latency prevalent only during peak network traffic periods. Clearly, this is outside the scope of the traditional RAS approach to SLA management. The result: again, a caller who probably learns to simply tolerate poor performance and do the best they can with a sub-optimized workplace experience.

So, how does one improve on the traditional RAS approach to SLA management? Why not monitor the metrics known to be strong indicators of a healthy (or not-so-healthy) workstation? In this SLA world, objective, observable system performance metrics are the basis for measuring a system’s health. For example, if CPU capacity is insufficient, track that metric and determine to what extent it is impacting the end user’s productivity. Then do the same for multiple KPIs. The result is a very meaningful number that indicates how much of a user’s time is encumbered by poor system performance.
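To make that concrete, here’s a minimal sketch in Python of how such a number could be derived from periodic KPI samples. The thresholds and sample data are hypothetical, purely for illustration:

```python
# Hypothetical illustration: estimate the fraction of a user's time
# impacted by poor performance from periodic KPI samples.

# Each sample is a snapshot of system KPIs taken at a regular interval.
samples = [
    {"cpu_pct": 35, "mem_pct": 60, "disk_latency_ms": 8},
    {"cpu_pct": 97, "mem_pct": 91, "disk_latency_ms": 45},
    {"cpu_pct": 40, "mem_pct": 55, "disk_latency_ms": 10},
    {"cpu_pct": 92, "mem_pct": 88, "disk_latency_ms": 60},
]

# Hypothetical "healthy" thresholds; real values would come from a baseline.
thresholds = {"cpu_pct": 90, "mem_pct": 85, "disk_latency_ms": 30}

def is_impacted(sample):
    """A sample counts as impacted if any KPI exceeds its threshold."""
    return any(sample[k] > limit for k, limit in thresholds.items())

impacted = sum(is_impacted(s) for s in samples)
print(f"Impacted time: {impacted / len(samples):.0%}")  # -> 50%
```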

In the case of SLAs based on observable system KPIs, once a baseline is established, variations from the baseline are easily observable. Simply counting system outages and breakage doesn’t get to the heart of what an IT department wants to achieve. Namely, we all want the end user to have an excellent workspace experience, unencumbered by “impeded flow” of any type. The ultimate outcome of this proposed KPI-based (versus RAS-based) SLA approach will be more productive end users.
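The baseline idea itself is simple to sketch (again with made-up numbers): establish a baseline from a healthy period, then flag readings that deviate from it.

```python
from statistics import mean, stdev

# Hypothetical daily KPI readings, e.g. average app load time in seconds.
history = [2.1, 2.3, 2.0, 2.2, 2.4, 2.1, 2.3]   # baseline period
today = 3.4

baseline = mean(history)
spread = stdev(history)

# Flag today's reading if it sits more than 3 standard deviations
# above the established baseline.
if today > baseline + 3 * spread:
    print(f"Deviation: {today}s vs baseline {baseline:.2f}s")
```

In future blogs, I will expand on how various industries are putting a KPI-based SLA governance model into practice.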

Learn More About Maximizing User Productivity

Foundations of Success: Digital Experience Monitoring

We’ve all seen the rapid evolution of the workplace: the introduction of personal consumer devices, the massive explosion of SaaS providers, and the gradual blurring of the lines of responsibility for IT have introduced new complications to a role that once had a very clearly defined purview. In a previous post, we discussed quantification of user experience as a key metric for success in IT, and, in turn, we introduced a key piece of workspace analytics: Digital Experience Monitoring (DEM). This raises the question, though: what exactly is DEM about?

At its very heart, DEM is a method of understanding end users’ computing experience and how well IT is enabling them to be productive. This begins with establishing a user experience score as an underlying KPI for IT. With this score, it’s possible to proactively spot degradation as it occurs, and, perhaps even more importantly, it gives IT a method to quantifiably track its impact on the business. With this mechanism of accountability in place, the results of changes and new strategies can be trended and monitored as a benchmark for success.

That measurable success criterion is then a baseline for comparison that threads its way through every aspect of DEM. It also provides a more informed basis for key analytical components that stem from observation of real user behavior, like continuous segmentation of users into personas. Starting with an analysis of how well the current computing environment meets the needs of users opens the door to exploring each aspect of their usage: application behaviors, mobility requirements, system resource consumption, and so on. From there, users can be assigned to Gartner-defined workstyles and roles, creating a mapping of what behaviors can be expected for certain types of users. This leads to more data-driven procurement practices, easier budget rationalization, and overall a more successful and satisfied user base.
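As an illustration of that segmentation step, here’s a sketch that clusters users on a few observed behavior metrics. The metrics, values, and groupings are invented for the example and aren’t SysTrack’s actual model:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user behavior metrics gathered from endpoints:
# [avg CPU %, hours of productivity-app use per day, distinct networks/week]
usage = np.array([
    [20, 6.5, 1],   # looks like an office-bound task worker
    [75, 7.0, 1],   # looks like a power user / developer
    [30, 5.0, 8],   # looks like a mobile / field worker
    [25, 6.0, 1],
    [80, 7.5, 2],
    [35, 4.5, 9],
])

# Cluster users into behavioral groups; the groups can then be mapped
# to named workstyles and used to set provisioning expectations.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
personas = kmeans.fit_predict(usage)
print(personas)  # e.g. [0 1 2 0 1 2] -- three distinct usage patterns
```

However the clustering is done, the point is the same: the groups emerge from observed behavior rather than org-chart assumptions.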

Pie chart showing the number of critical applications segmented by persona

Taking an active example from a sample analysis, there are only a handful of critical applications per persona. Those applications represent what users spend most of their productive time working on, and they therefore have a much larger business impact. Discovery and planning around these critical applications can also dictate how best to provision resources for net new employees with a similar job function. This prioritization of business-critical applications based on usage makes proactive management much more clear-cut. Automated analysis and resolution of problems can focus on the systems where users are most active, which has the maximum overall impact on improving user experience. In fact, that user experience can then be trended over time to show the real business impact of IT problem solving:

Chart showing the user experience trend for an enterprise

Various voices within Lakeside will go through pieces of workspace analytics over the coming months, and we’ll be starting with a more in-depth discussion of DEM. This will touch on several aspects of monitoring and managing the Digital Experience of a user, including the definition of Personas, management of SLAs and IT service quality measurements, and budget rationalization. Throughout, we’ll be exploring the role of IT as an enabler of business productivity, and how the concept of a single user experience score can provide an organization a level of insight into their business-critical resources that wouldn’t otherwise be possible.

Learn more about End User Analytics

The Quantified User: Redefining How to Measure IT Success

In 1989, the blockbuster hit Back to the Future Part II took the world by storm with wild technology predictions. Now, we know the film might have missed the mark on flying cars and power clothes, but many of its predictions were more accurate than expected. Case in point: wearables.

From virtual reality headsets to fitness trackers, wearable technology powers this notion of the “quantified self,” where our personal lives can be digitized and analyzed with the end goal of being “improved.”

But when it comes to our professional lives, can we similarly analyze and improve them in order to enable productivity? Yes! Just as we are living in the era of the “quantified self,” the enterprise is now entering the era of the “quantified user.”

But don’t just take my word for it. Here is how and, more importantly, why you should care…

What is the “quantified user?”

Think about the workplace today: it is one where people, processes, and technologies are overwhelmingly digital and largely managed by third parties (e.g., Office 365 and other business-critical SaaS offerings). And this is a great thing!

But this also presents an unforeseen challenge for IT: how do we support a workforce that is largely digital and whose technology resources may or may not be managed by us?

The key to supporting today’s workforce lies in the concept of the “quantified user”: just as we can quantify the number of steps we take per day to help improve our personal health, the “quantified user” is one whose end user experience within the digital workspace is quantified and given a score in order to enable productivity.

Learn more about End User Analytics

At a glance, you might think there is only a loose relationship between a user’s experience and their productivity. However, over the past 20 years, workspace analytics provider Lakeside Software has found that the better an employee’s user experience score, the lower the barriers to productivity within the digital workspace. How? Via a healthier, more available desktop experience.

End user experience score: the most important metric in IT.

A bold statement, I know, but the end user experience score is the most important metric in IT because it accurately and objectively measures how well all the technologies and actions taken by IT are enabling business productivity, which is the original purpose of any IT team.

The end user experience score is normalized and not subject to adjustment by IT or IT vendors, and it serves two purposes: informing which factors are impacting productivity, and improving visibility into the health and performance of technology investments.

So how do we calculate the end user experience score?

Calculating an employee’s end user experience score means analyzing and managing all the factors that could impact their productivity, using a data-collection agent right where they conduct most, if not all, of their work: the endpoint.

Why the endpoint? Because as critical IT functions are being outsourced and managed by third parties, reduced visibility into network transactions, data center transactions, and overall IT infrastructure is inevitable. Therefore, an employee’s workspace, the endpoint, has become the most privileged point of view IT can have into the state and the health of an increasingly scattered IT environment.

The end user experience score should be calculated from the availability and performance of every factor that could impact the end user experience: everything from network and resource problems to infrastructure and system configuration issues.
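As a minimal sketch of that calculation, assuming hypothetical factors, impact fractions, and weights (the real scoring model is more involved):

```python
# Hypothetical illustration: derive a normalized 0-100 experience score
# by deducting weighted penalties for each impacted factor.

# Fraction of observed time each factor was degraded on the endpoint.
impact = {
    "network_latency": 0.10,
    "cpu_contention": 0.05,
    "disk_queueing": 0.02,
    "config_issues": 0.00,
}

# Hypothetical weights reflecting how strongly each factor hurts users.
weights = {
    "network_latency": 40,
    "cpu_contention": 30,
    "disk_queueing": 20,
    "config_issues": 10,
}

penalty = sum(impact[f] * weights[f] for f in impact)
score = max(0.0, 100.0 - penalty)
print(f"Experience score: {score:.1f}")  # -> 94.1
```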

The result is a score that is normalized and supports the “quantified user.” It is one that can be compared across teams and groups of users, and one that the IT team can work to improve in order to enable business productivity of those who matter most: the end users.

How to start using your end user experience score to enable productivity

Lakeside Software’s flagship solution, SysTrack, is built around putting the end user experience score to work throughout IT. A workspace analytics solution, SysTrack uses an endpoint agent to gather and analyze end user data on usage, experience, and overall endpoint health in order to help IT teams in the following key areas:

– Asset Optimization: ensuring the cost of IT is being optimized for the captured needs and usage of the users

– Event Correlation and Analysis: pinpointing and resolving IT issues blocking users from being productive

– Digital Experience Monitoring: monitoring and, most importantly, analyzing end users’ experience with all the technologies and business processes provided by the organization

Learn more about SysTrack


We Know You Don’t WannaCry

By now you likely know that WannaCry is a widely distributed, malicious ransomware variant that is wreaking havoc across enterprise IT. The most important thing to know is that Microsoft has issued patches for nearly every flavor of the Windows operating system (including Windows XP) to prevent further attacks.

Since AV (even next-gen AV) and other security tools have not been very effective at mitigating the WannaCry threat, our advice to our customers is to ensure you have a complete inventory of every Windows instance and its respective patch level. This will enable you to identify which Windows instances in your environment are still vulnerable so you can focus your energies on finding and patching them.

To help you accomplish this, we’re offering Lakeside customers several complimentary dashboards that can help you identify Windows instances that are at risk of being infected by WannaCry or other security threats:

  • Security Patch Details: We’ve developed a new kit, the Patch Summary Kit, that provides details on security patches by operating system. It can also provide details for a specific patch if you know the patch’s KB or definition, including whether that patch is installed on a given system. This kit provides clear and precise data to help users stay safe.
  • Risk Score: SysTrack provides a risk score in Risk Visualizer. The risk score is an uncapped integer that takes into account all potential ways a system may be vulnerable. Risk Visualizer allows you to view the risk scores of all systems in your environment to easily identify systems of concern. A higher risk score implies that your system is at greater risk of attack.

You can use the table below in conjunction with the Patch Summary Kit to check whether a security patch has been applied to systems running the corresponding OS. An example of this feature is shown in a screenshot of the kit.

Operating System (Version)      Security Patch KB
Windows XP                      KB4012598
Windows Vista                   KB4012598
Windows Server 2008             KB4012598
Windows 7                       KB4012212
Windows Server 2008 R2          KB4012212
Windows 8                       KB4012598
Windows 8.1                     KB4012213
Windows Server 2012             KB4012214
Windows Server 2012 R2          KB4012213
Windows 10 (1507)               KB4012606
Windows 10 (1511)               KB4013198
Windows 10 (1607)               KB4013429
Windows Server 2016             KB4013429
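To show the kind of check this table supports, here’s a sketch that encodes it as a lookup and flags unpatched systems. The inventory records below are invented for the example; in practice this data would come from your inventory tooling:

```python
# The WannaCry-related patch KBs from the table above, keyed by OS.
required_kb = {
    "Windows XP": "KB4012598",
    "Windows Vista": "KB4012598",
    "Windows Server 2008": "KB4012598",
    "Windows 7": "KB4012212",
    "Windows Server 2008 R2": "KB4012212",
    "Windows 8": "KB4012598",
    "Windows 8.1": "KB4012213",
    "Windows Server 2012": "KB4012214",
    "Windows Server 2012 R2": "KB4012213",
    "Windows 10 (1507)": "KB4012606",
    "Windows 10 (1511)": "KB4013198",
    "Windows 10 (1607)": "KB4013429",
    "Windows Server 2016": "KB4013429",
}

# Hypothetical inventory: each system's OS and its installed patch KBs.
inventory = [
    {"host": "PC-001", "os": "Windows 7", "patches": {"KB4012212"}},
    {"host": "PC-002", "os": "Windows 7", "patches": {"KB3125574"}},
    {"host": "SRV-01", "os": "Windows Server 2016", "patches": set()},
]

# Flag any system missing the patch required for its operating system.
for system in inventory:
    needed = required_kb.get(system["os"])
    if needed and needed not in system["patches"]:
        print(f'{system["host"]}: missing {needed} -- still vulnerable')
```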

Our goal at Lakeside is to help keep our customers’ end users productive. We hope that by providing these risk management and compliance dashboards, we can help IT departments continue to improve organizational digital experience.

Software Asset Optimization with SysTrack

Workspace analytics encompasses a vast amount of end user computing information collected from a variety of sources, and a vital component of the topic is the observation of software assets. This is obviously a broad topic, so we’ve chosen to break it into three key categories: performance, usage, and dependencies. Software performance monitoring is driven by the need to understand how well applications are working in the environment. Software usage is predicated on the idea of optimizing licensing and delivery to provide necessary applications. The last category, dependencies, is vital for understanding what pieces are necessary for software to function.

Software performance is itself a complex topic, but broadly the idea is to answer key questions like “why does my application keep crashing?” and “what applications take the longest time to load?” This incorporates key metrics like resource consumption details (CPU, memory, IOPS, network bandwidth) as well as the number and frequency of faults or hangs. In many ways, this is one of the first items thought of in the context of software asset analytics, and it’s often one of the first things an end user notices about the environment. Diagnosing performance issues and understanding the resource consumption of the average user can help steer hardware requisition and delivery methods.
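As a small sketch of that kind of analysis (with invented fault records, not SysTrack output), ranking applications by fault frequency is a natural starting point:

```python
from collections import Counter

# Hypothetical fault events collected from endpoints: (application, kind).
events = [
    ("outlook.exe", "hang"),
    ("excel.exe", "crash"),
    ("excel.exe", "crash"),
    ("customapp.exe", "crash"),
    ("excel.exe", "hang"),
]

# Count faults per application and list the worst offenders first.
faults_per_app = Counter(app for app, _ in events)
for app, count in faults_per_app.most_common(3):
    print(f"{app}: {count} fault(s)")  # excel.exe tops the list with 3
```

Clearly, though, a preliminary question in many cases is exactly what packages belong in the environment.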

Accurately observing software usage can be invaluable to a company. Knowing which applications are actually used, versus merely installed, directly relates to the distribution of licenses, and that’s a direct cost driver. Another consideration is the support cost savings made possible by simplifying images. Rationalization opens up a host of potential ways to make sure that the delivery of applications to end users is tailored as closely as possible to their needs.
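The core of the rationalization math is simple to sketch (with hypothetical install, usage, and cost figures):

```python
# Hypothetical data for one licensed application: where it is installed
# vs. where it has actually been used in the last 90 days.
installed_on = {"PC-001", "PC-002", "PC-003", "PC-004", "PC-005"}
used_on = {"PC-001", "PC-004"}

unused = installed_on - used_on
cost_per_license = 150  # hypothetical annual cost per seat

print(f"Reclaimable licenses: {len(unused)}")
print(f"Potential annual savings: ${len(unused) * cost_per_license}")
# -> 3 reclaimable licenses, $450/year for this one application
```

There are some technical considerations to this as well, not the least of which is exploring the components or backend connections required by software in the environment.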

Gaining insight into the components a given package needs in order to function is very important when choosing appropriate delivery mechanisms and options. Application compatibility concerns driven by incompatible components, fundamentally unsupportable system components, and complex networking requirements are all key to understand. Identifying what applications call on day to day dictates many of the decisions IT needs to make to modernize and continually innovate with delivery options.
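As a simple illustration (the observations below are invented, not the product’s actual schema), the raw material is a mapping from each application to the components and backend endpoints it has been seen touching:

```python
# Hypothetical observed dependencies per application: loaded components
# and backend network endpoints, as recorded on the endpoint.
dependencies = {
    "legacyapp.exe": {
        "components": ["msvbvm60.dll", "oldactivex.ocx"],
        "endpoints": ["fileserver01:445"],
    },
    "crmclient.exe": {
        "components": ["dotnet48"],
        "endpoints": ["crm.internal:443", "sso.internal:443"],
    },
}

# Flag applications whose dependencies complicate modern delivery options,
# e.g. legacy components that rule out certain packaging formats.
legacy_markers = {"msvbvm60.dll", "oldactivex.ocx"}
for app, deps in dependencies.items():
    if legacy_markers & set(deps["components"]):
        print(f"{app}: legacy components, review before repackaging")
```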

We’ll be going into more depth on each of these categories as we release our upcoming Software Asset Analytics Kit. With each area, we’ll expand on some real use cases and provide some real-world examples of how each provides essential information for an environment.

Digital Experience Management and Event Correlation with SysTrack

SysTrack provides the ability to score an environment’s end user experience using digital experience management metrics. The resulting end user experience score provides a clear view of the end user’s experience in that environment and is composed of a series of Key Performance Indicators (KPIs). These KPIs are structured to point toward any problems in the environment that may affect the end user. The key to the philosophical approach with SysTrack, though, is joining this scoring to event correlation and analysis through the use of proactive alerts. These proactive alerts tie the overarching score to triggered, targeted events to provide a fuller and easier-to-understand portrait of the IT environment.

This starts with our end user experience score, which is best thought of as a simple grade. The score falls in a range of 0 to 100, with a score of 100 implying the environment is providing the best end user experience. The score is composed of 13 different KPIs that represent aggregate data points on potential sources of impact. These roughly fall into the categories of resource issues, networking problems, system configuration issues, and infrastructure problems. The result is a normalized score for understanding what kinds of performance issues are occurring across a broad set of different systems. Even more importantly, it provides a platform for long-term trend analysis to see the trajectory and evolution of that experience over time.
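To make the trending idea concrete, here’s a sketch with made-up daily scores (not actual SysTrack data); even a simple rolling average makes the trajectory visible:

```python
# Hypothetical daily environment-wide experience scores (0-100).
daily_scores = [82, 81, 79, 74, 73, 71, 70, 76, 83, 85]

# Smooth out daily noise with a simple rolling average.
window = 3
rolling = [
    sum(daily_scores[i - window + 1 : i + 1]) / window
    for i in range(window - 1, len(daily_scores))
]
print(rolling)
# The sustained slide (82 -> 70) stands out from daily noise, and the
# recovery after day 8 shows a remediation taking effect.
```

The image below displays an overall view of the end user experience of the environment and the ability to monitor the evolution of those impacts over time.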

For more operational items that require an immediate response, the alerting mechanism comes into play. Alerts are active components triggered by events generally correlated with negative impact. Alert items roughly correspond to the end user experience score KPIs, helping direct IT administrators toward resolving problems. The image below demonstrates an environment with active alerts.
The key piece is correlating these items to that impact in a meaningful way. So, the big question is this: how do they work with one another?

One of the most common ways alerts and user experience scores are used in conjunction is through a feedback loop. An administrator determines which KPI is the largest source of impact and drills down to a clear view of the correlated alerts that have been placed and potentially triggered. The alerts direct the administrator toward the underlying causes of the problem and, finally, the potential source of impact. After resolution, the administrator can track the resulting increase in user experience to see how successful the changes have been.
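Conceptually, the loop is easy to sketch. The data structures below are hypothetical stand-ins, not the actual SysTrack API:

```python
# Hypothetical KPI impact data and correlated alerts for one environment.
kpi_impact = {"network_latency": 12.5, "memory_pressure": 6.0, "disk_io": 2.1}
alerts = {
    "network_latency": ["High latency to home share on 240 systems"],
    "memory_pressure": ["Chronic paging on 80 systems"],
}

# Step 1: find the KPI causing the largest impact.
worst = max(kpi_impact, key=kpi_impact.get)

# Step 2: drill into the alerts correlated with that KPI.
for alert in alerts.get(worst, []):
    print(f"Investigate: {alert}")

# Step 3 (after remediation): compare before/after scores to verify.
score_before, score_after = 74.0, 82.5
print(f"Experience improved by {score_after - score_before:.1f} points")
```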

End user experience scores provide an overall indicator of the quality end users are experiencing, while alerts provide immediate information on the source of impact. The integration of both tools provides an easy and clear way for IT administrators to discover the source of a system’s impact. To learn more on this topic check out our white paper, Utilizing KPIs and Alarm Methods!

Director Integration for Ask SysTrack

One of the unintuitive results of the progression of technology is the massive proliferation of different sources for the pieces of information that are critical to managing an environment. So many tools provide such a depth of detailed data that it becomes difficult to figure out which one to use and where to look within its interface. Information seeking then takes users across multiple tools with multiple methods of interaction; the net result can be confusion and lost time. This is where cognitive analytics and the ability to ask simple questions can make the difference between solving a problem and bouncing between reporting tools.

The popularity of Ask SysTrack’s recent set of advanced integrations has been eye-opening, showing how pervasive the need is for a single, easy-to-use interface for getting contextually relevant answers to questions. Because of this, we’ve worked with our partners to provide a single source for answering IT questions that delivers what’s needed when it’s needed.

At Citrix Summit we’re showcasing one of our most recent examples: plugin integration with Citrix Director. This plugin not only displays SysTrack information in the Director interface, but also answers Director- and Citrix-related questions from within the interface through Ask SysTrack.

The key is providing the Ask SysTrack plugin interface directly on the Director home page. Now any IT administrator that uses Director has a Watson-powered cognitive helper to answer questions like “What is the user experience of my environment?”

Clicking the link takes them directly into the relevant data in SysTrack. Alternatively, they can also just ask questions about Director.

We’ve also added a User Experience Trend for delivery groups discovered in association with the instance, allowing administrators to view the kind of user experience their end users have been getting alongside the other data presented in Director.

This makes it much easier for administrators to get the key details they need, when they need them, without having to spend time working through multiple interfaces.

For more details, check out a quick video run-through.

Answering GPU Questions with SysTrack

Ask SysTrack has become one of the most popular topics we’ve ever discussed in the industry, and the top question is always what we’re adding next. The benefits of using our Natural Language Processing (NLP) tool for common IT questions have appealed to a massive number of our partners in the industry as well as customers. Basically, our goal is to provide an analytical system that takes any question you may have about IT and hooks you into the best source of information available to help.
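To give a feel for the routing idea, here’s a toy sketch. It is a deliberately naive keyword matcher, nothing like the Watson-backed NLP that Ask SysTrack actually uses, but it shows the core job: mapping a free-form question to the best data source.

```python
# Toy illustration only: route a natural-language question to a data
# source. The real system uses IBM Watson's NLP services, not keywords.
sources = {
    "gpu": "NVIDIA GRID utilization reports",
    "fault": "Application fault dashboard",
    "experience": "End user experience score trend",
}

def route(question: str) -> str:
    """Return the first data source whose keyword appears in the question."""
    q = question.lower()
    for keyword, source in sources.items():
        if keyword in q:
            return source
    return "General SysTrack search"

print(route("What's the total GPU usage on Ben's system?"))
# -> NVIDIA GRID utilization reports
```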

Zach mentioned our first integration in a previous post, and in the new year we’ll be launching a series of new integrations that cover different areas. One I’m personally excited about is our added GPU-based monitoring and reporting from NVIDIA GRID.

GPU utilization in general has been a hot topic for a while, and with the great progress NVIDIA has made in creating the first vGPU profiles for VDI, the potential to bring a great graphical experience to anyone has exploded. We’ve supported NVIDIA GRID from the very beginning, offering a cloud-based assessment tool that helps plan which profiles would work for a currently physical environment making the move to vGPU and VDI. As a natural progression, we’ve implemented new monitoring with NVIDIA to help understand workload and usage in VDI systems.

This kind of insight is especially critical when first undertaking a project to start transforming an environment using new technologies. John Fanelli, vice president of NVIDIA GRID, agrees, “With Lakeside Software’s Ask SysTrack workspace analytics insight engine administrators can make natural language queries to gain contextually relevant NVIDIA vGPU insights and help continuously assess and align vGPU benefits to user personas.”

Of course, the key point is connecting users to all that great content. This is where the expansion of Ask SysTrack comes in. Specifically, we’ve integrated our additional collection and planning into Ask SysTrack so it can help answer basic questions like “What kind of NVIDIA vGPU profiles do my users need?”

We can also answer other questions post migration that are critical to maintaining user experience. Things like, “What’s the total GPU usage on Ben’s system?”

Basically, if you can think of a question relating to GPU utilization, we have the answer available.

To get started, check out our assessment site at nvidia.lakesidesoftware.com to first size out a new environment or just get an introduction to SysTrack.

Introducing Ask SysTrack for AirWatch

At Lakeside Software our goal has long been to make insightful, high impact analytics readily available to help answer questions and enable better decision making in IT.

In August we took a major step forward in data accessibility with the introduction of Ask SysTrack, in partnership with IBM Watson cognitive services. This Natural Language Processing (NLP) question tool made it possible to find highly specific SysTrack data using nothing but everyday questions, greatly reducing the barrier to entry for all SysTrack tools. A basic introduction to Ask SysTrack was provided in a previous blog post by Ben Murphy. You can download a white paper for more in-depth information on how the tool works.

One of the interesting things we discovered in the intervening months was that Ask SysTrack was getting asked questions it understood but didn’t know the answer to. We had inadvertently trained Ask SysTrack’s AI dictionary to understand nearly every question someone in IT might ask it. The best metaphor for this is being asked for directions to somewhere you don’t know how to get to. Say someone stopped you on the street and asked:

“How do I get to Bob’s burgers?”

You understand they are looking for directions to an eatery named Bob’s burgers. But you don’t know the answer.

Something similar was happening to Ask SysTrack in production: it was getting asked lots of questions about mobile devices.

Since SysTrack is traditionally a desktop analytics tool, it offers only limited visibility into the mobile device space. It’s difficult for users who are unfamiliar with the vast quantities of data available through SysTrack and other tools to navigate to the mobility data they need in the moment. But that data is easily found in their EMM console.

Since the most popular EMM tool in the SysTrack Community is AirWatch, we reached out to our friends at VMware with a proposition: let us extend our natural language insight engine to your platform.

One thing led to another, and today we’re introducing Ask SysTrack for AirWatch. Through a partnership with VMware AirWatch, the Enterprise Mobility Management leader in the Gartner EMM Magic Quadrant, the Ask SysTrack workspace analytics insight engine now includes Natural Language Processing capabilities for the entirety of the AirWatch platform. This means that the ease of use made possible by the industry-first Ask SysTrack now extends into the mobility space.

Using nothing but simple questions, you can track down otherwise hard-to-find data that typically requires a large amount of familiarity with AirWatch to locate. Say, for instance, that you want to know where to access your compliance policies.

Screenshot showing Ask SysTrack answering where to access compliance policies in AirWatch

Or maybe you want to know how many of your employees use iPhones.

Screenshot showing Ask SysTrack answering how many employees use iPhones

Once again, what would normally be difficult to find only took a simple question.

The Ask SysTrack tool is available with SysTrack 8.2, while Ask SysTrack for AirWatch is made available through installation of an AirWatch SysTrack Kit.

Have a Question? Ask SysTrack

The overarching theme of all the content we’ve created here has been trying to make it easier to find interesting data and to allow organizations to engage in intuitive information seeking. If there’s anything the outgrowth of chat-based interfaces has taught us, though, it’s that often the most meaningful way to bring users the answers they need is to allow them to craft their own questions in their own terms. So, the logical evolution is to introduce a method for getting answers out of SysTrack with simple, natural language.

This is why Lakeside Software has partnered with IBM, using the Watson cognitive API library to introduce Ask SysTrack. This Natural Language Processing (NLP) question tool allows anyone to ask questions of the SysTrack Workspace Analytics platform in basic, conversational form.

For example, say I want to find my most heavily faulting application. I can ask a simple question: “What application faults the most?”

Screenshot showing Ask SysTrack listing the most heavily faulting applications

Just asking that question gives me the most heavily faulting applications in order and even some other related things for me to look into if I’m interested.

Let’s say I wanted to find out specifically what’s wrong with a system.

Screenshot showing Ask SysTrack listing recent problems on a specific system

Again, with a simple question I have the ability to know what this user has been having problems with lately.

With the GA of SysTrack 8.2, this feature is now available to everyone, and I’d encourage you to check it out. You can download a white paper that expands on how it all works.