Category Archives: Workplace Analytics

Explaining and Expanding the SLA Conversation

Service Level Agreements (SLAs) come in many forms and descriptions in life, promising a basic level of acceptable experience. Typically, SLAs have some measurable component, i.e. a metric or performance indicator.

Take, for example, the minimum speed limit for interstates. Usually, the sign reads “Minimum Speed 45 mph”. I always thought the signs existed to keep those who, confused by the top posting of 70 mph, treated it as a minimum from running over those who mistook 45 mph for the maximum.

It turns out the “minimum speed” concept is enforced in some states in the U.S. to prevent anyone from impeding traffic flow. For those who recall the very old “Beverly Hillbillies” TV show, I’ve often wondered if Granny sitting in a rocking chair atop a pile of junk in an open bed truck, driving across the country might be a good example of “impeding the flow of traffic” at any speed. Although, from the looks of the old truck, it probably couldn’t manage the 45 mph minimum either.

In the world of IT, there are all sorts of things that can “impede the flow” of data transfer, data processing, and/or data storage. While there’s nothing as obvious as Granny atop an old truck, there are frequently Key Performance Indicators (KPIs) that could indicate when things aren’t going according to plan.

Historically, IT SLAs have focused on a Reliability, Availability, and Serviceability (RAS) model. While not directly related to specific events/obstacles to optimum IT performance, RAS has become the norm:

  • Reliability – This “thing of interest” shouldn’t break, but it will. Let’s put a number on it.
  • Availability – This “thing of interest” needs to be available for use all the time. That’s not really practical. Let’s put a number on it.
  • Serviceability – When this “thing of interest” breaks or is not available, it must be put back into service instantly. In the real world, that’s not going to happen. Let’s put a number on it.

In the IT industry, there exist many creative variations on the basic theme described above, but RAS is at the heart of this thing called SLA performance. The problem with this approach from an end-user standpoint is that it misses the intent of the SLA, which is to ensure the productivity/usefulness of “the thing of interest”.  In the case of a desktop, that means ensuring that the desktop is performing properly to support the needs of the end user. Thus, the end user’s productivity/usefulness is optimized if the desktop is reliable, available, and serviceable… but is it really?

Consider the following commonplace scenarios:

  • The desktop is available 100% of the time, but 50% of the time it doesn’t meet the needs of the end user, e.g. it has insufficient memory to run an Excel spreadsheet with its huge, memory-eating macros.
  • A critical application keeps crashing, but every call to the service desk results in, “Is it doing it now?” After the inevitable “No” is heard, the service desk responds, “Please call back when the application is failing.” This kind of behavior frequently results in end users becoming discouraged and simply continuing to use a faulty application by frequently restarting it. It also results in a false sense of “reliability” because the user simply quits calling the service desk, resulting in fewer incidents being recorded.
  • A system’s performance is slowed to a crawl for no apparent reason at various times of the day. When the caller gets through to the service desk, the system may/may not be behaving poorly. Regardless, the caller can only report, “My system has been running slowly.” The service desk may ask, “Is it doing it now?” If the answer is “Yes”, they may be able to log into the system and have a look around using basic tools, only to find none of the system KPIs are being challenged (i.e. CPU, memory, IOPs, storage, all are fine). In this scenario, the underlying problem may have nothing to do with the desktop or application. Let’s assume it to be the network latency to the user’s home drive and further complicate it by the high latency only being prevalent during peak network traffic periods. Clearly, this will be outside the scope of the traditional RAS approach to SLA management. Result: again a caller who probably learns to simply tolerate poor performance and do the best they can with a sub-optimized workplace experience.

So, how does one improve on the traditional RAS approach to SLA management? Why not monitor the metrics known to be strong indicators of a healthy/not-so-healthy workstation? In this SLA world, objective, observable system performance metrics are the basis for measuring a system’s health. For example, if the CPU is insufficient, track that metric and determine to what extent it is impacting the end user’s productivity. Then do the same for multiple KPIs. The result is a very meaningful number that indicates how much of a user’s time is encumbered by poor system performance.
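As a toy sketch of this idea, each KPI’s share of impacted time can be estimated by counting samples that breach a threshold. The KPI names, thresholds, and sample data below are illustrative assumptions, not actual monitoring output.

```python
# Hypothetical thresholds; real products derive these from observed baselines.
KPI_THRESHOLDS = {"cpu_pct": 90, "mem_pct": 85, "disk_latency_ms": 25}

def impacted_time_pct(samples, thresholds=KPI_THRESHOLDS):
    """Return the share of samples (a proxy for time) in which each KPI
    breached its threshold, as a percentage."""
    result = {}
    for kpi, limit in thresholds.items():
        breaches = sum(1 for s in samples if s.get(kpi, 0) > limit)
        result[kpi] = 100.0 * breaches / len(samples) if samples else 0.0
    return result

# Invented periodic samples from one workstation.
samples = [
    {"cpu_pct": 95, "mem_pct": 60, "disk_latency_ms": 10},
    {"cpu_pct": 40, "mem_pct": 90, "disk_latency_ms": 30},
    {"cpu_pct": 50, "mem_pct": 70, "disk_latency_ms": 5},
    {"cpu_pct": 92, "mem_pct": 70, "disk_latency_ms": 10},
]
print(impacted_time_pct(samples))
# {'cpu_pct': 50.0, 'mem_pct': 25.0, 'disk_latency_ms': 25.0}
```

Here the CPU breached its threshold in half of the samples, giving a direct, per-KPI view of how much of the user’s time is encumbered.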

In the case of SLAs based on observable system KPIs, once a baseline is established, variations from the baseline are easily observable. Simply counting system outages and breakage doesn’t get to the heart of what an IT department wants to achieve. Namely, we all want the end user to have an excellent workspace experience, unencumbered by “impeded flow” of any type. The ultimate outcome of this proposed KPI-based (vs. RAS-based) SLA approach will be more productive end users. In future blogs, I will expand on how various industries are putting a KPI-based SLA governance model into practice.
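Spotting variations from an established baseline can likewise be sketched simply; the latency figures and the two-standard-deviation rule below are illustrative assumptions, not a prescribed method.

```python
import statistics

def deviations(baseline, observations, k=2.0):
    """Flag observations more than k standard deviations above the
    baseline mean — candidates for 'impeded flow'."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return [x for x in observations if x > mean + k * sd]

# Assumed home-drive latency (ms) during a known healthy period.
baseline_latency = [12, 14, 11, 13, 12, 15, 13]
today = [13, 12, 48, 14, 52]
print(deviations(baseline_latency, today))  # [48, 52]
```

The two flagged samples would correspond to the peak-traffic latency spikes described in the third scenario above, which a purely RAS-based SLA would never surface.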

Learn More About Maximizing User Productivity

End Users Are People Too

Companies are finding that the traditional approach of a four-year, one-size-fits-all technology refresh cycle no longer works for today’s tech-charged workforce. For some employees, that cycle is too long and limits their ability to be productive by keeping them from the latest hardware and applications that they’re accustomed to in their personal lives. Other workers are less demanding, and a refresh may arrive years too early for them, resulting in unnecessary system downtime and wasteful spending.

In theory, surveying employees about what technology they use and need to be most productive would result in harmonious unions between people and technologies. However, this ideal scenario breaks down pretty quickly when you consider the time it would take to process that feedback at the enterprise level. And, even if you could, does the user really know best? The average user isn’t going to be able to name every application they’ve interacted with, provide an unbiased portrayal of their system performance, or be willing to disclose their use of Shadow IT. Not to mention that people change job roles and leave companies frequently, which quickly nullifies the effort of matching resources to those individuals.

Thankfully, there is a better approach that will allow you to make purchasing and provisioning decisions based on facts rather than user perception. While the basic concept behind this approach may sound familiar to you, the addition of collection and analysis of real user data makes all the difference between a time-intensive effort with minimal returns and an ongoing way of tailoring end-user experience improvements to employee workstyles.

A Personalized Approach to IT

Continuous user segmentation groups users into personas based on their job roles, patterns, behaviors, and technology. Personas provide a meaningful lens for IT to understand what different types of users need to be productive, allowing IT to optimize assets accordingly.

Workspace analytics software for IT automates the segmentation process and continues to assess user characteristics and experiences to update groupings based on quantitative metrics. As a result, once persona groupings are defined, IT can focus on addressing the needs of different groups and let the software do the work of updating the populations within each persona. This functionality is key to any Digital Experience Monitoring strategy.
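As a simplified illustration of automated segmentation, a rule-based sketch might look like the following. The persona names, metrics, and thresholds are invented for demonstration and are far cruder than what workspace analytics software actually does, but they show the core idea: re-deriving group membership from quantitative metrics rather than job titles.

```python
def assign_persona(user):
    """Re-assign a user to a persona from current quantitative metrics.
    Thresholds here are illustrative assumptions only."""
    if user["mobility_pct"] > 50:          # works off-LAN most of the time
        return "mobile worker"
    if user["cpu_avg_pct"] > 60 or user["critical_apps"] > 8:
        return "power user"
    if user["critical_apps"] <= 3:          # narrow, repetitive app usage
        return "task worker"
    return "knowledge worker"

# Hypothetical users with metrics collected over a trailing window.
users = [
    {"name": "a", "mobility_pct": 70, "cpu_avg_pct": 30, "critical_apps": 4},
    {"name": "b", "mobility_pct": 10, "cpu_avg_pct": 75, "critical_apps": 6},
    {"name": "c", "mobility_pct": 5,  "cpu_avg_pct": 20, "critical_apps": 2},
]
personas = {u["name"]: assign_persona(u) for u in users}
print(personas)
# {'a': 'mobile worker', 'b': 'power user', 'c': 'task worker'}
```

Because the assignment is driven by measured behavior, re-running it on fresh data automatically moves users between personas as their workstyles change.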

It Pays to Segment Users Right

Overlooking personas can lead to over- or under-provisioning assets for a job role, which can be costly to a company in several ways. Over-provisioning licenses wastes a company’s money, while under-provisioning can become a nightmare for IT administrators: it encourages users to install their own applications and personally optimize their user profiles. The resulting miscellaneous applications burden IT administrators with a multitude of unique problems for each user and application. Applications that users install might not be compatible with each other, or with the rest of the workspace, hindering the easy sharing of files.

Optimizing assets with the aid of personas can increase a company’s productivity. With personas, each job role can be catered to uniquely while provisioning remains consistent. Each job role, based on real user data, can be provisioned the licenses and applications that cater to its needs. This prevents users from feeling the need to install their own versions of missing applications, ultimately allowing IT administrators to limit potential application or license errors.

Segmenting Users in Practice

Using common persona categories, a company may have deskbound users who are provisioned with expensive laptops when a desktop would do, or knowledge workers with expensive i7 CPUs when a PC with an i5 or i3 makes more sense. We have also had customers report finding that their power users needed to be refreshed every year because of the productivity improvement, while their task workers didn’t need a refresh for as long as five years.

Using personas to segment the end-user environment for a targeted refresh allows an enterprise to provide the right end-user device for a given end user based on their CPU consumption, critical application usage, network usage, and other key metrics. The benefits are numerous and include reduced cost, higher end-user productivity, better security, and a device custom-fit to the end user’s needs.

Learn more about Enterprise Personas

Foundations of Success: Digital Experience Monitoring

We’ve all seen the rapid evolution of the workplace; the introduction of personal consumer devices, the massive explosion of SaaS providers, and the gradual blurring of the lines of responsibility for IT have introduced new complications to a role that once had a very clearly defined purview. In a previous post, we discussed quantification of user experience as a key metric for success in IT, and, in turn, we introduced a key piece of workspace analytics: Digital Experience Monitoring (DEM). This raises the question, though: what exactly is DEM about?

At its very heart, DEM is a method of understanding end users’ computing experience and how well IT is enabling them to be productive. This begins with establishing a concept of a user experience score as an underlying KPI for IT. With this score, it’s possible to proactively spot degradation as it occurs, and – perhaps even more importantly – it introduces a method for IT to quantifiably track its impact on the business. With this as a mechanism of accountability, the results of changes and new strategies can be trended and monitored as a benchmark for success.

That measurable success criterion is then a baseline for comparison that threads its way through every aspect of DEM. It also provides a more informed basis for key analytical components that stem from observation of real user behavior, like continuous segmentation of users into personas. Starting with an analysis of how well the current computing environment meets the needs of users opens the door to exploring each aspect of their usage: application behaviors, mobility requirements, system resource consumption, and so on. From there, users can be assigned to Gartner-defined workstyles and roles, creating a mapping of what behaviors can be expected for certain types of users. This leads to more data-driven procurement practices, easier budget rationalization, and, overall, a more successful and satisfied user base.

Pie chart showing the number of critical applications segmented by persona

Taking an active example from a sample analysis, there are only a handful of critical applications per persona. Those applications represent what users spend most of their productive time working on, and therefore have a much larger business impact. Discovery and planning around these critical applications can also dictate how best to provision resources for net-new employees with a similar job function. This prioritization of business-critical applications based on usage makes proactive management much more clear-cut. Automated analysis and resolution of problems can focus on the systems where users are most active, which will have the maximum overall impact on improving user experience. In fact, that user experience can then be trended over time to show the real business impact of IT problem solving:

Chart showing the user experience trend for an enterprise

Various voices within Lakeside will go through pieces of workspace analytics over the coming months, and we’ll be starting with a more in-depth discussion of DEM. This will touch on several aspects of monitoring and managing the Digital Experience of a user, including the definition of Personas, management of SLAs and IT service quality measurements, and budget rationalization. Throughout, we’ll be exploring the role of IT as an enabler of business productivity, and how the concept of a single user experience score can provide an organization a level of insight into their business-critical resources that wouldn’t otherwise be possible.

Learn more about End User Analytics

The Quantified User: Redefining How to Measure IT Success

In 1989, the blockbuster hit Back to the Future Part II took the world by storm with wild technology predictions. Now, we know the film might have missed the mark on flying cars and power laces, but many of its predictions were more accurate than expected. Case in point: wearables.

From virtual reality headsets to fitness trackers, wearable technology powers this notion of the “quantified self” where our personal lives can be digitalized and analyzed with the end goal of being “improved.”

But when it comes to our professional lives, can we similarly analyze and improve them in order to enable productivity? Yes! Just as we are living in the era of the “quantified self,” the enterprise is now entering the era of the “quantified user.”

But don’t just take my word for it. Here is how and, more importantly, why you should care…

What is the “quantified user?”

Think about the workplace today: it is one where people, processes and technologies are overwhelmingly digital and largely managed by third parties (e.g., Office 365 and other business-critical SaaS offerings). And this is a great thing!

But this also presents an unforeseen challenge for IT: how do we support a workforce that is largely digital and whose technology resources may or may not be managed by us?

The key to supporting today’s workforce lies within the concept of the “quantified user” where, just as we are able to quantify the number of steps we take per day to help us improve our personal health, the “quantified user” is one whose end user experience within the digital workspace is quantified and given a score in order to enable productivity.

Learn more about End User Analytics

You might think that, at a glance, there is only a loose relationship between a user’s experience and their productivity. However, over the past 20 years, workspace analytics provider Lakeside Software has found that the better an employee’s user experience score, the lower the barriers to productivity within the digital workspace. How? Via a healthier, more available desktop experience.

End user experience score: the most important metric in IT.

A bold statement, I know, but the end user experience score is the most important metric in IT because it accurately and objectively measures how all the technologies and actions taken by IT are enabling business productivity, which is the original purpose of any IT team.

The end user experience score is normalized and not touched by IT or IT vendors, and it serves two purposes: informing what factors are impacting productivity and improving visibility into the health and performance of technology investments.

So how do we calculate the end user experience score?

Calculating employees’ end user experience score is done by analyzing and managing all the factors that could impact their productivity, using a data-collection agent right where they conduct most, if not all, of their work: the endpoint.

Why the endpoint? Because as critical IT functions are being outsourced and managed by third parties, reduced visibility into network transactions, data center transactions, and overall IT infrastructure is inevitable. Therefore, an employee’s workspace, the endpoint, has become the most privileged point of view IT can have into the state and the health of an increasingly scattered IT environment.

The end user experience score should be calculated based on the availability and performance of all the factors that could impact the end user experience, that is, everything from network and resource problems to infrastructure and system configuration issues.

The result is a score that is normalized and supports the “quantified user.” It is one that can be compared across teams and groups of users, and one that the IT team can work to improve in order to enable business productivity of those who matter most: the end users.
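As a rough illustration of how such a normalized score might be composed, the sketch below subtracts weighted per-factor impact from a perfect 100. The factor names, weights, and penalty model are assumptions for demonstration, not the actual scoring formula.

```python
# Illustrative weights per impact category; these are invented values.
KPI_WEIGHTS = {"cpu": 0.3, "memory": 0.25, "network": 0.25, "configuration": 0.2}

def experience_score(impact_pct):
    """impact_pct: per-factor % of time the user was impacted (0-100 each).
    Returns a weighted 0-100 score where 100 means no observed impact."""
    penalty = sum(KPI_WEIGHTS[k] * impact_pct.get(k, 0) for k in KPI_WEIGHTS)
    return round(100 - penalty, 1)

print(experience_score({"cpu": 20, "memory": 10, "network": 40, "configuration": 0}))
# 81.5
```

Because every user’s score lives on the same 0-100 scale, scores can be compared across teams and trended over time, which is exactly what makes the “quantified user” comparable across an organization.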

How to start using your end user experience score to enable productivity

Lakeside Software’s flagship solution, SysTrack, is built around making the end user experience score pervasive throughout IT. A workspace analytics solution, SysTrack is an endpoint agent that gathers and analyzes end user data on usage, experience, and the overall endpoint in order to help IT teams in the following key areas:

– Asset Optimization: ensuring the cost of IT is being optimized for the captured needs and usage of the users

– Event Correlation and Analysis: pinpointing and resolving IT issues blocking users from being productive

– Digital Experience Monitoring: monitoring and most importantly, analyzing end users’ experience with all the technologies and business processes provided for by the organization

Learn more about SysTrack


How does Office 365 perform across Windows operating systems?

Modern users have a choice between a variety of Windows and Office versions. Given this mix, a common question we have come across in the past is “How does Windows 10 performance compare with Windows 7?” While we have addressed the question before, it remains popular to this day. However, users are now becoming curious about the performance implications of Office versions across the operating systems. Through analysis of SysTrack Community data, we were able to reevaluate Windows 7 and Windows 10 performance implications against Office 365.

A notable feature of Windows 10 is its integration with various components of Microsoft’s cloud portfolio. With this new component, we felt compelled to look at how Office 365 ran on past operating systems and how past versions of Office, specifically Office 2013, ran on current operating systems. Though Office 365 may look very similar to older versions, there are quite a few notable differences. While Office 2013 required a product key, Office 365 handles licensing more efficiently for users, potentially allowing each job role to be given a best-fit licensing level. This is just one example of how closely Office 365 now relies on the cloud. The cloud allows Office 365 applications to be available from any device and encourages collaboration among users, while Office 2013 requires a local installation. Office 2013 did not allow for collaboration that was as smooth, requiring users to share files saved locally or manually store them in a place that others could reach. Finally, with Office 365 being software-as-a-service, it has improved security and user experience by seamlessly providing small, frequent patches.

With all these updates to Office 365, how does it affect the overall performance characteristics? We ran a comparison of Office 365 against Office 2013 with different operating systems to see how their load times compared (displayed below in Figure 1).

Figure 1

While it is interesting that Office 365 seems to take a slightly longer time to load, it is mostly due to external connectivity and tying the user account context for Office 365 to the application itself. However, looking at application stability (displayed below in Figure 2), Office 365 has significantly fewer faults than Office 2013.

Figure 2

As displayed above, Office 365 faults significantly less on Windows 10 than Office 2013 does. This may be a result of both Windows 10 and Office 365 being “as-a-service” products, ultimately resulting in less downtime for users (and thus a higher end user experience) and less maintenance for IT administrators. It can be concluded that while Office 365 takes a little more time to load, it is more stable than Office 2013 across the various operating systems.

So what does all this analysis mean? Ideally, Windows 10 and Office 365 should be used together to achieve a high end user experience. Office 365, overall, is more stable, producing fewer application faults. However, other operating systems are also compatible with Office 365 despite the slightly longer load time. To evaluate readiness for a Windows 10 migration, or for performance monitoring, check out our free Windows 10 assessment, which uses SysTrack to provide transparency into end user experience.

We Know You Don’t WannaCry

By now you likely know that WannaCry is a malicious, widely distributed ransomware variant that is wreaking havoc on enterprise IT. The most important thing to know is that Microsoft has issued patches for nearly every flavor of the Windows operating system (including Windows XP) to prevent further attacks.

Since AV (even next-gen AV) and other security tools have not been very effective at mitigating the WannaCry threat, our advice to our customers is to ensure you have a complete inventory of every Windows instance and its respective patch level. This will enable you to identify which Windows instances in your environment are still vulnerable so you can focus your energies on finding and patching them.

To help you accomplish this, we’re offering Lakeside customers several complimentary dashboards that can help you identify Windows instances that are at risk of being infected by WannaCry or other security threats:

  • Security Patch Details: We’ve developed a new kit, the Patch Summary Kit, that provides details on security patches by operating system. It also provides details for a specific patch if you know the patch’s KB number or definition. The details include whether a security patch was installed on a system and which patch it was. This kit provides clear and precise data to help users remain safe.
  • Risk Score: SysTrack provides a risk score in Risk Visualizer. The risk score is an uncapped integer that takes into account all potential ways a system may be vulnerable. Risk Visualizer allows you to view the risk scores of all systems in your environment to easily identify systems of concern. A higher risk score implies that your system is at greater risk of attack.
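To illustrate the idea of an uncapped risk score, the sketch below sums weighted risk factors per system. The factor names and weights are invented for demonstration; they are not the actual model behind Risk Visualizer.

```python
# Hypothetical risk factors and weights; higher weight = more dangerous.
RISK_WEIGHTS = {
    "missing_patches": 10,  # per missing security patch
    "av_disabled": 25,      # antivirus turned off
    "open_ports": 2,        # per listening port exposed
    "eol_os": 40,           # OS past end of support (e.g. Windows XP)
}

def risk_score(system):
    """Sum weighted risk factors; the result is uncapped, and a higher
    score implies greater exposure to attack."""
    return sum(weight * system.get(factor, 0)
               for factor, weight in RISK_WEIGHTS.items())

# An invented legacy machine: 6 missing patches, AV off, 12 open ports, EOL OS.
xp_box = {"missing_patches": 6, "av_disabled": 1, "open_ports": 12, "eol_os": 1}
print(risk_score(xp_box))  # 10*6 + 25 + 2*12 + 40 = 149
```

Because the score is a simple weighted sum, sorting all systems by it immediately surfaces the machines most in need of attention.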

You can use the table below in conjunction with the Patch Summary Kit to check whether a security patch has been applied to systems with the corresponding OS. An example of this feature is shown in a screenshot of the kit.

Operating System (Version Number)    Security Patch KB
Windows XP                           KB4012598
Windows Vista                        KB4012598
Windows Server 2008                  KB4012598
Windows 7                            KB4012212
Windows Server 2008 R2               KB4012212
Windows 8                            KB4012598
Windows 8.1                          KB4012213
Windows Server 2012                  KB4012214
Windows Server 2012 R2               KB4012213
Windows 10 (1511)                    KB4013198
Windows 10 (1607)                    KB4012606
Windows Server 2016                  KB4013429
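With an inventory export in hand, the table lookup can be sketched in a few lines. The inventory format and host names below are hypothetical; the KB mapping comes from the table above.

```python
# OS -> required security patch KB, per the table above.
REQUIRED_KB = {
    "Windows XP": "KB4012598", "Windows Vista": "KB4012598",
    "Windows Server 2008": "KB4012598", "Windows 7": "KB4012212",
    "Windows Server 2008 R2": "KB4012212", "Windows 8": "KB4012598",
    "Windows 8.1": "KB4012213", "Windows Server 2012": "KB4012214",
    "Windows Server 2012 R2": "KB4012213", "Windows 10 (1511)": "KB4013198",
    "Windows 10 (1607)": "KB4012606", "Windows Server 2016": "KB4013429",
}

def vulnerable_systems(inventory):
    """inventory: list of {'host', 'os', 'installed_kbs'} records.
    Returns hosts missing the required patch for their OS (unknown
    OS versions are conservatively flagged as vulnerable)."""
    return [m["host"] for m in inventory
            if REQUIRED_KB.get(m["os"]) not in m["installed_kbs"]]

inventory = [
    {"host": "pc-01", "os": "Windows 7", "installed_kbs": {"KB4012212"}},
    {"host": "pc-02", "os": "Windows 7", "installed_kbs": {"KB3000061"}},
]
print(vulnerable_systems(inventory))  # ['pc-02']
```

The output is exactly the focused worklist described above: the systems still exposed, so patching effort goes where it matters.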

Our goal at Lakeside is to help keep our customers’ end users productive. We hope that by providing these risk management and compliance dashboards, we can help IT departments continue to improve organizational digital experience.

Digital Experience Management and Event Correlation with SysTrack

SysTrack provides the ability to score an environment’s end user experience using digital experience management metrics. The resulting end user experience score provides a clear view of the end user’s experience in that environment and is composed of a series of Key Performance Indicators (KPIs). These KPIs are structured to point toward any problems in the environment that may affect the end user. The key to the philosophical approach with SysTrack, though, is the joining of this scoring to event correlation and analysis through the use of proactive alerts. These proactive alerts tie that overarching score to triggered, targeted events to provide a fuller and easier-to-understand portrait of the IT environment.

This starts with our end user experience score, which is best thought of as a simple grade. The score ranges from 0 to 100, with a score of 100 implying the environment is providing the best end user experience. The score is composed of 13 different KPIs that represent aggregate data points on potential sources of impact. These roughly fall into the categories of resource issues, networking problems, system configuration issues, and infrastructure problems. The result is a normalized score for understanding, across a broad set of different systems, what kinds of performance issues are occurring. Even more importantly, it provides a platform for long-term trend analysis to see the trajectory and evolution of that experience over time. The image below displays an overall view of the end user’s experience of the environment and the ability to monitor the evolution of those impacts over time.

For more operational items that require an immediate response, the alerting mechanism comes into play. Alerts are an active component triggered by events generally correlated with negative impact. Alert items roughly correlate with the end user experience score KPIs to help direct IT administrators toward resolving problems. The image below demonstrates an environment with active alerts.

The key piece is correlating these items to that impact in a meaningful way. So, the big question is this: how do they work with one another?

One of the most common ways alerts and user experience scores are used in conjunction is through a feedback loop. An administrator determines which KPI is the largest source of impact and drills down for a clear view of the correlating alerts that have been placed and potentially triggered. The alerts direct the administrator toward the underlying causes of the problem and, finally, to the potential source of impact. After the resolution, the administrator can track the increase in user experience that results from their improvements to see how successful the changes have been.
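The drill-down step of that feedback loop can be sketched roughly as follows. The KPI names, alert labels, and correlation mapping are illustrative assumptions, not SysTrack’s actual data model.

```python
# Hypothetical mapping of KPIs to the alerts that tend to correlate with them.
KPI_ALERTS = {
    "network": ["high latency to file server", "packet loss"],
    "memory": ["page file thrashing", "low available memory"],
}

def drill_down(kpi_impact, active_alerts):
    """Pick the KPI contributing the largest impact and return the
    correlated alerts that are currently active, as a starting point
    for root-cause analysis."""
    worst = max(kpi_impact, key=kpi_impact.get)
    correlated = [a for a in KPI_ALERTS.get(worst, []) if a in active_alerts]
    return worst, correlated

impact = {"network": 35, "memory": 10}          # % impact per KPI
alerts = {"high latency to file server", "disk queue length"}
print(drill_down(impact, alerts))
# ('network', ['high latency to file server'])
```

After remediation, re-computing the score over time closes the loop: a rising score confirms the fix actually improved the end user’s experience.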

End user experience scores provide an overall indicator of the quality end users are experiencing, while alerts provide immediate information on the source of impact. The integration of both tools gives IT administrators an easy and clear way to discover the source of a system’s impact. To learn more on this topic, check out our white paper, Utilizing KPIs and Alarm Methods!

Director Integration for Ask SysTrack

One of the unintuitive results of the progression of technology is the massive proliferation of different sources for the different pieces of information that are critical to managing an environment. So many tools provide a depth of detailed data that the sheer number of them makes it difficult to figure out which one to use and where to find the information within its interface. Information-seeking behavior then takes users across multiple tools with multiple methods of interaction; the net result can be confusion and lost time. This is where cognitive analytics and the ability to ask simple questions can make the difference between solving a problem and bouncing between reporting tools.

The popularity of Ask SysTrack’s recent set of advanced integrations has been eye-opening in showing how pervasive the need for a single, easy-to-use interface for getting contextually relevant answers can be. Because of this, we’ve worked with our partners to provide a single source for answering IT questions, one that provides what’s needed when it’s needed.

At Citrix Summit we’re showcasing one of our most recent examples: plugin integration with Citrix Director. This plugin not only displays SysTrack information in the Director interface, but also provides Director and Citrix related answers to questions that are found in the interface through Ask SysTrack.

The key is providing the Ask SysTrack plugin interface directly on the Director interface home page. Now any IT administrator that makes use of Director has a Watson-powered cognitive helper to answer questions like “What is the user experience of my environment?”

Clicking the link takes them directly into the relevant data in SysTrack. Alternatively, they can also just ask questions about Director.

We’ve also added a User Experience Trend for delivery groups discovered in association with the instance, allowing administrators to view what kind of user experience their end users have been getting alongside the other data presented in Director.

This makes it much easier for administrators to get the key details they need, when they need them, without having to spend time working through multiple interfaces.

For more details, check out a quick video run-through.

Introducing Ask SysTrack for AirWatch

At Lakeside Software our goal has long been to make insightful, high impact analytics readily available to help answer questions and enable better decision making in IT.

In August we took a major step forward in data accessibility with the introduction of Ask SysTrack, in partnership with IBM Watson cognitive services. This Natural Language Processing (NLP) question tool made it possible to find highly specific SysTrack data using nothing but everyday questions, greatly lowering the barrier to entry for all SysTrack tools. A basic introduction to Ask SysTrack was provided in a previous blog post by Ben Murphy. You can download a white paper for more in-depth information on how the tool works.

One of the interesting things we discovered in the intervening months was that Ask SysTrack was getting asked questions it understood but didn’t know the answer to. We had inadvertently trained Ask SysTrack’s AI dictionary to understand nearly every question someone in IT might ask it. The best metaphor for this is being asked for directions to somewhere you don’t know how to get to. Say someone stopped you on the street and asked:

“How do I get to Bob’s burgers?”

You understand they are looking for directions to an eatery named Bob’s burgers. But you don’t know the answer.

Something similar was happening to Ask SysTrack in production – it was getting asked lots of questions about mobile devices.

Since SysTrack is traditionally a desktop analytics tool, it offers only limited visibility into the mobile device space. It’s difficult for users who are unfamiliar with the vast quantities of data available to them through SysTrack and other tools to navigate to the mobility data they need in the moment. But that data was easily found in their EMM console.

Since the most popular EMM tool in the SysTrack Community is AirWatch, we reached out to our friends at VMware with a proposition: let us extend our natural language insight engine to your platform.

One thing led to another, and today we’re introducing Ask SysTrack for AirWatch. Through partnership with VMware AirWatch, the Enterprise Mobility Management leader in the Gartner EMM Magic Quadrant, the Ask SysTrack workspace analytics insight engine now includes Natural Language Processing capabilities for the entirety of the AirWatch platform. This means that the ease of use made possible by the industry-first Ask SysTrack now extends into the mobility space.

Using nothing but simple questions, you can track down otherwise hard-to-find data that typically requires a large amount of familiarity with AirWatch to locate. Say, for instance, that you want to know where to access your compliance policies.

Screenshot showing Ask SysTrack locating compliance policies in AirWatch

Or maybe you want to know how many of your employees use iPhones.

Screenshot showing Ask SysTrack reporting how many employees use iPhones

Once again, what would normally be difficult to find only took a simple question.

The Ask SysTrack tool is available with SysTrack 8.2 while Ask SysTrack for AirWatch is made available through installation of an AirWatch SysTrack Kit.

Have a Question? Ask SysTrack

The overarching theme of all the content we’ve created here has been trying to make it easier to find interesting data and allow organizations to engage in intuitive information-seeking behavior. If there’s anything the outgrowth of chat-based interfaces has taught us, though, it’s that often the most meaningful way to bring users the answers they need is to allow them to craft their own questions in their own terms. So, the logical evolution is to introduce a method that makes it easier to get answers out of SysTrack with simple, natural language.

This is why Lakeside Software has partnered with IBM to use their Watson cognitive API library to introduce Ask SysTrack. This Natural Language Processing (NLP) question tool allows anyone to ask questions they may have for the SysTrack Workspace Analytics platform in basic, conversational form.

For example, say I want to find what my most heavily faulting application is. I can ask a simple question: “What application faults the most?”

Screenshot showing the most heavily faulting applications returned by Ask SysTrack

Just asking that question gives me the most heavily faulting applications in order and even some other related things for me to look into if I’m interested.

Let’s say I wanted to find out specifically what’s wrong with a system.

Screenshot showing recent problems on a specific system

Again, with a simple question I have the ability to know what this user has been having problems with lately.

With the GA of SysTrack 8.2, this feature is now available to everyone, and I’d encourage you to check it out. You can download a white paper that expands on how it all works.