The SysTrack Software Analytics Kit: Software Performance

Monitoring software performance is a vital part of observing software assets. It is driven by the need to understand how well applications are working in the environment and where resources should be directed to improve performance. We’ve created a Software Asset Analytics Kit to make it easy for IT admins to understand and observe software performance and maintain a successful environment. A section of the kit is dedicated to monitoring software performance through key metrics like resource consumption details (CPU, memory, IOPS, network bandwidth) as well as the number and frequency of application faults and hangs. A full understanding of these metrics helps answer daily questions like “why does my application keep crashing?” and “which application takes the longest to load?”

Observing software performance is a vital component of understanding the source of impact to the user experience, preventing that impact from getting worse, and understanding how well an environment works as a whole. Because performance problems are often the first thing an end user notices, they have a direct impact on productivity and user experience. Our performance dashboards make observing aggregated data easy for IT by highlighting trends and details in resource consumption metrics as well as application performance metrics like load time and faults. To identify issues or track performance, IT can choose between dashboards that provide both summary views and detailed, deep-dive looks at application data. To prevent application issues from spreading through the environment, IT can easily see where trends begin to decline, signaling that the end user experience may be deteriorating. After big changes to an environment, such as a new version of Outlook, IT can easily monitor how well the environment is performing by observing resource consumption, user experience, application faults, and similar metrics.

A simple use case illustrates the value of application performance data. Let’s say an IT administrator notices that an application consistently crashes but isn’t sure of the root cause. The Application Faults and Apps Running at Time of Fault dashboards in the kit provide details on crashing applications. They start with the more general dashboard, Application Faults, and search for the application in question in the chart displayed below.

They can now note details such as how many systems the crash is affecting and the number of faults, giving an idea of whether the issue is isolated or widespread. They then move to the Apps Running at Time of Fault dashboard and again search for the crashing application. This dashboard highlights details like the kind of fault that occurred, the faulting module, the time of fault, and more. They can also see the system at the time of fault to understand what other apps were running, along with system stats like resource consumption. This added context provides a much more complete picture of what was happening around the time of the crash.

Proceeding further down the dashboard, they can observe trends in CPU, memory, IOPS, or disk space to help determine the reason for the application fault, as displayed below.

This concludes our coverage of the categories in our newly released Software Asset Analytics Kit. For more details on this topic, read our upcoming white paper, Software Asset Analytics!

The SysTrack Software Analytics Kit: Software Usage

Taking stock of an environment’s software portfolio – what’s installed, what’s being used, what isn’t being used – has consistently been one of the most common use cases of SysTrack. The basic philosophy of SysTrack is to improve the user experience through data-driven business intelligence, and maintaining an efficient, well understood software portfolio is a big part of that. Unused software means you could be paying for unnecessary licenses, and it puts more of a burden on IT through the additional management overhead of an ever-expanding set of installed applications. Meanwhile, the software that is being used needs to be well understood, standardized to recent versions, and delivered through the appropriate mechanism.

At a glance, those examples may seem unimportant to IT, especially if they’re spending the majority of their time fixing issues and responding to help-desk tickets, but being proactive with asset management can dramatically reduce the number of issues and tickets that creep up in the future. A few of the main benefits of proactively tracking software usage are: reclaiming unused licenses to save costs, mitigating security risks by ensuring recent versions and patches are installed, deciding which applications should be published versus installed locally, and identifying business-critical applications for different job roles.

Out-of-the-box datasets displayed through visualizers and reports, available in the standard SysTrack product suite, contain a variety of valuable software data that can put you on the path to realizing those benefits. But given the importance of software asset management, we’ve introduced a Kit that provides focused, interactive dashboards to dig through your software data and provide the insight needed. The Kit contains dashboards related to software performance, usage, and dependencies. In this post, we’ll go over how the content pertaining to usage can be applied to a real-world scenario.

The IT administrator starts off with our Software Portfolio Usage Summary dashboard. It provides an overview of the software package usage within the environment. Right away, the IT administrator can see that among the systems the packages were installed on, very few are actually being used, as displayed below.
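The installed-versus-used comparison the dashboard surfaces reduces to a simple aggregation. The sketch below is illustrative only (the record shape and package name are invented, not SysTrack’s data model), but it shows the underlying arithmetic:

```python
# Illustrative sketch: per-package usage percentage from hypothetical
# (package, system, was_used) inventory records of the kind the
# Software Portfolio Usage Summary dashboard aggregates.
from collections import defaultdict

def usage_percentages(records):
    """Return {package: percent of installed systems that actually use it}."""
    installed = defaultdict(set)
    used = defaultdict(set)
    for package, system, was_used in records:
        installed[package].add(system)
        if was_used:
            used[package].add(system)
    return {
        pkg: 100.0 * len(used[pkg]) / len(installed[pkg])
        for pkg in installed
    }

# Invented example data: installed on four systems, used on one.
records = [
    ("Visio", "pc-01", False),
    ("Visio", "pc-02", True),
    ("Visio", "pc-03", False),
    ("Visio", "pc-04", False),
]
print(usage_percentages(records))  # {'Visio': 25.0}
```

A low percentage like this is exactly the signal that prompts a closer look at who actually needs the license.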

As they proceed down the dashboard, they can view applications within a certain usage percentage, and further down the dashboard highlights all the systems that have the previously selected application installed, as displayed below.

The IT administrator now has the information to start piecing together what each job role requires and how to adjust the licensing accordingly. The IT administrator continues on to the Software Usage for Target System dashboard where they obtain further details of application usage for each system like which applications are most used, as displayed below.

The IT administrator now concludes that the applications that make up the top-level license are not being used by most workers, and the ones that are being used have a very low usage frequency. This leads the IT administrator to replace the low-usage application with a different online application, allowing a lower license level. They notice that while this new license applies to most job roles, one job role requires only the lowest license level. Not only were they able to save the company money, but the environment is now also less vulnerable to impact because it contains only the necessary applications.

Understanding software usage is one of the vital components of observing software assets. We will continue to expand on our final category, Performance, with real-world examples from our Software Asset Analytics Kit to show how valuable observing this data is to maintaining a successful environment.

The SysTrack Software Analytics Kit: Software Dependencies

A key component of observing software assets is understanding software dependencies. To address this, we at Lakeside have developed the SysTrack Software Asset Analytics Kit. A portion of this kit is centered entirely on discovering and monitoring software dependencies within an environment to meet the needs of the IT professional. These dependencies provide insight into the requirements for the proper functioning of software and help answer important questions IT might have, such as “are all my software packages being used?” or “what connections do my applications require?”. The driving force behind understanding dependencies is enabling innovation in software package delivery and, in turn, a more positive end user experience.

Dependencies make it possible to observe what the applications that make up a software package require to function every day. Requirements for software can vary, but the core attributes to monitor are application connections, required system components, compatibility, and application usage. The ability to identify required connections and system components is vital due to potential restrictions such as unusable ports or unsupportable system components.

Let’s say an IT administrator, Joe, is analyzing the software packages he provides and wondering how to make them more optimal for himself and end users: trimming deadweight to reduce install and delivery size, limiting the chance of errant components interfering with one another, and streamlining application connections. The perfect place for him to start is our Software Dependencies Summary dashboard. He notices in the Software Summary panel that there is a software package installed on most systems but used by only half of the systems it’s installed on. This is also clearly highlighted in a graph next to the given data, displayed below.

By diving into one of the detailed dashboards provided by our Software Asset Analytics Kit, he can easily search for the package and see additional details, including the number of associated applications, how many of those applications require connections, and where those connections go. Further down the dashboard, he takes note of which systems are using the software and even the last time it was used. With this information, he can conclude that only certain groups within the company need that software. As he continues to follow the flow of the dashboard, he notices that only some of the applications within the package are being used, and many of the system components required by the unused applications are unnecessary. He can use this information to optimize the software package to include only the necessary applications and components. The image below shows how easily Joe can view this information and reach his conclusions.
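Joe’s trimming step is, at its core, a set operation: a component is removable when only unused applications depend on it. The sketch below is a hypothetical illustration of that reasoning (the application and component names are invented, not real package contents):

```python
# Hypothetical sketch: find package components that can be dropped
# because only unused applications depend on them.
def removable_components(app_components, used_apps):
    """app_components: {app: set of required components};
    used_apps: set of apps actually in use."""
    needed = set()
    for app in used_apps:
        needed |= app_components.get(app, set())
    all_components = set().union(*app_components.values())
    return all_components - needed

# Invented example package: three applications, one unused.
app_components = {
    "editor.exe":   {"core.dll", "spell.dll"},
    "reporter.exe": {"core.dll", "charts.dll"},
    "legacy.exe":   {"oldnet.dll"},
}
used_apps = {"editor.exe", "reporter.exe"}
print(sorted(removable_components(app_components, used_apps)))  # ['oldnet.dll']
```

Note that a shared component like `core.dll` survives as long as any used application still requires it, which is why per-application dependency data matters rather than a flat component list.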

Finally, he ensures that the applications that do require connections are using approved ports, guaranteeing the security of the environment and potentially simplifying network traffic. Through proper use of this dashboard, Joe was able to navigate the pertinent data, determine what to trim from the software package, and limit its delivery to only the groups that require it. He even confirmed that the software package makes connections only through approved network ports.

Dependencies are just one of the three key categories in observing software assets. We will continue to expand on the other two categories, Usage and Performance, with examples taken directly from the SysTrack Software Asset Analytics Kit to show the importance and practicality of monitoring this data for maintaining a successful environment.

Software Asset Optimization with SysTrack

Workplace analytics encompasses a vast amount of end user computing information collected from a variety of sources, and a vital component of the topic is the observation of software assets. Because this is a broad topic, we’ve chosen to break it into three key categories: performance, usage, and dependencies. Software performance monitoring is driven by the need to understand how well applications are working in the environment. Software usage is predicated on the idea of optimizing licensing and delivery to provide only the necessary applications. The last category, dependencies, is vital for understanding what pieces are necessary for software to function.

Software performance is itself a complex topic, but broadly the idea is to identify the answer to key questions like “why does my application keep crashing?” and “what applications take the longest time to load?” This incorporates key metrics like resource consumption details (CPU, memory, IOPS, network bandwidth) as well as number and frequency of faults or hangs. In many ways, this is one of the first items thought of in the context of software asset analytics, and it’s often one of the first things an end user notices about the environment. Diagnosing performance issues and understanding the resource consumption for the average user can help steer hardware requisition and delivery methods. Clearly, though, a preliminary question in many cases is exactly what packages belong in the environment.
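The “longest time to load” question above reduces to aggregating load-time samples per application and ranking them. A minimal sketch with invented sample data (the application names and timings are illustrative, not real telemetry):

```python
# Illustrative sketch: rank applications by average load time from
# hypothetical (app, load_seconds) samples of the kind performance
# monitoring collects.
from collections import defaultdict
from statistics import mean

samples = [
    ("Outlook", 6.1), ("Outlook", 5.9),
    ("Excel", 2.2),
    ("CAD Suite", 14.5),
]

by_app = defaultdict(list)
for app, seconds in samples:
    by_app[app].append(seconds)

# Slowest first: these are the apps worth investigating.
ranked = sorted(by_app, key=lambda a: mean(by_app[a]), reverse=True)
print(ranked)  # ['CAD Suite', 'Outlook', 'Excel']
```

In practice such aggregates are most useful when tracked over time, so a regression after an update stands out against the baseline.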

Accurately observing software usage can be invaluable to a company. The ability to know which applications are used versus installed directly relates to the distribution of licenses, and that’s a direct cost driver. Another consideration is support cost savings made possible by making images less complicated. Intrinsic to rationalization is a host of potential ways to make sure that the delivery of applications to end users is as closely tailored to their needs as possible. There are some technical considerations to this as well, not the least of which is exploring the components or backend connections required for software in the environment.

Gaining insight into the components a given package requires to function is very important to choosing appropriate delivery mechanisms and options. Application compatibility concerns driven by incompatible components, fundamentally unsupportable system components, and complex networking requirements are all key to understand. Identifying what applications call on to function day to day dictates many of the decisions IT needs to make to modernize and continually innovate with delivery options.

We’ll be going into more depth on each of these categories as we release our upcoming Software Asset Analytics Kit. With each area, we’ll expand on some real use cases and provide some real-world examples of how each provides essential information for an environment.

Digital Experience Management and Event Correlation with SysTrack

SysTrack provides the ability to score an environment’s end user experience using digital experience management metrics. The resulting end user experience score provides a clear view of the end user’s experience in that environment and is composed of a series of Key Performance Indicators (KPIs). These KPIs are structured to point toward any problems in the environment that may affect the end user. The key to the philosophical approach with SysTrack, though, is joining this scoring to event correlation and analysis through proactive alerts. These proactive alerts tie the overarching score to triggered, targeted events to provide a fuller, easier to understand portrait of the IT environment.

This starts with our end user experience score, which is best thought of as a simple grade. The score ranges from 0 to 100, with 100 implying the environment is providing the best end user experience. The score is composed of 13 different KPIs that represent aggregate data points on potential sources of impact. These roughly fall into the categories of resource issues, networking problems, system configuration issues, and infrastructure problems. The result is a normalized score that makes it easy to understand, across a broad set of systems, what kinds of performance issues are occurring. Even more importantly, it provides a platform for long-term trend analysis to see the trajectory and evolution of that experience over time. The image below displays an overall view of the end user experience of the environment and the ability to monitor the evolution of those impacts over time.
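To make the idea of composing one grade from many KPIs concrete, here is a minimal sketch of a weighted composite score. The KPI names and the equal weighting are assumptions for illustration only; they are not SysTrack’s actual 13 KPIs or scoring model:

```python
# Minimal sketch (assumed KPI names and weights, not SysTrack's model):
# a 0-100 composite score as a weighted average of per-KPI scores.
def experience_score(kpis, weights=None):
    """kpis: {name: value in 0..100, where 100 = no impact}."""
    if weights is None:
        weights = {name: 1.0 for name in kpis}  # equal weighting by default
    total = sum(weights[name] for name in kpis)
    return sum(kpis[name] * weights[name] for name in kpis) / total

# Invented snapshot: one weak KPI drags the overall grade down.
kpis = {"cpu": 92.0, "memory": 88.0, "network_latency": 70.0, "disk": 98.0}
print(round(experience_score(kpis), 1))  # 87.0
```

The trending described above is then just a series of these snapshots over time, with the per-KPI values preserved so a drop in the composite can be traced back to its category.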

For more operational items that require an immediate response, the alerting mechanism comes into play. Alerts are an active component triggered by events generally correlated with negative impact. Alert items roughly correspond to the end user experience score KPIs to help direct IT administrators toward resolving problems. The image below demonstrates an environment with active alerts.
The key piece is correlating these items to that impact in a meaningful way. So, the big question is this: how do they work with one another?

One of the most common ways alerts and user experience scores are used in conjunction is through a feedback loop. An administrator determines which KPI is the largest source of impact and drills down for a clear view of the configured and potentially triggered correlating alerts. The alerts direct the administrator toward the underlying causes of the problem and, finally, to the potential source of impact. After the resolution, the administrator can track the increase in the user experience score to see how successful their changes have been.

End user experience scores provide an overall indicator of the quality end users are experiencing, while alerts provide immediate information on the source of impact. Together, the two tools give IT administrators an easy, clear way to discover the source of a system’s impact. To learn more on this topic, check out our white paper, Utilizing KPIs and Alarm Methods!