Episode 1: Diving into Windows 10

Lifeguard IT is a podcast about the intersection between enterprise IT and the people, business processes, and technologies that define the modern workspace. Grab a towel, slather on some sunscreen, and tune in as hosts Heather and Linda discuss how IT can protect their users while making the computing experience more like a sunny day at the beach.

In the first episode, hosts Heather and Linda discuss the deprecation of MS Paint and their impressions of Windows 10 as users. Special guest Ben Murphy, Director of Applied Engineering at Lakeside Software, joins in to explore the current state of Windows 10 from an enterprise IT perspective.

Resources

The Verge: “Microsoft Paint isn’t dead yet, will live in the Windows Store for free”

Intel Atom Processor Users Dropped from Windows 10 Feature Updates


Harmonize Your Resource Stack with IT Asset Optimization

Within every innocuous enterprise workstation exists untapped potential for savings and better system performance.

Over a decade of working on large-scale desktop transformation projects, I’ve seen the cost of supplying a managed desktop to a user gradually erode. At the same time, the complexity of these estates has dramatically increased, with new technologies and delivery platforms constantly evolving and being introduced, presenting opportunities to further lower the cost of delivering or managing systems.

Yet, many organizations haven’t realized the cost savings brought on by the so-called consumerization of IT. Rather, IT and users have been pitted as adversaries with conflicting objectives. This divide has grown as users have become accustomed to personal devices with “unlimited apps”, expecting their work devices to have equal performance, usability, and app availability. Just this morning I counted 78 “personal applications” installed on my cell phone, contrasted with the dozen or so applications on my laptop that I require to complete my work tasks. One thing’s certain – we are in application overload.

IT asset optimization is the answer to the dizzying influx of new technology in the workplace. By distilling hardware and applications down to what users need, organizations can streamline operational costs and focus IT efforts on improving the end-user experience.

As in any optimization discussion, the much-overused management phrase of “do more with less” comes out to play. The problem is, most organizations simply don’t know what they currently have (and how it’s being used), so they’re unable to even consider “doing more with less”.

Starting with what you’ve got (or think you’ve got)

Let’s start by declaring that simple inventory lists on spreadsheets do not work! When managed by procurement teams within organizations, such lists can offer a picture of what has been purchased, but in reality, how often are these updated by other teams in the organization?

For example, a Service Desk team might add more memory to a machine to fix an immediate problem and never update the spreadsheet, so the hardware change goes unaccounted for. Or devices change hands via HR as part of a leavers and joiners process, but the procurement team isn’t notified. Clearly, spreadsheets are of dubious reliability and won’t bail you out at audit time – we need to accurately establish what is deployed and used across the estate.

Identifying Waste

Ensuring that software usage stays below the number of procured licenses obviously mitigates the risk of the hefty fines imposed on organizations following a software audit (which we are seeing reported in the press more frequently). On the flipside, being over-licensed for software can also be a huge opportunity to reduce IT spend. With software representing roughly 30% of desktop spend and ongoing software assurance averaging 20% of that cost, it is vital to ensure that, as organizations, we are paying only for what is being used.

We can do so by harnessing workspace analytics data, which provides a vast amount of end-user-computing-related information. With the vast array of information captured around the user and device, workspace analytics provides a complete picture not only of what is installed on a device but also of how and when applications are used. This gives the administrator an understanding of how frequently applications are used (if at all), allowing them to make a variety of decisions pertaining to IT asset optimization, including provisioning, license optimization, and how reliably software packages can be delivered. Ultimately, this can lead to reduced licensing costs by removing applications and reclaiming licenses that see infrequent use.
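As a rough illustration of the kind of decision this data supports – the data shape below is invented for the example, not a real product’s schema – per-device usage records can be filtered to flag licenses that are candidates for reclamation:

```python
from datetime import date, timedelta
from typing import NamedTuple, Optional

class AppUsage(NamedTuple):
    device: str
    application: str
    last_used: Optional[date]       # None = installed but never launched
    minutes_last_90_days: int

def reclamation_candidates(records, idle_days=90, min_minutes=30):
    """Flag installs that were never used, or barely used in the window,
    as candidates for removal and license reclamation."""
    cutoff = date.today() - timedelta(days=idle_days)
    return [r for r in records
            if r.last_used is None
            or (r.last_used < cutoff and r.minutes_last_90_days < min_minutes)]

# Hypothetical inventory exported from a workspace analytics tool
records = [
    AppUsage("LAPTOP-014", "Visio", None, 0),
    AppUsage("LAPTOP-014", "Excel", date.today(), 2400),
    AppUsage("DESKTOP-201", "Project", date(2017, 1, 3), 12),
]

for r in reclamation_candidates(records):
    print(f"{r.device}: '{r.application}' license is a reclamation candidate")
```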

Licensing can also be streamlined if administrators notice that all end users require roughly the same software portfolio, which could be covered more effectively by a different license – with potentially dramatic financial benefits.

As the responsibility for software upkeep shifts from IT to service providers, it is equally vital for IT to track whether providers are meeting their SLAs to ensure that organizations aren’t paying for applications that go unused due to service-side performance issues.

Identifying Underutilized Systems and Intelligent Procurement

In a similar manner to exploring application usage to establish where licenses can be reclaimed, workspace analytics provides the insight to establish how devices and compute resources are being utilized across the estate. Understanding key performance metrics on each device allows for informed decisions not only on whether new hardware is required, but also on whether the user or device is a suitable candidate for a lower-cost platform like presentation virtualization or, in the case of a deskbound user, whether a less expensive device can be used instead of a mobile laptop.

With the increasing hype around cloud-based desktop delivery and true DaaS models, understanding “by the minute” resource loading gives administrators complete visibility into the resources both inside and between cloud-based systems. This allows them to fully understand the new cost components of these platforms and make informed, metric-based decisions about which users are suitable to move to them. This kind of informed decision-making ensures that the costs of desktop transformation are known before procurement or a move to a new platform.

While the need to account for and manage assets is typically established from a risk-aversion angle, there is more to be gained from IT asset optimization than license utilization management alone. By using workspace analytics to gather and maintain accurate, real-user data, IT teams are positioned to streamline costs through need-based procurement and to maintain visibility into users’ experiences with cloud-delivered services by tracking SLAs and monitoring the performance of key technologies. With this user-centric viewpoint, IT and end users are on the same side. The same initiatives that result in simplified support for IT also align with what users need in order to be productive.

As we’ve noted before, “End Users Are People Too”. The IT asset optimization capabilities of workspace analytics give administrators another avenue through which to customize procurement and enhance the end-user experience.

Read more about the benefits of workspace analytics

Optimizing the IT Service Desk

As a former university IT employee, I’ve interacted with a lot of users with problems (and emotions) of varying difficulty. Excited parents would set up their brand-new tablets only to forget the password minutes later. Confused professors hauled in their 15-year-old laptops wondering why they weren’t running as quickly as the ones on display. Frustrated students strode in with network connectivity issues, exposing me to their embarrassing internet histories and laptop screensavers.

I worked as a Tier 1 employee and often found myself referring these people up the levels of the IT service desk chain, sometimes feeling like I was only there to cause more frustration. With a student crying and a parent yelling, one can’t help but think that there has to be a better way. However, if we want to optimize a process, we must first understand it.

What is the service desk process?

The IT service desk provides information and support to computer users. It involves a process that is broken down into three tiers:

Tier 1: The initial call for help. This level mainly provides simple fixes such as resetting passwords or updating computers. Often users spend time describing the issues they’ve been encountering while technical support tries to resolve them. This tier can solve basic issues, but if the problem remains unresolved, it is escalated to Tier 2.

Tier 2: This level offers more in-depth support. This technical support team has more tools and skills to address issues that couldn’t be solved by Tier 1 support.

Tier 3: This is the highest (and most expensive) level of service desk support. These are often skilled engineers considered experts in the subject. It is not uncommon for this level to try to reproduce the problem to determine whether the issue exists beyond that specific user.

Frequently, users get stuck within these layers of service desk support, wasting money, time, and patience for themselves, for IT staff, and for the organizations that employ the support staff.

IT service desk optimization has been a goal among companies for many years. Optimizing help desks reduces the number of tickets, hastens repairs, saves time and money, and leads to increased productivity. The process of requesting support from IT is a tale as old as time. How can companies take the next step in improving their service desk operations? The key is making use of a rich set of data.

Reduce/Prevent/Resolve with Real User Data

The best way to optimize the service desk process is to take advantage of the most privileged view in IT: the endpoint. Commonly, IT staff attempt to solve issues after the time of impact and without any historical record of what actually happened (except the account of the end user, which can vary in accuracy). With the ability to track user data from the endpoint, IT can see what was happening on a system at the time of impact as well as the full current state of the system. With this new view of the end user, IT can reduce the number of tickets, address problems proactively, and, most importantly, speed up the resolution process.

Reduce

Endpoint insights provide IT with the tools they need to solve an issue at a Tier 1 level instead of pushing it up levels and creating an overflow of tickets. By logging historical data, IT can understand the state of a system at the time of impact as well as its current performance. This valuable information can lead to discovery of the root cause of impact. Through this approach, support teams have access to more actionable information from the start, helping to improve first call resolution.

Prevent

Continuously monitoring key performance indicators among users within the environment gives IT the power to address issues before they occur. By setting up alarms that trigger when performance deviates from an acceptable level, IT can begin automating problem resolution before a user notices an issue, leading to fewer tickets.
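A minimal sketch of that idea, with thresholds and metric names invented for illustration rather than taken from any particular monitoring product:

```python
import psutil

# Hypothetical acceptable levels; in practice these come from a baseline of
# normal behavior for the user or persona, not from fixed numbers.
THRESHOLDS = {
    "cpu_percent": 90.0,       # sustained CPU saturation
    "memory_percent": 85.0,    # memory pressure
    "disk_percent": 95.0,      # system drive nearly full
}

def sample_metrics():
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

def check_alarms(metrics):
    """Return an alarm message for every metric above its acceptable level."""
    return [f"{name} at {value:.0f}% exceeds {THRESHOLDS[name]:.0f}%"
            for name, value in metrics.items() if value > THRESHOLDS[name]]

if __name__ == "__main__":
    for alarm in check_alarms(sample_metrics()):
        # In a real deployment this is where automated remediation would start.
        print("ALERT:", alarm)
```

In practice, the remediation step behind an alarm might clear temporary files, restart a misbehaving service, or open a ticket automatically on the user’s behalf before they ever notice a problem.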

Resolve

By viewing performance issues at the endpoint, IT can accelerate the resolution process. For example, Tier 1 support can easily identify problems by being able to see the data for a specific user instead of going through the common, generic process of ‘How long have you had your laptop?’ and ‘When did this problem occur?’. Instead, they can easily see where the fault happened, at what time, and with what application. This will reduce redundancy and allow the service desk to identify and resolve the problem faster.

IT service desk optimization is a key component of workspace analytics. Implementing workspace analytics results in increased productivity in the service desk and beyond. When the service desk is optimized, not only are support costs lowered, but end-user issues are resolved faster. This leads to increased confidence in support staff, encouraging users to start reporting issues (instead of ignoring them or fixing them poorly themselves), resulting in more problems being resolved. With fewer system issues, end-user experience improves, which IT can track through a quantitative score. Shorter time-to-resolution means that users spend less time dealing with system issues and enjoy better experiences with their technology.

Learn More About Service Desk Optimization

Three Strategies for Stellar Root Cause Analysis

Technological difficulties are an inescapable part of everyday life and they can be massively inconvenient. Just ask any of the United Airlines passengers from the hundreds of flights that were delayed earlier this year due to computer problems. In that instance (a notable, but not isolated example), the inability to quickly resolve a system issue directly wasted hours of people’s time all over the country. We see smaller examples of frustrating computer failures in our day-to-day lives, and the route to identifying and solving an issue is often fraught with turbulence.

These issues are so persistent that, in the Information Technology world, their resolution is integral to the success of entire organizations, and a full-time job for service desk professionals. Surprisingly, even at the professional level, many are not equipped with the technology to help them efficiently resolve issues at the source. Even in the largest IT environments, teams have simply replaced costly hardware only to find that the same issues keep arising, building mistrust and frustration among end users. But it doesn’t have to be this way…

Root cause analysis: a better approach

Root cause analysis is the process of examining the underlying issues that caused an error in a system. In an IT environment, this means analyzing the specific hardware and software performance indicators at the time of the error, and determining whether it was user action, a hardware or software fault, or an external problem that caused the failure.

How to perform stellar root cause analysis

1. Treat the root of the problem, not the symptoms

Treating the cause of a problem, rather than the symptoms, is what defines root cause analysis. Going straight to the source ensures that the same issue doesn’t keep happening, which can lead to lower ticket volume for helpdesks, greater system uptime, and higher user productivity.

Consider a case in which an end-user calls the helpdesk to report a slow computer. The technician who receives the call opens her system administration software to take a closer look. She confirms that the machine is having an issue by observing a low user experience score. The technician knows that a slow computer could be caused by a variety of reasons. To begin the process of finding the root cause of the slowness, the technician considers the best way to see quantifiable evidence of system slowness.

2. Lead with endpoint data

Today’s computing environments are incredibly complex, and critical IT functions might or might not be managed in house. This is why, today, the endpoint is the most privileged point of view for IT.

In our example, the technician uses her software to discover that a program is using an unusually large amount of RAM. She’s able to do this via an endpoint agent installed on the end-user’s machine, which allows her to instantly see a variety of quantitative metrics about the computer. Investigating further, she finds that a software patch to fix this issue was recently released, but has not been installed yet. After installing the patch, the memory leak is fixed, and the user experience score improves dramatically.

The root cause of the machine’s slowness was an uninstalled software patch. By determining that it was a software issue, and not a hardware or networking problem, the technician didn’t need to escalate or pursue the ticket further. Her ability to see metrics about the computer straight from her desk allowed her to quickly diagnose, investigate, and resolve the issue.
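The agent in this example is SysTrack’s, but the first check the technician performed – finding which process is consuming an unusual amount of RAM – can be sketched with a generic endpoint query, shown here using the psutil library purely as a stand-in:

```python
import psutil

def top_memory_consumers(limit=5):
    """Return the processes using the most resident memory on this endpoint."""
    procs = []
    for p in psutil.process_iter(["name", "memory_info"]):
        info = p.info
        if info.get("memory_info") is None:   # access denied for some system processes
            continue
        procs.append((info["memory_info"].rss, info.get("name") or "unknown"))
    procs.sort(reverse=True)
    return procs[:limit]

for rss, name in top_memory_consumers():
    print(f"{name:30s} {rss / (1024 ** 2):8.1f} MB")
```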

3. Automate. Automate. Automate.

For years we have known that when it comes to IT service management, being proactive is better than being reactive. But organizations are still struggling to reach these proactive goals. Why? The ability to be proactive depends directly on the quality of the data being analyzed and on the tools used to predict issues and prescribe fixes based on that data.

Read About One University’s Success with Proactive Monitoring

Looking ahead, the technician from our example can automate the resolution of this issue by utilizing the endpoint agents. By identifying other machines with the uninstalled patch and sending out an update to all the associated agents, she ensures that no more tickets reach her desk for the same problem.
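As a hedged illustration of that last step – the inventory feed and patch identifier below are hypothetical stand-ins, not a real product’s API or a real KB number – finding which machines still lack the patch might look like this:

```python
# Sketch of fleet-wide patch verification with hypothetical data.
REQUIRED_PATCH = "KB0000000"

def machines_missing_patch(inventory):
    """Return devices whose set of installed patches lacks the required fix."""
    return [device for device, patches in inventory.items()
            if REQUIRED_PATCH not in patches]

# Installed-patch data as it might be reported back by endpoint agents
inventory = {
    "LAPTOP-014": {"KB0000000", "KB1111111"},
    "DESKTOP-201": {"KB1111111"},
    "DESKTOP-202": set(),
}

for device in machines_missing_patch(inventory):
    # In practice, queue the patch for deployment through the agent or the
    # organization's software distribution tooling.
    print(f"Schedule {REQUIRED_PATCH} for {device}")
```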

Root cause analysis as a key component of Workspace Analytics

Workspace analytics is a critical component of any modern, metric-driven IT workplace. It allows IT professionals to make important decisions based on quantitative measures rather than subjective estimates. Excellent root cause analysis by technicians of all seniority levels is a key aspect of workspace analytics. Efficient analysis and problem solving can lead to lower overall IT costs by reducing excessive resource provisioning, better helpdesk KPIs by lowering the number of escalated tickets, and higher end-user satisfaction and productivity due to better user experience scores.

Learn More About Service Desk Optimization

Explaining and Expanding the SLA Conversation

Service Level Agreements (SLAs) come in many forms and descriptions in life, promising a basic level of acceptable experience. Typically, SLAs have some measurable component, i.e. a metric or performance indicator.

Take, for example, the minimum speed limit for interstates. Usually, the sign would read “Minimum Speed 45 mph”. I always thought the signs existed to keep those who got confused by the top posting of 70 mph (considering that to be the minimum) from running over those who got confused thinking 45 mph to be the maximum.

It turns out the “minimum speed” concept is enforced in some states in the U.S. to prevent anyone from impeding traffic flow. For those who recall the very old “Beverly Hillbillies” TV show, I’ve often wondered if Granny sitting in a rocking chair atop a pile of junk in an open bed truck, driving across the country might be a good example of “impeding the flow of traffic” at any speed. Although, from the looks of the old truck, it probably couldn’t manage the 45 mph minimum either.

In the world of IT, there are all sorts of things that can “impede the flow” of data transfer, data processing, and/or data storage. While there’s nothing as obvious as Granny atop an old truck, there are frequently Key Performance Indicators (KPIs) that could indicate when things aren’t going according to plan.

Historically, IT SLAs have focused on a Reliability, Availability, and Serviceability (RAS) model. While not directly related to specific events/obstacles to optimum IT performance, RAS has become the norm:

  • Reliability – This “thing of interest” shouldn’t break, but it will. Let’s put a number on it.
  • Availability – This “thing of interest” needs to be available for use all the time. That’s not really practical. Let’s put a number on it.
  • Serviceability – When this “thing of interest” breaks or is not available, it must be put back into service instantly. In the real world, that’s not going to happen. Let’s put a number on it.

In the IT industry, there exist many creative variations on the basic theme described above, but RAS is at the heart of this thing called SLA performance. The problem with this approach from an end-user standpoint is that it misses the intent of the SLA, which is to ensure the productivity/usefulness of “the thing of interest”.  In the case of a desktop, that means ensuring that the desktop is performing properly to support the needs of the end user. Thus, the end user’s productivity/usefulness is optimized if the desktop is reliable, available, and serviceable… but is it really?

Consider the following commonplace scenarios:

  • The desktop is available 100% of the time, but 50% of the time it doesn’t meet the needs of the end user, e.g. it has insufficient memory to run an Excel spreadsheet with its huge, memory-eating macros.
  • A critical application keeps crashing, but every call to the service desk results in, “Is it doing it now?” After the inevitable “No” is heard, the service desk responds, “Please call back when the application is failing.” This kind of behavior frequently results in end users becoming discouraged and simply continuing to use a faulty application by frequently restarting it. It also results in a false sense of “reliability” because the user simply quits calling the service desk, resulting in fewer incidents being recorded.
  • A system’s performance is slowed to a crawl for no apparent reason at various times of the day. When the caller gets through to the service desk, the system may/may not be behaving poorly. Regardless, the caller can only report, “My system has been running slowly.” The service desk may ask, “Is it doing it now?” If the answer is “Yes”, they may be able to log into the system and have a look around using basic tools, only to find none of the system KPIs are being challenged (i.e. CPU, memory, IOPs, storage, all are fine). In this scenario, the underlying problem may have nothing to do with the desktop or application. Let’s assume it to be the network latency to the user’s home drive and further complicate it by the high latency only being prevalent during peak network traffic periods. Clearly, this will be outside the scope of the traditional RAS approach to SLA management. Result: again a caller who probably learns to simply tolerate poor performance and do the best they can with a sub-optimized workplace experience.

So, how does one improve on the traditional RAS approach to SLA management? Why not monitor the metrics known to be strong indicators of a healthy (or not-so-healthy) workstation? In this SLA world, objective, observable system performance metrics are the basis for measuring a system’s health. For example, if the CPU is insufficient, track that metric and determine to what extent it is impacting the end user’s productivity. Then do the same for multiple KPIs. The result is a very meaningful number that indicates how much of a user’s time is encumbered by poor system performance.
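For example, one very simple way to turn per-KPI observations into that single number is to measure, for each KPI, the fraction of sampled time it spent outside its acceptable range and combine those fractions. The sketch below uses invented weights and is not any vendor’s actual scoring formula:

```python
# Invented weights expressing how strongly each KPI is assumed to affect
# productivity; not any vendor's actual scoring model.
KPI_WEIGHTS = {"cpu": 0.3, "memory": 0.3, "disk_io": 0.2, "network_latency": 0.2}

def impacted_time_score(degraded_fraction):
    """Combine per-KPI 'fraction of time degraded' values into a 0-100 score,
    where 100 means no observed time was encumbered by any KPI."""
    impact = sum(weight * degraded_fraction.get(kpi, 0.0)
                 for kpi, weight in KPI_WEIGHTS.items())
    return round(100 * (1 - impact), 1)

# e.g. CPU was saturated for 20% of the workday and network latency was
# outside its baseline for 10% of it:
print(impacted_time_score({"cpu": 0.20, "network_latency": 0.10}))  # 92.0
```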

In the case of SLAs based on observable system KPIs, once a baseline is established, variations from the baseline are easily observable. Simply counting system outages and breakages doesn’t get to the heart of what an IT department wants to achieve. Namely, we all want the end user to have an excellent workspace experience, unencumbered by “impeded flow” of any type. The ultimate outcome of this proposed KPI-based (rather than RAS-based) SLA approach will be more productive end users. In future blogs, I will expand on how various industries are putting a KPI-based SLA governance model into practice.

Learn More About Maximizing User Productivity

How Can IT Teams Catch Incompatibilities Before Systems Are at Risk?

Millions of PCs currently running Windows 10 will lose feature support in 2018 due to incompatible drivers, according to ZDNet. The issue affects systems with certain Intel Atom Clover Trail processors that were designed to run Windows 8 or 8.1 but were offered free OS upgrades as part of Microsoft’s Windows 10 push. The support loss is due to incompatibility with the Windows 10 Creators Update, and as of this writing the devices in question will not receive further Windows 10 updates from Microsoft. [Update: As reported by The Verge, Microsoft has said that the devices will continue to receive security patches through 2023, but will not be included in feature updates.]

The affected PCs are consumer-level devices and enterprises are therefore unlikely to be impacted by this loss. However, there is no guarantee that other devices won’t lose support due to similar circumstances in the future. The ZDNet article cites Microsoft’s device support policy for Windows 10 that contains a footnote stating, in part, “Not all features in an update will work on all devices. A device may not be able to receive updates if the device hardware is incompatible, lacking current drivers, or otherwise outside of the Original Equipment Manufacturer’s (‘OEM’) support period.”

Determining the hardware specifications of a system is easy on the individual level, but how can companies ensure that their employees aren’t at risk of continuing to run unsupported hardware now or with future Windows 10 updates?

A workspace analytics solution, such as Lakeside Software’s SysTrack, collects and analyzes data about systems and user behavior that fast-tracks the discovery process. With this functionality, IT can easily answer questions such as whether any of the unsupported Intel Atom processors are running on their Windows 10 systems.

SysTrack view of Intel processors on different systems in an environment
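For a single machine, the same question can be answered with a quick local lookup. The sketch below queries WMI through PowerShell and checks the result against the Clover Trail Atom SKUs commonly cited in the press coverage (treat the SKU list as an assumption to verify against Microsoft’s guidance); a workspace analytics tool performs this kind of lookup across every enrolled device rather than one machine at a time:

```python
import subprocess

# Clover Trail Atom SKUs commonly cited as affected (assumption; verify).
AFFECTED_MODELS = ("Z2760", "Z2520", "Z2560", "Z2580")

def local_cpu_name():
    """Read the processor name from WMI via PowerShell (Windows only)."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "(Get-CimInstance Win32_Processor).Name"],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

cpu = local_cpu_name()
if any(model in cpu for model in AFFECTED_MODELS):
    print(f"WARNING: {cpu} will not receive further Windows 10 feature updates")
else:
    print(f"{cpu}: not affected by this issue")
```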

As we’ve seen with recent ransomware outbreaks, running an unsupported version of an OS puts systems at greater risk of attack. Upgrading incompatible hardware in your environment before it loses support will likely be a critical part of Windows 10 management strategies moving forward.

Assess Your Environment’s Compatibility with Windows 10

End Users Are People Too

Companies are finding that the traditional approach of a four-year, one-size-fits-all technology refresh cycle no longer works for today’s tech-charged workforce. For some employees, that cycle is too long and limits their ability to be productive by keeping them from the latest hardware and applications that they’re accustomed to in their personal lives. Other workers are less demanding, and a refresh may arrive years too early for them, resulting in unnecessary system downtime and wasteful spending.

In theory, surveying employees about what technology they use and need to be most productive would result in harmonious unions between people and technologies. However, this ideal scenario breaks down pretty quickly when you consider the time it would take to process that feedback at the enterprise level. And, even if you could, does the user really know best? The average user isn’t going to be able to name every application they’ve interacted with, provide an unbiased portrayal of their system performance, or be willing to disclose their use of Shadow IT. Not to mention that people change job roles and leave companies frequently, which quickly undoes any effort to match resources to those individuals.

Thankfully, there is a better approach that will allow you to make purchasing and provisioning decisions based on facts rather than user perception. While the basic concept behind this approach may sound familiar to you, the addition of collection and analysis of real user data makes all the difference between a time-intensive effort with minimal returns and an ongoing way of tailoring end-user experience improvements to employee workstyles.

A Personalized Approach to IT

Continuous user segmentation, also known as personas, is a way of grouping users based on their job roles, patterns, behaviors, and technology. Personas provide a meaningful lens for IT to understand what different types of users need to be productive, allowing IT to optimize assets accordingly.

Workspace analytics software for IT automates the segmentation process and continues to assess user characteristics and experiences to update groupings based on quantitative metrics. As a result, once persona groupings are defined, IT can focus on addressing the needs of different groups and let the software do the work of updating the populations within each persona. This functionality is key to any Digital Experience Monitoring strategy.

It Pays to Segment Users Right

Overlooking personas can lead to over- or under-provisioning assets for a job role, which can be costly to a company in several ways. Over-provisioning licenses wastes a company’s money, while under-provisioning can become a nightmare for IT administrators. Under-provisioning encourages users to install their own applications and personally optimize their profiles, and all of those miscellaneous applications can burden IT administrators with a multitude of unique problems for each user and application. Applications that users install may also be incompatible with each other or with the rest of the workspace, making it harder to share files.

Optimizing assets with the aid of personas can enable an increase in productivity. With personas, each job role can be catered to uniquely while provisioning remains consistent. Each job role, based on real user data, can be provisioned the licenses and applications that cater to its needs. This prevents users from feeling the need to install their own versions of missing applications, ultimately allowing IT administrators to limit potential application or license errors.

Segmenting Users in Practice

Using common persona categories, a company may have deskbound users who are provisioned with expensive laptops when a desktop would do, or knowledge workers with expensive i7 CPUs when a PC with an i5 or i3 makes more sense. We have also had customers report finding that their power users needed to be refreshed every year because of the productivity improvement, while their task workers didn’t need a refresh for as long as five years.

Using personas to segment the end-user environment for a targeted refresh allows an enterprise to provide the right end-user device for a given end user based on their CPU consumption, critical application usage, network usage, and other key metrics. The benefits are numerous and include reduced cost, higher end-user productivity, better security, and a device custom-fit to the end user’s needs.

Learn more about Enterprise Personas

Introducing the SQL Server Administration Kit

Database administration is an important role within IT. Ensuring the backend infrastructure is available and functioning well is critical to end users. If a SQL Server that’s hosting the backend of a Citrix XenApp farm runs out of disk space, users won’t be able to launch any new sessions, which would cause massive problems for any company. It’s a simple example with big consequences, but database administration is often about avoiding the big consequences by watching the little things. SysTrack natively collects a lot of great SQL Server-related data, and we’ve launched a SQL Server Administration Kit to provide some focused dashboards and reports to visualize that data. Here’s an overview of the content you’ll find in the Kit.

SQL Server Inventory and Overview

This dashboard provides a concise overview of the observed SQL Servers. Having this kind of information readily available and nicely summarized makes it easy to keep track of your SQL assets. The dashboard includes basic resource allocation, database configuration details, and system health observations and trends. A great use case for this dashboard is checking resource allocation against the system health. If, for example, memory is the leading health impact, you can instantly see the allocated memory and decide if adding more makes sense. Also included is a drill-down to SysTrack Resolve, making the jump from basic observations like that to diving into more detailed data as easy as double-clicking the system name.

SQL Server Performance 

Another useful dashboard, this one focuses on what has impacted system health over the past week, along with trended and aggregated data for key SQL performance metrics over the same period. These SQL-specific metrics add another level of detail to the standard performance metrics and provide context around why certain aspects of system health are trending the way they are. Available metrics are errors/sec, buffer cache hit ratio, logical connections, user connections, full scans/sec, index searches/sec, page life expectancy, and batch requests/sec.

SQL Server Overview

This SSRS report has similar data to the SQL Server Inventory and Overview dashboard. It includes the system health, resource allocation, and operating system. Having this data available as a formatted, static report makes it very easy to quickly view the data as well as export to Excel, PDF, or Word for offline use.

Summarizing data and providing use-case-driven content packs is the whole idea behind our Kits, and we’re happy to have expanded the available Kits to include SQL administration. Making IT easier and more data-driven is our goal, and we’ll keep improving and expanding our Kits in pursuit of that goal!

Foundations of Success: Digital Experience Monitoring

We’ve all seen the rapid evolution of the workplace: the introduction of personal consumer devices, the massive explosion of SaaS providers, and the gradual blurring of the lines of responsibility for IT have introduced new complications to a role that once had a very clearly defined purview. In a previous post, we discussed quantification of user experience as a key metric for success in IT, and, in turn, we introduced a key piece of workspace analytics: Digital Experience Monitoring (DEM). This raises the question, though: what exactly is DEM about?

At its very heart, DEM is a method of understanding end users’ computing experience and how well IT is enabling them to be productive. This begins with establishing a concept of a user experience score as an underlying KPI for IT. With this score, it’s possible to proactively spot degradation as it occurs, and – perhaps even more importantly – it introduces a method for IT to quantifiably track its impact on the business. With this as a mechanism of accountability, the results of changes and new strategies can be trended and monitored as a benchmark for success.

That measurable success criterion is then a baseline for comparison that threads its way through every aspect of DEM. It also provides a more informed basis for key analytical components that stem from observation of real user behavior, like continuous segmentation of users into personas. Starting with an analysis of how well the current computing environment meets the needs of users opens the door to exploring each aspect of their usage: application behaviors, mobility requirements, system resource consumption, and so on. From there, users can be assigned to Gartner-defined workstyles and roles, creating a mapping of what behaviors can be expected for certain types of users. This leads to more data-driven procurement practices, easier budget rationalization, and overall a more successful and satisfied user base.

Pie chart showing the number of critical applications segmented by persona

Taking an example from a sample analysis, there are only a handful of critical applications per persona. Those applications represent what users spend most of their productive time working on, and they therefore have a much larger business impact. Discovery and planning around these critical applications can also dictate how best to provision resources for net new employees with a similar job function. This prioritization of business-critical applications based on usage makes proactive management much more clear-cut: automated analysis and resolution of problems can focus on the systems where users are most active, which has the maximum overall impact on improving user experience. That user experience can then be trended over time to show the real business impact of IT problem solving:

Chart showing the user experience trend for an enterprise

Various voices within Lakeside will go through pieces of workspace analytics over the coming months, and we’ll be starting with a more in-depth discussion of DEM. This will touch on several aspects of monitoring and managing the Digital Experience of a user, including the definition of Personas, management of SLAs and IT service quality measurements, and budget rationalization. Throughout, we’ll be exploring the role of IT as an enabler of business productivity, and how the concept of a single user experience score can provide an organization a level of insight into their business-critical resources that wouldn’t otherwise be possible.

Learn more about End User Analytics

The Quantified User: Redefining How to Measure IT Success

In 1989, the blockbuster hit Back to the Future Part II took the world by storm with wild technology predictions. Now, we know the film might have missed the mark on flying cars and power clothes, but many of its predictions were more accurate than expected. Case in point: wearables.

From virtual reality headsets to fitness trackers, wearable technology powers this notion of the “quantified self” where our personal lives can be digitalized and analyzed with the end goal of being “improved.”

But when it comes to our professional lives, can we similarly analyze and improve them in order to enable productivity? Yes! Just as we are living in the era of the “quantified self,” the enterprise is now entering the era of the “quantified user.”

But don’t just take my word for it. Here is how and, more importantly, why you should care…

What is the “quantified user?”

Think about the workplace today: it is one where people, processes, and technologies are overwhelmingly digital and largely managed by third parties (e.g., Office 365 and other business-critical SaaS offerings). And this is a great thing!

But this also presents an unforeseen challenge for IT: how do we support a workforce that is largely digital and whose technology resources may or may not be managed by us?

The key to supporting today’s workforce lies in the concept of the “quantified user.” Just as we can quantify the number of steps we take per day to help improve our personal health, the “quantified user” is one whose end user experience within the digital workspace is quantified and given a score in order to enable productivity.

Learn more about End User Analytics

You might think that, at a glance, there is only a loose relationship between a user’s experience and their productivity. However, over the past 20 years, workspace analytics provider Lakeside Software has found that the better an employee’s user experience score, the lower the barriers to productivity within the digital workspace. How? Via a healthier, more available desktop experience.

End user experience score: the most important metric in IT.

A bold statement, I know, but the end user experience score is the most important metric in IT because it accurately and objectively measures how all the technologies and actions taken by IT are enabling business productivity, which is the original purpose of any IT team.

The end user experience score is normalized and not touched by IT or IT vendors, and it serves two purposes: informing what factors are impacting productivity and improving visibility into the health and performance of technology investments.

So how do we calculate the end user experience score?

Calculating employees’ end user experience score is done by analyzing and managing all the factors that could impact their productivity using a data-collection agent right where they conduct most if not all their work: the endpoint.

Why the endpoint? Because as critical IT functions are being outsourced and managed by third parties, reduced visibility into network transactions, data center transactions, and overall IT infrastructure is inevitable. Therefore, an employee’s workspace, the endpoint, has become the most privileged point of view IT can have into the state and the health of an increasingly scattered IT environment.

The end user experience score should be calculated based on the availability and performance of all the factors that could impact the end user experience – that is, everything from network and resource problems to infrastructure and system configuration issues.

The result is a score that is normalized and supports the “quantified user.” It is one that can be compared across teams and groups of users, and one that the IT team can work to improve in order to enable business productivity of those who matter most: the end users.
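As a toy illustration of what a normalized, comparable score could look like – the factor list and equal weighting are invented for the example, not Lakeside’s actual model – consider:

```python
# Toy factor list; a real model would weight factors and normalize against
# baselines observed across the whole user population.
FACTORS = ("cpu", "memory", "disk", "network", "latency", "configuration")

def experience_score(impact):
    """Average per-factor 'fraction of active time impacted' values and map
    them to a 0-100 score that can be compared across users and groups."""
    mean_impact = sum(impact.get(f, 0.0) for f in FACTORS) / len(FACTORS)
    return round(100 * (1 - mean_impact), 1)

team = {
    "alice": {"network": 0.12, "latency": 0.30},   # struggling with the network
    "bob":   {"cpu": 0.05},                        # mostly healthy endpoint
}
for user, impact in team.items():
    print(user, experience_score(impact))
# alice scores ~93.0 and bob ~99.2 - the gap tells IT which factors to fix first
```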

How to start using your end user experience score to enable productivity

Lakeside Software’s flagship solution, SysTrack, is based on permeating the use of the end user experience score throughout IT. A solution for workspace analytics, SysTrack is an endpoint agent that gathers and analyzes end user data on usage, experience and the overall endpoint in order to help IT teams in the following key areas:

– Asset Optimization: ensuring the cost of IT is being optimized for the captured needs and usage of the users

– Event Correlation and Analysis: pinpointing and resolving IT issues blocking users from being productive

– Digital Experience Monitoring: monitoring and most importantly, analyzing end users’ experience with all the technologies and business processes provided for by the organization

Learn more about SysTrack