Category Archives: Business Intelligence

Foundations of Success: Digital Experience Monitoring

We’ve all seen the rapid evolution of the workplace; the introduction of personal consumer devices, the massive explosion of SaaS providers, and the gradual blurring of the lines of responsibility for IT have introduced new complications to a role that once had a very clearly defined purview. In a previous post, we discussed quantification of user experience as a key metric for success in IT, and, in turn, we introduced a key piece of workspace analytics: Digital Experience Monitoring (DEM). This raises the question, though: what exactly is DEM about?

At its very heart, DEM is a method of understanding end users’ computing experience and how well IT is enabling them to be productive. This begins with establishing a concept of a user experience score as an underlying KPI for IT. With this score, it’s possible to proactively spot degradation as it occurs, and – perhaps even more importantly – it introduces a method for IT to quantifiably track its impact on the business. With this as a mechanism of accountability, the results of changes and new strategies can be trended and monitored as a benchmark for success.

That measurable success criterion is then a baseline for comparison that threads its way through every aspect of DEM. It also provides a more informed basis for key analytical components that stem from observation of real user behavior, like continuous segmentation of users into personas. Starting with an analysis of how well the current computing environment meets users’ needs opens the door to exploring each aspect of their usage: application behaviors, mobility requirements, system resource consumption, and so on. From there, users can be assigned to Gartner-defined workstyles and roles, creating a mapping of what behaviors can be expected for certain types of users. This leads to more data-driven procurement practices, easier budget rationalization, and overall a more successful and satisfied user base.

Pie chart showing the number of critical applications segmented by persona

Taking an example from a sample analysis, there are only a handful of critical applications per persona. Those applications represent what users spend most of their productive time working on, and they therefore have a much larger business impact. Discovery and planning around these critical applications can also dictate how best to provision resources for net-new employees with a similar job function. This prioritization of business-critical applications based on usage makes proactive management much more clear-cut: automated analysis and resolution of problems can be focused on the systems where users are most active, which has the maximum overall impact on improving user experience. In fact, that user experience can then be trended over time to show the real business impact of IT problem solving:

Chart showing the user experience trend for an enterprise
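
To make the persona idea above concrete, here’s a minimal sketch of how critical applications per persona could be derived from usage data. The records, application names, and the simple top-N-by-active-time ranking are all illustrative assumptions, not SysTrack’s actual implementation.

```python
from collections import defaultdict

# Hypothetical usage records: (persona, application, active_hours).
# In practice these would come from an analytics data source; the values
# below are made up purely for illustration.
usage = [
    ("Knowledge Worker", "Outlook", 120), ("Knowledge Worker", "Excel", 95),
    ("Knowledge Worker", "Chrome", 40),   ("Developer", "Visual Studio", 200),
    ("Developer", "Chrome", 60),          ("Developer", "Outlook", 30),
    ("Field Sales", "CRM Client", 150),   ("Field Sales", "Outlook", 80),
]

def critical_apps(records, top_n=2):
    """Rank applications within each persona by share of active time."""
    totals = defaultdict(lambda: defaultdict(float))
    for persona, app, hours in records:
        totals[persona][app] += hours
    result = {}
    for persona, apps in totals.items():
        persona_total = sum(apps.values())
        ranked = sorted(apps.items(), key=lambda kv: kv[1], reverse=True)
        result[persona] = [
            (app, round(100 * hours / persona_total, 1))
            for app, hours in ranked[:top_n]
        ]
    return result

for persona, apps in critical_apps(usage).items():
    print(persona, "->", apps)
```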

Various voices within Lakeside will walk through pieces of workspace analytics over the coming months, and we’ll be starting with a more in-depth discussion of DEM. This will touch on several aspects of monitoring and managing the Digital Experience of a user, including the definition of Personas, management of SLAs and IT service quality measurements, and budget rationalization. Throughout, we’ll be exploring the role of IT as an enabler of business productivity, and how the concept of a single user experience score can provide an organization with a level of insight into its business-critical resources that wouldn’t otherwise be possible.

Learn more about End User Analytics

Digital Experience Management and Event Correlation with SysTrack

SysTrack provides the ability to score an environment’s end user experience using digital experience management metrics. The resulting end user experience score provides a clear view of the end user’s experience in that environment and is composed of a series of Key Performance Indicators (KPIs). These KPIs are structured to point toward any problems in the environment that may affect the end user. The key to the philosophical approach with SysTrack, though, is the joining of this scoring to event correlation and analysis through the use of proactive alerts. These proactive alerts tie that overarching score to triggered, targeted events to provide a fuller and easier-to-understand portrait of the IT environment.

This starts with our end user experience score, and it’s best thought of as a simple grade. Basically, the score comes in a range of 0 to 100, with a score of 100 implying the environment is providing the best end user experience. The score is composed of 13 different KPIs that represent aggregate data points on potential sources of impact. These roughly fall into the categories of resource issues, networking problems, system configuration issues, and infrastructure problems. The result is a normalized score that makes it easy to understand, across a broad set of different systems, what kinds of performance issues are occurring. Even more importantly, it provides a platform for long-term trending to see the trajectory and evolution of that experience over time. The image below displays an overall view of the end user’s experience of the environment and the ability to monitor the evolution of those impacts over time.
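
As an illustration of the scoring concept (not SysTrack’s actual formula), here’s a minimal sketch that combines per-KPI subscores into a single 0–100 value. The KPI names, values, and equal weighting are assumptions made purely for the example, and only a few of the 13 KPIs are shown.

```python
# Illustrative KPI subscores (0-100, higher is better). The KPI names,
# weights, and the simple weighted average are assumptions for this sketch;
# the real score is composed of 13 KPIs and a more involved calculation.
kpi_scores = {
    "cpu": 92, "memory": 88, "disk": 75, "latency": 64,
    "packet_loss": 97, "app_faults": 81, "boot_time": 70, "configuration": 95,
}
kpi_weights = {name: 1.0 for name in kpi_scores}  # equal weights by default

def experience_score(scores, weights):
    """Combine per-KPI subscores into a single 0-100 experience score."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

print(f"End user experience score: {experience_score(kpi_scores, kpi_weights):.1f}")
```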

For more operational items that require an immediate response, the alerting mechanism comes into play. Alerts are active components triggered by events that are generally correlated with negative impact. Alert items roughly correspond to the end user experience score KPIs to help direct IT administrators toward resolving problems. The image below demonstrates an environment with active alerts.
The key piece is correlating these items to that impact in a meaningful way. So, the big question is this: how do they work with one another?

One of the most common ways alerts and user experience scores are used in conjunction is through a feedback loop. An administrator determines which KPI is the largest source of impact and drills down from there, getting a clear view of which correlated alerts are in place and which have been triggered. The alerts direct the administrator toward the underlying causes of the problem and, finally, to the potential source of impact. After the resolution, the administrator can track the increase in user experience that results from their improvements to see how successful the changes have been.
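
A rough sketch of the last step of that loop – quantifying the improvement after a resolution – might look like the following. The daily scores and the remediation date are hypothetical, invented only to show the before/after comparison.

```python
from datetime import date

# Hypothetical daily experience scores around a remediation on 2017-01-10.
daily_scores = {
    date(2017, 1, 6): 68, date(2017, 1, 7): 66, date(2017, 1, 8): 65,
    date(2017, 1, 9): 64, date(2017, 1, 11): 78, date(2017, 1, 12): 81,
    date(2017, 1, 13): 83, date(2017, 1, 14): 84,
}
remediation_day = date(2017, 1, 10)

def score_change(scores, cutover):
    """Average experience score before vs. after a remediation."""
    before = [s for d, s in scores.items() if d < cutover]
    after = [s for d, s in scores.items() if d > cutover]
    return sum(before) / len(before), sum(after) / len(after)

pre, post = score_change(daily_scores, remediation_day)
print(f"Average score before: {pre:.1f}, after: {post:.1f}, delta: {post - pre:+.1f}")
```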

End user experience scores provide an overall indicator of the quality end users are experiencing, while alerts provide immediate information on the source of impact. Together, the two provide an easy and clear way for IT administrators to discover the source of impact on a system. To learn more on this topic, check out our white paper, Utilizing KPIs and Alarm Methods!

Director Integration for Ask SysTrack

One of the unintuitive results of the progression of technology is the massive proliferation of different sources for the different pieces of information that are critical to managing an environment. So many tools provide a depth of detailed data that the sheer number of them makes it difficult to figure out which one to use and where to find the answer within its interface. Information seeking behavior then takes users across multiple tools with multiple methods of interaction; the net result can be confusion and lost time. This is where cognitive analytics and the ability to ask simple questions can make the difference between solving a problem and bouncing between reporting tools.

The popularity of Ask SysTrack’s recent set of advanced integrations has been eye-opening: it shows just how pervasive the need is for a single, easy-to-use interface for getting contextually relevant answers to questions. Because of this, we’ve worked with our partners to provide a single source for answering IT questions that delivers what’s needed when it’s needed.

At Citrix Summit we’re showcasing one of our most recent examples: a plugin integration with Citrix Director. This plugin not only displays SysTrack information in the Director interface, but also provides Director- and Citrix-related answers to questions asked through Ask SysTrack from within that interface.

The key is providing the Ask SysTrack plugin interface directly on the Director home page. Now any IT administrator who makes use of Director has a Watson-powered cognitive helper to answer questions like “What is the user experience of my environment?”

Clicking the link takes them directly into the relevant data in SysTrack. Alternatively, they can also just ask questions about Director.

We’ve also added a User Experience Trend for delivery groups discovered in association with the instance, allowing administrators to view the kind of user experience their end users have been getting alongside the other data presented in Director.

This makes it much easier for administrators to get the key details they need when they need them, without having to spend time working through multiple interfaces.

For more details, check out a quick video run-through.

Answering GPU Questions with SysTrack

Ask SysTrack has become one of the most popular topics we’ve ever discussed in the industry, and the top question we get is always what we’re adding next. The benefits of using our Natural Language Processing (NLP) tool for common IT questions have appealed to a massive number of our partners in the industry as well as customers. Basically, our goal is to provide an analytical system that takes any question you may have about IT and tries to hook you into the best source of information available to help.

Zach mentioned our first integration in a previous post, and in the new year we’ll be launching a series of new integrations that cover different areas. One I’m personally excited about is our added GPU-based monitoring and reporting from NVIDIA GRID.

GPU utilization in general has been a hot topic for a while, and with the great progress NVIDIA has made with the creation of the first vGPU profiles for VDI, the potential to bring a great graphical experience to anyone has exploded. We’ve provided support for NVIDIA GRID from the very beginning, offering a cloud-based assessment tool that helps plan which profiles would work for a currently physical environment making the move to vGPU and VDI. As a natural progression, we’ve implemented new monitoring with NVIDIA to help understand workload and usage on VDI systems.

This kind of insight is especially critical when first undertaking a project to start transforming an environment using new technologies. John Fanelli, vice president of NVIDIA GRID, agrees, “With Lakeside Software’s Ask SysTrack workspace analytics insight engine administrators can make natural language queries to gain contextually relevant NVIDIA vGPU insights and help continuously assess and align vGPU benefits to user personas.”

Of course, the key point is connecting users to all that great content. This is where the expansion to Ask SysTrack comes in. Specifically, we’ve integrated our additional collection and planning data into Ask SysTrack so it can help answer basic questions like “What kind of NVIDIA vGPU profiles do my users need?”

We can also answer other questions post migration that are critical to maintaining user experience. Things like, “What’s the total GPU usage on Ben’s system?”

Basically, if you can think of a question relating to GPU utilization we have the answer available.

To get started, check out our assessment site at nvidia.lakesidesoftware.com to first size out a new environment or just get an introduction to SysTrack.

Introducing Ask SysTrack for AirWatch

At Lakeside Software our goal has long been to make insightful, high impact analytics readily available to help answer questions and enable better decision making in IT.

In August we took a major step forward in data accessibility with the introduction of Ask SysTrack, in partnership with IBM Watson cognitive services. This Natural Language Processing (NLP) question tool made it possible to find highly specific SysTrack data using nothing but everyday questions, greatly reducing the barrier to entry for all SysTrack tools. A basic introduction to Ask SysTrack was provided in a previous blog post by Ben Murphy. You can download a white paper for more in-depth information on how the tool works.

One of the interesting things we discovered in the intervening months is that Ask SysTrack was getting asked questions it understood but didn’t know the answer to. We had inadvertently trained Ask SysTrack’s AI dictionary to understand nearly every question someone in IT might ask it. The best metaphor for this is being asked for directions to somewhere you don’t know how to get to. Say someone stopped you on the street and asked:

“How do I get to Bob’s burgers?”

You understand they are looking for directions to an eatery named Bob’s burgers. But you don’t know the answer.

Something similar was happening to Ask SysTrack in production – it was getting asked lots of questions about mobile devices.

Since SysTrack is traditionally a desktop analytics tool, it offers only limited visibility into the mobile device space. It’s difficult for users who are unfamiliar with the vast quantities of data available to them through SysTrack and other tools to navigate to the mobility data they need in the moment. But that data is easily found in their EMM console.

Since the most popular EMM tool in the SysTrack Community is AirWatch, we reached out to our friends at VMware with a proposition: let us extend our natural language insight engine to your platform.

One thing led to another, and today we’re introducing Ask SysTrack for AirWatch. Through partnership with VMware AirWatch, the Enterprise Mobility Management leader in the Gartner EMM Magic Quadrant, the Ask SysTrack workspace analytics insight engine now includes Natural Language Processing capabilities for the entirety of the AirWatch platform. This means that the ease of use made possible by the industry-first Ask SysTrack now extends into the mobility space.
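
Conceptually, this means a question is now routed to whichever data source can actually answer it. The toy sketch below uses a simple keyword match to illustrate that routing idea only; the real Ask SysTrack relies on IBM Watson’s NLP services rather than keyword lists, and the terms and function names shown are assumptions.

```python
# A toy illustration of intent routing: recognize whether a question is about
# mobility and send it to the AirWatch-backed answer source rather than the
# desktop analytics source. This keyword match is only a stand-in for the
# concept; it is not how the Watson-based pipeline works.
MOBILITY_TERMS = {"iphone", "ipad", "android", "mobile", "compliance", "emm"}

def route_question(question: str) -> str:
    text = question.lower()
    if any(term in text for term in MOBILITY_TERMS):
        return "airwatch"      # answered from the EMM console's data
    return "systrack"          # answered from desktop analytics data

print(route_question("How many of my employees use iPhones?"))   # -> airwatch
print(route_question("What application faults the most?"))       # -> systrack
```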

Using nothing but simple questions, you can track down otherwise hard-to-find data that typically requires a large amount of familiarity with AirWatch to locate. Say, for instance, that you want to know where to access your compliance policies.


Or maybe you want to know how many of your employees use iPhones.


Once again, what would normally be difficult to find only took a simple question.

The Ask SysTrack tool is available with SysTrack 8.2 while Ask SysTrack for AirWatch is made available through installation of an AirWatch SysTrack Kit.

Expanding SysTrack Desktop Assessment for VMware with AirWatch and Windows 10

VMware Windows 10 Migration and Management Assessment

As a Windows 10 launch partner, Lakeside has had resounding success in helping organizations move from legacy workspace components to more modern Microsoft solutions. Now we’re pleased to announce a next step in this with VMware, specifically targeting customers who are interested in improving their enterprise mobility management (EMM) and security with VMware’s cloud-first technologies. Available today at http://assessment.airwatch.com (or http://assessment.vmware.com), the SysTrack Desktop Assessment service has been updated to integrate key metrics for implementation of AirWatch and migration to Windows 10. This means that with a free assessment, organizations can at once get a full analysis of application and user behaviors, mobility needs, and their overall readiness for Windows 10 adoption, as well as their fit for VMware solutions.

So, what are the details? With the new update you’re going to get two critical new pieces of functionality:

  1. Windows 10 readiness and hardware analysis for an in-place migration. This specifically focuses on how AirWatch can help with management of existing or net-new physical assets. One of the key considerations here is whether some physical assets require a hardware refresh, either for compatibility or for performance optimization. Below, the systems that would require an update are marked as “Refresh and AirWatch”.

VMware Solutions

  2. Risk exposure and potential security concerns through our new (and evolving) Risk Visualizer tool.

Risk Scores

Alongside this we’ve updated the core report to reflect the overall readiness of existing physical systems that may need to stay physical (for example, systems that are highly mobile or have offline usage) to migrate directly to Windows 10.

Windows 10 Readiness

This is all available today and absolutely free. To get started just go to http://assessment.airwatch.com (or http://assessment.vmware.com) and sign up now.

Have a Question? Ask SysTrack

The overarching theme of all the content we’ve created here has been trying to make it easier to find interesting data and to allow organizations to engage in intuitive information seeking behavior. If there’s anything the outgrowth of chat-based interfaces has taught us, though, it’s that often the most meaningful way to bring users the answers they need is to allow them to craft their own questions in their own terms. So, the logical evolution here is to introduce a method that makes it easier to get answers out of SysTrack with simple, natural language.

This is why Lakeside Software has partnered with IBM to use their Watson cognitive API library to introduce Ask SysTrack. This Natural Language Processing (NLP) question tool allows anyone to ask questions they may have for the SysTrack Workspace Analytics platform in basic, conversational form.

For example, say I want to find what my most heavily faulting application is. I can ask a simple question: “What application faults the most?”


Just asking that question gives me the most heavily faulting applications in order and even some other related things for me to look into if I’m interested.

Let’s say I wanted to find out specifically what’s wrong with a system.

Specific System Problems

Again, with a simple question I have the ability to know what this user has been having problems with lately.

With GA of SysTrack 8.2 this feature is now available for anyone, and I’d encourage you to check it out. You can download a white paper that expands on how it all works.

My Personal Security “Best Practices”

First, let me get some disclaimers out of the way: I won’t describe myself as a security expert, and what I am about to share is my personal opinion, based on my personal experiences. By no means does this article reflect the opinions of my present or past employers, and I have no business relationship with (or gain from) any of the products or companies I am mentioning here.

With that out of the way, I would like to share a couple of security related practices that I have adopted over the years. I sometimes get asked questions about these topics, so I hope that you find this article informative.

Let me start with passwords:

We need passwords for a ton of things in our professional and personal lives. Password complexity requirements have gone up, and there is no way we can remember all of the passwords we need to use on a regular (or not so regular!) basis. Several vendors provide single sign-on (SSO) solutions on the web, and they basically work by establishing one master password (that you hopefully CAN remember) and then automatically logging you into your web applications or letting you look up your passwords. So far so good, except that you have to trust the vendor of this kind of solution 100% to keep your information safe and to have safeguards in place so that their employees are not helping themselves to your passwords.

Therefore, I dislike all of these types of solutions and prefer the ones where I can personally control the security and encryption of the password file. And apparently I am right, given the recent hack of LastPass (http://www.engadget.com/2015/06/15/lastpass-hacked/). I have used different apps over the years – first on the iPhone (http://www.apple.com/iphone/). It was eWallet by a vendor called Ilium Software, and I liked the fact that it had a Windows companion app that allowed me to sync the files to the PC. These days I am on a Windows phone (http://www.windowsphone.com/en-US/) and use a product called SkyWallet (http://skywallet.net/). It works by keeping a file on a share (I am using OneDrive (https://onedrive.live.com)), and it lets you personally generate and specify the crypto key to secure that master file. It also has a desktop companion application, so all your passwords stay in sync between devices. It does not provide SSO, but I am actually fine with that and can simply launch the app, look up what I need, and then log in. The important part is that no third party stores my master key and the password file itself is encrypted.
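
For the curious, the general principle here – encrypt the password file locally with a key only you hold, and sync nothing but ciphertext – can be sketched in a few lines. This is not how SkyWallet or eWallet is implemented; it’s just an illustration using Python’s third-party cryptography package, with made-up sample data.

```python
# A minimal sketch of the idea described above: the password file is encrypted
# locally with a key derived from a passphrase only I know, so the copy that
# lands on the cloud share is just ciphertext. Requires the "cryptography"
# package; the passphrase and contents below are obviously made up.
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a Fernet key from a user-chosen passphrase."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=200_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

salt = os.urandom(16)                        # stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)

plaintext = b"bank: hunter2\nemail: s3cret"  # the password list itself
ciphertext = Fernet(key).encrypt(plaintext)  # this is what gets synced to the share

# Later, on any device, the same passphrase and salt recover the data locally.
assert Fernet(derive_key("correct horse battery staple", salt)).decrypt(ciphertext) == plaintext
```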

What about files?

There were days when all your files, photos, and music resided on your PC and you had to burn CD-ROMs or DVDs to back up your stuff every once in a while. That was really painful. I later added secondary hard drives to protect myself from disk failure by establishing a RAID configuration, but that didn’t protect me from the total physical failure of my PC in case of hurricanes, home fires, floods, or other nasty (yet very unlikely) surprises.

I started using a product called HandyBackup (http://www.handybackup.com/), which I liked because I could simply back up my stuff. I had a $5-per-month web hosting service with virtually unlimited storage that I used for the purpose, and HandyBackup allowed me to encrypt the data myself using the Blowfish algorithm (https://en.wikipedia.org/wiki/Blowfish_(cipher)). This worked reasonably well, but it had two major shortcomings: because I chose to encrypt the data, HandyBackup did not allow me to configure actual file synchronization, and I could not simply get to my files from a public terminal or mobile device. Well, it was a backup solution and a fine one at that. I used it for several years, but never had to actually restore anything during that time frame.

I finally got to like online file storage (I happen to use OneDrive, but there are many other solutions available as well). My problem here was again that I really don’t trust any company to keep my personal data safe from prying eyes, so encryption is key to me. Initially, I started by just storing photos and personal videos on the service and kept my financials and tax returns between my local machine and the HandyBackup solution. Then I discovered BoxCryptor (https://www.boxcryptor.com/en), a software solution from a German provider that allows you to automatically encrypt all your stuff in a cloud data solution. What I like about it is that it also allows you to create your personal key file, which is never stored on any third-party cloud service. This suits me just fine, and now all of my personal data is 100% encrypted by BoxCryptor and stored (and synced) on OneDrive. The BoxCryptor client is available for all my mobile devices, so now I enjoy instant access to all my stuff with a high degree of privacy. Note that there is an option to store the crypto key with the vendor’s cloud service, but I chose to manage it myself. Should I ever lose it, it won’t be recoverable, so there is an added level of personal responsibility involved here.

What about my PC?

Not much to say here. Windows 8.1 / Windows 10 with BitLocker (http://windows.microsoft.com/en-US/windows7/products/features/bitlocker). Enough said. If someone steals the laptop or gets hold of my desktop PC, have fun decrypting that stuff. I have no idea if someone has tried to hack BitLocker using brute force techniques, but I don’t think there is an alternative that would be as seamless to the user experience. Then again, all the files I have are still encrypted by BoxCryptor, even at rest on the local machine, so I think I am good.

I personally can’t wait until the general availability of Intel’s RealSense and Windows Hello technology to simply use my pretty self as a password 🙂

What about corporate BYO things?

This could very well turn into a soapbox, so I will try to keep it brief. Some companies adopted BYO policies under which employees are allowed to bring their own mobile devices, laptops, and PCs to work. The idea was that employees could simply choose the device they like, and in some cases the employer would provide a stipend to help cover the cost. I always thought that this was a terrific idea, and as an employer, I would basically use centralized application hosting with terminal servers, Citrix (www.citrix.com), VMware (www.vmware.com), etc., and virtual desktops. I would configure things in a way that none of the corporate data could be copied to the user-owned device. These technologies are so mature these days and internet access is so ubiquitous that this can easily be achieved without compromising the end user experience. The old philosophy was that everything inside a building was considered secure (because the building had access controls and physical security). I think that the new philosophy needs to be that anything in an office space is considered not secure and only things inside the actual data center are considered to be secure.

The reality is sometimes a bit different, though. One group I met during my days as a Citrix consultant erred far on the side of user convenience and let employees use any device on the network without any restrictions whatsoever. People could install corporate and personal applications and also freely download all the corporate data to their personal devices. Trust over draconian security measures was the word! This worked until the day an employee quit and basically took all of her work data with her (leaving no chance for the rest of the team to continue her projects). This is also problematic because people sometimes join competitors, and having them keep access to critical internal data is just inviting trouble. That group also often allowed departing employees to keep the laptops that the company had paid for (especially if they were 2 years or older, as those could not really be given to new employees either) – again with all the data, email archives, etc. Interestingly, one day my counterpart there told me that one of his team members had resigned and joined a competitor. He did the right thing, turned in his (corporate-owned) laptop, and was honest and upfront about his move. The manager notified HR and IT, access was revoked, and all seemed well until IT started tracking the person’s manager down and demanded a complete forensic analysis (to be performed by the manager, mind you) as to which files may have been copied off the device or emailed to a personal account. Insane. Especially given the otherwise wide-open policies.

So, security is never really free, but there is always a tradeoff between security and convenience. Luckily, many vendors really make our lives convenient and enterprises have good practices and tools at their disposal to strike the right balance – if they choose to.

Florian

twitter: @florianbecker

 

 

Windows 10 Migration, Motivation and Methodology

Benjamin Franklin famously said, “… in this world nothing can be said to be certain, except death and taxes.” At the risk of being presumptuous, I’d like to add OS upgrades to that list. In my more than 36 years of IT experience this has been a truism. Love them, hate them, tolerate them, or accommodate them: it’s inevitable that “thou shalt upgrade.” I don’t know if this is an 11th-commandment kind of issue or just a real-world fact, but the bottom line is that Windows 10 is in your future. Perhaps this upgrade isn’t imminent; in fact, it may be 2 years down the road. But planning for an enterprise-wide migration should be on your radar, and if done properly the experience can be a smoother transition than the recent move to Windows 7/8.

Why Upgrade to Windows 10?

Upgrades to Windows 10 will usually fall into one of the following justification buckets:

To take advantage of some cool new features such as:

Common look and feel across PCs, tablets, mobile devices, and Xbox. Run your favorite apps wherever, whenever, and on whatever device you choose. BYOD (Bring Your Own Device) just got a lot simpler.

  1. A Siri- or Google Now-like personal assistant, i.e., a voice search tool called Cortana.
  2. New Office package (including a new Word based Outlook engine).
  3. New browser experience; yep, goodbye IE, hello, Edge!
  4. UI with a number of cosmetic improvements.
  5. Improved set of built-in security features.
  6. Return of the Start Menu.
  7. Overall performance improvement… faster startup, quicker resume.
  8. Virtual desktop experience; an easy way to keep a busy desktop organized by having multiple desktops.
  9. A feature to allow switching between touch and keyboard-and-mouse interaction.
  10. Option to deliver updates via the P2P protocol, resulting in a more efficient distribution. This feature provides the potential for any PC to become, in effect, an SCCM delivery device. Imagine branch offices with unreliable WAN connections; pick a target, update it and distribute to other PCs at the branch from the original target… pretty cool!
  11. Application compatibility. Not a big deal now, but if history repeats itself, within the next 2 years most COTS (Commercial Off The Shelf) applications will require Windows 10.
  12. End of Windows 7 support. Not an immediate concern: according to Microsoft’s Lifecycle page, “Mainstream Support” for Win7 ended on January 13, 2015, with “Extended Support” due to end January 14, 2020.
  13. All the cool kids are doing it. There is a certain allure to having the latest/greatest… you can’t deny it!

How to Prepare for the Inevitable?

Too often OS upgrades are thrust upon an unprepared enterprise. In this case “unprepared” usually means too little knowledge of what the end users need. Perhaps you are one of the many within IT with lingering and perhaps uncomfortable memories of the extreme efforts that your migration to Win7/8 may have required. Remember the many spreadsheets, the end user surveys, the interviews, and, finally, the guessing? And that was just the prep work for creating an actionable plan; the heavy lifting hadn’t even started yet. All in an effort to best determine who was using what applications and what the prerequisites/dependencies were for your business-critical apps, so the upgrade could be completed with as little end user downtime and productivity loss as possible. That was the goal, at least.

Typically there were three exposure areas (security, compliance, and performance) that presented issues that you had to deal with in the middle of an already full plate of migration activities. Perhaps you found out too late that several applications had some nasty application faults that didn’t improve when you moved to a new OS; in fact, some may have gotten worse. Some of you may even recall the surprising discovery of unsupported, unlicensed, and misbehaving applications within the enterprise.

Doesn’t it make sense to take the time now to do it right the next time? Wouldn’t it be great to have all the data you need at your fingertips, without a major project to collect the data? Also, while you are collecting the data in preparation for the big Win10 upgrade, why not improve your current end user experience? That’s right, you can “kill two birds with one stone” as it were. You can monitor your current environment, observe issues you need to resolve and simultaneously collect important data for your next OS upgrade.

How can SysTrack Help?

Tools within SysTrack can be exploited to make your upgrade progress smoothly. Let’s step through a few examples:

Example #1:

SysTrack is excellent at answering the “5 W’s of EUC”: the who, what, when, where, and why of EUC. In short, SysTrack answers the important question(s) – Who used what applications, when did they use them, from where were they used, and why was the end user’s experience less than perfect?

The “5 W’s of EUC” are answered by the ongoing collection of data from every end user who accesses a system on which SysTrack has been deployed. These “child” systems can be PCs (virtual or physical), servers (virtual or physical), and/or mobile devices (with a supported OS). Data is collected, arranged, aggregated, and visualized automatically, making the IT professional’s job much easier and resulting in far more accurate results.

Below are a few default, “out of the box” data visualizations:

Software Package Usage by system and by end user. This is an easy way to determine who’s launching what and for how long.


Application communications to other systems and the latency of those connections can be easily displayed for all applications with dependent resources.


Fast, accurate identification of an enterprise’s “power users” and the applications that drive high resource footprints is as simple as the click of a button.


Benchmarking the current health of the End User experience via up to 21 KPIs (Key Performance Indicators) provides an easy and accurate way to establish before and after migration user acceptance criteria. It also helps answer the question, “why is a user having less than a perfect experience?”

Example #2

Frequently, details about an application’s prerequisite modules and/or runtime dependencies can help avoid conflicts with other applications which require the same resources.

SysTrack discovers module and runtime dependencies “out of the box” for all observed applications.

 Example #3

To more fully appreciate the usage of an application, simply knowing how many times it has been launched is not sufficient.   One extreme example of this is my personal use of Skype. I have four desktops which I use throughout the week. Two of those are virtual and two are physical. Skype is installed on all four, though I use it primarily on only one of the four systems.   Skype launches each time I restart one of the desktops, but frequently I end the process if I need the resources it is consuming. In my case, a simple count of launches of Skype would suggest I use the application constantly. In reality, I have it “in focus” very infrequently.

Many applications fall into this category of being launched at startup or being called by other applications. In other cases, end users launch applications, use them lightly and leave them running in the background, while their focus is on their business critical apps. If the question being asked is, “What are the most important applications in our environment based on real use?” simply counting the number of application launches can be very misleading.

What if you were able to normalize the overall active use of applications based on “in focus” (active user interaction) time? If you could rank applications based on which ones were observed to have been actively used – i.e., “in focus” – the most, you would have a way to know which applications are used the most, not just launched the most.

While the process described above is not a default data view, SysTrack has the data, and, through the use of the Dashboard Builder tool, it’s possible to create a customized view of “in focus” time. One possible example of this output appears below:

Application focus, where 100% represents the total amount of “in focus” event time across the enterprise. Applications are ranked by their individual contribution to the total enterprise “in focus” time.

By creating an interactive dashboard it is also possible to quickly determine what set of applications combine to consume a targeted amount of total use. For example, it is frequently possible to show that as few as five applications account for over 75% of the total “in focus” use in an enterprise. That’s a quick way to get to a very accurate base image… you’ll need to know that when you get around to migrating to Win10, right?
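
As a minimal sketch of that ranking (with invented numbers, not SysTrack output), the idea is simply to sort applications by their share of total “in focus” time and accumulate until a coverage target such as 75% is reached:

```python
# Hypothetical per-application "in focus" seconds aggregated across the
# enterprise; the numbers are invented for illustration.
focus_seconds = {
    "Outlook": 5_400_000, "Excel": 3_900_000, "ERP Client": 3_100_000,
    "Chrome": 2_600_000, "Word": 1_800_000, "Skype": 250_000, "Notepad": 40_000,
}

def focus_ranking(focus):
    """Rank applications by share of total in-focus time, with cumulative share."""
    total = sum(focus.values())
    ranked = sorted(focus.items(), key=lambda kv: kv[1], reverse=True)
    cumulative = 0.0
    rows = []
    for app, seconds in ranked:
        share = 100 * seconds / total
        cumulative += share
        rows.append((app, round(share, 1), round(cumulative, 1)))
    return rows

# Applications needed to cover 75% of all active use:
for app, share, cum in focus_ranking(focus_seconds):
    print(f"{app:12s} {share:5.1f}%  cumulative {cum:5.1f}%")
    if cum >= 75:
        break
```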

Plan for Success

Regardless of when your Win10 project is scheduled to occur, now is the time to start planning for it. SysTrack is a great tool to help you understand and improve your current environment while simultaneously collecting the data you’ll need for a smooth Win10 migration.

What does “End-to-end” monitoring really mean?

The old saying goes that if all you have is a hammer, every problem looks like a nail. This is certainly true in the IT world. A broad number of vendors and technologies claim to provide “end-to-end” monitoring for systems, applications, users, the user experience, and so forth.

When one starts to peel back the proverbial onion on this topic, it becomes clear that any of these technologies is only providing “end-to-end” visibility if you’re really flexible with the definition of the word “end”. Let’s elaborate.

If I am interested in the end user experience of a given system or IT service, I would certainly start with what the end user is seeing. Is the system responsive to inputs? Are the systems free of crashes or annoying application hangs? Do the systems function for office locations as well as for remote access scenarios? Do complex tasks and processes complete in a reasonable amount of time? Is the user experience consistent?

These are the questions that the business users care about. In the world of IT, however, the topic of user experience is often discussed in rather technical terms. On top of that, there is no such thing as a single point of contact in any larger IT organization for all the systems that make up the service that users have to interact with. Case in point – there is a network team (maybe even split between local area networks, wide area networks, and wireless technologies), there is a server virtualization team, there is a storage team, there is a PC support team, various application teams, and we can think of many other silos.

So, the monitoring tools that are available in the marketplace basically map into these silos as well. Broadly speaking, there are tools that are really good at network monitoring, which means they look at the network infrastructure (routers, switches, and so forth) as well as the packets that are flowing through it. Thanks to the seven-layer OSI model, there is data available not only around connections, TCP ports, IP addresses, and network latency, but also the ability to look into the payload of the packets themselves. The latter means being able to understand whether the network connection is about the HTTP protocol for web browsing, PCoIP or ICA/HDX for application and desktop virtualization, SQL database queries, etc. Because that type of protocol information is in the top layer of the model, also called the application layer, vendors often position this type of monitoring as “application monitoring”, although it really has little to do with looking into the applications and their behavior on the system. Despite this kind of application layer detail in the networking stack, the data is not sufficient at all to figure out the end user experience. We may be able to see that a web server takes longer than expected to return the requested object of a web page, but we have no idea WHY that might be so. This is because network monitoring only sees network packets – from the point when they leave one system, are received by another system, and then have a corresponding response go the other way – back and forth, but with no idea what is happening inside the systems that are communicating with each other.

The story repeats itself in other silos as well. The hypervisor teams are pretty good at determining that a specific virtual machine is consuming more than its “fair share” of resources on the physical server and is therefore forcing other workloads to wait for CPU cycles or memory allocation. The key is that they won’t know what activity in which workload is causing a spike in resources. The storage teams can get really detailed about the sizing and allocation of LUNs, the IOPS load on the storage system, and the request volumes, but they won’t know WHY the storage system is spiking at a given point in time.

The desktop or PC support teams… oh, wait – many of them don’t have a monitoring system, so they are basically guessing, asking users to reboot the system, resetting the Windows profile, or blaming it on the network. Before I get a ton of hate mail on the subject – it’s really hard to provide end-user support, because we don’t typically have the tools to see what the user is really doing (and users are notoriously bad at accurately describing the symptoms they are seeing).

Then there’s application monitoring, which is the art and science of baselining and measuring specific transaction execution times on complex applications such as ERP systems or electronic medical records applications. This is very useful for seeing whether a configuration change or systems upgrade has a systemic impact, but beyond the actual timing of events, there is little visibility into the root cause of things (is it the server, the efficiency of the code itself, the load on the database, etc.?).

What all this leads to is that a user may experience performance degradation that impacts their quality of work (or worse, their ability to do any meaningful work), and each silo is then looking at its specific dashboards and monitoring tools just to raise its hands and shout “it’s not me!” That is hardly end-to-end, but just a justification to carry on and leave the users to fend for themselves.

Most well-run IT organizations actually have a pretty good handle on their operational areas and can quickly spot and remediate any infrastructure problems. However, the vast majority of challenges that impact users directly without leading to a flat-out system outage come down to users and their systems competing with each other for resources. This is especially true in the age of server-based computing and VDI. One user does something resource-intensive, and all other users who happen to have their applications or desktops hosted on the same physical device suffer as a result. This is exacerbated by the desire to keep costs in check: many VDI and application hosting environments are sized with very little room to spare for flare-ups in user demand.

This is exactly why it is so important to have a monitoring solution that has deep insights into the operating system of the server, virtual server, desktop, VDI image, endpoint, PC, laptop, etc., and can actually discern what is going on: which applications are running, crashing, misbehaving, consuming resources, and so on.

Only that type of technology (combined with the infrastructure pieces mentioned above) can provide true end-to-end visibility into the user experience.

It is one thing to notice that there is a problem or “slowness” on the network and it is something else entirely to be able to pinpoint the possible root causes, establish patterns, and then proactively alarm, alert, and remediate the issues.

Speaking to IT organizations, system integrators, and customers over the years reveals one common theme: IT administrators would like to have ALL of the pertinent data available AND have it all presented in a single dashboard or single pane of glass. Vendors are just responding to that desire by talking about their products as “end-to-end”, even though most of the monitoring aspects are not end-to-end at all, as we have seen. If you have the same requirement, have a look at SysTrack. It’s the leading tool for collecting thousands of data points from PCs, desktops, laptops, servers, virtual servers, and virtual desktops, and it can seamlessly integrate with third-party data sources to provide actionable data in the way you would like to see it. We’re not networking experts in the packet analysis business, but we can tap into data sources from network monitors and present that data along with user behavior and system performance. That is a powerful combination of granular data and provides truly end-to-end capabilities as a system of record and an end user success platform.

 

Check out our latest solution brief on end user computing success.

Let me know what you think and follow me on twitter @florianbecker