The present disclosure relates to quantifying digital experiences, and more specifically to determining, on a periodic basis, whether users of technology are enjoying their experience and are able to work productively with it.
Information Technology (IT) services are an important part of the modern work experience. However, quantifying and measuring the quality of user experiences with IT services and solutions remains a difficult challenge.
Additional features and advantages of the disclosure will be set forth in the description that follows, and in part will be understood from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.
Disclosed are systems, methods, and non-transitory computer-readable storage media which provide a technical solution to the technical problem described. A method for performing the concepts disclosed herein can include: receiving, at a computer system from at least one client device via a network, technology performance data of the at least one client device over a plurality of periods of time, the at least one client device being associated with a user and an IT environment, the technology performance data comprising: endpoint data identifying operational aspects of the at least one client device; application data identifying operational aspects of at least one application executed by the at least one client device; and collaboration data identifying at least one of a collaboration software program executed by the at least one client device, wherein the endpoint data, the application data, and the collaboration data comprise time metadata and IT environment metadata identifying when and in which context an event took place; calculating, via at least one processor of the computer system for each period of time within the plurality of periods of time, a level of experience based on the endpoint data, the application data, the collaboration data, and the time and IT environment metadata, resulting in a plurality of experience levels, the plurality of experience levels respectively corresponding to the periods of time within the plurality of periods of time; computing, via the at least one processor, a cumulative experience score over a selected timeframe by combining experience levels associated with the selected timeframe and within the plurality of experience levels; and transmitting the plurality of experience levels and the cumulative experience score to an Information Technology team.
A system configured to perform the concepts disclosed herein can include: at least one processor; and a non-transitory computer-readable storage medium having instructions stored which, when executed by the at least one processor, cause the at least one processor to perform operations comprising: receiving, from at least one client device via a network, technology performance data of the at least one client device over a plurality of periods of time, the at least one client device being associated with a user, the technology performance data comprising: endpoint data identifying operational aspects of the at least one client device; application data identifying operational aspects of at least one application executed by the at least one client device; and collaboration data identifying at least one of a collaboration software program executed by the at least one client device, wherein the endpoint data, the application data, and the collaboration data comprise time metadata and IT environment metadata identifying when and in which context an event took place; calculating, for each period of time within the plurality of periods of time, a level of experience based on the endpoint data, the application data, the collaboration data, and the time and IT environment metadata, resulting in a plurality of experience levels, the plurality of experience levels respectively corresponding to the periods of time within the plurality of periods of time; computing a cumulative experience score over a selected timeframe by combining experience levels associated with the selected timeframe and within the plurality of experience levels; and transmitting the plurality of experience levels and the cumulative experience score to an Information Technology team.
A non-transitory computer-readable storage medium configured as disclosed herein can have instructions stored which, when executed by a computing device, cause the computing device to perform operations which include: receiving, from at least one client device via a network, technology performance data of the at least one client device over a plurality of periods of time, the at least one client device being associated with a user, the technology performance data comprising: endpoint data identifying operational aspects of the at least one client device; application data identifying operational aspects of at least one application executed by the at least one client device; and collaboration data identifying at least one of a collaboration software program executed by the at least one client device, wherein the endpoint data, the application data, and the collaboration data comprise time and IT environment metadata identifying when and in which context an event took place; calculating, for each period of time within the plurality of periods of time, a level of experience based on the endpoint data, the application data, the collaboration data, and the time and IT environment metadata, resulting in a plurality of experience levels, the plurality of experience levels respectively corresponding to the periods of time within the plurality of periods of time; computing a cumulative experience score over a selected timeframe by combining experience levels associated with the selected timeframe and within the plurality of experience levels; and transmitting the plurality of experience levels and the cumulative experience score to an Information Technology team.
Various embodiments of the disclosure are described in detail below. While specific implementations are described, this is done for illustration purposes only. Other components and configurations may be used without departing from the spirit and scope of the disclosure.
Methods, systems, and computer-readable media configured as disclosed herein can quantify and measure an individual's digital experience level as they interact with IT services and solutions. While IT services and solutions (“IT systems”) are designed to make workers more productive and satisfied, determining whether those services and solutions are working as desired, and the resulting impact on the worker experience, requires analyzing many different aspects of how the worker interacts with the IT systems. To determine the experience levels for specific time periods, as well as the overall cumulative experience score over a longer timeframe including these time periods, systems configured as disclosed herein can capture the experiences of workers interacting with the different IT systems over different periods of time and determine the worker's experience for each respective period of time based on the captured data. For a given time period, the system can collect multiple different types of data, and an experience level can be determined for each data type. From these multiple experience levels (each associated with a data type) for a given time period, different data determinations can be made. First, an overall experience level for that time period can be determined, calculated from weights associated with the severity of the individual experience levels, quantifying the overall experience of the user over that time period. In addition, the experience levels over multiple time periods can be compiled together, allowing the system to calculate a rolling average of the user's experience over multiple periods of time, providing the overall cumulative experience score for a longer timeframe.
Consider the following example. An employee is working on their computer, and as the employee interacts with their computer, databases, networks, etc., one or more monitoring algorithms are operating in the background. Each time the employee successfully interacts with software, a database, an application, a network, etc., an electronic notification that a successful interaction occurred can be transmitted from the employee's computer across a network to a central computing system. If the employee interacts with the software, database, application, network, etc., in a negative way (e.g., the application crashes, the database is inaccessible, the network lags, etc.), an electronic notification regarding the negative interaction can be transmitted from the employee's computer across the network to the central computing system. Likewise, if the employee interacts with technology in a manner which is neither positive nor negative, an electronic notification indicating a neutral interaction can be transmitted to the central computing system.
These positive, negative, and neutral notifications (i.e., data) can be transmitted for multiple monitored aspects on a constant basis, periodic basis, or batch basis. For example, in some cases the employee's computer, and the associated monitoring software, can transmit all notifications, or a summary of all notifications, once an hour (i.e., periodic reporting) (note that the use of an hour is exemplary—other periods of time may also be used), reducing the bandwidth necessary for the communications. In other cases, the transmission can occur once a predetermined threshold of notifications has been obtained (i.e., batch reporting), which again saves bandwidth. In other cases, the notifications can be transmitted as they occur, providing the latest details to the central computing system at all times. In yet other cases, the way in which reporting is performed can vary depending on specific IT problems encountered, based on repeating instances of negative interactions (e.g., if the employee continues to have negative experiences, the reporting may change from batch to continuous so that IT personnel can immediately be made aware of the issue).
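The reporting modes described above can be sketched as a simple client-side buffer. The following is a minimal, illustrative sketch, not the disclosed implementation: the `InteractionReporter` class, its `batch_size` default, and the `send` callable (standing in for whatever network transport the client device uses) are all assumptions introduced for illustration. It combines batch reporting with the escalation behavior described above, flushing immediately on a negative interaction.

```python
from collections import deque

class InteractionReporter:
    """Illustrative sketch: buffers interaction notifications and flushes
    them to a central computing system in batches. `send` is any callable
    that transmits a list of notifications (a hypothetical transport)."""

    def __init__(self, send, batch_size=50):
        self.send = send
        self.batch_size = batch_size
        self.buffer = deque()

    def record(self, kind, detail):
        # kind is "positive", "neutral", or "negative"
        self.buffer.append({"kind": kind, "detail": detail})
        # Escalation rule from the text: flush immediately on a negative
        # interaction so IT personnel learn of the issue without waiting
        # for the batch to fill.
        if kind == "negative" or len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(list(self.buffer))
            self.buffer.clear()
```

A periodic-reporting variant would instead call `flush()` from a timer (e.g., once an hour), independent of buffer size.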
Once the data/notifications are received by the central computer system, the system can calculate, for both a type of experience measurement being analyzed and the associated period of time in which the notification was issued, an experience level for the type of experience being measured. This can, for example, be based on a threshold number of occurrences specific to that type of interaction, or based on the amount of time during which the occurrences are happening. If, for example, the employee is using ZOOM or another video conferencing system, possible errors could include lag, lack of audio, lack of video, inability to connect, or application crash. While the employee's computer may track and report each interaction, the system may have a single threshold limit defining a negative experience level if the application crashes or does not connect, but may have a threshold of three lag occurrences for a neutral experience and five lag occurrences for a negative experience. Likewise, if the video does not display for five seconds the experience may shift from positive to neutral, and if the video does not display for ten seconds it may shift from neutral to negative. In some cases these thresholds can be manually set by IT services, whereas in other cases these thresholds can be dynamically updated by the computer system based on historical data. In some cases, the thresholds can be set for an entire organization; in other cases, the thresholds can vary based on the unique circumstances of each individual employee, such that the thresholds for employee “A” vary from those of employee “B”.
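The two-threshold rule described above can be expressed as a single comparison function. This is an illustrative sketch only: the `classify` helper and its parameter names are assumptions, and the numeric thresholds simply mirror the lag and video examples above rather than any prescribed values.

```python
def classify(value, neutral_at, negative_at):
    """Illustrative two-threshold rule: a metric value (an occurrence
    count or a duration in seconds) shifts the experience level from
    positive to neutral at `neutral_at`, and from neutral to negative
    at `negative_at`."""
    if value >= negative_at:
        return "negative"
    if value >= neutral_at:
        return "neutral"
    return "positive"

# Lag occurrences: three shift to neutral, five to negative.
lag_level = classify(4, neutral_at=3, negative_at=5)      # -> "neutral"
# Missing video, in seconds: five shift to neutral, ten to negative.
video_level = classify(12, neutral_at=5, negative_at=10)  # -> "negative"
```

A single-threshold event such as an application crash is the degenerate case where `neutral_at` and `negative_at` coincide at one occurrence.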
Using the data for each type of interaction and the associated thresholds, the system can identify an experience level for that interaction type which is specific to the period of time being analyzed, such that there are multiple experience levels for a given period of time, each experience level being associated with a particular type of interaction or other part of the user experience. Based on these multiple experience levels, the system can determine a time period experience level, which is a combination of one or more of the individual experience levels calculated for that period of time.
In some configurations, the time period experience level can be based on the lowest experience level the user has during that period of time. For example, if the computer system has a hierarchy for the different ways in which the user can interact with the system, the overall experience level can reflect the lowest experience level of any sub-level. The result is that if, for example, the employee's network communications experience level is positive for a given period of time, their overall experience level for that period of time may be negative if the employee had a negative experience with a different aspect of IT. In other cases, the system can weight the different experiences based on their positions within the hierarchy, such that a particularly frustrating experience would be weighted higher, and thus it would be more likely that the overall experience level will be negative. That is, the system can weight certain types of interaction more than others, with the result being that the time period experience level reflects a weighted average of individual types of interaction. For example, the weights can be based on the impact on the user experience (e.g., the poorer an experience, the higher the weight).
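Both combination rules described above can be sketched briefly. In this illustrative sketch, the severity ordering, the function names, and the particular weight values are assumptions chosen to demonstrate the two approaches, not values prescribed by the disclosure.

```python
SEVERITY = {"positive": 0, "neutral": 1, "negative": 2}
LEVELS = ["positive", "neutral", "negative"]

def period_level_lowest(levels):
    """Lowest-experience rule: the time period inherits the worst
    experience level among its individual interaction types."""
    return max(levels, key=lambda lv: SEVERITY[lv])

def period_level_weighted(levels, weights=(1.0, 2.0, 4.0)):
    """Weighted rule: poorer experiences carry larger weights (indexed
    by severity), so a single negative interaction pulls the period's
    overall level down more than a positive one pulls it up. The weight
    values are illustrative."""
    total = sum(weights[SEVERITY[lv]] for lv in levels)
    score = sum(weights[SEVERITY[lv]] * SEVERITY[lv] for lv in levels) / total
    return LEVELS[round(score)]
```

Under the lowest-experience rule, one negative sub-level makes the whole period negative; under the weighted rule, it merely makes a negative outcome more likely.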
In many instances, the periods of time for which experience levels are calculated are contiguous, such that one period of time immediately follows another. However, in other instances, there may be gaps between the calculated periods of time. This can be due, for example, to the employee being offline, not using IT services, etc.
A cumulative experience score for a given user across multiple time periods can be calculated over a rolling window. This cumulative experience score considers all the experience levels calculated during the rolling window and combines them to compute a cumulative experience score (or KPI (Key Performance Indicator)) which can reflect the user's overall experience level over multiple time periods. Like the time period experience levels, the cumulative experience score can be based on different weights associated with each experience level, such that the weights vary according to the impact on the user experience (e.g., the poorer an experience, the higher the weight). For example, the cumulative experience score can look at the past seven days, with the experience level less focused on daily incidents and more focused on managing problems impacting the employee's digital experience. Once the time period experience levels and cumulative experience score are reported to IT groups, and particularly once trends can be calculated for a single user and/or for groups of users, the IT groups will be able to quantify and measure the digital experience of employees and make changes or corrections. In some cases, these groups of users may combine data from users that use distinct operating systems on their computing devices, such as some users using MICROSOFT WINDOWS and other users using APPLE OS. In other cases, a group of users may have some users using different sets of equipment (e.g., some on tablets, others on laptops). By collecting data from users using different equipment and/or different operating systems, the systems disclosed herein overcome the technical differences associated with those equipment/software distinctions, resulting in an improved capacity to diagnose common issues across distinct technical platforms.
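The rolling-window KPI described above might be computed as follows. This is an illustrative sketch: the numeric values assigned to each level, the weight values, and the seven-period default window are all assumptions introduced for demonstration.

```python
def cumulative_score(period_levels, window=7):
    """Illustrative rolling-window KPI: combine the experience levels of
    the most recent `window` time periods (e.g., the past seven days)
    into a single score in [0, 1]. Poorer levels carry higher weights,
    so recurring problems dominate isolated good days."""
    values = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}
    weights = {"positive": 1.0, "neutral": 2.0, "negative": 4.0}
    recent = period_levels[-window:]          # only the rolling window counts
    total_weight = sum(weights[lv] for lv in recent)
    return sum(weights[lv] * values[lv] for lv in recent) / total_weight
```

For example, six positive days and one negative day yield 6.0/10.0 = 0.6 under these weights, noticeably below the unweighted average of roughly 0.86, reflecting the higher weight given to the poor experience.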
With that general description, consider the examples provided in the figures.
For example, in the first hour, the user experiences a long time to connect to their Virtual Desktop 218, as well as an efficient collaboration with an application suite 220, resulting in a neutral experience level. In the second hour, the user loses connection during focus time 222, resulting in a negative experience level for the period, which extends to a neutral rating in the third hour. In the fourth hour the user reads emails on a small mobile screen 224 while commuting to work 212, resulting in a neutral rating. Once the user arrives at work, they work with a reliable CRM (Customer Relationship Management) application 226, resulting in positive experiences, then perform a manual process that could be automated 228, resulting in a negative experience level. Finally, the user is able to seamlessly connect via a videocall 230, resulting in a positive experience level. In order to compute the cumulative experience score of this user over the multiple time periods illustrated in this example, the system combines the different experience levels.
IT personnel receive data regarding the time period(s), such as the user experience levels, the time period experience levels (i.e., for each hour), the cumulative experience score (or KPI) for the multiple time periods, and any associated data. The IT personnel can then look at the positive, neutral, and negative experiences and, based on that data, make modifications to the IT environment such that future experiences will be more positive for the user, as well as for other users within the organization.
Within the technology performance factor 304 are three different types of sub-factors: endpoint 308 factors, which are factors associated with hardware performance; application 310 factors, which are associated with performance of computer applications; and collaboration 312 factors, which are associated with how collaboration software, such as conferencing solutions, performed. Non-limiting examples of the endpoint 308 factors can include logon duration 314, network latency for a VDI (Virtual Desktop Interface) 316, non-activated OS (Operating System) 318, download speed 320, and upload speed 322.
Non-limiting examples of application factors can include data associated with specific apps, such as a non-specific “Desktop app N” 324 or a “Web app N” 332. Within the desktop app N 324, exemplary data which can be collected can include the number of application crashes 326, the number of freezes 328, and the startup time 330 for the individual application. Non-limiting examples of web application 332 data which can be collected can include page load time 334, transition duration 336, and error ratio 338.
Collaboration 312 data refers to data obtained in applications or software where individual employees can speak with, see, and/or otherwise work with other employees, such as (but not limited to) video conferencing software. Examples can include MICROSOFT TEAMS 340 and ZOOM 346. For each of these, the system can collect data such as audio quality 342, 348 and video quality 344, 350, for the respective video conferencing software.
The hierarchy defines how experience levels are calculated and defined, with the endpoint 308 data having an experience level calculated from datapoints 314, 316, 318, 320, 322 lower in the hierarchy. Other categories of data, such as applications 310 data and collaboration data 312, are similarly determined based on the data lower in the hierarchy for those respective datatypes. If other types of data are collected (beyond endpoint 308, application 310, and collaboration 312), those other data types can be derived from sub-categories of data located within the hierarchy.
With the experience levels for the number of freezes 404 and the startup time 406 for a common period of time determined, the system determines the overall experience level 402 for this period of time based on the lower experience level found among the subsidiaries 404, 406. In this case, the negative experience level associated with the number of freezes 404 renders the overall experience level 402 negative. In other configurations, the various subsidiaries (in this case, 404, 406) can have their experience levels weighted. For example, if there are multiple neutral subsidiaries, the overall experience level may be negative simply due to the aggregation of so many neutral subsidiaries. In other cases, an application or aspect of the IT services may be particularly important, such that how the employee interacts with that one aspect outweighs other factors in determining the overall experience level.
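The propagation of experience levels up the hierarchy can be sketched recursively. In this illustrative sketch, the dictionary-based tree representation, the `node_level` function, and the example leaf names are assumptions; the rule implemented is the lowest-experience rule described above, under which the freezes datapoint drives the parent's level.

```python
SEVERITY = {"positive": 0, "neutral": 1, "negative": 2}

def node_level(node):
    """Illustrative recursive rule: a leaf carries its own measured
    experience level; an interior node of the hierarchy (e.g., a
    desktop application) takes the worst level found among its
    children."""
    if "level" in node:
        return node["level"]
    return max((node_level(child) for child in node["children"]),
               key=lambda lv: SEVERITY[lv])

# Hypothetical application node with two measured datapoints, mirroring
# the freezes/startup-time example above.
app = {"name": "desktop app", "children": [
    {"name": "number of freezes", "level": "negative"},
    {"name": "startup time", "level": "positive"},
]}
```

Here `node_level(app)` returns "negative", since the negative freezes datapoint outranks the positive startup time. A weighted variant would replace the `max` with a weighted aggregation over the children.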
The method continues, with the system calculating, via at least one processor of the computer system for each period of time within the plurality of periods of time, a level of experience based on the endpoint data, the application data, the collaboration data, and the time and IT environment metadata, resulting in a plurality of experience levels, the plurality of experience levels respectively corresponding to the periods of time within the plurality of periods of time (612). The system can then compute a cumulative experience score over a selected timeframe by combining experience levels associated with the selected timeframe and within the plurality of experience levels (614). The plurality of experience levels and cumulative experience score are transmitted to an Information Technology team (616).
In some configurations, the plurality of periods of time can include at least one of consecutive minutes, consecutive hours, and consecutive days.
In some configurations, the plurality of periods of time can include non-consecutive periods of time.
In some configurations, the collaboration software program can include at least one of MICROSOFT TEAMS and ZOOM.
In some configurations, the plurality of experience levels and the cumulative experience score can be computed for a single individual user, whereas in other configurations the plurality of experience levels and the cumulative experience score are computed for a plurality of users. Where the plurality of experience levels and the cumulative experience score are computed for a plurality of users, the plurality of users may use distinct computer operating systems. For example, some users may use MICROSOFT WINDOWS, whereas other users may use APPLE OS.
In some configurations, the calculating of the level of experience can further include: comparing each metric within the endpoint data, the application data, and the collaboration data to at least one respective predetermined metric-specific threshold, resulting in metric comparisons; identifying, based on the metric comparisons, metric-specific experience levels; and assigning the level of experience for each period of time based on a lowest ranked level of experience within the metric-specific experience levels. In such configurations, the at least one respective predetermined metric-specific threshold can be customized to a user.
In some configurations, the plurality of experience levels can be selected from categories comprising positive, negative, and neutral.
With reference to
The system bus 710 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS) stored in ROM 740 or the like, may provide the basic routine that helps to transfer information between elements within the computing device 700, such as during start-up. The computing device 700 further includes storage devices 760 such as a hard disk drive, a magnetic disk drive, an optical disk drive, tape drive or the like. The storage device 760 can include software modules 762, 764, 766 for controlling the processor 720. Other hardware or software modules are contemplated. The storage device 760 is connected to the system bus 710 by a drive interface. The drives and the associated computer-readable storage media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computing device 700. In one aspect, a hardware module that performs a particular function includes the software component stored in a tangible computer-readable storage medium in connection with the necessary hardware components, such as the processor 720, bus 710, display 770, and so forth, to carry out the function. In another aspect, the system can use a processor and computer-readable storage medium to store instructions which, when executed by a processor (e.g., one or more processors), cause the processor to perform a method or other specific actions. The basic components and appropriate variations are contemplated depending on the type of device, such as whether the device 700 is a small, handheld computing device, a desktop computer, or a computer server.
Although the exemplary embodiment described herein employs the hard disk 760, other types of computer-readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, digital versatile disks, cartridges, random access memories (RAMs) 750, and read-only memory (ROM) 740, may also be used in the exemplary operating environment. Tangible computer-readable storage media, computer-readable storage devices, or computer-readable memory devices, expressly exclude media such as transitory waves, energy, carrier signals, electromagnetic waves, and signals per se.
To enable user interaction with the computing device 700, an input device 790 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 770 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing device 700. The communications interface 780 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
The technology discussed herein refers to computer-based systems and actions taken by, and information sent to and from, computer-based systems. One of ordinary skill in the art will recognize that the inherent flexibility of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components. For instance, processes discussed herein can be implemented using a single computing device or multiple computing devices working in combination. Databases, memory, instructions, and applications can be implemented on a single system or distributed across multiple systems. Distributed components can operate sequentially or in parallel.
Use of language such as “at least one of X, Y, and Z,” “at least one of X, Y, or Z,” “at least one or more of X, Y, and Z,” “at least one or more of X, Y, or Z,” “at least one or more of X, Y, and/or Z,” or “at least one of X, Y, and/or Z,” are intended to be inclusive of both a single item (e.g., just X, or just Y, or just Z) and multiple items (e.g., {X and Y}, {X and Z}, {Y and Z}, or {X, Y, and Z}). The phrase “at least one of” and similar phrases are not intended to convey a requirement that each possible item must be present, although each possible item may be present.
The various embodiments described above are provided by way of illustration only and should not be construed to limit the scope of the disclosure. Various modifications and changes may be made to the principles described herein without following the example embodiments and applications illustrated and described herein, and without departing from the spirit and scope of the disclosure. For example, unless otherwise explicitly indicated, the steps of a process or method may be performed in an order other than the example embodiments discussed above. Likewise, unless otherwise indicated, various components may be omitted, substituted, or arranged in a configuration other than the example embodiments discussed above.