This application is related to U.S. patent application Ser. No. 13/931,933, filed Jun. 30, 2013, entitled “Relative Frame Rate as Display Quality Benchmark for Remote Desktop,” which is commonly owned and incorporated by reference in its entirety.
In a typical virtual desktop infrastructure (VDI) architecture, displays and input devices are local, and applications execute remotely in a server. A user's desktop is typically hosted in a datacenter or cloud, and the user remotely interacts with her desktop via a variety of endpoint devices, including desktops, laptops, thin clients, smart phones, and tablets. There are many other instances where users may interact with a computer system remotely.
The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings.
User experience is a key consideration when organizations decide on software for deploying remote desktops. An important way to measure user experience is to determine the display quality as seen by users. One indicator of display quality is the relative frame rate, which is the ratio between the frame rate at the client and the frame rate at the server. To measure the frame rates, a video player plays a timestamp video at the server, and the screens at the server side and the client side are captured. The frame rate on each side is determined as the number of captured frames with unique timestamps divided by the duration of the capture.
When a user resizes the video player, the timestamp is also resized and may be partially cut off. For example, a portion of the timestamp near the edge may be cut off when the video is resized. Thus the timestamps should be detectable when they are resized and partially cut off. When a remote display protocol uses a lossy compression to transmit the video from the server to the client, noise is introduced into the video. Thus the timestamps should resist noise introduced by the lossy compression.
In accordance with examples of the present disclosure, a timestamp that is detectable after being resized or lossily compressed is provided. The timestamp includes data elements of first and second colors that are spaced apart and set against a background of a third color so adjacent data elements are separated by areas of the third color between them. For example, the timestamp includes black and white columns that are spaced apart and set against a red background.
When the timestamp is resized, adjacent columns remain separated by red areas between them so the columns can be detected. The length of the columns may allow the timestamp to be detected even when it is partially cut off. The columns are also sized so they are greater than a processing unit of a lossy compression in order to resist noise introduced into the timestamp during encoding.
VM 106-n includes a guest operating system (OS) 116 and a benchmark server application 118. Client 108-n includes an OS 120, a desktop viewer application 122, and a benchmark client application 124. Desktop viewer application 122 displays the remote desktop of VM 106-n on client 108-n. Benchmark server application 118 and benchmark client application 124 work together to benchmark the user experience of the VDI in system 100.
In block 202, benchmark server application 118 plays a timestamp video on the remote desktop on VM 106-n at the server side. The timestamp video is played back at a fast rate, such as 100 fps. The timestamp video has frames embedded with unique timestamps.
Each data element 402 has a first color or a second color, such as black or white. Data elements 402 are spaced apart and set against a background 404 that has a different color, such as red. This allows adjacent data elements 402 to remain separated by red areas between them when the timestamp is resized. Data elements 402 are elongated so they may be partially cut off and still detected.
Data elements 402 are sized greater than a processing unit of a lossy compression, such as the 8 by 8 pixel block used by JPEG. This allows each data element 402 to be encoded with as much of its original color (pure black or pure white) as possible, reducing the noise introduced by the lossy compression.
Data elements 402 may be columns, and each column 402 may be 12 by 24 pixels on the screen. Background 404 may be rectangular. A portion 406 of background 404 forms an identifier marker for timestamp 302. Identifier marker 406 may be located above (or below) columns 402. Timestamp 302 further includes an end marker 408 of another color, such as blue. End marker 408 may be located to the left (or right) of background 404.
Each column 402 is mapped to a bit based on its color. For example, black columns are mapped to a zero bit while white columns are mapped to a one bit. Note that a sixteen-column timestamp 302 provides a time span of (2^16 frames)/(100 fps)=655.36 seconds≈11 minutes.
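The column-to-bit mapping above can be sketched in pure Python. This is an illustrative sketch, not the patented implementation: the gap width between columns, the identifier-marker height, and the pure RGB color values are assumptions; an image here is simply a list of pixel rows.

```python
# Pure-Python sketch; an image is a list of rows, each row a list of (R, G, B).
RED, BLUE = (255, 0, 0), (0, 0, 255)
BLACK, WHITE = (0, 0, 0), (255, 255, 255)
COL_W, COL_H, N_COLS = 12, 24, 16   # column size and count from the description
GAP, MARKER_H = 12, 8               # assumed gap width and marker-strip height

def encode_timestamp(frame_index):
    """Render a 16-column timestamp: black column = 0 bit, white column = 1 bit
    (most significant bit first), set against a red background with a red
    identifier-marker strip above the columns."""
    width = N_COLS * (COL_W + GAP) + GAP
    img = [[RED] * width for _ in range(MARKER_H + COL_H)]
    for i in range(N_COLS):
        color = WHITE if (frame_index >> (N_COLS - 1 - i)) & 1 else BLACK
        x0 = GAP + i * (COL_W + GAP)
        for y in range(MARKER_H, MARKER_H + COL_H):
            for x in range(x0, x0 + COL_W):
                img[y][x] = color
    return img

def decode_timestamp(img):
    """Recover the frame index by sampling the center of each column.
    The green channel distinguishes white (G=255) from both black and red (G=0)."""
    y = MARKER_H + COL_H // 2
    value = 0
    for i in range(N_COLS):
        x = GAP + i * (COL_W + GAP) + COL_W // 2
        r, g, b = img[y][x]
        value = (value << 1) | int(g > 128)
    return value
```

Because adjacent columns are separated by red gaps, a resized copy of this pattern still yields one run of black or white pixels per column, which is what makes the center-sampling decode workable.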
In block 204, benchmark server application 118 captures frames of at least a portion of the video on the screen of VM 106-n at the server side for a predetermined amount of time, such as one to several minutes. In one example, benchmark server application 118 captures the screen at a rate, such as 200 fps, faster than the playback rate of the timestamp video. Benchmark server application 118 detects and reads the timestamp on each captured frame, and counts the number of captured frames with unique timestamps. Block 204 may be followed by block 206.
In block 206, benchmark server application 118 calculates the frame rate on VM 106-n at the server side. The frame rate is calculated as follows:

Frame rate=(SUM of unique frames)/(Time span of screen capture)  (1)
where “SUM of unique frames” is the count from block 204, and the “Time span of screen capture” is equal to the difference between the last and first timestamps divided by the timestamp playback rate (e.g., 100 fps). Block 206 may be followed by block 208.
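Equation (1) can be exercised with a small helper; the capture numbers in the comment are hypothetical:

```python
def frame_rate(unique_frame_count, first_ts, last_ts, playback_fps=100):
    """Equation (1): frame rate = unique frames / time span of screen capture,
    where the time span is (last timestamp - first timestamp) / playback rate."""
    time_span = (last_ts - first_ts) / playback_fps  # seconds
    return unique_frame_count / time_span

# Hypothetical capture: first timestamp 0, last timestamp 6000 at 100 fps
# gives a 60-second span, so 5,000 unique frames measure as ~83.3 fps.
```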
In block 208, benchmark client application 124 captures frames of at least a portion of the video on the screen of client 108-n on the client side for a predetermined amount of time, such as one to several minutes. In one example, benchmark client application 124 captures the screen at a rate, such as 50 fps, faster than the rate of the remote display protocol, such as 30 fps. Benchmark client application 124 may save the captured frames in a shared folder on VM 106-n that can be accessed by benchmark server application 118. Block 208 may be followed by block 210.
In block 210, benchmark server application 118 calculates the frame rate at client 108-n on the client side. First, benchmark server application 118 retrieves the captured frames from the shared folder, detects and reads the timestamp on each captured frame, and counts the number of captured frames with unique timestamps. The frame rate at client 108-n is then calculated with equation (1) described above. Block 210 may be followed by block 212.
In block 212, benchmark server application 118 calculates the relative frame rate as follows:

Relative frame rate=(Frame rate at client)/(Frame rate at server)  (2)
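As a minimal sketch of equation (2), with hypothetical rates in the usage note:

```python
def relative_frame_rate(client_fps, server_fps):
    """Equation (2): ratio of the client-side to the server-side frame rate.
    A value near 1.0 indicates the client keeps up with the server."""
    return client_fps / server_fps

# E.g., a hypothetical client rate of 28.5 fps against a server rate of
# 95 fps yields a relative frame rate of 0.3.
```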
In block 502, the application starts to process a captured frame 300. Block 502 may be followed by block 504.

In block 504, the application determines if consecutive red pixels of an identifier marker 406 have been detected. When identifier marker 406 is detected, block 504 may be followed by block 506.

In block 506, the application determines if consecutive blue pixels of an end marker 408 have been detected. When end marker 408 is detected, block 506 may be followed by block 508.
In block 508, the application starts to process the area under identifier marker 406 line by line to look for white and black pixels. Block 508 may be followed by block 510.
In block 510, the application scans the current line pixel by pixel until a white or black pixel of a column 402 has been detected. When a white or a black pixel is detected, block 510 may be followed by block 512. When a white or a black pixel is not found in the current line, block 510 may loop back to block 508 to process the next line under identifier marker 406.
In block 512, the application records a bit value corresponding to the color of the detected column 402. Block 512 may be followed by block 514.
In block 514, the application continues to scan the following pixels of the current line until a red pixel indicating the end of the detected column 402 has been detected. When a red pixel is detected, block 514 may be followed by block 516. Otherwise block 514 loops back to itself to process the current line until a red pixel is detected.
In block 516, the application scans the following pixels of the current line until a blue, white, or black pixel has been detected. When a blue pixel is detected, the application has reached the end of timestamp 302, so block 516 may be followed by block 518, which ends method 500 as all the columns have been detected and the bit values of timestamp 302 have been determined. Alternatively, to confirm the recorded bit values, block 516 may loop back to block 508 to process the next line under identifier marker 406. When a white or black pixel is detected, the application has detected a new column 402, so block 516 loops back to block 512 to record a bit value corresponding to the color of the detected column 402.
In addition to remote desktops running on VMs, the benchmark methods and applications in the present disclosure may also be applied to systems with remote desktops running on physical machines.
From the foregoing, it will be appreciated that various embodiments of the present disclosure have been described herein for purposes of illustration, and that various modifications may be made without departing from the scope and spirit of the present disclosure. For example, a timestamp may include data elements that are black and white radii set against a red background that is circular. Accordingly, the various embodiments disclosed herein are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Number | Date | Country
---|---|---
20150304655 A1 | Oct 2015 | US