The present invention relates to methods and systems for measuring performance of remote display and playback, and more particularly, methods and systems for measuring performance of remote display and playback in a virtual desktop environment.
In a typical virtualized desktop infrastructure architecture, user displays and input devices are local, but applications execute remotely in a server. Because applications execute remotely, a latency element is introduced by network travel time and application response time. One method of assessing the performance of remote applications is to measure the response times for various events. These response times are the result of aggregating latency across different components in the architecture. Measuring these latencies, however, is a challenge, as measurements must encompass latencies related to both low-level events (such as mouse movements) and high-level events (such as application launches), and must work across network boundaries and a range of client devices.
Virtual Desktop Infrastructure (VDI) deployments are rapidly becoming popular. In VDI deployments, a user's desktop is typically hosted in a datacenter or cloud, and the user remotely interacts with her desktop via a variety of endpoint devices, including desktops, laptops, thin clients, smart phones, tablets, etc. There is a wide variety of advantages to leveraging this approach, including cost savings, improved mobility, etc. However, for these VDI environments to become ubiquitous, the user should not be constrained in the type of applications that can be successfully run. Accordingly, it is necessary to ensure that, when required, sufficient computational resources can be made available in the data center and that, without compromising quality, there is sufficient network bandwidth to transmit the desired imagery and audio to the user's endpoint device. In order to ensure proper quality of delivery, it is necessary to automatically monitor audio quality and the synchronization of audio and video.
Embodiments of the present invention provide methods, systems, and computer programs for monitoring quality of audio delivered over a communications channel. It should be appreciated that the present invention can be implemented in numerous ways, such as a process, an apparatus, a system, a device or a method on a computer readable medium. Several inventive embodiments of the present invention are described below.
In one embodiment, a method includes an operation for defining timestamps. The timestamps are associated with a measure of time while delivering audio to a client computer, where each timestamp includes a plurality of timestamp bits. Further, the method includes an operation for modulating an audio signal with pseudo noise (PN) codes when a timestamp bit has a first logical value, and modulating the audio signal with a negative of the PN codes when the timestamp bit has a second logical value. After transmitting the modulated audio signal to the client computer, the timestamp bits are extracted from a received modulated audio signal to obtain received timestamps. The quality of the audio is assessed based on the received timestamps, and the quality of the audio is stored in computer memory.
In another embodiment, a system for monitoring quality of audio delivered over a communications channel includes a performance manager in a server computer and a performance agent in a client computer. The server computer holds an audio signal, and the performance manager defines timestamps that are associated with a measure of time while delivering audio to the client computer. Each timestamp includes a plurality of timestamp bits, and the performance manager modulates an audio signal with PN codes when a timestamp bit has a first logical value and modulates the audio signal with a negative of the PN codes when the timestamp bit has a second logical value. The performance agent extracts the timestamp bits from a received modulated audio signal from the server computer and obtains received timestamps. Further, the performance agent assesses a quality of the audio based on the received timestamps and stores the quality of the audio in computer memory.
In yet another embodiment, a computer program embedded in a non-transitory computer-readable storage medium, when executed by one or more processors, for monitoring quality of audio delivered over a communications channel, includes program instructions for defining timestamps, which are associated with a measure of time while delivering audio to a client computer. Each timestamp includes a plurality of timestamp bits. The computer program further includes program instructions for modulating an audio signal with PN codes when a timestamp bit has a first logical value, and for modulating the audio signal with a negative of the PN codes when the timestamp bit has a second logical value. Additional program instructions are used for transmitting the modulated audio signal to the client computer, and for extracting the timestamp bits from a received modulated audio signal to obtain received timestamps. In addition, the computer program includes program instructions for assessing a quality of the audio based on the received timestamps, and for storing the quality of the audio in computer memory.
Other aspects of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.
The invention may best be understood by reference to the following description taken in conjunction with the accompanying drawings in which:
In Virtual Desktop Infrastructure (VDI) deployments there is a need to ensure that the user has a good audio and visual experience, even when running the most demanding of applications in the most challenging situations. There is a desire to automatically monitor audio quality and audio-video synchronization in the benchmark and capacity planning environments.
Embodiments of the invention present techniques to use spread-spectrum communications to introduce hardened timestamps into audio streams in the VDI environment, as a way to provide real-time feedback on audio quality and audio-video synchronization. Further, embodiments of the invention overcome many additional challenges that are particular to the VDI environment to achieve high information bit rates in the low bit rate audio environments. It should be noted that some embodiments are presented within a VDI environment, but the principles of the invention can be used for other types of deployments and other communications channels using analog or digital signals. The embodiments illustrated herein should therefore not be interpreted to be exclusive or limiting, but rather exemplary or illustrative.
It will be obvious, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.
In one embodiment, performance server 138 is also part of virtualization server 102. Performance server 138 collects performance data from servers and clients and analyzes the data collected for presentation to a user. A simplified architecture of virtualization server 102 is shown in
Virtual machines 104a-n include a Guest Operating System (GOS) supporting applications running on the GOS. A different view of virtual machine 104n includes desktop view 110, windows-based user interface 112, and performance agent 114. Performance agent 114 is a process executing in the VM that monitors the quality of video and audio delivered to the client. On the other end, a local client 128 includes display 130, remote desktop client 132, and performance agent 134. Desktop view 110 of the virtual machine corresponds to the view rendered on display 130 at the client, which presents windows-based user interface 112.
Performance agent 134 cooperates with performance agent 114 for the collection of audio and video quality metrics. Embodiments of the invention measure the performance of the virtual environment as seen by users 136a-m, even in the presence of firewalls 140 that may block out-of-band communications. The embodiments presented are independent of the communications protocols used to transfer display data, thus being able to reliably obtain performance measurements under different topologies and protocols and to assess how different factors affect virtual desktop performance. Further still, the methods presented can scale to tens of thousands of clients and servers without unduly burdening the virtual infrastructure.
To accurately measure audio fidelity, as perceived on the endpoint device, it is necessary to examine the audio stream received by the endpoint and ascertain, firstly, how closely the received stream matches the ‘original’ audio stream and, secondly, how a user would perceive the differences. If a user is sitting at her desktop and listening to the audio stream locally, how does that experience compare with the experience in the VDI environment? The following factors can have an impact on audio or video delivery:
At first glance, it may simply seem necessary to capture the audio stream on the endpoint device (as it is being sent to the speakers) and then compare it with the original stream. Unfortunately, given the alterations that can occur to the stream in the VDI environment (a result of the host CPU, network, and compression issues discussed above), this analysis can rapidly become computationally challenging, especially on resource-constrained endpoint devices. Embodiments of the invention use timing markers within audio streams to simplify this matching process. During benchmarking and provisioning, there is offline access to the audio and video streams that will be played during testing. Accordingly, it is feasible to modify the original audio and video files to include these markers. By doing so, the markers provide end-to-end timing protection. Further, it is potentially acceptable to perturb the audio stream by the marker insertion during testing, because it is not a live deployment where users are actually listening to the streams. However, it should be noted that the principles of the invention can also be used in live deployments by making the markers introduce only a small amount of noise that users do not notice, or perceive only faintly.
It should be noted that the architecture shown in
Performance manager 206 intercepts the audio or video signal before it is sent out to transport module 212 and adds information to the audio or video signal for performing quality measurements, as described below in more detail with reference to
In one embodiment, such as in a test environment, the timestamps are introduced into the audio or video files in advance (offline) and performance manager 206 does not need to intercept or modify the outgoing audio or video streams. Client 218 includes remote desktop environment 220, transport module 226, and performance agent 228. The remote desktop environment provides a user interface, which typically includes a windows environment with video window 222 and audio player 224. The video window 222 delivers videos to the user, and the audio player delivers audio to the user. The signal sent from transport 212 in server 202 via network 214 is received by transport module 226 at client 218. The signal is then processed by the audio or video player to deliver audio or video to the user. The signal is also examined by performance agent 228 to extract the information embedded by performance manager 206 at the server.
Once the information is extracted by performance agent 228, this information is used to compute quality metrics that measure the quality of delivery. More details on the calculation of quality metrics are given below with reference to
A simple method to measure audio quality is to periodically insert samples or markers in the audio stream before transmission to the endpoint device, and then simply locate the markers on the endpoint device. This provides a measure of how fast the audio stream is progressing, and if the inserted markers are being received correctly. However, the modifications incurred by the audio stream during transmission make it difficult, if not impossible, to accurately recover these timing markers. It is necessary to “harden” the markers, such that the markers are more resilient to any modifications incurred during transit.
In order to enable audio-video synchronization at a necessary granularity (e.g., 100 ms), it is not sufficient to insert infrequent periodic markers every second or so. For a typical (e.g., 44.1 kHz) audio stream, it is necessary to be able to detect alignment within 4500 samples. Further, to prevent the introduction of aliasing (e.g., from jitter or replication), it is not sufficient to insert a simple marker every 4500 samples. Rather, each marker must be uniquely identifiable, at least within a sufficiently large time period to eliminate common aliasing concerns. As a result, there are only 4500 samples available to encode a unique timestamp that can withstand the following sequence of events that can be encountered in the VDI environment: MP3 compression (after offline marker insertion); MP3 decompression; playback via a typical audio tool; compression by the VDI transport protocol; network congestion (e.g., packet loss and jitter); realization on the endpoint device, etc.
To avoid this problem, timestamps must be hardened. One bit of the timestamp cannot be entrusted to a single bit, or even a single sample, of the audio stream. Rather, it is necessary to spread the timestamp information over multiple samples, such that, even in the presence of the significant modification of the underlying audio stream, the timestamps can be recovered. To achieve this spreading, spread spectrum techniques can be utilized. Spread spectrum signals use special fast codes, often called pseudo-noise (PN) codes, which run at many times the information bandwidth or data rate. The rate of the PN code can range from around 50 times to over 1000 times the information data rate, spreading each bit of information data across multiple bits of the PN code (50 to 1000 bits). The faster the code, the greater the spreading, and the more robust the information encoding is. As each information bit is mapped to a PN code that spans hundreds of audio samples, it is possible for a significant number of audio samples to be missing or corrupted, but the code and the information it carries can still be recovered. The ratio of the code rate to the information bit rate is called the spreading factor or the processing gain.
A spread spectrum (SS) receiver uses a locally generated replica pseudo-noise code and a receiver correlator to separate only the desired coded information from all possible signals. An SS correlator can be thought of as a very special matched filter: it responds only to signals that are encoded with a pseudo-noise code that matches its own code. Thus, an SS correlator can be tuned to different codes simply by changing its local code. The correlator does not respond to man-made, natural, or artificial noise or interference; it responds only to SS signals with identical matched signal characteristics that are encoded with the identical pseudo-noise code.
The insertion of the timestamps includes an operation for generating a PN code. The timestamps contain a plurality of bits, and each bit is sent separately as a PN code. The timestamp data is used to modulate the k-bit PN code (i.e., the timestamp data is spread). The k-bit PN code is repeated for the duration of the audio stream. In operation 302, the method checks if the bit being sent is a logical 0 or a logical 1. If it is a logical 0, a negative version of the PN code is used, and if the bit is a logical 1, a positive version of the PN code is used. It is noted that PN codes have the property that the receiver can detect a positive or a negative correlation of the received signal with the PN codes. This property is used to encode a 0 or a 1 bit when using the PN codes. The resulting signal modulates a digital carrier 306, which is obtained by sampling an analog carrier 304, resulting in digital PN code with carrier signal 308.
For example, if a timestamp consists of a sequence of bits 0110 (without using markers, as described below), this timestamp is spread across 4 repeats of the PN code, with each instance of the code being modulated by a successive bit of the timestamp. If the timestamp bit is a 1, an instance of the PN code is emitted, whereas if the timestamp bit is a 0, a negated version of the PN code is emitted. Thus, for the above timestamp, −++− versions of the PN code are generated.
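The spreading step described above can be sketched in a few lines of Python. This is a minimal illustration rather than the patented implementation: the `spread_timestamp` helper and the 8-chip code are hypothetical, and a real deployment would use much longer codes (e.g., the 255-chip Kasami codes discussed below) for meaningful processing gain.

```python
def spread_timestamp(bits, pn_code):
    """Spread each timestamp bit across one repeat of the PN code.

    A 1 bit emits the PN code unchanged; a 0 bit emits its negation,
    so the receiver can later recover the bit from the sign of its
    correlation with a local copy of the code.
    """
    chips = []
    for bit in bits:
        chips.extend(pn_code if bit == 1 else [-c for c in pn_code])
    return chips

# Hypothetical 8-chip code, for illustration only.
pn = [1, -1, 1, 1, -1, -1, 1, -1]

# The timestamp 0110 from the example above yields the -, +, +, -
# pattern of the code.
signal = spread_timestamp([0, 1, 1, 0], pn)
```

Each information bit thus occupies one full code length of samples, which is what lets the bit survive the loss or corruption of many individual samples.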
Audio signal 310 is sampled 312 to obtain a digital form of the audio signal 314. The digital PN code with carrier is incorporated into the original digital audio stream 314 to obtain a digital audio signal with PN codes 316, also referred to herein as a modulated audio signal, and then transmitted to the client. At the client, the received digital signal 320 is used for playing audio 322 in speaker 324.
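Incorporating the spread code into the digital audio stream can be sketched as a simple additive mix. The `embed_code` helper and its `amplitude` parameter are assumptions for illustration; the actual insertion (including the carrier modulation of signal 308) depends on the audio format and transport.

```python
def embed_code(audio_samples, code_chips, amplitude=0.05):
    """Add the spread code to the audio at a small amplitude.

    The amplitude is a tunable assumption: large enough for the
    receiver's correlator to find the code, small enough to perturb
    the audible signal only slightly.
    """
    out = list(audio_samples)
    for i, chip in enumerate(code_chips):
        if i < len(out):
            out[i] = out[i] + amplitude * chip
    return out

# Toy audio fragment and code chips, for illustration only.
marked = embed_code([0.2, -0.1, 0.0, 0.3], [1, -1, 1, -1], amplitude=0.1)
```

The trade-off between amplitude and detectability is the same one noted earlier for live deployments: a smaller amplitude means less perceived noise but a weaker correlation peak at the receiver.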
The recovery of the timestamps includes the generation of a PN code that matches the PN code used during the generation of the modified audio stream. PN code detection module 326 at the receiver acquires the received code and locks to the received code. The PN code detection module 326 compares its copy of the PN code against the received digital stream. When an unmodified version of the PN code is observed, the receiver knows that a 1 was transmitted, whereas if a negated version is observed, then a 0 was transmitted. By repeating this process for successive timestamp bits, the receiver gradually recreates the transmitted timestamp by concatenating the received timestamp bits.
The timestamp bit detection is performed by undertaking a correlation operation, where the received stream is correlated against the known PN code. These special PN codes have a critical property: the periodic autocorrelation function has a peak at 0 shift and a value of 0 elsewhere, i.e., there is a significant spike in correlation when the two codes are precisely aligned. A misalignment of the codes, by as little as a single sample, results in a significantly diminished degree of correlation. Accordingly, to locate a PN code in the received stream, the receiver needs to gradually advance its PN code across the received stream and recalculate the correlation after each sample-by-sample move. When the correlation exceeds a predefined threshold, the code in the audio stream has been located or acquired. Alternatively, rather than using a preset threshold, the code can be moved across a predefined window of the audio stream and the maximum correlation observed is deemed to represent the location of the code.
Once the code has been locked, the receiver can proceed across the audio stream, determine where a positive or negative version of the code was transmitted (indicated by whether the result of the correlation operation is positive or negative), and recover the timestamp bits that can then be used to determine how far the audio stream has advanced and whether it is synchronized with the corresponding video sequence. If the correlation is positive, the system determines that a timestamp bit with a value of 1 has been received, and a bit with a value of 0 otherwise. A plurality of timestamps bits are combined to form a timestamp, as described below in more detail with reference to
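The acquisition and bit-recovery steps above can be sketched as follows. The helpers `acquire` and `decode_bits`, the tiny 8-chip code, and the threshold value are all hypothetical illustrations of the sliding-correlation approach, not the implementation in the figures.

```python
def correlate(chunk, code):
    """Dot product of an audio chunk with the local PN code copy."""
    return sum(a * c for a, c in zip(chunk, code))

def acquire(stream, code, threshold):
    """Slide the code across the stream one sample at a time until the
    correlation magnitude exceeds the threshold (code acquired)."""
    n = len(code)
    for offset in range(len(stream) - n + 1):
        if abs(correlate(stream[offset:offset + n], code)) >= threshold:
            return offset
    return None  # code not found in this window

def decode_bits(stream, code, start, count):
    """Once locked, read successive code-length chunks; the sign of the
    correlation gives each timestamp bit (positive -> 1, negative -> 0)."""
    n = len(code)
    bits = []
    for i in range(count):
        chunk = stream[start + i * n : start + (i + 1) * n]
        bits.append(1 if correlate(chunk, code) > 0 else 0)
    return bits

# Build a toy received stream: leading silence, then the spread
# timestamp 0110 (negated code for 0, unmodified code for 1).
pn = [1, -1, 1, 1, -1, -1, 1, -1]
tx = [0] * 3
for bit in (0, 1, 1, 0):
    tx += [c if bit else -c for c in pn]

start = acquire(tx, pn, threshold=7)
bits = decode_bits(tx, pn, start, 4)
```

With a perfectly aligned 8-chip code the correlation is ±8, so any misaligned offset falls well below the threshold, illustrating the sharp autocorrelation peak described above.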
It should be noted that the embodiments illustrated in
Timestamp bits 406 are combined to form timestamp 408. In the embodiment shown in
Finding the timestamps in the incoming stream of timestamp bits starts by determining where each timestamp starts and ends. The received sequence may contain replicated bits, missing bits, or corrupted bits. As a result, it is not possible to simply divide the bit stream into chunks of a fixed number of bits and consider each chunk a timestamp. Rather, it is necessary to use a unique symbol in the audio stream to demarcate each timestamp. For instance, if the timestamp is 8 bits long and the time data is constrained such that bits 0 and 7 are always zero, then a sequence of eight 1s could be used to demarcate and detect each timestamp.
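The demarcation scheme just described can be sketched as a simple scanner over the recovered bit stream. The `extract_timestamps` helper is a hypothetical illustration; because timestamp bits 0 and 7 are held at zero, a run of eight 1s can only be a marker.

```python
MARKER = [1] * 8  # demarcation symbol; cannot occur inside a timestamp
                  # because timestamp bits 0 and 7 are always zero

def extract_timestamps(bitstream):
    """Split a recovered bit stream into timestamps using the markers.

    Scans for each marker and takes the 8 bits that follow it as one
    timestamp; anything that does not line up with a marker is skipped
    one bit at a time, which resynchronizes after replicated, missing,
    or corrupted bits.
    """
    stamps = []
    i = 0
    while i + 16 <= len(bitstream):
        if bitstream[i:i + 8] == MARKER:
            stamps.append(bitstream[i + 8:i + 16])
            i += 16
        else:
            i += 1  # slide forward one bit and keep looking
    return stamps

# Two marker+timestamp pairs; both timestamps honor the bit-0/bit-7
# zero constraint.
bits = [1] * 8 + [0, 1, 0, 1, 1, 0, 1, 0] + [1] * 8 + [0, 0, 1, 1, 0, 0, 1, 0]
stamps = extract_timestamps(bits)
```

Sliding one bit at a time on a mismatch is what makes the scheme robust to a bit stream that cannot simply be cut into fixed-size chunks.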
One of the challenges in measuring quality is inserting a sufficiently large timestamp fast enough to provide the high-resolution timing required for valuable measurements. For instance, if each PN sequence spans 255 samples and each timestamp is 8 bits, 2040 samples are required for each timestamp. If a maximum of 4500 samples is available per timestamp and an 8-bit demarcation marker is added to each timestamp, bringing the sample count up to 4080, there is little additional room for error correction (EC) codes. If a shorter PN code is used, then there is less spreading and less immunity to audio stream modification. In experimentation, 255-bit Kasami codes were found to provide sufficient protection to handle the conditions encountered in the VDI environment (127-bit codes were found to be too short). Alternatively, the interval between demarcation markers could be increased, reducing their overhead, although this increases the complexity of recovering the timestamps.
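The sample budget above works out as follows; the variable names are illustrative, but the figures are the ones from the text.

```python
# Sample budget for one hardened timestamp: a 255-chip PN code carries
# one bit, the timestamp is 8 bits, and the demarcation marker is
# another 8 bits.
code_len = 255
timestamp_samples = code_len * 8   # 2040 samples for the time bits
marker_samples = code_len * 8      # 2040 more for the demarcation marker
total = timestamp_samples + marker_samples

budget = 4500                      # samples available per timestamp
spare = budget - total             # what is left for EC codes
```

With only 420 spare samples (less than two code lengths), there is indeed little room for error correction, which is why the text considers spacing the markers further apart instead.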
It should be noted that the embodiments illustrated in
It is necessary to continually check whether there is synchronization with the PN code. This is achieved by monitoring the result of the correlation operations performed as the analysis advances across each successive chunk (chunk size is the size of the PN code) of the audio stream. If the correlation result drops below the set threshold, synchronization is considered lost and it is necessary to resynchronize with the stream, as shown in
To identify the impact of missing, corrupted, or duplicated bits, a sequence of several adjacent timestamps is examined. If the sequence is, for example, monotonically increasing (assuming that the original timestamps monotonically increase), then it is highly likely that the timestamps are uncorrupted, and they are considered correct. The level of uncertainty can be managed by changing the number of timestamps evaluated. Additionally, it is possible to insert parity bits or EC codes to guard against corruption in a more traditional manner.
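The monotonicity check can be sketched as below. The `likely_valid` helper and its `window` parameter are assumptions for illustration; widening the window lowers the chance of accepting corrupted timestamps, at the cost of needing more of them.

```python
def likely_valid(timestamps, window=4):
    """Treat a run of recovered timestamps as uncorrupted when every
    window of adjacent values is strictly increasing (the original
    timestamps are assumed to increase monotonically)."""
    for i in range(len(timestamps) - window + 1):
        chunk = timestamps[i:i + window]
        if any(b <= a for a, b in zip(chunk, chunk[1:])):
            return False
    return True

ok = likely_valid([10, 11, 12, 13, 14])       # clean run
bad = likely_valid([10, 11, 11, 13, 14])      # duplicated timestamp
```

A duplicated or out-of-order value breaks the monotone run and flags the stretch for closer inspection or EC-based recovery.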
Another measure of quality is based on a large timescale (i.e., the macro scale). The question is at what point audio degradation becomes noticeable to humans, and how well the various chunks of received audio reassemble into the original stream. In this case it is necessary to consider the received audio stream in the broader context and attempt to determine how well the received samples fit together to provide something that resembles the original audio stream.
For a large-scale measurement of quality, one method locates the longest local alignment (in the window of analysis, which is typically a buffer's worth of data, e.g., a few seconds), and then attempts to extend the alignment across all of the available samples within the analysis window. Once this process is completed, the percentage of good samples is computed to obtain a measure of the end user's audio experience.
The maximum local alignment is obtained by scanning the recovered timestamps, and identifying the longest sequence of monotonically increasing timestamps. This represents the largest sequence of good quality, correctly-timed audio. In one embodiment, other non-contiguous groups of samples within the buffer may also be correctly timed with respect to the longest local match. These non-contiguous groups also contribute to the reconstruction of the audio stream, and the initial match can then be extended by including these other smaller groups of audio samples in the set of matching samples.
Further, small slips in synchronization may not be detectable by the human ear, so it is possible to use these samples to further expand the set of matching samples. In this case, the degree to which each of the remaining timestamps in the analysis window is out of synchronization is considered. Where the lack of synchronization is less than a predefined threshold, the samples are also considered part of the match.
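The alignment-and-extension steps above can be sketched with two small helpers. Both `longest_increasing_run` and `quality_fraction` are hypothetical illustrations: the first finds the longest stretch of correctly-timed audio, the second counts samples within a slip tolerance of the expected timeline.

```python
def longest_increasing_run(stamps):
    """Return (start, length) of the longest contiguous run of strictly
    increasing timestamps: the largest stretch of good quality,
    correctly-timed audio in the analysis window."""
    best = (0, 0)
    start = 0
    for i in range(1, len(stamps) + 1):
        # A run ends at the end of the list or when order breaks.
        if i == len(stamps) or stamps[i] <= stamps[i - 1]:
            if i - start > best[1]:
                best = (start, i - start)
            start = i
    return best

def quality_fraction(stamps, expected, tolerance=1):
    """Fraction of recovered timestamps within `tolerance` of the
    expected timeline; slips below the threshold are treated as
    inaudible and still count as matching."""
    matches = sum(1 for got, want in zip(stamps, expected)
                  if abs(got - want) <= tolerance)
    return matches / len(stamps)

run = longest_increasing_run([3, 1, 2, 3, 4, 2])
q = quality_fraction([1, 2, 3, 9, 5], [1, 2, 3, 4, 5], tolerance=1)
```

The fraction returned by `quality_fraction` plays the role of the "percentage of good samples" described above as the macro-scale measure of the end user's audio experience.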
To measure audio-video synchronization on the endpoint device, it is necessary to determine which audio samples are being played on the endpoint device in parallel with which video frames. In essence, the problem is to determine whether a given audio sample is paired with the same video frame at the endpoint as in the original streams. There can be a desynchronization of 1 sample, 10 samples, 1000 samples, etc. A drift of as little as 100 ms can become noticeable to users. It should be noted that it is not possible to simply enhance the VDI transport protocols to provide this timing information, as there may be host CPU constraints that impact upstream (application) behavior in ways that are not apparent to the transport protocol. Further, it is desirable to have an analysis technique that works with existing protocols.
In one embodiment, several quality metrics are combined to create a single quality metric. As noted above, it is necessary to subjectively align the metric such that it closely reflects an end-user's audio experience (i.e., what is the value of the computed metric when the user regards the experience as excellent, good, acceptable, poor, bad, and so on).
The embodiments previously described have been discussed with respect to audio being streamed from the host to the endpoint device. In another embodiment, the principles presented herein are applied to the case when the endpoint device inserts markers into its output stream (e.g., microphone, webcam streams), allowing the performance of the uplink from the endpoint to the host to be analyzed.
In operation 712, the method extracts the timestamp bits from a received modulated audio signal to obtain received timestamps. From operation 712, the method continues to operation 714 to assess the quality of the audio received based on the received timestamps, and from operation 714 the method continues to operation 716 where the quality of the audio is stored in computer memory.
Mass storage device 814 represents a persistent data storage device such as a floppy disc drive or a fixed disc drive, which may be local or remote. Network interface 830 provides connections via network 832, allowing communications with other devices. It should be appreciated that CPU 804 may be embodied in a general-purpose processor, a special purpose processor, or a specially programmed logic device. Input/Output (I/O) interface 820 provides communication with different peripherals and is connected with CPU 804, RAM 806, ROM 812, and mass storage device 814, through bus 810. Sample peripherals include display 818, keyboard 822, cursor control 824, removable media device 834, etc.
Display 818 is configured to display the user interfaces described herein. Keyboard 822, cursor control 824, removable media device 834, and other peripherals are coupled to I/O interface 820 in order to communicate information and command selections to CPU 804. It should be appreciated that data to and from external devices may be communicated through I/O interface 820. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network.
Embodiments of the present invention may be practiced with various computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like.
With the above embodiments in mind, it should be understood that the invention can employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Any of the operations described herein that form part of the invention are useful machine operations. The invention also relates to a device or an apparatus for performing these operations. The apparatus may be specially constructed for the required purpose, such as a special purpose computer. When defined as a special purpose computer, the computer can also perform other processing, program execution or routines that are not part of the special purpose, while still being capable of operating for the special purpose. Alternatively, the operations may be processed by a general purpose computer selectively activated or configured by one or more computer programs stored in the computer memory or cache, or obtained over a network. When data is obtained over a network, the data may be processed by other computers on the network, e.g., a cloud of computing resources.
The embodiments of the present invention can also be defined as a machine that transforms data from one state to another state. The transformed data can be saved to storage and then manipulated by a processor. The processor thus transforms the data from one thing to another. Still further, the methods can be processed by one or more machines or processors that can be connected over a network. The machines can also be virtualized to provide physical access to storage and processing power to one or more users, servers, or clients. Thus, the virtualized system should be considered a machine that can operate as one or more general purpose machines or be configured as a special purpose machine. Each machine, or virtual representation of a machine, can transform data from one state or thing to another, and can also process data, save data to storage, display the result, or communicate the result to another machine.
The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data, which can thereafter be read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can include computer readable tangible media distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
Although the method operations were described in a specific order, it should be understood that other housekeeping operations may be performed in between operations, or operations may be adjusted so that they occur at slightly different times, or may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing, as long as the processing of the overlay operations is performed in the desired way.
Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications can be practiced within the scope of the appended claims. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
This application is a continuation application of, and claims priority to, U.S. patent application Ser. No. 12/942,393, filed on Nov. 9, 2010, and which is incorporated here by reference. This application is related to U.S. Pat. No. 7,831,661, issued Nov. 9, 2010, and entitled “MEASURING CLIENT INTERACTIVE PERFORMANCE USING A DISPLAY CHANNEL”; and U.S. application Ser. No. 12/337,895, filed on Dec. 18, 2008, and entitled “MEASURING REMOTE VIDEO PLAYBACK PERFORMANCE WITH EMBEDDED ENCODED PIXELS”, which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5642171 | Baumgartner et al. | Jun 1997 | A |
5815572 | Hobbs | Sep 1998 | A |
5933155 | Akeley | Aug 1999 | A |
6168431 | Narusawa et al. | Jan 2001 | B1 |
6381362 | Deshpande et al. | Apr 2002 | B1 |
6421678 | Smiga et al. | Jul 2002 | B2 |
6618431 | Lee | Sep 2003 | B1 |
6876390 | Nagata | Apr 2005 | B1 |
7155681 | Mansour et al. | Dec 2006 | B2 |
7287180 | Chen et al. | Oct 2007 | B1 |
7287275 | Moskowitz | Oct 2007 | B2 |
7532642 | Peacock | May 2009 | B1 |
7552467 | Lindsay | Jun 2009 | B2 |
7593543 | Herz et al. | Sep 2009 | B1 |
7752325 | Yadav et al. | Jul 2010 | B1 |
7796978 | Jones et al. | Sep 2010 | B2 |
7831661 | Makhija et al. | Nov 2010 | B2 |
8166107 | Makhija | Apr 2012 | B2 |
8347344 | Makhija et al. | Jan 2013 | B2 |
8788079 | Spracklen | Jul 2014 | B2 |
9214004 | Agrawal et al. | Dec 2015 | B2 |
9471951 | Agrawal et al. | Oct 2016 | B2 |
9578373 | Agrawal et al. | Feb 2017 | B2 |
9674562 | Spracklen et al. | Jun 2017 | B1 |
20010023436 | Srinivasan et al. | Sep 2001 | A1 |
20020026505 | Terry | Feb 2002 | A1 |
20020056129 | Blackketter et al. | May 2002 | A1 |
20020138846 | Mizutani et al. | Sep 2002 | A1 |
20020165757 | Lisser | Nov 2002 | A1 |
20040022453 | Kusama et al. | Feb 2004 | A1 |
20040073947 | Gupta | Apr 2004 | A1 |
20040137929 | Jones | Jul 2004 | A1 |
20040184526 | Penttila et al. | Sep 2004 | A1 |
20040221315 | Kobayashi | Nov 2004 | A1 |
20050041136 | Miyata | Feb 2005 | A1 |
20050138136 | Potter | Jun 2005 | A1 |
20050187950 | Parker et al. | Aug 2005 | A1 |
20050234715 | Ozawa | Oct 2005 | A1 |
20050283800 | Ellis et al. | Dec 2005 | A1 |
20060050640 | Jin et al. | Mar 2006 | A1 |
20060059095 | Akins, III | Mar 2006 | A1 |
20060206491 | Sakamoto et al. | Sep 2006 | A1 |
20070003102 | Fujii et al. | Jan 2007 | A1 |
20070008108 | Schurig | Jan 2007 | A1 |
20070125862 | Uchiyama et al. | Jun 2007 | A1 |
20070126929 | Han et al. | Jun 2007 | A1 |
20070250920 | Lindsay | Oct 2007 | A1 |
20070260870 | Nissan et al. | Nov 2007 | A1 |
20070271375 | Hwang | Nov 2007 | A1 |
20070291771 | Cline | Dec 2007 | A1 |
20080022350 | Hostyn et al. | Jan 2008 | A1 |
20080052783 | Levy | Feb 2008 | A1 |
20080070589 | Hansen et al. | Mar 2008 | A1 |
20080075121 | Fourcard | Mar 2008 | A1 |
20080112490 | Kamijo et al. | May 2008 | A1 |
20080117937 | Firestone et al. | May 2008 | A1 |
20080204600 | Xu et al. | Aug 2008 | A1 |
20080212557 | Chiricescu et al. | Sep 2008 | A1 |
20080263634 | Kirkland | Oct 2008 | A1 |
20080297603 | Hurst | Dec 2008 | A1 |
20080310368 | Fischer | Dec 2008 | A1 |
20090100164 | Skvortsov et al. | Apr 2009 | A1 |
20090210747 | Boone | Aug 2009 | A1 |
20090216975 | Halperin et al. | Aug 2009 | A1 |
20090217052 | Baudry | Aug 2009 | A1 |
20090259941 | Kennedy, Jr. | Oct 2009 | A1 |
20090260045 | Karlsson et al. | Oct 2009 | A1 |
20090268709 | Yu | Oct 2009 | A1 |
20100047211 | Mcniece | Feb 2010 | A1 |
20100161711 | Makhija et al. | Jun 2010 | A1 |
20100162338 | Makhija | Jun 2010 | A1 |
20100246810 | Srinivasan et al. | Sep 2010 | A1 |
20100306163 | Beaty et al. | Dec 2010 | A1 |
20110023691 | Iwase et al. | Feb 2011 | A1 |
20110051804 | Chou et al. | Mar 2011 | A1 |
20110078532 | Vonog | Mar 2011 | A1 |
20110103468 | Polisetty | May 2011 | A1 |
20110134763 | Medina et al. | Jun 2011 | A1 |
20110138314 | Mir et al. | Jun 2011 | A1 |
20110179136 | Twitchell, Jr. | Jul 2011 | A1 |
20110188704 | Radhakrishnan et al. | Aug 2011 | A1 |
20110224811 | Lauwers et al. | Sep 2011 | A1 |
20110238789 | Luby et al. | Sep 2011 | A1 |
20120036251 | Beaty et al. | Feb 2012 | A1 |
20120066711 | Evans et al. | Mar 2012 | A1 |
20120073344 | Fabris | Mar 2012 | A1 |
20120081580 | Cote et al. | Apr 2012 | A1 |
20120113270 | Spracklen | May 2012 | A1 |
20120140935 | Kruglick | Jun 2012 | A1 |
20120167145 | Incorvia | Jun 2012 | A1 |
20120246225 | Lemire | Sep 2012 | A1 |
20130097426 | Agrawal et al. | Apr 2013 | A1 |
20140320673 | Agrawal et al. | Oct 2014 | A1 |
20140325054 | Agrawal et al. | Oct 2014 | A1 |
20160098810 | Agrawal et al. | Apr 2016 | A1 |
20170011486 | Agrawal et al. | Jan 2017 | A1 |
Entry |
---|
“Port Forwarding.” Wikipedia. Published Feb. 15, 2010. Retrieved prior to Oct. 25, 2013. Retrieved from the Internet: URL<http://web.archive.org/web/20100215085655/http://en.wikipedia.org/Port_forwarding>. 3 pages. |
Larsen, Vegard. Combining Audio Fingerprints. Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. 151 pages. |
Mathematics Stack Exchange [online]. “Orthogonal Binary Sequences,” Aug. 2015, [retrieved on Mar. 6, 2018]. Retrieved from the Internet URL <https://math.stackexchange.com/questions/1412903/orthogonal-binary-sequences>. 1 page. |
Number | Date | Country | |
---|---|---|---|
20140328203 A1 | Nov 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12942393 | Nov 2010 | US |
Child | 14336835 | US |