As wide area networks (WANs) have continued to proliferate, the client/server computing model has similarly seen an increase in its application by a wide variety of both enterprise and home users. In the client/server model, one or more server computers (generally very fast computers with large amounts of processing power and other resources such as memory and data storage space) are set up at a central location from which the servers communicate with a number of smaller and less powerful client computers across a network (e.g., the Internet). The server is configured to run software applications that are designed to be controlled by a user operating the client computer. These frequently large and complex software applications execute on the server and perform most of the computations required to accomplish the task initiated by the user, thus taking advantage of the superior processing resources of the server (as compared to those of the client).
Software executing on the client computer forwards the commands issued by the user to the software applications executing on the server. The software also receives responses and/or results from the server software applications for presentation to the user at the client computer. An example of the client/server model is a remote desktop client/server application. The server computer executes an instance of a full operating system and its associated applications, as well as a server-side remote desktop application that redirects to the client computer the display output generated by a graphics adapter within the server and under the control of the operating system instance. The client computer executes a client-side remote desktop application, which displays the output generated by the operating system running on the server computer (e.g., the desktop and windows in a windowed operating system such as Microsoft® Windows®). The client-side remote desktop application also accepts input from the user (e.g., from a keyboard and mouse), and redirects to the server computer the user inputs received at the client computer. Communication between the client and server computers takes place over a network such as, for example, the Internet.
In order to further shift the computational burden of the client to the server, and thus further reduce the complexity and cost of the client, software and hardware have been developed that shift much of the graphics processing from the client to the server. In such systems, the server processes and formats the graphical data (e.g., via a graphics processing unit (GPU) within the server) and stores the data in a frame buffer. But instead of locally presenting the frame buffer data to a user on a locally attached display unit, the frame buffer data is transmitted across a network to a thin client, desktop personal computer (PC), or network attached display device, which displays the data without the need for processing and/or formatting by a client-local GPU. See for example, U.S. Pat. App. Pub. 2005/0193396 by Stafford-Fraser et al. (hereinafter “Stafford”) and entitled “Computer Network Architecture and Method of Providing Display Data.” The graphics adapter in such a system is thus “virtualized” within the server.
While such virtualized graphics adapters serve to simplify the client hardware and software, the demands on the server hardware and software are commensurately increased. While this is to be expected in (and in fact is one of the goals of) a client/server model, the effect is multiplied when the virtualized graphics adapter of Stafford is used with clients that require multiple displays. For example, if a client PC with multiple displays is replaced by a simplified client (sometimes referred to as a "thin client") with multiple displays, and the virtualized graphics adapter of Stafford is used on the server side, multiple virtualized graphics adapters (one for each display) must be executed at the server for each client. As a result, executing multiple virtualized display adapter instances may significantly increase the resources required at the server for each client, as compared to a server that executes only a single virtualized adapter instance for each client.
For a detailed description of exemplary embodiments of the invention, reference will now be made to the accompanying drawings in which:
Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, computer companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ” Also, the term “couple” or “couples” is intended to mean either an indirect, direct, optical or wireless electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, through an indirect electrical connection via other devices and connections, through an optical electrical connection, or through a wireless electrical connection. Additionally, the term “system” refers to a collection of two or more hardware and/or software components, and may be used to refer to an electronic device, such as a computer, a portion of a computer, a combination of computers, etc. Further, the term “software” includes any executable code capable of running on a processor, regardless of the media used to store the software. Thus, code stored in non-volatile memory, and sometimes referred to as “embedded firmware,” is included within the definition of software.
The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.
Server computer 110 couples to client device 150 via network 140 (e.g., the Internet). Server computer transfers graphical image data stored in at least one of frame buffers 122 or 124 to client device 150 for presentation as a displayed image on each of client displays 168 and 178. Client device 150 includes network interface and router (Net I/F & Router) 152, which couples to each of graphics control units (Graphics Ctrl Unit) 160 and 170. Graphics control units 160 and 170 each respectively couple to display devices 168 and 178. Network interface and router 152 also couples to keyboard 154 and mouse 156. Each of the graphics control units 160 and 170 include a processor (162, 172) coupled to network interface and router 152 and a frame buffer (164, 174).
Each frame buffer includes data corresponding to data from a sub-region of a frame buffer within server computer 110. In the illustrative embodiment of
The frame buffers of both server computer 110 and client device 150 are used to store image data that has already been processed (e.g., by processor 114 or graphics adapter 120). Such processing may include converting objects such as geometric objects (e.g., lines, squares, triangles) to displayed images, and/or applying advance two- and three-dimensional transformations to complex images, such as lighting, shading, shadowing and texture mapping, just to name a few examples. The end result of such operations is a representation of the resulting image to be presented on one or more display devices. Such a representation may be stored in a frame buffer, which is a specialized memory device or region of memory that is used to store data associated with the represented image such that each location within the buffer corresponds to a pixel on the screen.
For example, in at least some illustrative embodiments a single pixel is represented by a 32-bit value (e.g., 4 bytes, each respectively representing an 8-bit intensity value for the primary colors red, green and blue, and the opacity value alpha (RGBA), for the pixel). Thus, if each row of a displayed image has 2560 pixels (e.g., as part of an image measuring 2560×1024 pixels), represented by 2560 RGBA values, then 10240 data bytes are stored in the frame buffer, in sequentially addressed locations within the memory or memory region of the frame buffer, for each scan line of pixels on one or more displays. By sequentially storing the data as sequential RGBA values, the data can be read out in the order in which it will be presented on the display device, simplifying the process of extracting the data from the buffer. Further, the data for each scan line may be stored such that a single memory device row corresponds to a single scan line. Thus, if a memory row is sized to the next largest binary multiple beyond the amount of data required for a scan line (16384 bytes in the example described), a single scan line may be addressed using the most significant or upper bits of the memory address, while the lower bits may be used to address the pixel data within a row or scan line.
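The scan-line layout described above can be illustrated with a short sketch. The constants follow the example in the text (2560 pixels per row, 4 bytes per pixel, rows padded to 16384 bytes); the function names are illustrative only.

```python
# Frame-buffer layout from the example above: 2560x1024 pixels, 4 bytes
# per RGBA pixel (10240 data bytes per scan line), with each scan line
# padded to the next binary multiple (16384 bytes) so the row index
# occupies the upper address bits and the pixel offset the lower bits.
BYTES_PER_PIXEL = 4      # one byte each for R, G, B and alpha
PIXELS_PER_ROW = 2560    # 2560 * 4 = 10240 data bytes per scan line
ROW_STRIDE = 16384       # next power of two >= 10240 (2**14)

def pixel_address(x, y):
    """Byte address of pixel (x, y) within the frame buffer."""
    return y * ROW_STRIDE + x * BYTES_PER_PIXEL

def address_to_pixel(addr):
    """Inverse mapping: upper address bits select the scan line,
    lower bits select the pixel within that line."""
    y = addr >> 14                          # divide by 16384
    x = (addr & 0x3FFF) // BYTES_PER_PIXEL  # offset within the row
    return x, y
```

Because the stride is a power of two, the row/pixel split reduces to a shift and a mask, which is why sizing rows to the next binary multiple simplifies addressing.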
Because of the manner in which image data is organized and stored within a frame buffer, regions within an image may be mapped directly to regions within the address space of the frame buffer that stores the image. Thus, in the illustrative embodiment of
Pixel values for a region may be referenced relative to that region by applying one or more offset values to the region-relative pixel x-y coordinate. In the example described above, the pixel data for pixel (0, 0) of the right region (i.e., at the origin of the right region) is stored within the frame buffer at locations 5120-5123 of the first row of the buffer (i.e., pixel 1280 of row 0). The pixel data start address may be determined by adding the appropriate pixel coordinate offset to the region-relative x-coordinate of the pixel, multiplying the resulting offset pixel coordinate by the number of bytes per pixel, and adding the product of the bytes per buffer row times the y-coordinate (e.g., start byte address=4*(x+1280)+16384*y). The inverse of these calculations may also be performed to determine a region-relative pixel coordinate from the frame buffer address. Similar groupings of rows and offset calculations may also be used to define vertically divided regions of an image. Those of ordinary skill in the art will recognize that any number of vertical or horizontal regions, or both vertical and horizontal regions, may be defined within a frame buffer, and that many other coordinate-to-address and address-to-coordinate transformations may be applied to the embodiments described herein. All such numbers and combinations of regions, and all such coordinate and address transformations are within the scope of the present disclosure.
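The offset calculation above (start byte address = 4*(x+1280) + 16384*y for the right region) and its inverse can be sketched as follows; the region names and function names are assumptions for illustration, and many other coordinate-to-address transformations are possible.

```python
# Region-relative addressing for a frame buffer divided into left and
# right halves of 1280 pixels each, per the formula in the text:
#   start byte address = 4 * (x + x_offset) + 16384 * y
ROW_STRIDE = 16384
BYTES_PER_PIXEL = 4
REGION_X_OFFSET = {"left": 0, "right": 1280}  # pixel offset of each region

def region_pixel_address(region, x, y):
    """Frame-buffer byte address of region-relative pixel (x, y)."""
    return BYTES_PER_PIXEL * (x + REGION_X_OFFSET[region]) + ROW_STRIDE * y

def address_to_region_pixel(addr):
    """Inverse transformation: recover (region, x, y) from an address."""
    y, byte_in_row = divmod(addr, ROW_STRIDE)
    px = byte_in_row // BYTES_PER_PIXEL
    if px < REGION_X_OFFSET["right"]:
        return "left", px, y
    return "right", px - REGION_X_OFFSET["right"], y
```

For example, pixel (0, 0) of the right region maps to byte address 5120, matching the text's description of locations 5120-5123 in the first buffer row.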
Continuing to refer to the illustrative embodiment of
At the end of the next interval, the contents of frame buffers 122 and 124 are compared (byte-for-byte) to identify those bytes of data that changed during the interval. Only those data bytes that changed during the interval (i.e., the difference data) are transmitted to the client device 150, which reduces the amount of data transmitted for images that are not changing very much from frame to frame. Once the difference data is identified, the frame buffers are again swapped, and data from frame buffer 124 is copied to frame buffer 122, so that frame buffer 122 may be updated with newer data while the difference data is extracted from frame buffer 124 for transmission to client device 150. In at least some illustrative embodiments, the entire content (for all regions) of the frame buffer that contains the newest data is periodically transmitted to client device 150 without generating difference data. These "reference frames," as they are sometimes referred to, are transmitted in case some difference data was not received by client device 150 (e.g., if a connectionless network transaction, such as an IP datagram, was used to send the data and the message was lost due to a network disruption).
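The byte-for-byte comparison described above can be sketched as follows. This is a minimal illustration (function names and the (offset, value) encoding are assumptions); a real implementation would likely batch changed bytes into runs rather than record them individually.

```python
# Server side: compare two frame buffers byte-for-byte and collect the
# difference data -- only the bytes that changed during the interval.
def difference_data(previous, current):
    """Return (offset, value) pairs for every byte that differs."""
    return [(i, b) for i, (a, b) in enumerate(zip(previous, current)) if a != b]

# Client side: patch the local frame buffer with the received difference
# data, reconstructing the newest server frame.
def apply_difference(buffer, diffs):
    buf = bytearray(buffer)
    for offset, value in diffs:
        buf[offset] = value
    return bytes(buf)
```

A periodic reference frame (the full buffer, sent without differencing) resynchronizes the client if any difference message is lost in transit.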
Referring still to the illustrative embodiment of
In at least some illustrative embodiments the difference data may be unencapsulated from the message used to transmit the data across the network before being forwarded. In other illustrative embodiments, the entire message may be forwarded and unencapsulated from the network message by the graphics control unit receiving the difference data. In at least some illustrative embodiments, the difference data is received within a message formatted according to the transmission control protocol/Internet protocol (TCP/IP) network protocol, and transferred to the appropriate graphics control unit using individual universal serial bus (USB) communication links between network interface and router 152, and each of the graphics control units. Keyboard 154 and mouse 156 also couple to network interface and router 152, as shown in
Continuing to refer to
As is evidenced by the description above, the operations performed at the client device require less graphical computational power than that required by server computer 110. This is due to the fact that the computationally intensive graphics processing operations are performed by graphics adapter 120, which then transfers to client device 150 data requiring much less processing, even in embodiments that compress and decompress the image data sent to client device 150. This results in what is sometimes referred to as a "thin" client, both in terms of the hardware and the software that implements the functionality of client device 150. The use of frame buffer data between server computer 110 and client device 150, instead of data that requires extensive graphics processing (e.g., geometric object data), results in an image-based remote access system that operates using thin clients that are easily and inexpensively scaled.
In at least some illustrative embodiments, the image data transmitted from server computer 110 to client device 150 (including difference data, reference frames, or both) is compressed prior to being transmitted to further reduce the bandwidth required to transfer the image data. In at least some illustrative embodiments the compression is performed by processor 114, while in other illustrative embodiments the compression is performed by graphics adapter 120. Decompression is performed by processors 162 and 172 of client device 150, each processor decompressing the received data corresponding to its respective sub-region and display. The compression/decompression may be implemented using any of a number of known compression/decompression (CODEC) algorithms, may include both lossy and lossless compression/decompression techniques, and may include both hardware and software implementations, as well as combinations of hardware and software implementations. All such CODEC algorithms, techniques and implementations are within the scope of the present disclosure.
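As one concrete (and purely illustrative) choice among the many CODEC algorithms mentioned above, a lossless DEFLATE round trip over frame data might look like the following sketch; the text does not specify any particular algorithm.

```python
import zlib

# Lossless compression of frame-buffer or difference data before
# transmission, and decompression on the client side. zlib/DEFLATE is
# one example choice; lossy or hardware CODECs are equally possible.
def compress_frame_data(data: bytes) -> bytes:
    return zlib.compress(data)

def decompress_frame_data(data: bytes) -> bytes:
    return zlib.decompress(data)
```

Frame-buffer contents with large uniform areas (e.g., a desktop background) compress especially well, which is why compressing before transmission can sharply cut the bandwidth per update.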
In at least some illustrative embodiments (not shown) two or more guest operating systems concurrently execute under server-side remote access software 212, which arbitrates access to graphics adapter 120. As a result of such arbitration, graphics adapter 120 is exposed to each guest operating system as a dedicated resource, even though it is actually shared between the guest operating systems. The graphics adapter, which in at least some illustrative embodiments is not used by server computer 110 to locally drive a display device, thus operates as an “offload” graphics processor that is managed by server-side remote access software 212 as a shared resource. In other illustrative embodiments, a virtualized graphics adapter is implemented for each guest operating system instance by server-side remote access software 212.
Continuing to refer to
In at least some illustrative embodiments the image data transmitted by client interface software 218 is received and processed by network interface and routing software 252, executing on network interface and router 152 (e.g., on a processor within network interface and router 152 (not shown)). Network interface and routing software 252 implements a network protocol stack (e.g., a TCP/IP protocol stack), wherein client device 150 is accessed as a network addressable TCP/IP device. The routing software converts the received image data messages to a format suitable for transmission to the graphics control units (e.g., USB transactions), and routes the image data to the appropriate graphics control unit based on the sub-region identifier within the received message. Client remote access software 260 and 270, executing on client processors 162 and 172 respectively, extract (and if necessary decompress) the received image data, and update the corresponding client frame buffer (164 or 174).
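The routing step described above can be sketched as follows. The message layout (a one-byte sub-region identifier followed by the image payload) and the function names are assumptions made for illustration; the text specifies only that a sub-region identifier within the message selects the destination graphics control unit.

```python
# Hypothetical sketch of the router's dispatch logic: each received
# message carries a sub-region identifier that selects which graphics
# control unit (here modeled as a per-unit queue standing in for a USB
# link) receives the image payload.
def route_message(message: bytes, units: dict) -> int:
    region_id = message[0]        # assumed: first byte is the sub-region id
    payload = message[1:]         # remaining bytes are the image data
    units[region_id].append(payload)  # stand-in for a USB transfer
    return region_id
```

In this model the router never interprets the pixel data itself; it only examines the identifier, which keeps the routing path simple and fast.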
In the illustrative embodiments described, software executing on the various processors present in both the server computer 110 and client device 150 perform many of the functions described herein. Nonetheless, those of ordinary skill in the art will recognize that other illustrative embodiments may implement at least some of the functionality described in software or hardware (e.g., using application specific integrated circuits (ASICs)), or by a combination of software and hardware, and all such embodiments are within the scope of the present disclosure.
In at least some illustrative embodiments, network interface and router software 252, in conjunction with client interface software 218, operate to provide a configuration interface to a user of the remote access system described herein. The configuration interface allows a user to specify the layout, relative positions and resolution of the display devices (168 and 178 of
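One way such a configuration interface might translate a user-specified side-by-side display layout into frame-buffer sub-regions is sketched below; the data shapes and names are hypothetical, as the text does not define the interface's internal representation.

```python
# Illustrative only: derive a frame-buffer sub-region rectangle for each
# display in a left-to-right arrangement, given per-display resolutions
# supplied through the configuration interface.
def sub_regions(displays):
    """displays: list of (width, height) tuples, ordered left to right.
    Returns one sub-region dict per display."""
    regions, x = [], 0
    for width, height in displays:
        regions.append({"x": x, "y": 0, "width": width, "height": height})
        x += width  # next display's region starts where this one ends
    return regions
```

For the two 1280×1024 displays of the running example, this yields a left region at x = 0 and a right region at x = 1280, matching the offsets used in the addressing discussion above.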
Because the guest operating system 214 of the example in
Any or all of volatile storage 310, volatile storage 393, non-volatile storage 364 and non-volatile storage 397 include, for example, software that is executed by processing logic 302 or 391, respectively, and provides the computer systems 300 and 390 with some or all of the functionality described herein. The computer system 300 also includes a network interface (Net I/F) 362 that enables the computer system 300 to receive information via a local area network and/or a wired or wireless wide area network, represented in the example of
Computer system 300 may be a bus-based computer, with a variety of busses interconnecting the various elements shown in
Computer system 390 may also be a bus-based computer, with PCI bus 394 coupling the various elements shown in
The peripheral interface 368 of computer system 300 accepts signals from the input device 370 and other input devices such as a pointing device 372, and transforms the signals into a form suitable for communication on PCI bus 361. The peripheral interface 399 of computer system 390 similarly accepts signals from the input device 394 and other input devices such as a pointing device 392, and transforms the signals into a form suitable for communication on PCI bus 394. The display interface 342 of computer system 300 may include a graphics card or other suitable video interface that accepts information from the AGP bus 341 and transforms it into a form suitable for the display 340. The display interface 395 of computer system 390 may include video control logic that accepts frame buffer data from PCI bus 394 and transforms it into a form suitable for the display 396.
The processing logic 302 of computer system 300 gathers information from other system elements, including input data from the peripheral interface 368, and program instructions and other data from non-volatile storage 364 or volatile storage 310, or from other systems (e.g., a server used to store and distribute copies of executable code) coupled to a local area network or a wide area network via the network interface 362. The processing logic 302 executes the program instructions (e.g., server remote access software 212) and processes the data accordingly. The program instructions may further configure the processing logic 302 to send data to other system elements, such as information presented to the user via the display interface 342 and the display 340. The network interface 362 enables the processing logic 302 to communicate with other systems via a network (e.g., the Internet). Volatile storage 310 may serve as a low-latency temporary store of information for the processing logic 302, and non-volatile storage 364 may serve as a long term (but higher latency) store of information.
The processing logic 391 of computer system 390 similarly gathers information from other system elements, including input data from the peripheral interface 399, and program instructions and other data from non-volatile storage 397 or volatile storage 393, or from other external systems (e.g., a server used to store and distribute copies of executable code) accessible by computer system 390 via the communication interface 398. The processing logic 391 executes the program instructions (e.g., client remote access software 260 and 270) and processes the data accordingly. The program instructions may further configure the processing logic 391 to send data to other system elements, such as information presented to the user via the display interface 395 and the display 396. The communication interface 398 enables the processing logic 391 to communicate with other systems. Volatile storage 393 may serve as a low-latency temporary store of information for the processing logic 391, and non-volatile storage 397 may serve as a long term (but higher latency) store of information.
The processing logic 302, and hence the computer system 300 as a whole, operates in accordance with one or more programs stored on non-volatile storage 364 or received via the network interface 362. The processing logic 302 may copy portions of the programs into volatile storage 310 for faster access, and may switch between programs or carry out additional programs in response to user actuation of the input device 370. The additional programs may be retrieved from non-volatile storage 364 or may be retrieved or received from other locations via the network interface 362. One or more of these programs executes on computer system 300, causing the computer system to perform at least some functions disclosed herein.
Likewise, the processing logic 391, and hence the computer system 390 as a whole, operates in accordance with one or more programs stored on non-volatile storage 397 or received via the communication interface 398. The processing logic 391 may copy portions of the programs into volatile storage 393 for faster access, and may switch between programs or carry out additional programs in response to user actuation of the input device 394. The additional programs may be retrieved from non-volatile storage 397 or may be retrieved or received from other locations via the communication interface 398. One or more of these programs executes on computer system 390, causing the computer system to perform at least some functions disclosed herein.
Although the illustrative embodiments described herein utilize two displays as part of client device 150, those of ordinary skill in the art will recognize that other illustrative embodiments may include any number of displays, organized in a wide variety of configurations. Examples include two displays organized as the top and bottom halves of an overall display, or a 4-by-3 matrix of displays, just to name a few. All such configurations and numbers of displays are within the scope of the present disclosure.
Because of the simplified design of the components of client device 150, it is possible to reduce the overall size and profile of the client device. For example, in at least some illustrative embodiments, a graphics control unit may be reduced down to a housing similar to a USB memory stick (sometimes referred to as a “dongle”) that couples to a display with a VGA connector, and to the network interface and router with a USB connector. In other illustrative embodiments, the graphics control unit may be integrated within the display device housing, with a USB cable coupling the network interface and router to each combined graphics control unit/display device. Other housing configurations will become apparent to those of ordinary skill in the art, and all such configurations are within the scope of the present disclosure.
The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. For example, although the illustrative embodiments describe performing communication between the network interface and router and other client components using a USB interface, other illustrative embodiments include other suitable communication interfaces, and all such communication interfaces are within the scope of the present disclosure. Further, although the server computer was described as using double buffered frame buffers, and the client device was described as using single buffered frame buffers, any number of additional frame buffers may be used within both the server computer and client device described, and all such frame buffer configurations are within the scope of the present disclosure. Additionally, although only difference data may have been described in some illustrative embodiments, the systems and methods described also apply to the additional distribution of reference frames and reference frame data, over and above the difference data generated and distributed as described herein. Further, although the embodiments described herein included a host operating system, other illustrative embodiments include server remote access software that does not require a host operating system, or that includes server remote access software that executes as a service of either a guest or a host operating system. Also, although guest operating systems configured with a single graphics adapter are shown in the illustrative embodiments described, other illustrative embodiments may include guest operating systems configured with multiple graphics adapters (real or virtual), each configured with multiple sub-regions and display devices as described herein.
It is intended that the following claims be interpreted to embrace all such variations and modifications.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US08/58032 | 3/24/2008 | WO | 00 | 9/21/2010 |