Using a scalable graphics system to enable a general-purpose multi-user computer system

Information

  • Patent Grant
  • Patent Number
    10,026,140
  • Date Filed
    Sunday, March 1, 2009
  • Date Issued
    Tuesday, July 17, 2018
Abstract
A graphics system is disclosed. The graphics system includes at least one GPU (graphics processing unit) for processing a compute workload. The graphics system uses a multi-user manager for allocating the compute workload capability for each one of a plurality of users. Each user will use an access terminal.
Description
FIELD OF THE INVENTION

Embodiments of the present invention generally relate to computer system architectures. More particularly, embodiments of the present invention relate to using a graphics system to enable a multi-user computer system.


BACKGROUND OF THE INVENTION

Technological advances have significantly improved the performance of the computer system. However, there is a limit to the performance gains available from technological advances alone. In the past, the computer system was primarily a productivity tool. Now, the computer system is evolving into both a digital entertainment tool and a productivity tool.


The traditional architecture of the computer system envisions a single user. But as the computer system is configured with greater processing power (e.g., by increasing the number of processors), greater storage capacity (e.g., by increasing the size of the hard drive), greater graphics rendering capacity (e.g., by increasing the processing power of the graphics processing unit and increasing its graphics memory), and greater network communication capacity (e.g., by increasing network communication bandwidth), the capabilities of the computer system begin to exceed the needs of traditional single users. As the capabilities of traditional computer systems continue to expand, the systems purchased by the typical single user are over-powered relative to that user's needs; consequently, the increasingly capable computer system is under-utilized by the single user.


SUMMARY

Embodiments of the present invention provide a method and system for using a graphics system to enable a multi-user computer system. Embodiments of the present invention improve computer resource utilization when compared to the traditional single-user computer system architecture. Taking advantage of virtualized resources, a multi-user computer system offers fail-safed, user-specific, pay-as-you-go computing resource allocation on the fly, capable of adapting to the changing demands of a variety of subscribing users with varying computing resource needs and varying computer platforms.


In one embodiment, the present invention is implemented as a multi-user computer system. The multi-user computer system includes a central processing unit (CPU), a disk drive configured to support a plurality of users, and a graphics system. The graphics system includes at least one GPU (graphics processing unit) for processing a compute workload. A multi-user manager component is included for allocating the compute workload capability to each one of a plurality of users. Each user uses an access terminal to interact with and access the multi-user computer system. In one embodiment, the compute workload can be physics calculations, transcoding applications, or the like. Alternatively, the compute workload can be 3D computer graphics rendering (e.g., for a real-time 3D application).


In one embodiment, the access terminal is a display and an input device. User I/O is provided via interaction with the display and the input device (e.g., keyboard, mouse, etc.). In another embodiment, the access terminal is a thin client device having a certain degree of local processor functionality. For example, the thin client device would incorporate a local processor and memory. The local processor would be able to assist functionality provided by the CPU and GPU(s) of the multi-user computer system (e.g., locally decompress received data, compress data prior to uploading, and the like).





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.



FIG. 1A illustrates an overview of a multi-user computer system in accordance with one embodiment of the present invention.



FIG. 1B illustrates a multi-user computer system in accordance with one embodiment of the present invention wherein the graphics system is incorporated into the multi-user computer system as a single unit.



FIG. 2 illustrates allocation of compute workload capability of a graphics system for each user in accordance with one embodiment of the present invention.



FIG. 3A illustrates an access terminal in accordance with one embodiment of the present invention wherein the access terminal includes a display and input devices.



FIG. 3B illustrates an access terminal in accordance with one embodiment of the present invention wherein the access terminal is a thin client.



FIG. 4 illustrates a flow chart showing a method of supporting an additional user in a multi-user computer system in accordance with one embodiment of the present invention.



FIG. 5 illustrates a multi-user computer system in accordance with one embodiment of the present invention wherein the graphics system is incorporated into the multi-user computer system with a stand-alone array of GPUs.





DETAILED DESCRIPTION OF THE INVENTION

Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of embodiments of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be recognized by one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the embodiments of the present invention.


Notation and Nomenclature:


Some portions of the detailed descriptions, which follow, are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.


It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “processing” or “accessing” or “executing” or “storing” or “rendering” or the like, refer to the action and processes of a computer system (e.g., multi-user computer system 100 of FIG. 1B), or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories and other computer readable media into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.



FIG. 1A illustrates an overview of a multi-user computer system in accordance with an embodiment of the present invention. The multi-user computer system is shown having a plurality of users (USER1, USER2, USER3 . . . USERN) connecting through a network cloud to a computer system 100 with scalable compute resource (or workload) capabilities. Taking advantage of virtualized resources, the multi-user computer system offers a virtual computer experience to a variety of users with varying compute resource needs, connecting from a variety of platforms (laptops, cell phones, thin clients, etc.). This virtual experience adapts to the individual user's compute needs on the fly. Other embodiments may offer a "pay as you go" virtual computer that allows a user to purchase only the compute resources they need. Because the individual user works with a virtualized instance, these virtual applications are also fail-safed. In the event of a loss of resources in the multi-user computer system, other available resources are reallocated to support the current compute resource demands of the users, and each affected user is seamlessly switched to the alternate resources without loss of service or of the personalization of the computing environment.
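The fail-safe reallocation described above can be sketched as a small model: when a resource unit is lost, the users assigned to it are moved to surviving units with spare capacity, so service continues uninterrupted. This is an illustrative sketch only; the class, unit names, and first-fit policy are assumptions, not the patented implementation.

```python
class ResourcePool:
    def __init__(self, units):
        # units: {unit_name: remaining capacity, in arbitrary workload units}
        self.capacity = dict(units)
        self.assignment = {}   # user -> (unit, allocated demand)

    def assign(self, user, demand):
        # First-fit placement onto any unit with enough spare capacity.
        for unit, cap in self.capacity.items():
            if cap >= demand:
                self.capacity[unit] -= demand
                self.assignment[user] = (unit, demand)
                return unit
        raise RuntimeError("insufficient resources")

    def fail_unit(self, unit):
        # Simulate equipment failure: reassign every displaced user
        # to the surviving units.
        self.capacity.pop(unit)
        displaced = [(u, d) for u, (un, d) in self.assignment.items()
                     if un == unit]
        for user, demand in displaced:
            del self.assignment[user]
            self.assign(user, demand)

pool = ResourcePool({"gpu0": 10, "gpu1": 10})
pool.assign("user1", 4)    # placed on gpu0
pool.assign("user2", 4)    # also fits on gpu0
pool.fail_unit("gpu0")     # both users are migrated to gpu1
```

After the failure, both users are served by the surviving unit; from their perspective the only observable state is that service continues.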



FIG. 1A further illustrates the flexible nature of an embodiment of a virtual computer environment. For example, USER1 is provided a virtual computer environment with the compute resources to handle low resource demands such as word processing applications. USER1 has the resource capabilities to meet his needs without having to purchase additional resources that would then go unused. In other words, while a highly capable single-user machine would be more than capable of meeting the needs of USER1, the added expense of unneeded additional resources is avoided by purchasing only the level of resource capability needed from a multi-user computer system.


The other end of the spectrum is illustrated by USER2, who is provided a virtual computer environment with the compute resources to handle high resource demands such as graphics and video applications. Rather than being forced to purchase an expensive single-user computer, USER2 is also able to take advantage of a virtual computer environment and purchase all the compute resources needed. An added advantage for USER2 is avoiding computer system obsolescence. As USER2's future compute resource needs grow, USER2 need only purchase additional resources, rather than go through the expense of purchasing an entire computer system.


Meanwhile, USER3 illustrates the capability of the multi-user computer system to allocate resources on the fly. USER3's compute resource needs lie on a continuum, from the low resource needs of an email application to the medium resource needs of web browsing with video. USER3 is allocated the compute resources needed, with the ability to be allocated more resources as USER3's demands change. This allows the multi-user computer system to remain efficient, providing the resources that are actually needed or requested, rather than providing a set amount of resources that may go largely unused by a majority of users. Further, by efficiently allocating the compute resources, the system is capable of handling requests for large amounts of compute resources and of handling the unexpected loss of resources due to equipment failure, making the system fail-safe and the reallocation of resources invisible to the users. In other words, the reallocation of resources occurs without the user experiencing any disruption of service or loss of personalization of the computing environment.


USERN illustrates the capability of the multi-user computer system to allocate resources on the fly with the added functionality of a “pay as you go” option where additional resources above and beyond the normal allocation are made available to the user for an additional fee. In this example, USERN has a user account that allows the allocation of compute resources for low to high demand applications, as seen with USER3, with the added option of purchasing additional compute resources as needed. A “pay as you go” user would have the benefit of additional resources when needed and only have to pay for them when the additional resources were requested. Rather than having to purchase an expensive user package with more compute resources than necessary on the off chance that they might be necessary in the future, a user can purchase compute resources for today's needs and purchase more as they become necessary. This advantage becomes even more apparent when compared to the single user computer system. Rather than having to purchase a new computer to meet evolving computing demands or purchase a computer up front with more resources than necessary, USERN need only pay the fee to purchase the additional resources when needed.
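The "pay as you go" option for USERN can be sketched as an account with a base allocation plus metered overage. The class, unit granularity, and per-unit rate below are illustrative assumptions; the patent describes the billing model only at the level of purchasing additional resources when needed.

```python
class PayAsYouGoAccount:
    def __init__(self, base_units, rate_per_extra_unit):
        self.base_units = base_units     # allocation covered by the normal fee
        self.rate = rate_per_extra_unit  # assumed price per extra unit
        self.charges = 0.0               # accumulated overage charges

    def request(self, units):
        # Units within the base allocation carry no extra cost;
        # only the overage above the base allocation is billed.
        extra = max(0, units - self.base_units)
        self.charges += extra * self.rate
        return units  # the full request is granted

acct = PayAsYouGoAccount(base_units=4, rate_per_extra_unit=0.10)
acct.request(3)   # within the base allocation: no charge
acct.request(6)   # 2 units over the base: billed 2 * 0.10
```

The user thus pays only when additional resources are actually requested, rather than paying up front for capacity that may never be used.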



FIG. 1B illustrates a multi-user computer system 100 in accordance with an embodiment of the present invention. The computer system 100 is shown having a CPU 10 and RAM 20 coupled to a hard drive 30 and a graphics system 50 via a hub 40.


As depicted in the FIG. 1B embodiment, the multi-user computer system 100 illustrates a manner in which the traditional computer system is transformed from an under-utilized single-user computer system into an extensively utilized multi-user computer system 100. The multi-user computer system 100 concurrently supports a number of different users. The FIG. 1B embodiment shows a plurality of different users (e.g., user1, user2, user3 . . . userN). Each user operates a respective access terminal (as later illustrated in FIGS. 3A and 3B). It should be understood that the multi-user computer system 100 may be configured to support any number of users (e.g., 2 users, 16 users, 64 users, etc.). Additionally, it should be understood that the multi-user computer system 100 may have other configurations.


Each user is connected via a wired or wireless network connection (e.g., network connection 1, network connection 2, network connection 3 . . . network connection N) to the multi-user computer system 100. Possible network connections include USB, 802.11, LAN, WPAN (such as Bluetooth), GSM (cell phone networks), and others. In practice, the users share the resources of the multi-user computer system 100. The multi-user computer system 100 provides computational processing, information storage, network communication, and graphical/pixel processing services to the users. In one embodiment, these services are provided to each individual user through the use of a virtual environment. The server/client environment, where a single full-service computer is able to be accessed by multiple users, is an example of a virtual environment. Virtualization is the implementation of a virtual machine environment, such that while there is only one server providing all primary processing and data storage, each user on the network is provided a complete simulation of the underlying hardware. These may be referred to as virtual PCs, virtual machines, and virtual computers, for example. This allows the sharing of the resources of the single computer system (the server) with the plurality of users.


As described above, the multi-user computer system 100 includes the central processing unit 10, the RAM (main memory) 20, the HDD (hard disk drive) 30, and the hub 40. Further, the multi-user computer system 100 has a graphics system 50. In one embodiment, the graphics system 50 includes a plurality of GPUs 52 (GPU1, GPU2, GPU3 . . . GPUN) and a multi-user manager 54. The graphics system 50 may also have other combinations of GPUs 52. Further, additional GPUs 52 may be added to the graphics system 50 in any one of numerous ways. For example, a module (e.g., graphics card) having a single GPU or multiple GPUs may be coupled to the graphics system 50. Further, the graphics system 50 may include more than one graphics card, each with a single GPU or multiple GPUs. The GPU 52 may have one or multiple cores. The GPUs 52 are in communication with the graphics system 50 and the multi-user computer system 100 through well known interfaces such as PCI, PCI-Express, USB, Ethernet, and 802.11. As later illustrated in FIG. 5, the GPUs may be incorporated into a separate chassis.


The GPU 52 is a semiconductor device that specializes in rapidly processing graphical or pixel data compared to a typical CPU 10. However, current GPUs 52 may also be utilized for general purpose processing tasks typically performed by the CPU 10. These general purpose processing tasks, referred to as a "compute workload" in the present invention, may be performed by the graphics system 50. This compute workload may include software applications such as word processing, virus detection, physics modeling, 3D graphics rendering, and transcoding. Other applications are also possible. Allocating the compute workload capability of the GPUs 52 among the plurality of users allows each user access to a virtual computer with all the computing capability they currently require.


In the present invention, the graphics system 50 is configured to enable the scaling of processing power in accordance with the number of new users, the amount of desired peak compute workload, and the like. For example, the above-mentioned virtualization is implemented such that the total number of GPUs 52 may be increased to support new users and changing user compute workload demands. The total compute workload capability of the graphics system 50 can be increased, for example, by adding additional GPUs 52. This increased capability may be used to create additional virtual computers for additional users, or to provide increased capability for the virtual computers already in existence.



FIG. 2 illustrates the allocation of compute workload capability of the graphics system 50 for N number of users (e.g., user1, user2, user3 . . . userN) in accordance with an embodiment of the present invention. In a typical operational scenario, the multi-user manager 54, as depicted in FIG. 1B, receives requests for the multi-user computer system 100 to support additional users. The multi-user manager 54 decides whether to accept each request. If the multi-user manager 54 accepts the request, the multi-user manager 54 allocates compute workload capability for the additional user. As shown in FIG. 2, allocation of compute workload capability is dependent on the needs of the user. As the compute workload increases for a particular user, a larger allocation of compute workload capacity is required for the user. This is illustrated by the varying compute workload demands of a user operating word processing software and another user playing a 3D video game.
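The accept/deny-and-allocate flow around the multi-user manager 54 can be sketched as follows. The workload classes and capability numbers are illustrative assumptions (echoing the word-processing vs. 3D-game contrast of FIG. 2); the patent does not prescribe specific demand values or an admission policy beyond capability-based acceptance.

```python
# Assumed per-workload capability requirements, in arbitrary units.
WORKLOAD_DEMAND = {"word_processing": 1, "web_browsing": 2, "3d_game": 8}

class MultiUserManager:
    def __init__(self, total_capability):
        self.total = total_capability
        self.allocations = {}  # user -> allocated units

    def request_support(self, user, workload):
        # Accept the request only if enough unallocated compute workload
        # capability remains; size the allocation to the user's needs.
        demand = WORKLOAD_DEMAND[workload]
        allocated = sum(self.allocations.values())
        if allocated + demand > self.total:
            return False   # request denied: insufficient capability
        self.allocations[user] = demand
        return True        # request accepted, capability allocated

mgr = MultiUserManager(total_capability=10)
assert mgr.request_support("user1", "word_processing")  # small allocation
assert mgr.request_support("user2", "3d_game")          # large allocation
assert not mgr.request_support("user3", "3d_game")      # would exceed capacity
```

A word-processing user receives a small slice of the total capability while a 3D-game user receives a large one, and a request that would overcommit the graphics system is denied.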


Referring again to FIG. 1B, the HDD 30 is partitioned into sections for each user (e.g., user1, user2, user3 . . . userN). Further, the HDD 30 includes a section of shared resources available to all users. If an additional user is accepted, the multi-user computer system 100 is configured to support the additional user. This configuration includes creating a section for the additional user in the HDD 30.
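The storage configuration described above (one section per user plus a shared section, with a new section created when an additional user is accepted) can be sketched with a simple directory layout. The directory scheme is an assumption for illustration; the patent speaks of partitioned sections on the HDD 30, not of a particular filesystem arrangement.

```python
import os
import tempfile

def init_storage(root):
    # The shared-resources section is available to all users.
    os.makedirs(os.path.join(root, "shared"), exist_ok=True)

def add_user_section(root, user):
    # Accepting an additional user creates that user's private section.
    path = os.path.join(root, user)
    os.makedirs(path, exist_ok=False)
    return path

root = tempfile.mkdtemp()
init_storage(root)
for user in ("user1", "user2", "user3"):
    add_user_section(root, user)
```

After configuration, the drive holds the shared section alongside one section per accepted user.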


In one embodiment of the present invention as depicted in FIG. 3A, the access terminal includes a display 302 and an input device 304 (such as a keyboard and/or mouse). In such an embodiment, the processing functionality is provided by the multi-user computer system 100. The display 302 is controlled by the computer system 100 and the input device 304 is the means to receive input from the user. The FIG. 3A embodiment has an advantage of being comparatively straightforward to deploy and comparatively easy to maintain.


In another embodiment of the present invention as depicted in FIG. 3B, the access terminal is a thin client 300. A thin client 300 is a computer designed to run on a network with a server, where the server provides the majority of the processing, information storage, and software. A thin client 300 might only have a simple user interface accessed from a CD-ROM, a network drive, or flash memory. Usually a thin client 300 will have only a display 302, input devices 304 (such as a keyboard and mouse), and the computing ability 306 to handle its own display 302 and the connection to the network. For example, the computing ability 306 can incorporate a local processor and memory. The local processor would be able to assist functionality provided by the CPU and GPU(s) of the multi-user computer system 100 (e.g., locally decompress received data, compress data prior to uploading, and the like). In other embodiments of the present invention, the access terminal may contain an integrated display and input device. Examples of integrated display and input devices include a small, hand-held computer, a laptop computer, a cell phone, and a PDA (personal digital assistant).
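The thin-client division of labor mentioned above (the server compresses data before transmission; the thin client's local processor decompresses it) can be sketched as follows. zlib stands in for whatever codec a real deployment would use; the function names are illustrative.

```python
import zlib

def server_send(payload: bytes) -> bytes:
    # Server side: compress before transmission to conserve
    # network bandwidth between server and access terminal.
    return zlib.compress(payload)

def thin_client_receive(wire_data: bytes) -> bytes:
    # Thin client side: the local processor 306 decompresses the stream,
    # offloading that work from the multi-user computer system.
    return zlib.decompress(wire_data)

frame = b"pixel data " * 1000
wire = server_send(frame)
restored = thin_client_receive(wire)
```

Highly repetitive data such as this example payload compresses well, so far fewer bytes cross the network than the thin client ultimately displays.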



FIG. 4 illustrates a flow chart showing a method 400 of supporting an additional user in a multi-user computer system 100 in accordance with an embodiment of the present invention. Reference is made to FIGS. 1B and 2.


At Block 410, the graphics system 50 of the multi-user computer system 100 is requested to support an additional user. In particular, the multi-user manager 54 receives the request to support the additional user. Moreover, the multi-user manager 54 decides whether to accept the request.


Continuing, at Block 420, compute workload capability is allocated for the additional user if the graphics system 50 accepts the request. Allocation of compute workload capability is depicted in FIG. 2. This allocation of compute workload capability provides the new user a virtual computer scaled to meet their computational needs.


In an embodiment of the present invention, the request may include proof of an additional license for the additional user to use the multi-user computer system 100. In another embodiment, the request may include an authorization key obtained after payment of a fee. In yet another embodiment, the request may include payment for supporting the additional user.


Further, at Block 430, the multi-user computer system 100 is configured to support the additional user. This configuration includes creating a section for the additional user in the HDD 30 of the multi-user computer system 100.


In sum, a seamless virtual computer experience for multiple users is created at a reduced cost and without a cumbersome process. Each user is provided a complete virtual environment provided by the multi-user computer system 100 and its graphics system 50.


Referring to FIG. 5, another embodiment of the present invention is illustrated. The graphics system 550 includes four GPUs 552 in a separate array 556. This array 556, fixed in a separate chassis from the rest of the multi-user computer system 500, may exist as a single array 556 as depicted in FIG. 5, or as a plurality of arrays (not shown). Further, any number of GPUs 552 may exist in the array 556. It should be understood that the multi-user computer system 500 may have other configurations. For example, the array 556 and multi-user computer system 500 may be placed in a single chassis. Further, the array 556 and/or multi-user computer system 500 may be rack-mounted for use in a rack mount chassis. The GPUs 552 may be hot-swappable, allowing the removal and addition of GPUs 552 without powering down the multi-user computer system 500. An obvious benefit of hot-swapping GPUs 552 is easy scaling of the graphics system 550 without disruption to the current users. The array 556 of GPUs 552 is in communication with the graphics system 550 and the multi-user computer system 500 via a transmission line Z through any of a number of ways well known in the art, such as PCI, PCI-X, and PCI Express.
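The hot-swappable array behavior can be sketched as a small model: GPUs are added or removed while the system runs, and users served by a removed GPU are migrated to the remaining ones so current users see no disruption. The class, GPU names, and least-loaded placement policy are illustrative assumptions, not the patented mechanism.

```python
class GpuArray:
    def __init__(self, gpus):
        self.users_on = {g: [] for g in gpus}   # gpu -> users it serves

    def place(self, user):
        # Place the user on the least-loaded GPU currently in the array.
        gpu = min(self.users_on, key=lambda g: len(self.users_on[g]))
        self.users_on[gpu].append(user)

    def hot_add(self, gpu):
        # Scale up: the new GPU joins without powering down the system.
        self.users_on[gpu] = []

    def hot_remove(self, gpu):
        # Scale down: migrate displaced users before the GPU goes away.
        displaced = self.users_on.pop(gpu)
        for user in displaced:
            self.place(user)

array = GpuArray(["gpu0", "gpu1"])
for u in ("user1", "user2", "user3"):
    array.place(u)
array.hot_add("gpu2")      # add capacity while running
array.hot_remove("gpu0")   # remove a GPU; its users are migrated
```

Every user remains served throughout the add and remove operations, which is the scaling-without-disruption property the embodiment describes.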


The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.

Claims
  • 1. An apparatus comprising: a single physical server comprising: a plurality of GPUs (graphics processing units); a compute workload capability (CWC) property that indicates available compute workload processing capability of the plurality of GPUs for non-graphics tasks and graphics tasks and that is allocated per user to a plurality of users in an exclusive non-overlapping manner based on an actual user need or a user request; and a plurality of systems remote from the plurality of users and including a processing system that includes a central processing unit (CPU), a storage system that includes a storage drive, and a graphics system that includes the GPUs, wherein the graphics system is configured to receive via a network cloud a support access request, from an additional user, for the single physical server to support use by the additional user as a new user and is configured to grant or to deny the support access request by the additional user to the single physical server based on the CWC property and prior individual exclusive allocations of the CWC property to the users.
  • 2. The apparatus as recited in claim 1, wherein the compute workload processing includes at least one of word processing, virus detection, physics calculations, transcoding and 3D computer graphics rendering, wherein the graphics system includes a multi-user manager configured to allocate compute workload capability for each one of the users, wherein each user uses an access terminal.
  • 3. The apparatus as recited in claim 2, wherein the multi-user manager on demand allocates the compute workload capability for each user in response to the changing compute workload needs of the user.
  • 4. The apparatus as recited in claim 2, wherein the multi-user manager allocates the compute workload capability for each user based on the compute workload capability purchased by the plurality of users.
  • 5. The apparatus as recited in claim 2, wherein the multi-user manager receives a request to support an additional user, and if accepted, the multi-user manager allocates the compute workload capability for the additional user.
  • 6. The apparatus as recited in claim 2, wherein the compute workload capability for each user is unaffected when compute resources are reallocated.
  • 7. The apparatus as recited in claim 2, wherein the access terminal includes a display and an input device, wherein the graphics system is operable to consider financial-related information of the support access request when granting or denying the support access request, and wherein the graphics system is configured to scale up or to scale down number of GPUs in operation.
  • 8. A multi-user computer system comprising: a single physical server comprising: a plurality of GPUs (graphics processing units); a compute workload capability (CWC) property that indicates available compute workload processing capability of the plurality of GPUs for non-graphics tasks and graphics tasks and that is allocated per user to a plurality of users in an exclusive non-overlapping manner based on an actual user need or a user request; a processing system that is remote from the plurality of users and that includes a central processing unit (CPU); a storage system that is remote from the users and that includes a storage drive; and a graphics system that is remote from the users and that includes the GPUs, wherein the graphics system is configured to receive via a network cloud a support access request, from an additional user, for the single physical server to support use by the additional user as a new user and is configured to grant or to deny the support access request by the additional user to the single physical server based on the CWC property and prior individual exclusive allocations of the CWC property to the users, and wherein the graphics system is configured to scale up or to scale down the number of GPUs in operation.
  • 9. The multi-user computer system as recited in claim 8 wherein the compute workload processing includes at least one of: word processing, virus detection, physics calculations, transcoding and 3D computer graphics rendering, wherein the storage system is configured to support the users, and wherein the graphics system further includes a multi-user manager configured to allocate compute workload capability for each one of the users, wherein each user uses an access terminal.
  • 10. The multi-user computer system as recited in claim 9 wherein the multi-user manager on demand allocates the compute workload capability for each user in response to the changing compute workload needs of the user.
  • 11. The multi-user computer system as recited in claim 9 wherein the multi-user manager allocates the compute workload capability for each user based on the compute workload capability purchased by the plurality of users.
  • 12. The multi-user computer system as recited in claim 9 wherein the multi-user manager receives a request to support an additional user, and if accepted the multi-user manager allocates the compute workload capability for the additional user.
  • 13. The multi-user computer system as recited in claim 9 wherein the compute workload capability for each user is unaffected when compute resources are reallocated.
  • 14. The multi-user computer system as recited in claim 9 wherein the access terminal includes a display and an input device, and wherein the graphics system is operable to consider financial-related information of the support access request when granting or denying the support access request.
  • 15. A method of supporting an additional user in a multi-user computer system, the method comprising: providing a single physical server comprising a plurality of GPUs (graphics processing units), a compute workload capability (CWC) property that indicates available compute workload processing capability of the plurality of GPUs for non-graphics tasks and graphics tasks and that is allocated per user to a plurality of users in an exclusive non-overlapping manner based on an actual user need or a user request, and a plurality of systems remote from the plurality of users and including a processing system that includes a central processing unit (CPU), a storage system that includes a storage drive, and a graphics system for use by the users, wherein the graphics system includes the GPUs and a multi-user manager; using the graphics system to receive via a network cloud a support access request, from the additional user, for the single physical server to support use by the additional user as a new user and to grant or to deny the support access request by the additional user to the single physical server based on the CWC property and prior individual exclusive allocations of the CWC property to the users; and using the graphics system to scale up or to scale down the number of GPUs in operation.
  • 16. The method as recited in claim 15 wherein the multi-user manager is configured to allocate compute workload capability, wherein the using the graphics system to receive includes: requesting the graphics system of the single physical server to support the additional user which uses an access terminal, if the graphics system accepts the request, allocating the compute workload capability for the additional user, and configuring the single physical server to support the additional user, wherein the compute workload processing includes at least one of: word processing, virus detection, physics calculations, transcoding, and 3D computer graphics rendering.
  • 17. The method as recited in claim 16 wherein the multi-user manager on demand allocates the compute workload capability for each user in response to the changing compute workload needs of the user.
  • 18. The method as recited in claim 16 wherein the multi-user manager allocates the compute workload capability for each user based on the compute workload capability purchased by users.
  • 19. The method as recited in claim 16 wherein the compute workload capability for each user is unaffected when compute resources are reallocated.
  • 20. The method as recited in claim 16 wherein the access terminal includes a display and an input device, and wherein the graphics system is operable to consider financial-related information of the support access request when granting or denying the support access request.
  • 21. The method as recited in claim 16 wherein the multi-user manager is configured to decide whether to accept the request.
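The admission-control and allocation behavior recited in claims 15-19 (granting or denying a support access request against remaining compute workload capability, exclusive non-overlapping per-user allocations, on-demand reallocation, and GPU scaling) can be sketched as follows. This is an illustrative sketch only; the class and method names (`MultiUserManager`, `request_support`, `reallocate`, `scale_gpus`) are hypothetical and do not appear in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class MultiUserManager:
    """Hypothetical sketch of the claimed multi-user manager: it tracks the
    GPUs' aggregate compute workload capability (CWC) and grants or denies
    each new user's support access request based on what remains after prior
    exclusive, non-overlapping allocations (claim 15)."""
    total_cwc: float                                  # aggregate CWC of all GPUs
    allocations: dict = field(default_factory=dict)   # user -> exclusive share

    def remaining_cwc(self) -> float:
        # Capability not yet claimed by any user's exclusive allocation.
        return self.total_cwc - sum(self.allocations.values())

    def request_support(self, user: str, requested_cwc: float) -> bool:
        """Grant the request only if enough unallocated capability remains."""
        if user in self.allocations or requested_cwc > self.remaining_cwc():
            return False                  # deny: already served, or over capacity
        self.allocations[user] = requested_cwc
        return True                       # accept: exclusive allocation recorded

    def reallocate(self, user: str, new_cwc: float) -> bool:
        """On-demand resize of one user's share (claim 17) that leaves every
        other user's allocation untouched (claims 13 and 19)."""
        others = sum(v for u, v in self.allocations.items() if u != user)
        if user not in self.allocations or new_cwc > self.total_cwc - others:
            return False
        self.allocations[user] = new_cwc
        return True

    def scale_gpus(self, cwc_per_gpu: float, delta_gpus: int) -> None:
        """Scale the number of GPUs in operation up or down (claim 15),
        never shrinking below what is already allocated."""
        self.total_cwc = max(sum(self.allocations.values()),
                             self.total_cwc + delta_gpus * cwc_per_gpu)
```

For example, with a total CWC of 100, a first request for 60 is granted, and a later request for 50 is denied because only 40 remains unallocated.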
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation-in-part of, and incorporates by reference, the following U.S. patent application: USING A GRAPHICS SYSTEM TO ENABLE A MULTI-USER COMPUTER SYSTEM, by Diamond, M., filed on Jun. 10, 2005, Ser. No. 11/150,620.

US Referenced Citations (148)
Number Name Date Kind
4761736 Di Orio Aug 1988 A
4949280 Littlefield Aug 1990 A
5117468 Hino et al. May 1992 A
5247634 Cline et al. Sep 1993 A
5251295 Ikenoue et al. Oct 1993 A
5276798 Peaslee et al. Jan 1994 A
5289574 Sawyer Feb 1994 A
5315699 Imai et al. May 1994 A
5361078 Caine Nov 1994 A
5377320 Abi-Ezzi et al. Dec 1994 A
5388206 Poulton et al. Feb 1995 A
5392393 Deering et al. Feb 1995 A
5408606 Eckart Apr 1995 A
5434967 Tannenbaum et al. Jul 1995 A
5440682 Deering Aug 1995 A
5448655 Yamaguchi Sep 1995 A
5467102 Kuno et al. Nov 1995 A
5485559 Sakaibara et al. Jan 1996 A
5493643 Soderberg et al. Feb 1996 A
5523769 Lauer et al. Jun 1996 A
5602983 Naba et al. Feb 1997 A
5604490 Blakley, III Feb 1997 A
5606336 Yuki Feb 1997 A
5668995 Bhat Sep 1997 A
5694150 Sigona et al. Dec 1997 A
5727178 Pletcher et al. Mar 1998 A
5794016 Kelleher Apr 1998 A
5781747 Smith et al. Jul 1998 A
5784035 Hagiwara et al. Jul 1998 A
5784616 Horvitz Jul 1998 A
5784628 Reneris Jul 1998 A
5790862 Tanaka et al. Aug 1998 A
5889989 Robertazzi et al. Mar 1999 A
5930777 Barber Jul 1999 A
5956046 Kehlet et al. Sep 1999 A
6002409 Harkin Dec 1999 A
6028586 Swan et al. Feb 2000 A
6044215 Charles et al. Mar 2000 A
6064403 Hayashi et al. May 2000 A
6067097 Morita et al. May 2000 A
6085216 Huberman et al. Jul 2000 A
6104392 Shaw et al. Aug 2000 A
6141021 Bickford et al. Oct 2000 A
6157917 Barber Dec 2000 A
6191800 Arenburg et al. Feb 2001 B1
6206087 Nakase et al. Mar 2001 B1
6209066 Holzle et al. Mar 2001 B1
6249294 Lefebvre et al. Jun 2001 B1
6269275 Slade Jul 2001 B1
6282596 Bealkowski et al. Aug 2001 B1
6292200 Bowen et al. Sep 2001 B1
6301616 Pal et al. Oct 2001 B1
6304952 Suzuoki Oct 2001 B1
6329996 Bowen et al. Dec 2001 B1
6331856 Van Hook et al. Dec 2001 B1
6336127 Kurtzberg et al. Jan 2002 B1
6359624 Kunimatsu Mar 2002 B1
6397343 Williams et al. May 2002 B1
6473086 Morein et al. Oct 2002 B1
6476816 Deming et al. Nov 2002 B1
6496187 Deering et al. Dec 2002 B1
6501999 Cai Dec 2002 B1
6535216 Deming et al. Mar 2003 B1
6535939 Arimilli et al. Mar 2003 B1
6591262 MacLellan et al. Jul 2003 B1
6611241 Firester et al. Aug 2003 B1
6630936 Langendorf Oct 2003 B1
6631474 Cai et al. Oct 2003 B1
6654826 Cho et al. Nov 2003 B1
6670958 Aleksic et al. Dec 2003 B1
6683614 Walls et al. Jan 2004 B2
6700586 Demers Mar 2004 B1
6704021 Rogers et al. Mar 2004 B1
6708217 Colson et al. Mar 2004 B1
6711691 Howard et al. Mar 2004 B1
6714200 Talnykin et al. Mar 2004 B1
6760031 Langendorf et al. Jul 2004 B1
6760684 Yang Jul 2004 B1
6772265 Baweja et al. Aug 2004 B2
6798420 Xie Sep 2004 B1
6832269 Huang et al. Dec 2004 B2
6835070 Law Dec 2004 B1
6859926 Brenner et al. Feb 2005 B1
6864891 Myers Mar 2005 B2
6914779 Askeland et al. Jul 2005 B2
6919894 Emmot et al. Jul 2005 B2
6919896 Sasaki et al. Jul 2005 B2
6937245 Van Hook et al. Aug 2005 B1
6956579 Diard et al. Oct 2005 B1
6985152 Rubinstein et al. Jan 2006 B2
7019752 Paquette et al. Mar 2006 B1
7024510 Olarig Apr 2006 B2
7030837 Vong et al. Apr 2006 B1
7058829 Hamilton Jun 2006 B2
7079149 Main et al. Jul 2006 B2
7080181 Wolford Jul 2006 B2
7119808 Gonzalez et al. Oct 2006 B2
7176847 Loh Feb 2007 B2
7178147 Benhase Feb 2007 B2
7203909 Horvitz et al. Apr 2007 B1
7260839 Karasaki Aug 2007 B2
7321367 Isakovic et al. Jan 2008 B2
7372465 Tamasi May 2008 B1
7634668 White et al. Dec 2009 B2
7663633 Diamond et al. Feb 2010 B1
20020047851 Hirase et al. Apr 2002 A1
20020073247 Baweja et al. Jun 2002 A1
20020099521 Yang Jul 2002 A1
20020107809 Biddle et al. Aug 2002 A1
20020118201 Mukherjee et al. Aug 2002 A1
20020130889 Blythe et al. Sep 2002 A1
20020141152 Pokharna et al. Oct 2002 A1
20020180725 Simmonds et al. Dec 2002 A1
20030051021 Hirschfeld Mar 2003 A1
20030067470 Main et al. Apr 2003 A1
20030071816 Langendorf Apr 2003 A1
20030115344 Tang Jun 2003 A1
20030128216 Walls et al. Jul 2003 A1
20030137483 Callway Jul 2003 A1
20030193503 Seminatore et al. Oct 2003 A1
20030233391 Crawford, Jr. Dec 2003 A1
20040008200 Naegle et al. Jan 2004 A1
20040021678 Ullah et al. Feb 2004 A1
20040032861 Lee Feb 2004 A1
20040039954 White et al. Feb 2004 A1
20040103191 Larsson May 2004 A1
20040104913 Walls et al. Jun 2004 A1
20040125111 Tang-Petersen et al. Jul 2004 A1
20040189677 Amann et al. Sep 2004 A1
20040210591 Hirschfeld Oct 2004 A1
20050017980 Chang et al. Jan 2005 A1
20050028015 Asano et al. Feb 2005 A1
20050088445 Gonzalez Apr 2005 A1
20050134588 Aila et al. Jun 2005 A1
20050144452 Lynch Jun 2005 A1
20050160212 Caruk Jul 2005 A1
20050190190 Diard et al. Sep 2005 A1
20050190536 Anderson et al. Sep 2005 A1
20050270298 Thieret Dec 2005 A1
20050278559 Sutardja et al. Dec 2005 A1
20060107250 Tarditi et al. May 2006 A1
20060161753 Aschoff Jul 2006 A1
20060164414 Farinelli Jul 2006 A1
20060168230 Caccavale et al. Jul 2006 A1
20060176881 Ma et al. Aug 2006 A1
20060206635 Alexander Sep 2006 A1
20080084419 Bakalash et al. Apr 2008 A1
20080238917 Bakalash Oct 2008 A1
Foreign Referenced Citations (8)
Number Date Country
1020050047243 May 2005 KR
421752 Feb 2001 TW
485309 May 2002 TW
591400 Jun 2004 TW
200422936 Nov 2004 TW
I223752 Nov 2004 TW
200501046 Jan 2005 TW
2005010854 Feb 2005 WO
Non-Patent Literature Citations (8)
Entry
Luke, E. J. et al., “Semotus Visum: a flexible remote visualization framework”, IEEE Visualization 2002, Oct. 27-Nov. 1, 2002, Boston, MA, pp. 61-68.
Casera, S. et al., "A Collaborative Extension of a Visualization System", Proceedings of the First International Conference on Distributed Frameworks for Multimedia Applications (DFMA'05), Feb. 6-9, 2005, Besançon, France, pp. 176-182.
Stegmaier, S. et al., “Widening the Remote Visualization Bottleneck”, Proceedings of the 3rd International Symposium on Image and Signal Processing and Analysis 2003, Sep. 18-20, 2003, Rome, Italy, vol. 1 No. 18, pp. 174-179.
Miller, J. R., “The Remote Application Controller”, Computer and Graphics, Pergamon Press Ltd, Oxford, Great Britain, vol. 27 No. 4, Aug. 2003, pp. 605-615.
http://www.informit.com/articles/article.aspx?p=339936, Oct. 22, 2004.
Bhatt, Ajay V., “Creating a PCI Interconnect”, Intel Corporation, Jan. 1, 2002, pp. 1-8.
U.S. Appl. No. 60/523,084, filed Nov. 19, 2003, pp. 1-5.
Rupley, Sebastian, “Intel Developer Forum to Spotlight PCI Express,” PC Magazine, Sep. 6, 2002, p. 1.
Related Publications (1)
Number Date Country
20090164908 A1 Jun 2009 US
Continuation in Parts (1)
Number Date Country
Parent 11150620 Jun 2005 US
Child 12395685 US