Embodiments of the present invention generally relate to computer system architectures. More particularly, embodiments of the present invention relate to using a graphics system to enable a multi-user computer system.
Technological advances have significantly improved the performance of computer systems. However, there is a limit to how much performance can be gained from technological advances alone. In the past, the computer system was primarily a productivity tool. Now, it is evolving into both a digital entertainment tool and a productivity tool.
The traditional architecture of the computer system envisions a single user. But as the computer system is configured with greater processing power (e.g., by increasing the number of processors), greater storage capacity (e.g., by increasing the size of the hard drive), greater graphics rendering capacity (e.g., by increasing the processing power of the graphics processing unit and increasing its graphics memory), and greater network communication capacity (e.g., by increasing network communication bandwidth), the capabilities of the computer system begin to exceed the needs of the traditional single user. As these capabilities continue to expand, the computer system purchased by a typical single user is over-powered relative to that user's needs and, consequently, is under-utilized.
Embodiments of the present invention provide a method and system for using a graphics system to enable a multi-user computer system. Embodiments of the present invention improve computer resource utilization when compared to the traditional single-user computer system architecture. By taking advantage of virtualized resources, a multi-user computer system can offer fail-safe, user-specific, pay-as-you-go allocation of computing resources on the fly, adapting to the changing demands of a variety of subscribing users with varying computing resource needs and varying computer platforms.
In one embodiment, the present invention is implemented as a multi-user computer system. The multi-user computer system includes a central processing unit (CPU), a disk drive configured to support a plurality of users, and a graphics system. The graphics system includes at least one GPU (graphics processing unit) for processing a compute workload. A multi-user manager component is included for allocating compute workload capability to each one of a plurality of users. Each user uses an access terminal to interact with and access the multi-user computer system. In one embodiment, the compute workload can be physics calculations, transcoding applications, or the like. Alternatively, the compute workload can be 3D computer graphics rendering (e.g., for a real-time 3D application).
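By way of illustration only, the following Python sketch models the arrangement just described: a graphics system whose multi-user manager grants each user a share of the GPUs' compute workload capability. All class names, the unit of "capability," and the numeric values are hypothetical and do not appear in the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class GPU:
    # Hypothetical model of a GPU with a nominal compute capability (arbitrary units).
    name: str
    capability: int

@dataclass
class GraphicsSystem:
    gpus: list = field(default_factory=list)
    allocations: dict = field(default_factory=dict)  # user -> units granted

    def total_capability(self):
        return sum(g.capability for g in self.gpus)

    def allocate(self, user, units):
        # Role of the multi-user manager component: grant a share of the compute
        # workload capability (physics, transcoding, 3D rendering, etc.) to one user.
        free = self.total_capability() - sum(self.allocations.values())
        if units > free:
            raise RuntimeError("insufficient compute workload capability")
        self.allocations[user] = self.allocations.get(user, 0) + units

# Example: two GPUs shared among users at their access terminals.
system = GraphicsSystem(gpus=[GPU("GPU1", 100), GPU("GPU2", 100)])
system.allocate("USER1", 30)   # light workload
system.allocate("USER2", 150)  # graphics/video-heavy workload
print(system.allocations)      # {'USER1': 30, 'USER2': 150}
```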
In one embodiment, the access terminal is a display and an input device. User I/O is provided via interaction with the display and the input device (e.g., keyboard, mouse, etc.). In another embodiment, the access terminal is a thin client device having a certain degree of local processor functionality. For example, the thin client device would incorporate a local processor and memory. The local processor would be able to assist functionality provided by the CPU and GPU(s) of the multi-user computer system (e.g., locally decompress data that was compressed prior to transmission, and the like).
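As a hypothetical illustration of such client-side assistance, the sketch below assumes (purely for the example) that the host compresses display data with zlib before transmission and that the thin client's local processor decompresses it before display.

```python
import zlib

def host_send(frame):
    # Hypothetical host side: compress display data before transmission
    # to reduce network bandwidth.
    return zlib.compress(frame)

def thin_client_receive(payload):
    # Hypothetical thin client: use its local processor and memory to
    # decompress the data before presenting it on the display.
    return zlib.decompress(payload)

frame = b"\x00\xff" * 1024          # stand-in for pixel data
assert thin_client_receive(host_send(frame)) == frame
```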
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of embodiments of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be recognized by one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the present invention.
Notation and Nomenclature:
Some portions of the detailed descriptions, which follow, are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “processing” or “accessing” or “executing” or “storing” or “rendering” or the like, refer to the action and processes of a computer system (e.g., multi-user computer system 100 of
The other end of the spectrum is illustrated by USER2, who is provided a virtual computer environment with the compute resources to handle high resource demands such as graphics and video applications. Rather than being forced to purchase an expensive single-user computer, USER2 is also able to take advantage of a virtual computer environment and purchase all the compute resources needed. An added advantage for USER2 is avoiding computer system obsolescence. As USER2's future compute resource needs grow, USER2 need only purchase additional resources, rather than go through the expense of purchasing an entire computer system.
Meanwhile, USER3 illustrates the capability of the multi-user computer system to allocate resources on the fly. USER3's compute resource needs fall on a continuum, from the low resource needs of an email application to the medium resource needs of web browsing with video. USER3 is allocated the compute resources needed, with the ability to be reallocated more resources as USER3's demands change. This allows the multi-user computer system to remain efficient, providing the resources that are actually needed or requested, rather than providing a set amount of resources that may go largely unused by a majority of users. Further, by efficiently allocating the compute resources, the system is capable of handling requests for large amounts of compute resources and of handling the unexpected loss of resources due to equipment failure, allowing the system to be fail-safe and the reallocation of resources to be invisible to the users. In other words, the reallocation of resources occurs without the user experiencing any disruption of service or loss of personalization of the computing environment.
USERN illustrates the capability of the multi-user computer system to allocate resources on the fly with the added functionality of a “pay as you go” option, where additional resources above and beyond the normal allocation are made available to the user for an additional fee. In this example, USERN has a user account that allows the allocation of compute resources for low- to high-demand applications, as seen with USER3, with the added option of purchasing additional compute resources as needed. A “pay as you go” user has the benefit of additional resources when needed and only pays for them when the additional resources are requested. Rather than having to purchase an expensive user package with more compute resources than necessary on the off chance that they might be needed in the future, a user can purchase compute resources for today's needs and purchase more as they become necessary. This advantage becomes even more apparent when compared to the single-user computer system. Rather than having to purchase a new computer to meet evolving computing demands, or purchase a computer up front with more resources than necessary, USERN need only pay the fee to purchase the additional resources when needed.
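A minimal sketch of this on-the-fly, pay-as-you-go reallocation follows; the quota figures, the per-unit price, and the function name are invented for the example and are not part of the described embodiments.

```python
BASE_QUOTA = {"USER3": 40, "USERN": 40}   # normal allocation (arbitrary units)
PAY_AS_YOU_GO = {"USERN"}                 # users who opted into buying extra capacity
PRICE_PER_UNIT = 0.05                     # hypothetical fee per extra unit

def reallocate(user, requested, allocations, charges):
    # Grant up to the user's normal quota for free; a pay-as-you-go user may
    # exceed it for an additional fee, billed only when the extra is requested.
    quota = BASE_QUOTA.get(user, 0)
    extra = max(0, requested - quota)
    if extra and user not in PAY_AS_YOU_GO:
        raise RuntimeError(user + " exceeds quota and has no pay-as-you-go option")
    allocations[user] = requested
    charges[user] = charges.get(user, 0.0) + extra * PRICE_PER_UNIT

allocations, charges = {}, {}
reallocate("USER3", 25, allocations, charges)   # email/browsing: within normal quota
reallocate("USERN", 70, allocations, charges)   # burst demand: 30 extra units billed
print(allocations, charges)
```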
As depicted in the
Each user is connected via a wired or wireless network connection (e.g., network connection 1, network connection 2, network connection 3 . . . network connection N) to the multi-user computer system 100. Possible network connections include USB, 802.11, LAN, WPAN (such as Bluetooth), GSM (cellular telephone networks), and others. In practice, the users share the resources of the multi-user computer system 100. The multi-user computer system 100 provides computational processing, information storage, network communication, and graphical/pixel processing services to the users. In one embodiment, these services are provided to each individual user through the use of a virtual environment. The server/client environment, where a single full-service computer can be accessed by multiple users, is an example of a virtual environment. Virtualization is the implementation of a virtual machine environment such that, while there is only one server providing all primary processing and data storage, each user on the network is provided a complete simulation of the underlying hardware. These simulations may be referred to as virtual PCs, virtual machines, or virtual computers, for example. This allows the resources of the single computer system (the server) to be shared among the plurality of users.
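To make the virtualization idea concrete, the following sketch (a drastic simplification; an actual virtual machine environment is far more involved) hands each user an object that behaves like a private computer while forwarding every request to the one physical server. All names and figures are hypothetical.

```python
class PhysicalServer:
    # The single full-service computer (the server) shared by all users.
    def __init__(self, cpu_cores, storage_gb):
        self.cpu_cores = cpu_cores
        self.storage_gb = storage_gb
        self.files = {}

    def store(self, owner, name, data):
        self.files[owner + "/" + name] = data


class VirtualComputer:
    # A per-user simulation of the underlying hardware: every request is
    # serviced by the shared server, but each user sees a private machine.
    def __init__(self, user, server):
        self.user = user
        self.server = server

    def save_file(self, name, data):
        self.server.store(self.user, name, data)


server = PhysicalServer(cpu_cores=16, storage_gb=2000)
sessions = {u: VirtualComputer(u, server) for u in ("USER1", "USER2", "USER3")}
sessions["USER2"].save_file("report.txt", b"draft")
print(list(server.files))   # ['USER2/report.txt']
```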
As described above, the multi-user computer system 100 includes the central processing unit 10, the RAM (main memory) 20, the HDD (hard disk drive) 30, and the hub 40. Further, the multi-user computer system 100 has a graphics system 50. In one embodiment, the graphics system 50 includes a plurality of GPUs 52 (GPU1, GPU2, GPU3 . . . GPUN) and a multi-user manager 54. The graphics system 50 may also have other combinations of GPUs 52. Further, additional GPUs 52 may be added to the graphics system 50 in any one of numerous ways. For example, a module (e.g., a graphics card) having a single GPU or multiple GPUs may be coupled to the graphics system 50. Further, the graphics system 50 may include more than one graphics card, each with a single GPU or multiple GPUs. Each GPU 52 may have one or multiple cores. The GPUs 52 are in communication with the graphics system 50 and the multi-user computer system 100 through well-known interfaces such as PCI, PCI-Express, USB, Ethernet, and 802.11. As later illustrated in
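One hypothetical way to model that composition in code is shown below: graphics cards, each carrying one or more GPUs and attached over some interface, are flattened into the pool of GPUs the graphics system manages. The field names and capability figures are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class GPU:
    name: str
    cores: int

@dataclass
class GraphicsCard:
    # A module that may carry a single GPU or multiple GPUs and attach over a
    # well-known interface such as PCI, PCI-Express, USB, Ethernet, or 802.11.
    interface: str
    gpus: list = field(default_factory=list)

@dataclass
class GraphicsSystem:
    cards: list = field(default_factory=list)

    def all_gpus(self):
        # The multi-user manager works against the flattened pool of GPUs,
        # regardless of how many cards or interfaces they arrived on.
        return [gpu for card in self.cards for gpu in card.gpus]

graphics_system = GraphicsSystem(cards=[
    GraphicsCard("PCI-Express", [GPU("GPU1", 128), GPU("GPU2", 128)]),
    GraphicsCard("USB", [GPU("GPU3", 64)]),
])
print(len(graphics_system.all_gpus()))   # 3 GPUs available to share
```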
The GPU 52 is a semiconductor device that specializes in processing graphical or pixel data far more rapidly than a typical CPU 10. However, current GPUs 52 may also be utilized for general-purpose processing tasks typically performed by the CPU 10. These general-purpose processing tasks, referred to as a “compute workload” in the present invention, may be performed by the graphics system 50. This compute workload may include software applications such as word processing, virus detection, physics modeling, 3D graphics rendering, and transcoding. Other applications are also possible. Allocating the compute workload capability of the GPUs 52 among the plurality of users gives each user access to a virtual computer with all the computing capability they currently require.
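As a hypothetical illustration of sizing each user's share to the compute workloads actually being run, the sketch below assigns invented demand figures to the applications named above; neither the figures nor the provisioning policy come from the specification.

```python
# Nominal per-task demand (arbitrary units) for compute workloads named above.
DEMAND = {
    "word processing": 1,
    "virus detection": 5,
    "physics modeling": 40,
    "3d graphics rendering": 60,
    "transcoding": 30,
}

def provision(user_tasks, total_capability):
    # Size each user's virtual computer to the compute workloads it is
    # currently running, provided the graphics system can cover the total.
    wanted = {user: sum(DEMAND[t] for t in tasks) for user, tasks in user_tasks.items()}
    if sum(wanted.values()) > total_capability:
        raise RuntimeError("requested compute workload exceeds graphics system capability")
    return wanted

print(provision({"USER1": ["word processing"],
                 "USER2": ["3d graphics rendering", "transcoding"]},
                total_capability=200))   # {'USER1': 1, 'USER2': 90}
```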
In the present invention, the graphics system 50 is configured to enable the scaling of processing power in accordance with the number of new users, the amount of desired peak compute workload, and the like. For example, the above-mentioned virtualization is implemented such that the total number of GPUs 52 may be increased to support new users and changing user compute workload demands. The total compute workload capability of the graphics system 50 can be increased, for example, by adding additional GPUs 52. This increased capability may be used to create additional virtual computers for additional users, or to provide increased capability for the virtual computers already in existence.
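A small sketch of that scaling behavior follows; the capability units and the assumed size of a virtual computer are arbitrary numbers chosen for the example.

```python
def total_capability(gpu_capabilities):
    # Total compute workload capability of the graphics system.
    return sum(gpu_capabilities)

UNITS_PER_VIRTUAL_COMPUTER = 50      # assumed size of one virtual computer

gpus = [100, 100]                    # existing GPUs (arbitrary capability units)
before = total_capability(gpus) // UNITS_PER_VIRTUAL_COMPUTER

gpus.append(100)                     # add another GPU (e.g., plug in a graphics card)
after = total_capability(gpus) // UNITS_PER_VIRTUAL_COMPUTER

# The added capability can host virtual computers for new users, or instead be
# folded back into the allocations of the virtual computers already in existence.
print(before, "->", after)           # 4 -> 6
```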
Referring again to
In one embodiment of the present invention as depicted in
In another embodiment of the present invention as depicted in
At Block 410, the graphics system 50 of the multi-user computer system 100 is requested to support an additional user. In particular, the multi-user manager 54 receives the request to support the additional user. Moreover, the multi-user manager 54 decides whether to accept the request.
Continuing, at Block 420, compute workload capability is allocated for the additional user if the graphics system 50 accepts the request. Allocation of compute workload capability is depicted in
In an embodiment of the present invention, the request may include proof of an additional license for the additional user to use the multi-user computer system 100. In another embodiment, the request may include an authorization key obtained after payment of a fee. In yet another embodiment, the request may include payment for supporting the additional user.
Further, at Block 430, the multi-user computer system 100 is configured to support the additional user. This configuration includes creating a section for the additional user on the HDD 30 of the multi-user computer system 100.
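Taken together, Blocks 410 through 430 can be sketched as the hypothetical routine below. The acceptance policy, the capacity check, and all names are assumptions made for illustration and are not taken from the figures.

```python
def handle_new_user_request(user, credentials, allocations, free_units,
                            hdd_sections, units_needed=50):
    # Block 410: the multi-user manager receives the request and decides whether
    # to accept it (here: some proof of license/key/payment plus spare capacity).
    has_proof = any(k in credentials for k in ("license", "authorization_key", "payment"))
    if not has_proof or free_units < units_needed:
        return False
    # Block 420: allocate compute workload capability for the additional user.
    allocations[user] = units_needed
    # Block 430: configure the system, e.g., create the user's section on the hard disk.
    hdd_sections.add("/users/" + user)
    return True

allocations, sections = {}, set()
accepted = handle_new_user_request("USER4", {"authorization_key": "abc123"},
                                   allocations, free_units=120, hdd_sections=sections)
print(accepted, allocations, sections)   # True {'USER4': 50} {'/users/USER4'}
```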
In sum, a seamless virtual computer experience for multiple users is created at a reduced cost and without a cumbersome process. Each user is provided a complete virtual environment provided by the multi-user computer system 100 and its graphics system 50.
Referring to
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
This application is a continuation-in-part of, and incorporates by reference, the following U.S. patent application: USING A GRAPHICS SYSTEM TO ENABLE A MULTI-USER COMPUTER SYSTEM, by Diamond, M., filed on Jun. 10, 2005, Ser. No. 11/150,620.
Publication Data:

Number | Date | Country
--- | --- | ---
20090164908 A1 | Jun 2009 | US

Related U.S. Application Data:

Relation | Number | Date | Country
--- | --- | --- | ---
Parent | 11150620 | Jun 2005 | US
Child | 12395685 | | US