Networks have been used in conjunction with electronic devices for some time to facilitate the exchange of data and the sharing of resources among a plurality of electronic devices communicatively coupled to a common exchange medium. In many systems, the use of a network may enable the efficient transfer of data between the electronic devices. Additionally, a network may make possible the sharing of peripheral devices among more than one of the electronic devices in the network.
Networks may be used to allow one or more host computing devices access to a plurality of shared, physically disaggregated peripheral devices. Particularly, in some systems the host computing devices and the shared peripheral devices may all be communicatively coupled to an intermediary bridge device, which allocates certain of the shared peripheral devices to one or more of the computing devices. Once the allocation process is complete, a data connection between selected shared peripheral devices and a corresponding computing device may be created by the bridge device.
In some cases, the bridge device may create a connection such that the selected peripheral devices interact with the host computing devices as if the shared peripheral devices resided physically on the host computing device. In such cases, a peripheral device may only be visible to a host computing device if the connection to the host computing device has been established prior to the host computing device executing a hardware enumeration process.
The accompanying drawings illustrate various embodiments of the principles described herein and are a part of the specification. The illustrated embodiments are merely examples and do not limit the scope of the claims.
Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
As described above, in some cases, a bridge device in a network may create a connection between one or more selected peripheral devices and a host computing device such that the selected peripheral devices interact with the host computing devices as if the shared peripheral devices resided physically on the host computing device. In such cases, the peripheral devices may only be visible to the host computing device if the connection between the peripheral device and the host computing device has been established prior to the host computing device executing a hardware enumeration process.
Unfortunately, in many such systems, if a host computing device were to enumerate its hardware before connections to the shared peripheral devices were established by the bridge device, the host computing device may not detect the shared peripheral devices allocated to it. Under such circumstances, the host computing device may remain oblivious to the availability of the allocated shared peripheral device or devices, thus potentially rendering the shared peripheral devices useless to that particular host computing device.
Moreover, a bridge device in such systems may be unable to allocate shared peripheral devices to a specific host computing device until after the shared peripheral devices have been successfully booted and detected by the bridge device. However, the shared peripheral devices may require varying amounts of time to boot. Furthermore, the bridge device may require time to perform the allocation process in which access to certain of the shared peripheral devices is granted to the host device. In systems where the host computing device, the bridge device, and the shared peripheral devices are powered on substantially simultaneously, the host computing device may perform a hardware enumeration process as part of a boot process before the bridge device is able to allocate access to one or more shared peripheral devices to the host computing device.
In some such host computing devices, the hardware enumeration process may occur primarily as the host computing device is booted. Thus, if the host computing device performs the hardware enumeration process before access to the shared peripheral devices is successfully allocated by the bridge to the host computing device, the host computing device may effectively be prevented from utilizing the shared peripheral devices provided.
To address these and other issues, the present specification discloses methods and systems that provide for a host computing device to execute a hardware enumeration process only after a resource allocation process has been executed in a bridge device. Using the methods and systems of the present disclosure, a host computing device may be enabled to discover shared peripheral devices allocated by the bridge device during the hardware enumeration process, thus enabling the host computing device to access the shared peripheral devices.
As used in the present specification and in the appended claims, the term “host computing device” refers to a computing device configured to interact with and/or control at least one peripheral device. Typically, the host computing device interacts with or controls the at least one peripheral device through a bridge device.
As used in the present specification and in the appended claims, the term “hardware enumeration process” refers to a series of instructions executed by a computing device in which hardware devices connected to the computing device are discovered and identified. Drivers or other code needed to interface with the hardware devices are also identified and loaded during the hardware enumeration process.
As used in the present specification and in the appended claims, the term “peripheral device” refers to an electronic device, separate and distinct from a central processing unit and physical memory of a host computing device, which is configured to provide the host computing device with one or more resources. For example, a peripheral device may provide input data to, or accept output data from, the host computing device.
As used in the present specification and in the appended claims, the term “bridge device” or “bridge” refers to a network device configured to communicatively couple at least one peripheral device to at least one host computing device. The communicative coupling created by the bridge device between peripheral device(s) and a corresponding host computing device may include one or more electrically conductive paths and/or a coupling implemented by software in which data is forwarded to a recipient through the bridge device.
As used in the present specification and in the appended claims, the term “resource allocation process” refers to a series of instructions executed by a bridge device that communicatively couples at least one shared peripheral device with at least one host computing device. As indicated, this communicative coupling may be implemented in hardware, firmware, or software.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present systems and methods. It will be apparent, however, to one skilled in the art that the present systems and methods may be practiced without these specific details. Reference in the specification to “an embodiment,” “an example” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least that one embodiment, but not necessarily in other embodiments. The various instances of the phrase “in one embodiment” or similar phrases in various places in the specification are not necessarily all referring to the same embodiment.
The principles disclosed herein will now be discussed with respect to illustrative systems and methods.
Illustrative Systems
Referring now to
The host computing devices (101) may communicate with the bridge device (105) through network connections (107-1, 107-2). The host computing devices (101) may be connected directly to the bridge (105) with shared or individual connections. Alternatively, the host computing devices (101) may be connected to the bridge through a network. The network connections (107-1, 107-2) may be wired or wireless connections.
Similarly, each of the shared peripheral devices (103) may communicate with the bridge device (105) through network connections (109-1 through 109-N). The peripheral devices (103) may be connected directly to the bridge (105) with shared or individual connections. Alternatively, the peripheral devices (103) may be connected to the bridge through a network. The network connections (109-1 through 109-N) may be wired or wireless connections.
Each of the host computing devices (101) may include any computer hardware and/or instructions (e.g., software programs), or combination of software and hardware, configured to perform the processes for which they are intended. In particular, it should be understood that the host computing devices (101) may include any of a number of well-known computing devices, including, but not limited to, desktop computers, laptop computers, servers, personal digital assistants, and the like. These host computing devices (101) may employ any of a number of well-known computer operating systems, including, but not limited to, known versions and/or varieties of Microsoft™ Windows™, UNIX, Macintosh™, and Linux operating system software.
The peripheral devices (103) may be configured to provide data to, or accept data from, at least one of the host computing devices (101). Examples of suitable peripheral devices (103) that may be used in conjunction with the systems and methods of the present specification include, but are not limited to, printers, plotters, scanners, multi-function peripherals, projectors, multimedia devices, computing devices, storage media, disk arrays, network devices, pointing devices, and combinations thereof.
Although the peripheral devices (103) may be configured to directly interact with one or more of the host computing devices (101), as shown in the present example, the host computing devices (101) may not be directly coupled to any of the peripheral devices (103). In some embodiments, this may be due to the fact that the peripheral devices (103) are configured to be shared among a plurality of host computing devices (101). For example, both a first host computing device (101-1) and a second host computing device (101-2) may be configured to communicate with a single peripheral device (e.g., 103-1). Additionally or alternatively, the host computing devices (101) may not be directly coupled to individual peripheral devices (103) in order to simplify or reduce network wiring and/or other clutter associated with creating the connections.
In any event, the host computing devices (101) may be configured to communicate with selected peripheral devices (103) by means of the intermediate bridge device (105). Data from the host computing devices (101) intended for the peripheral devices (103) may be transmitted to the bridge device (105), where the data may then be transmitted to the appropriate peripheral device(s) (103). Likewise, data originating from the peripheral devices (103) intended for one or more of the host computing devices (101) may be transmitted to the bridge device (105), where the data may then be transmitted to the appropriate host computing device(s) (101).
The bridge device (105) may be configured to allow each of the host computing devices (101) access to certain of the shared peripheral devices (103). The peripheral devices (103) that each of the host computing devices (101) are permitted to access through the bridge device (105) may be determined by network configurations, user or machine profiles, programs installed on the host computing devices (101) or the bridge (105), and/or other factors.
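The allocation decision described above can be sketched as a simple policy lookup. A minimal sketch follows; the profile structure, identifiers, and function name are illustrative assumptions, not part of the disclosed system, which leaves the data model unspecified.

```python
# Illustrative sketch: a bridge granting each host access to a subset of the
# shared peripherals, based on a configured per-host profile. Only peripherals
# that have booted and been detected by the bridge can be allocated.
# All names and structures here are assumptions for illustration.

def allocate_peripherals(profiles, available_peripherals):
    """Return a mapping of host id -> list of peripheral ids it may access."""
    allocation = {}
    for host_id, permitted in profiles.items():
        # Grant only those permitted peripherals the bridge has detected.
        allocation[host_id] = [p for p in permitted if p in available_peripherals]
    return allocation

profiles = {
    "host-1": ["periph-1", "periph-2"],
    "host-2": ["periph-2", "periph-3"],
}
available = {"periph-1", "periph-2"}  # periph-3 has not finished booting

result = allocate_peripherals(profiles, available)
print(result)  # {'host-1': ['periph-1', 'periph-2'], 'host-2': ['periph-2']}
```

Note that `periph-3` is simply omitted from the allocation until it boots and is detected, which is why the timing of the host's enumeration matters.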
The bridge device (105) is configured to selectively create virtual connections between the host computing devices (101) and the peripheral devices (103). Firmware on the bridge device (105), in connection with the operation of the host computing devices (101), will facilitate the creation of these selective virtual connections.
Referring now to
Many of the functional units described in the present specification have been labeled as “modules” in order to more particularly emphasize their implementation independence. For example, modules may be implemented in software for execution by various types of processors. An identified module may include executable code, for instance one or more physical or logical blocks of computer instructions that may be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may include disparate instructions stored in different locations which, when joined logically together, collectively form the module or module subsystem and achieve the stated purpose for the module. For example, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. In other examples, modules may be implemented entirely in hardware, or in a combination of hardware and software.
In the present example, the virtual connections may be peer-to-peer network connections, and the bridge device (105) may include a plurality of peer-to-peer modules (201-1 to 201-N, collectively “201”) configured to form virtual peer-to-peer network connections (203, 205) among each other. As each of the host computing devices (101,
A switch management module (207) may be present in the bridge device (105) to determine which of the peripheral devices (103,
For example, in the present embodiment, the switch management module (207) may determine that a first host computing device (e.g., 101-1,
Similarly, in the present embodiment, another peer-to-peer module (201-2) is configured to communicate with a second host computing device (e.g., 101-2,
In some embodiments, the peripheral devices (103,
As mentioned above, the switch management module (207) of the bridge device (105) is configured to perform a resource allocation process before communicatively coupling peer-to-peer modules (201) to each other. In the resource allocation process, a series of instructions may be executed by the bridge device (105) that determines which, if any, of the peripheral devices (103,
To successfully perform the resource allocation process, the bridge device (105) needs to detect and communicate with each of the peripheral devices (103,
However, in the event that one of the host computing devices (101,
Referring now to
In the diagram, each block represents a configuration module having one or more configuration tasks that must be completed before progressing to a subsequent configuration module or terminating the configuration process (300). The flow of the configuration process (300) in the present example may be governed by a local core (301) of the host computing device (101,
In the configuration process (300), the local core (301) may initiate the configuration process (Config Request) as part of the boot process using an application-specific integrated circuit (ASIC) interface module (307). The ASIC interface module (307) in the host computing device may include an interface to the bridge device (105).
Concurrently, a resource allocation process (303) in the bridge device (105) may begin, or have already begun. The bridge device (105) will monitor for confirmation of the completion of the resource allocation process (303) as indicated by the arrow from the resource allocation process (303) to the bridge device (105).
The ASIC interface module (307) may prevent flow in the configuration process (300) from being transferred to a subsequent module until after the resource allocation process (303) has been performed by the bridge device (105), thus preventing a hardware enumeration process (313) from being commenced before peripheral devices have been allocated to the host computing device by the bridge device (105). This prevention may be accomplished by the ASIC interface module (307) delaying a configuration completion response to the local core (301) for as long as the bridge device (105) provides an indicator (e.g., RamStop) to the ASIC interface module (307) signaling that the resource allocation process (303) has not been completed.
Once the resource allocation process has been completed, the bridge device (105) may remove the indicator (RamStop) from the ASIC interface module (307). If the ASIC interface module (307) determines that the indicator (RamStop) is no longer present, the ASIC interface module (307) will allow the configuration flow to proceed to the RAM module (309).
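The gating behavior described above can be sketched with a simple synchronization primitive: the host's configuration flow blocks while the bridge asserts the indicator, and proceeds once it is removed. The threading model and all names below are illustrative assumptions, not the disclosed hardware mechanism.

```python
import threading

# Illustrative sketch of the RamStop gating described above. The bridge
# asserts an indicator while resource allocation is in progress; the host's
# interface module withholds the configuration completion response until the
# indicator is removed, so enumeration sees the allocated peripherals.

allocation_done = threading.Event()   # set <=> RamStop indicator removed
allocated = []                        # filled in by the bridge
enumerated = []                       # filled in by the host

def bridge_resource_allocation(peripherals):
    # Resource allocation process (303): detect and allocate peripherals.
    allocated.extend(peripherals)
    allocation_done.set()             # remove the RamStop indicator

def host_configuration_flow():
    # ASIC interface module (307): withhold the configuration completion
    # response while the RamStop indicator is still asserted.
    if not allocation_done.wait(timeout=5.0):
        raise TimeoutError("resource allocation never completed")
    # Hardware enumeration process (313) now sees the allocated peripherals.
    enumerated.extend(allocated)

host = threading.Thread(target=host_configuration_flow)
bridge = threading.Thread(target=bridge_resource_allocation,
                          args=(["periph-1", "periph-2"],))
host.start(); bridge.start()
host.join(); bridge.join()
print(enumerated)  # ['periph-1', 'periph-2']
```

Because the host thread cannot pass the event until the bridge has finished allocating, the enumeration step deterministically observes both peripherals, regardless of which thread is scheduled first.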
After the necessary configuration tasks have been completed in the RAM module (309), configuration flow may be transferred to the local configuration module (311). As the resource allocation process (303) has been performed prior to flow being transferred to the local configuration module (311), all of the peripheral devices allocated to the host computing device by the bridge device (105) should be available for detection by the host computing device during the hardware enumeration process (313), in addition to local peripheral devices that may already be connected to the host computing device.
Flow may then be transferred to a completion RAM module (315), back to an ASIC interface module (317), and back to the local core (301). The hardware enumeration process (313) may then be executed following the completion of the resource allocation process (303) of the bridge device (105).
Some of the beneficial effects of this flow can be understood with reference to
Referring now to
Referring now to
Illustrative Methods
Referring now to
In the method (500), a system is provided (step 501) having a bridge device connected to at least one host computing device and at least one peripheral device. The bridge device may be configured to communicatively couple the host device to the peripheral device.
As described herein, in various embodiments, the bridge device may be configured to provide a virtual peer-to-peer connection between a host device and a peripheral device. Additionally, in some embodiments, the bridge device may be configured to provide the host device with access to a plurality of peripheral devices.
The devices in the system may then be powered on (step 503), and begin booting. A resource allocation process may be initiated (step 505) in the bridge device after the peripheral device has booted.
If it is determined (decision 507) that the resource allocation process by the bridge device is complete, a hardware enumeration process by the host computing device may be initiated (step 509).
If it is determined (decision 507) that the resource allocation process by the bridge device is not complete, the host computing device may be prevented (step 511) from initiating the hardware enumeration process until it is determined (decision 507) that the resource allocation process by the bridge device is complete.
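The decision loop of decision 507 and steps 509 and 511 can be sketched as a simple poll: the host repeatedly checks whether the bridge has finished allocation and is held back from enumeration until it has. The polling interface and all names are assumptions for illustration; the specification does not prescribe a polling implementation.

```python
import time

# Sketch of decision 507 / steps 509 and 511: the host polls the bridge and
# holds off hardware enumeration until the resource allocation process
# reports complete. The callback-based interface is an assumption.

def wait_then_enumerate(bridge_allocation_complete, poll_interval=0.01,
                        max_polls=1000):
    for _ in range(max_polls):
        if bridge_allocation_complete():   # decision 507
            return "enumerate"             # step 509: enumeration may begin
        time.sleep(poll_interval)          # step 511: enumeration prevented
    raise TimeoutError("resource allocation never completed")

# Example: allocation completes on the third poll.
state = {"polls": 0}
def allocation_complete():
    state["polls"] += 1
    return state["polls"] >= 3

result = wait_then_enumerate(allocation_complete)
print(result)  # enumerate
```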
In some embodiments, the hardware enumeration process may be prevented by the bridge device during a configuration process performed by the host computing device as described above in connection with
Referring now to
In this method (600), a system may be provided (step 601) having a bridge device connected to a host computing device and at least one peripheral device. Similar to the method (500,
The devices in the system may then be powered on (step 603), and begin booting. A resource allocation process may be initiated (step 605) in the bridge device after the peripheral device(s) have been booted.
A configuration request may then be received (step 607) in the bridge device from the host computing device, and a configuration completion response provided (step 611) to the host computing device from the bridge device only after the bridge device has completed (step 609) the resource allocation process. The host computing device may then execute (step 613) the hardware enumeration process.
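From the bridge's perspective, the sequence of method (600) amounts to deferring the configuration completion response until allocation has finished. A minimal sketch follows; the queue-based messaging and all names are assumptions for illustration only.

```python
import queue
import threading

# Sketch of the bridge side of method (600): a configuration request is
# received (step 607), but the completion response is sent (step 611) only
# after the resource allocation process has completed (step 609).

def bridge_task(requests, responses, allocation_complete):
    req = requests.get()                      # step 607: receive config request
    allocation_complete.wait()                # step 609: allocation must finish
    responses.put(("config_complete", req))   # step 611: completion response

requests, responses = queue.Queue(), queue.Queue()
allocation_complete = threading.Event()

t = threading.Thread(target=bridge_task,
                     args=(requests, responses, allocation_complete))
t.start()
requests.put("host-1")        # the host's configuration request arrives
allocation_complete.set()     # the resource allocation process finishes
t.join()
result = responses.get()
print(result)  # ('config_complete', 'host-1')
```

The host, blocked on this response, cannot begin hardware enumeration (step 613) early, which mirrors the ordering guarantee the method is designed to provide.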
The preceding description has been presented only to illustrate and describe embodiments and examples of the principles described. This description is not intended to be exhaustive or to limit these principles to any precise form disclosed. Many modifications and variations are possible in light of the above teaching.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US2008/054201 | 2/18/2008 | WO | 00 | 8/12/2010 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2009/105090 | 8/27/2009 | WO | A |
Number | Date | Country | |
---|---|---|---|
20100325332 A1 | Dec 2010 | US |