Method for hot swapping of network components

Information

  • Patent Grant
  • 6324608
  • Patent Number
    6,324,608
  • Date Filed
    Wednesday, October 1, 1997
  • Date Issued
    Tuesday, November 27, 2001
Abstract
Methods of removing and replacing data processing circuitry are provided comprising removing a network interface module from the computer without powering down the computer and removing an interface card from the network interface module. The further acts of replacing the interface card into the network interface module and replacing the network interface module into the computer without powering down the network computer are also performed in accordance with this method.
Description




RELATED APPLICATIONS




The subject matter of U.S. patent application entitled “FAULT TOLERANT COMPUTER SYSTEM”, filed on Oct. 1, 1997, application Ser. No. 08/942,194, is related to this application.




APPENDICES




Appendix A, which forms a part of this disclosure, is a list of commonly owned copending U.S. patent applications. Each one of the applications listed in Appendix A is hereby incorporated herein in its entirety by reference thereto.




COPYRIGHT RIGHTS




A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.




BACKGROUND OF THE INVENTION




Network servers and the accompanying local area networks (LANs) have expanded the power and increased the productivity of the work force. It was just a few years ago that every work station had a standalone personal computer incapable of communicating with any other computers in the office. Data had to be carried from person to person by diskette. Applications had to be purchased for each standalone personal computer at great expense. Capital-intensive hardware such as printers was duplicated for each standalone personal computer. Security and backing up the data were immensely difficult without centralization.




Network servers and their LANs addressed many of these issues. Network servers allow for resource sharing, such as the sharing of equipment, applications, data, and the means for handling data. Centralized backup and security were seen as definite advantages. Furthermore, networks offered new services such as electronic mail. However, it soon became clear that the network servers could have their disadvantages as well.




Centralization, hailed as a solution, developed its own problems. A predicament that might shut down a single standalone personal computer would, in a centralized network, shut down all the networked work stations. Small difficulties are easily magnified with centralization, as is the case with the failure of a network interface card (NIC), a common dilemma. A NIC may be a card configured for Ethernet, Token-Ring, or another LAN standard, to name but a few. These cards occasionally fail, requiring examination, repair, or even replacement. Unfortunately, the entire network server has to be powered down in order to remove, replace or examine a NIC. Since it is not uncommon for modern network servers to have sixteen or more NICs, the frequency of the problem compounds along with the consequences. When the network server is down, none of the workstations in the office network system will be able to access the centralized data and centralized applications. Moreover, even if only the data or only the application is centralized, a work station will suffer decreased performance.




Frequent down times can be extremely expensive in many ways. When the network server is down, worker productivity comes to a standstill. There is no sharing of data, applications, or equipment such as spreadsheets, word processors, and printers. Bills cannot go out and orders cannot be entered. Sales and customer service representatives are unable to obtain product information or pull up invoices. Customers browsing or hoping to browse through a network server supported commercial web page are abruptly cut off or are unable to access the web pages. Such frustrations may manifest themselves in the permanent loss of customers, or at the least, in the lowering of consumer opinion with regard to a vendor, a vendor's product, or a vendor's service. Certainly, down time for a vendor's network server will reflect badly upon the vendor's reliability. Furthermore, the vendor will have to pay for more service calls. Rebooting a network server, after all, does require a certain amount of expertise. Overall, whenever the network server has to shut down, it costs the owner both time and money, and each server shutdown may have ramifications far into the future. The magnitude of this problem is evidenced by the great cost that owners of network servers are willing to absorb in order to avoid down time through the purchase of uninterruptible power supplies, surge protectors, and redundant hard drives.




What is needed to address these problems is an apparatus that can localize and isolate the problem module from the rest of the network server and allow for the removal and replacement of the problem module without powering down the network server.




SUMMARY OF THE INVENTION




The present invention includes methods of removing and replacing data processing circuitry. In one embodiment, the method comprises changing an interface card in a computer comprising removing a network interface module from the computer without powering down the computer and removing an interface card from the network interface module. The further acts of replacing the interface card into the network interface module and replacing the network interface module into the computer without powering down the network computer are also performed in accordance with this method.




Methods of making hot swappable network servers are also provided. For example, one embodiment comprises a method of electrically coupling a central processing unit of a network server to a plurality of network interface modules comprising the acts of routing an I/O bus having a first format from the central processing unit to primary sides of a plurality of bus adaptor chips and routing an I/O bus of the same first format from a secondary side of the bus adaptor chips to respective ones of the network interface modules.











BRIEF DESCRIPTION OF THE DRAWINGS





FIG. 1 shows one embodiment of a network server in accordance with the invention including a fault tolerant computer system mounted on a rack.

FIG. 2 is a block diagram illustrating certain components and subsystems of the fault tolerant computer system shown in FIG. 1.

FIG. 3A shows the chassis with network interface modules and power modules.

FIG. 3B is an exploded view which shows the chassis and the interconnection assembly module.

FIG. 3C is an illustration of the interconnection assembly module of FIG. 3B.

FIG. 4 shows a front view of an embodiment of a network server in a chassis mounted on a rack.

FIG. 5A is a view showing the front of the backplane printed circuit board of an interconnection assembly module in the network server.

FIG. 5B is a view showing the back of the backplane printed circuit board of the interconnection assembly module in the network server.

FIG. 6 is an exploded view which shows the elements of one embodiment of a network interface module of the network server.











DETAILED DESCRIPTION OF THE INVENTION




Embodiments of the present invention will now be described with reference to the accompanying Figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is intended to be interpreted in its broadest reasonable manner, even though it is being utilized in conjunction with a detailed description of certain specific embodiments of the present invention. This is further emphasized below with respect to some particular terms used herein. Any terminology intended to be interpreted by the reader in any restricted manner will be overtly and specifically defined as such in this specification.





FIG. 1 shows one embodiment of a network server 100. It will be appreciated that a network server 100 which incorporates the present invention may take many alternative configurations, and may include many optional components currently used by those in the art. A specific example of one such configuration is described in conjunction with FIG. 1. The operation of those portions of the server 100 which are conventional is not described in detail.




In the server of FIG. 1, a cabinet 101 houses a rack 102, on which are mounted several data processing, storage, and display components. The server 100 may include, for example, a display monitor 173A resting on a monitor shelf 173B mounted on the rack 102, as well as a retractable keyboard 174. Also included are a variable number of data storage devices 106, which may be removably mounted onto shelves 172 of the rack 102. One embodiment as shown in FIG. 1 has twenty data storage modules 106 removably mounted individually on four shelves 172 of the rack 102, with five data storage modules 106 per shelf. A data storage module may comprise magnetic, optical, or any other type of data storage media. In the embodiment illustrated in FIG. 1, one data storage module is a CD-ROM module 108.




In advantageous embodiments described in detail with reference to FIGS. 2-6 below, the network server includes a fault tolerant computer system which is mounted in a chassis 170 on the rack 102. To provide previously unavailable ease in maintenance and reliability, the computer system may be constructed in a modular fashion, including a CPU module 103, a plurality of network interface modules 104, and a plurality of power modules 105. Faults in individual modules may be isolated and repaired without disrupting the operation of the remainder of the server 100.




Referring now to FIG. 2, a block diagram illustrating several components and subsystems of the fault tolerant computer system is provided. The fault tolerant computer system may comprise a system board 182, a backplane board 184 which is interconnected with the system board 182, and a plurality of canisters 258, 260, 262, and 264 which interconnect with the backplane board 184. A number ‘n’ of central processing units (CPUs) 200 are connected through a host bus 202 to a memory controller 204, which allows for access to semiconductor memory by the other system components. In one presently preferred embodiment, there are four CPUs 200, each being an Intel Pentium® Pro microprocessor. A number of bridges 206, 208 and 210 connect the host bus to three additional bus systems 212, 214, and 216. The bus systems 212, 214 and 216, referred to as PC buses, may be any standards-based bus system such as PCI, ISA, EISA and Microchannel. In one embodiment of the invention, the bus systems 212, 214, 216 are PCI. In another embodiment of the invention a proprietary bus is used.




An ISA Bridge 218 is connected to the bus system 212 to support legacy devices such as a keyboard, one or more floppy disk drives and a mouse. A network of microcontrollers 225 is also interfaced to the ISA bus 226 to monitor and diagnose the environmental health of the fault tolerant system.
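The patent does not detail what the microcontroller network measures beyond the environmental health of the system. Purely as an illustration of that monitoring role, the C sketch below samples a few invented readings and flags a fault; the sensor names and thresholds are assumptions made for the example, not details taken from the patent.

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented environmental readings a monitoring microcontroller might sample. */
    struct env_report {
        double temperature_c;   /* chassis temperature         */
        double fan_rpm;         /* cooling fan speed           */
        double supply_volts;    /* power module output voltage */
    };

    /* Stand-in for whatever firmware actually reads the sensors. */
    static struct env_report sample_environment(void)
    {
        struct env_report r = { 38.5, 4200.0, 5.02 };
        return r;
    }

    int main(void)
    {
        struct env_report r = sample_environment();
        bool healthy = r.temperature_c < 60.0 &&
                       r.fan_rpm > 1000.0 &&
                       r.supply_volts > 4.75 && r.supply_volts < 5.25;
        printf("environmental health: %s\n", healthy ? "nominal" : "fault detected");
        return 0;
    }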




The two PC buses 214 and 216 contain bridges 242, 244, 246 and 248 to PC bus systems 250, 252, 254, and 256. As with the PC buses 214 and 216, the PC buses 250, 252, 254 and 256 can be designed according to any type of bus architecture including PCI, ISA, EISA, and Microchannel. The PC buses 250, 252, 254 and 256 are connected, respectively, to a canister 258, 260, 262 and 264. These canisters are casings for a detachable bus system and provide multiple slots for adapters. In the illustrated canister, there are four adapter slots. The mechanical design of the canisters is described in more detail below in conjunction with FIG. 6.
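As a rough aid to reading FIG. 2, the C sketch below models the second level of this hierarchy: each of the PC buses 214 and 216 reaching two canisters through bridges, with four adapter slots per canister. Which pair of canisters hangs off which bus is an assumption made for the example, and the data structure itself is invented for illustration rather than taken from the patent.

    #include <stdio.h>

    /* A toy model of the FIG. 2 bus hierarchy: the host bus reaches PC buses
     * 214 and 216, and each of those buses reaches two canisters (258, 260,
     * 262, 264) through a second level of bridges. */
    struct canister {
        int id;            /* reference numeral of the canister         */
        int adapter_slots; /* each canister provides four adapter slots */
    };

    struct pc_bus {
        int             id;           /* reference numeral of the PC bus     */
        struct canister canisters[2]; /* canisters reached through bridges   */
    };

    int main(void)
    {
        struct pc_bus buses[2] = {
            { 214, { { 258, 4 }, { 260, 4 } } },
            { 216, { { 262, 4 }, { 264, 4 } } },
        };

        for (int b = 0; b < 2; b++)
            for (int c = 0; c < 2; c++)
                printf("PC bus %d -> canister %d (%d adapter slots)\n",
                       buses[b].id, buses[b].canisters[c].id,
                       buses[b].canisters[c].adapter_slots);
        return 0;
    }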




The physical arrangement of the components of the fault tolerant computer shown in FIG. 2 is illustrated further in FIGS. 3A, 3B, and 3C. Referring now to FIG. 3A, a chassis 170 is mounted on chassis mounting rails 171 so as to be secured to the rack 102 of FIG. 1. The chassis includes a front 170A, back 170B, sides 170C and 170D, as well as a top 170E and a bottom 170F. Although not shown in FIG. 3A, sets of perforations 177 in such patterns and numbers to provide effective cooling of the internal components of the chassis 170 are also provided in its housing panels.




A central processing unit (CPU) module 103 which may advantageously include the system board 182 of FIG. 2 is removably mounted on a chassis. A plurality of network interface modules 104 are also removably mounted on the chassis 170. The network interface modules 104 may comprise the multiple-slot canisters 258, 260, 262, and 264 of FIG. 2. Two redundant power modules 105 are additionally removably mounted on the chassis 170. The CPU module 103, the network interface modules 104, and the power modules 105, when removably mounted, may have their fronts positioned in the same plane as the chassis front 170A.




In this embodiment, the CPU module 103 is removably mounted on the top chassis shelf 175A. The next chassis shelf 175B below holds two removably mounted network interface modules 104 and one removably mounted power module 105. The remaining chassis shelf 175C also holds two removably mounted network interface modules 104 and one removably mounted power module 105. The network interface modules 104 and the power modules 105 are guided into place with the assistance of guide rails such as guide rail 180.




In one embodiment of the invention, the network interface modules 104 and the power modules 105 are connected to the CPU module 103 through an interconnection assembly module 209 (illustrated in additional detail in FIGS. 3B and 3C) which advantageously includes the backplane board 184 illustrated in FIG. 2. The interconnection assembly module electrically terminates and isolates the rest of the network server 100 from the PC bus local to any given network interface module 104 when that network interface module 104 is removed and replaced without powering down the network server 100 or the CPU module 103. The physical layout of one embodiment of the interconnection assembly module is described in more detail below with reference to FIGS. 5A and 5B.





FIG. 3B illustrates the chassis 170 for the fault tolerant computer system in exploded view. With the interconnection assembly module 209 installed in the rear, the interconnection assembly module 209 may provide a communication path between the CPU module 103 and the network interface modules 104. In this embodiment, the interconnection assembly module 209 is mounted on the chassis back 170B such that it is directly behind and mates with the chassis modules 103, 104 and 105 when they are mounted on the chassis 170.




Thus, with the interconnection assembly module 209 mounted on the chassis 170, the network interface modules 104 can be brought in and out of connection with the network server 100 by engaging and disengaging the network interface module 104 to and from its associated backplane board connector. One embodiment of these connectors is described in additional detail with reference to FIG. 3C below. This task may be performed without having to power down the entire network server 100 or the CPU module 103. The network interface modules 104 are thus hot swappable in that they may be removed and replaced without powering down the entire network server 100 or the CPU module 103.




In FIG. 3C, a specific connector configuration for the interconnection assembly module 209 is illustrated. As is shown in that Figure, four connectors 413, 415, 417, and 419 are provided for coupling to respective connectors of the network interface modules 104. Two connectors 421 are provided for the power modules 105. Another connector 411 is configured to couple with the CPU module 103. The process of interconnecting the network interface modules 104 and the CPU module 103 to the interconnection assembly module 209 is facilitated by guiding pegs 412, 414, 416, 418, 420 on the connectors of the interconnection assembly module 209 which fit in corresponding guiding holes in the network interface modules 104 and CPU module 103. The interconnection assembly module 209 also includes two sets of perforations 422 sufficient in number and in such patterns so as to assist with the cooling of each power module 105. This embodiment has two sets of perforations 422 adjacent each power module connector 421.





FIG. 4 is a front view of the network server cabinet 101 housing a partially assembled fault tolerant computer system mounted on a rack 102. In this Figure, the interconnection assembly module 209 is visible through unoccupied module receiving spaces 201, 203, and 205. The CPU module 103 has not yet been mounted on the chassis, as evidenced by the empty CPU module space 203. As is also illustrated in FIG. 1, several network interface modules 104 are present. However, one of the network interface modules remains uninstalled, as evidenced by the empty network interface module space 201. Similarly, one power module 105 is present, but the other power module has yet to be installed on the chassis 170, as evidenced by the empty power module space 205.




In this Figure, the front of the interconnection assembly module 209 mounted on the rear of the chassis is partially in view. FIG. 4 thus illustrates in a front view several of the connectors on the backplane board 184 used for connecting with the various chassis modules when the chassis modules are removably mounted on the chassis 170. As also described above, the CPU module 103 may be removably mounted on the top shelf 175A of the chassis in the empty CPU module space 203. As briefly explained above with reference to FIGS. 3A through 3C, the CPU module 103 has a high density connector which is connected to the high density connector 411 on the back of the backplane printed circuit board 184 when the CPU module is mounted on the top shelf 175A of the chassis 170. The chassis 170 and the guiding peg 412 assist in creating a successful connection between the 360 pin female connector 411 and the 360 pin male connector of the CPU module 103. The guiding peg 412 protrudes from the front of the backplane printed circuit board and slips into a corresponding guiding hole in the CPU module 103 when the CPU module 103 is mounted on the shelf 175A of the chassis 170.




In addition, one of the high density connectors 413 which interconnects the backplane printed circuit board 184 with one of the network interface modules 104 is shown in FIG. 4. In the illustrated embodiments, there are four high density connectors, one connecting to each network interface module 104. The high density connector 413 may be a 180 pin female connector. This 180 pin female connector 413 connects to a 180 pin male connector of the network interface module 104 when the network interface module 104 is removably mounted on the middle shelf 175B of the chassis in the empty network interface module space 201. The chassis, the two guiding pegs (of which only guiding peg 414 is shown in FIG. 4), and the chassis guide rail 180 assist in creating a successful connection between the 180 pin female connector 413 and the 180 pin male connector of the network interface module 104. The two guiding pegs, of which only guiding peg 414 is within view, protrude from the front of the backplane printed circuit board and slip into corresponding guiding holes in the network interface module 104 when the network interface module 104 is removably mounted on a shelf of the chassis.





FIG. 5A is a view showing the front side of the backplane printed circuit board 184. In this embodiment, the backplane printed circuit board 184 is configured to be mounted on the chassis rear directly behind the chassis modules comprising the CPU module 103, the network interface modules 104, and the power modules 105. The backplane printed circuit board 184 may be rectangularly shaped with two rectangular notches 423 and 424 at the top left and right.




As is also shown in FIG. 3C, the backplane printed circuit board 184 also has high density connectors 413, 415, 417 and 419 which connect to corresponding network interface modules 104. Each high density connector has a pair of guiding pegs 414, 416, 418, and 420 which fit into corresponding guiding holes in each network interface module 104. The backplane printed circuit board also mounts a high density connector 411 and a guiding peg 412 for connecting with the CPU module 103 and two connectors 421 for connecting with the power modules 105. The backplane printed circuit board 184 may also include sets of perforations 422 sufficient in number and in such patterns so as to assist with the cooling of each power module 105. The perforations 422 are positioned in the backplane printed circuit board 184 directly behind the power modules 105 when the power modules 105 are removably mounted on the shelves 175B and 175C of the chassis.





FIG. 5B shows the rear side of the backplane printed circuit board 184. The backs of the connectors 421 that connect to the connectors of the power modules 105 are illustrated. Also present on the back of the backplane printed circuit board are the rear portions of the high density connectors 413, 415, 417 and 419 which connect to the network interface modules 104 and which connect to the backplane printed circuitry. As shown in this Figure, each high density connector 413, 415, 417, 419 is attached to an input/output (I/O) bus 341, 344, 349 or 350. In one advantageous embodiment, the I/O bus is a peripheral component interconnect (PCI) bus.




In one embodiment of the present invention, the I/O buses 341, 344, 349, and 350 are isolated by bus adapter chips 331, 332, 333 and 334. These bus adapter chips 331, 332, 333, and 334 provide, among other services, arbitrated access and speed matching along the I/O bus. One possible embodiment uses the DEC 21152 bridge chip as the bus adapter 331, 332, 333 or 334.




Several advantages of the present invention are provided by the bus adapter chips 331 through 334, as they may be configured to provide electrical termination and isolation when the corresponding network interface module 104 has been removed from its shelf on the chassis. Thus, in this embodiment, the bridge 331, 332, 333 or 334 acts as a terminator so that the electrical removal and insertion involved in removing and replacing a network interface module 104 from its shelf of the chassis 170 does not create an electrical disruption on the primary side of the bridge chip 331, 332, 333 or 334. It is the primary side of the bridge chip 331B, 332B, 333B or 334B which ultimately leads to the CPU module 103. Thus, the bridge chip 331, 332, 333 or 334 provides isolation for upstream electrical circuitry on the backplane printed circuit board 184 and ultimately for the CPU module 103 through an arbitration and I/O controller chip 351 or 352. As mentioned above, this embodiment uses a PCI bus for the I/O bus. In such an instance, the bridge chip is a PCI to PCI bridge. The arbitration and I/O controller chip 351 or 352 (not illustrated in FIG. 2 above) determines arbitrated access of the I/O bus and I/O interrupt routing. The I/O bus 343 or 346 then continues from the arbitration and I/O controller chip 351 or 352 to the back side of the high density connector 411 that connects with the corresponding high density connector of the CPU module 103 when the CPU module 103 is mounted on the top shelf 175A of the chassis 170.
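The patent describes this isolation in terms of the bridge's primary and secondary sides but gives no register-level detail. Since the bus adapter in this embodiment is a PCI-to-PCI bridge such as the DEC 21152, one plausible software handle for holding the canister-side bus quiescent is the Secondary Bus Reset bit of the Bridge Control register defined by the PCI-to-PCI bridge architecture. The C sketch below illustrates only that idea; the in-memory configuration-space stub stands in for real configuration cycles, and nothing here is taken from the patent itself.

    #include <stdint.h>
    #include <stdio.h>

    /* Offset and bit of the Bridge Control register in a PCI-to-PCI bridge's
     * type-1 configuration header (per the PCI-to-PCI bridge specification). */
    #define PCI_BRIDGE_CONTROL       0x3E
    #define PCI_BRIDGE_CTL_BUS_RESET (1u << 6)   /* Secondary Bus Reset bit */

    /* Stand-in for the bridge's configuration space; a real system would issue
     * configuration cycles through its host bridge instead of using an array. */
    static uint8_t cfg_space[256];

    static uint16_t cfg_read16(uint8_t off)
    {
        return (uint16_t)(cfg_space[off] | (cfg_space[off + 1] << 8));
    }

    static void cfg_write16(uint8_t off, uint16_t value)
    {
        cfg_space[off]     = (uint8_t)(value & 0xFF);
        cfg_space[off + 1] = (uint8_t)(value >> 8);
    }

    /* Hold the bridge's secondary bus (the canister-local PCI bus) in reset so
     * that removing or inserting the module causes no activity upstream. */
    static void bridge_isolate_secondary(void)
    {
        cfg_write16(PCI_BRIDGE_CONTROL,
                    cfg_read16(PCI_BRIDGE_CONTROL) | PCI_BRIDGE_CTL_BUS_RESET);
    }

    /* Release the secondary bus from reset once the module is back in place. */
    static void bridge_release_secondary(void)
    {
        cfg_write16(PCI_BRIDGE_CONTROL,
                    cfg_read16(PCI_BRIDGE_CONTROL) &
                    (uint16_t)~PCI_BRIDGE_CTL_BUS_RESET);
    }

    int main(void)
    {
        bridge_isolate_secondary();
        printf("secondary bus reset asserted: %d\n",
               (cfg_read16(PCI_BRIDGE_CONTROL) & PCI_BRIDGE_CTL_BUS_RESET) != 0);
        bridge_release_secondary();
        printf("secondary bus reset asserted: %d\n",
               (cfg_read16(PCI_BRIDGE_CONTROL) & PCI_BRIDGE_CTL_BUS_RESET) != 0);
        return 0;
    }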





FIG. 6 shows aspects of one embodiment of a network interface module 104. The modularity provided by the canister configuration provides ease of maintenance. Referring now to this Figure, the network interface module 104 comprises a canister 560 with a front 560A, back 560B, sides 560C, top 560D and bottom 560E. The canister front 560A may be positioned proximate the front of the chassis when the canister is removably mounted on a shelf of the chassis. A printed circuit board 561 is secured flat against the canister side 560C inside the canister 560. The printed circuit board 561 comprises an I/O bus. As described above, in one advantageous embodiment, the I/O bus is a PCI bus. A plurality of interface card slots 562 are attached to the I/O bus. The number of allowed interface card slots is determined by the maximum load the I/O bus can handle. In one illustrated embodiment, four interface card slots 562 are provided, although more or fewer could alternatively be used. Also connected to the I/O bus and on one end of the printed circuit board 561 is a high density connector 563 which mates with one of the high density connectors on the backplane board 184. Above and below the connector 563 is a solid molding with a guiding hole. These two guiding holes correspond with a pair of guiding pegs 414, 416, 418, or 420 which, along with the chassis and the chassis guiding rails, assist, when the canister 560 is removably mounted, in bringing together or mating the 180 pin male connector 563 at one end of the printed circuit board 561 and the 180 pin female connector 413, 415, 417 or 419 on the backplane printed circuit board 184.




Interface cards may be slipped into or removed from the interface card slots 562 when the canister 560 is removed from its shelf 175B or 175C in the chassis 170. An interface card slot 562 may be empty or may be filled with a general interface card. The general interface card may be a network interface card (NIC) such as, but not limited to, an Ethernet card or other local area network (LAN) card, with a corresponding NIC cable connected to the NIC and routed from the server 100 to a LAN. The general interface card may also be a small computer system interface (SCSI) controller card with a corresponding SCSI controller card cable connected to the SCSI controller card. In this embodiment, the SCSI controller card is connected by a corresponding SCSI controller card cable to a data storage module such as a hard disk 106 or other data storage device. Furthermore, the general interface card need not be a NIC or a SCSI controller card, but may be some other compatible controller card. The canister front 560A also has bay windows 564 through which the general interface card cable may attach to a general interface card. Unused bay windows may be closed off with bay window covers 565.




The network interface module 104 also has a novel cooling system. Each network interface module 104 extends beyond the chassis rear, and in this portion, may include a pair of separately removable fans 566A and 566B. The separately removable fans are positioned in series, with one separately removable fan 566B behind the other separately removable fan 566A. The pair of separately removable fans 566A and 566B run at reduced power and reduced speed unless one of the separately removable fans 566A or 566B fails, in which case the remaining working separately removable fan 566B or 566A will run at increased power and increased speed to compensate for the failed separately removable fan 566A or 566B. The placement of the separately removable fans 566A and 566B beyond the chassis rear makes them readily accessible from behind the rack 102. Accessibility is desirable since the separately removable fans 566A and 566B may be removed and replaced without powering down or removing the network interface module 104.
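The fan behavior just described amounts to a simple rule: both fans at reduced power while both are healthy, and the survivor at increased power when one fails. The C sketch below illustrates only that rule; the percentage values and names are invented for the example, since the patent does not specify how the fan controller is implemented.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative duty-cycle levels; actual speeds are not given in the text. */
    #define FAN_SPEED_REDUCED   60   /* percent of full power */
    #define FAN_SPEED_INCREASED 100  /* percent of full power */

    struct fan_pair {
        bool front_ok;  /* fan 566A */
        bool rear_ok;   /* fan 566B */
    };

    /* Both fans run at reduced power unless one fails, in which case the
     * remaining fan runs at increased power to compensate. */
    static void set_fan_speeds(const struct fan_pair *p, int *front, int *rear)
    {
        bool both_ok = p->front_ok && p->rear_ok;
        *front = p->front_ok ? (both_ok ? FAN_SPEED_REDUCED : FAN_SPEED_INCREASED) : 0;
        *rear  = p->rear_ok  ? (both_ok ? FAN_SPEED_REDUCED : FAN_SPEED_INCREASED) : 0;
    }

    int main(void)
    {
        struct fan_pair healthy  = { true, true };
        struct fan_pair degraded = { true, false };   /* rear fan 566B has failed */
        int front, rear;

        set_fan_speeds(&healthy, &front, &rear);
        printf("both fans working: front %d%%, rear %d%%\n", front, rear);

        set_fan_speeds(&degraded, &front, &rear);
        printf("rear fan failed:   front %d%%, rear %d%%\n", front, rear);
        return 0;
    }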




To further assist with the cooling of the canister 560, the canister 560 has sets of perforations 567 sufficient in number and in such patterns so as to assist in cooling the canister 560. In this embodiment, the perforations 567 are holes in the canister 560 placed in the pattern of roughly a rectangular region.




A significant advantage of this embodiment is the ability to change a general interface card in a network server 100 without powering down the network server 100 or the CPU module 103. To change a general interface card, it is desirable to first identify the bridge chip 331, 332, 333 or 334 whose secondary side is connected to the network interface module 104 containing the general interface card to be changed.




Assuming that the general interface card that needs to be changed is in the network interface module 104 which is connected by PCI bus and high density connector to bridge chip 331, then to remove the network interface module 104 without disrupting operation of the other portions of the server 100, the bridge chip 331 may become an electrical termination to isolate the electrical hardware of the network server from the electrical removal or insertion on the bridge chip secondary side 331A. This may be accomplished by having the CPU module 103 place the secondary side 331A, 332A, 333A or 334A of the bridge into a reset mode and having circuitry on the printed circuit board 561 of the network interface module 104 power down the canister 560, including the general interface cards within the canister 560. Once the canister 560 is powered down and the bridge chip has electrically isolated the network interface module from the rest of the electrical hardware in the network server 100, then the network interface module 104 may be pulled out of its shelf 175B in the chassis 170. After the network interface module 104 has been removed, the general interface card can be removed from its interface card slot 562 and replaced. Subsequently, the network interface module 104 is removably mounted again on the shelf 175B in the chassis 170. The electrical hardware on the printed circuit board 561 of the network interface module 104 may then power up the canister 560, including the general interface cards within the canister 560. The bridge chip secondary side 331A, 332A, 333A or 334A is brought out of reset by the CPU module 103, and the network interface module 104 is again functional.
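Read as a sequence of steps, the procedure above for one canister can be outlined as in the C sketch below. The helper functions are hypothetical placeholders for the actions the text assigns to the CPU module, the canister's power control circuitry, and the operator; only the ordering is meant to reflect the description.

    #include <stdio.h>

    /* Hypothetical placeholders for the operations named in the procedure above.
     * Each simply reports the step it represents. */
    static void bridge_secondary_reset(int canister)   { printf("bridge for canister %d: secondary side held in reset\n", canister); }
    static void canister_power_down(int canister)      { printf("canister %d: powered down, interface cards included\n", canister); }
    static void operator_swaps_interface_card(int c)   { printf("canister %d: removed from shelf, card replaced, remounted\n", c); }
    static void canister_power_up(int canister)        { printf("canister %d: powered up, interface cards included\n", canister); }
    static void bridge_secondary_release(int canister) { printf("bridge for canister %d: secondary side brought out of reset\n", canister); }

    /* Order of operations for changing a general interface card in one network
     * interface module while the server and CPU module keep running. */
    static void hot_swap_interface_card(int canister)
    {
        bridge_secondary_reset(canister);        /* isolate the canister's PCI bus   */
        canister_power_down(canister);           /* local power control circuitry    */
        operator_swaps_interface_card(canister); /* physical removal and replacement */
        canister_power_up(canister);
        bridge_secondary_release(canister);      /* module is functional again       */
    }

    int main(void)
    {
        hot_swap_interface_card(1);  /* other canisters remain in service throughout */
        return 0;
    }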




At no time during the procedure did the network server 100 or the CPU module 103 have to be powered down. Although the one network interface module 104 was powered down during the procedure, the other network interface modules were still functioning normally. In fact, any workstation connected to the network server 100 by means other than the affected network interface module 104 would still have total access to the CPU module 103, the other network interface modules, and all the networks and data storage modules, such as but not limited to hard disks, CD-ROM modules, or other data storage devices, that do not rely upon the general interface cards inside the removed network interface module. This is a desired advantage, since network server down time can be very costly to customers and to vendors, can create poor customer opinion of the vendor and the vendor's products and services, and can decrease overall computing throughput.




The foregoing description details certain embodiments of the present invention and describes the best mode contemplated. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the present invention should not be taken to imply that the broadest reasonable meaning of such terminology is not intended, or that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated. The scope of the present invention should therefore be construed in accordance with the appended Claims and any equivalents thereof.



Claims
  • 1. A method of changing an interface card in a computer, comprising:prior to removing a first interface module from the computer, powering down the first interface module via power control circuit without powering down said computer, such that the computer is provided arbitrated access to at least a second interface module; removing the first interface module from the computer, comprising the act of electrically terminating and isolating electrical hardware of said computer upstream the point where said first interface module is removed; removing an interface card from said first interface module; replacing said interface card into said interface module; and replacing said first interface module into said computer without powering down said computer.
  • 2. A method as defined in claim 1 wherein said act of removing the interface module comprises the act of disconnecting a high density connector on said interface module from a high density connector on said server when said interface module is removed from said server.
  • 3. A method as defined in claim 1 wherein said act of removing said interface card comprises the act of removing a network interface card (NIC).
  • 4. A method as defined in claim 1 wherein said act of removing said interface card comprises the act of removing a small computer system interface (SCSI) controller card.
  • 5. A method as defined in claim 1 wherein said act of removing said interface card comprises the act of opening the interface module holding said interface card of said plural interface cards.
  • 6. A method as defined in claim 5 wherein said act of opening the interface module comprises opening a canister.
  • 7. A method as defined in claim 1 wherein said act of replacing said interface module comprises the act of connecting a high density connector on said interface module to a high density connector on said server when said interface module is mounted on said server.
  • 8. A method as defined in claim 1 wherein said act of replacing said interface module comprises the act of powering up said interface module including said plural interface cards after said interface module is replaced into said server.
  • 9. A method for interconnecting plural modules of a server, comprising:mounting a backplane printed circuit board on the back of a chassis with a front, back, sides, top, and bottom; connecting a CPU module to said backplane printed circuit board when mounting a CPU module on said chassis; connecting each interface module of plural interface modules to said backplane printed circuit board when removably mounting each said interface module on said chassis, wherein said backplane printed circuit board comprises electrical hardware that is configured to provide electrical termination and isolation between said interface module and electrical circuitry on said backplane printed circuit board upstream said electrical hardware when any of said interface modules has been removed; removably connecting at least one interface card to one of said plurality of interface modules; and removably connecting a plurality of power modules to said backplane printed circuit board.
  • 10. A method as defined in claim 9 wherein the act of connecting said plural power modules to said backplane printed circuit board comprises the act of connecting at least one redundant power module of said plural power modules to said backplane printed circuit board.
  • 11. A method as defined in claim 9 wherein said act of connecting said CPU module to said backplane printed circuit board comprises the act of connecting a high density connector of said CPU module to a high density connector of said backplane printed circuit board.
  • 12. A method as defined in claim 9 wherein said act of connecting said CPU module to said backplane printed circuit board comprises the act of connecting a 360 pin male connector of said CPU module to a 360 pin female connector of said backplane printed circuit.
  • 13. A method as defined in claim 9 where said act of connecting said interface module to said backplane printed circuit board comprises the act of connecting a high density connector of said interface module to a high density connector on said backplane printed circuit board.
  • 14. A method as defined in claim 9 where said act of connecting said interface module to said backplane printed circuit board comprises the act of connecting a 180 pin male connector of said interface module to a 180 pin female connector of said backplane printed circuit board.
  • 15. A method as defined in claim 9 wherein said act of connecting said interface module to said backplane printed circuit board comprises the acts of powering up said interface module and commanding electrical hardware on said backplane printed circuit board to stop acting as an electrical termination and to stop isolating electrical hardware of said backplane printed circuit board from the electrical hardware of said interface module.
  • 16. A method as defined in claim 15 wherein said act of powering up said interface module comprises the act of powering up plural general interface cards therein.
  • 17. A method as defined in claim 9 further comprising the act of connecting by cable plural data storage modules to an interface module of said plural interface modules.
  • 18. A method as defined in claim 9 wherein the backplane printed circuit board further comprises electrical hardware that is configured to provide arbitrated access to said interface module.
PRIORITY CLAIM

The benefit under 35 U.S.C. § 119(e) of the following U.S. provisional application(s) is hereby claimed:

US Referenced Citations (270)
Number Name Date Kind
4100597 Fleming et al. Jul 1978
4449182 Rubinson et al. May 1984
4692918 Elliott et al. Sep 1987
4695946 Andreasen et al. Sep 1987
4707803 Anthony, Jr. et al. Nov 1987
4774502 Kimura Sep 1988
4821180 Gerety et al. Apr 1989
4835737 Herrig et al. May 1989
4894792 Mitchell et al. Jan 1990
4949245 Martin et al. Aug 1990
4999787 McNally et al. Mar 1991
5006961 Monico Apr 1991
5007431 Donehoo, III Apr 1991
5033048 Pierce et al. Jul 1991
5073932 Yossifor et al. Dec 1991
5103391 Barrett Apr 1992
5118970 Olson et al. Jun 1992
5121500 Arlington et al. Jun 1992
5123017 Simpkins et al. Jun 1992
5136708 Lapourtre et al. Aug 1992
5136715 Hirose et al. Aug 1992
5138619 Fasang et al. Aug 1992
5210855 Bartol May 1993
5245615 Treu Sep 1993
5247683 Holmes et al. Sep 1993
5261094 Everson et al. Nov 1993
5265098 Mattson et al. Nov 1993
5269011 Yanai et al. Dec 1993
5272382 Heald et al. Dec 1993
5272584 Austruy et al. Dec 1993
5276814 Bourke et al. Jan 1994
5276863 Heider Jan 1994
5277615 Hastings et al. Jan 1994
5280621 Barnes et al. Jan 1994
5283905 Saadeh et al. Feb 1994
5307354 Cramer et al. Apr 1994
5311397 Harshberger et al. May 1994
5311451 Barrett May 1994
5317693 Cuenod et al. May 1994
5329625 Kannan et al. Jul 1994
5337413 Lui et al. Aug 1994
5351276 Doll, Jr. et al. Sep 1994
5367670 Ward et al. Nov 1994
5379184 Barraza et al. Jan 1995
5379409 Ishikawa Jan 1995
5386567 Lien et al. Jan 1995
5388267 Chan et al. Feb 1995
5402431 Saadeh et al. Mar 1995
5404494 Garney Apr 1995
5423025 Goldman et al. Jun 1995
5430717 Fowler et al. Jul 1995
5430845 Rimmer et al. Jul 1995
5432715 Shigematsu et al. Jul 1995
5438678 Smith Aug 1995
5440748 Sekine et al. Aug 1995
5448723 Rowett Sep 1995
5455933 Schieve et al. Oct 1995
5460441 Hastings et al. Oct 1995
5463766 Schieve et al. Oct 1995
5465349 Geronimi et al. Nov 1995
5471617 Farrand et al. Nov 1995
5471634 Giorgio et al. Nov 1995
5483419 Kaczeus, Sr. et al. Jan 1996
5485550 Dalton Jan 1996
5485607 Lomet et al. Jan 1996
5487148 Komori et al. Jan 1996
5491791 Glowny et al. Feb 1996
5493574 McKinley Feb 1996
5493666 Fitch Feb 1996
5513314 Kandasamy et al. Apr 1996
5513339 Agrawal et al. Apr 1996
5515515 Kennedy et al. May 1996
5517646 Piccirillo et al. May 1996
5519851 Bender et al. May 1996
5530810 Bowman Jun 1996
5533193 Roscoe Jul 1996
5533198 Thorson Jul 1996
5535326 Baskey et al. Jul 1996
5539883 Allon et al. Jul 1996
5542055 Amini et al. Jul 1996
5548712 Larson et al. Aug 1996
5555510 Verseput et al. Sep 1996
5559965 Oztaskin et al. Sep 1996
5560022 Dunstan et al. Sep 1996
5564024 Pemberton Oct 1996
5566299 Billings et al. Oct 1996
5566339 Perholtz et al. Oct 1996
5568610 Brown Oct 1996
5568619 Blackledge et al. Oct 1996
5577205 Hwang et al. Nov 1996
5579487 Meyerson et al. Nov 1996
5579491 Jeffries et al. Nov 1996
5579528 Register Nov 1996
5581712 Herrman Dec 1996
5581714 Amini et al. Dec 1996
5584030 Husak et al. Dec 1996
5586250 Carbonneau et al. Dec 1996
5586271 Parrett Dec 1996
5588121 Reddin et al. Dec 1996
5588144 Inoue et al. Dec 1996
5592610 Chittor Jan 1997
5592611 Midgely et al. Jan 1997
5596711 Burckhartt et al. Jan 1997
5602758 Lincoln et al. Feb 1997
5604873 Fite et al. Feb 1997
5606672 Wade Feb 1997
5608865 Midgely et al. Mar 1997
5608876 Cohen et al. Mar 1997
5615207 Gephardt et al. Mar 1997
5621892 Cook Apr 1997
5625238 Ady et al. Apr 1997
5627962 Goodrum et al. May 1997
5628028 Michelson May 1997
5630076 Saulpaugh et al. May 1997
5631847 Kikinis May 1997
5632021 Jennings et al. May 1997
5636341 Matsushita et al. Jun 1997
5638289 Yamada et al. Jun 1997
5644470 Benedict et al. Jul 1997
5644731 Liencres et al. Jul 1997
5651006 Fujino et al. Jul 1997
5652832 Kane et al. Jul 1997
5652833 Takizawa et al. Jul 1997
5652839 Giorgio et al. Jul 1997
5652908 Douglas et al. Jul 1997
5655081 Bonnell et al. Aug 1997
5655083 Bagley Aug 1997
5655148 Richman et al. Aug 1997
5659682 Devarakonda et al. Aug 1997
5664118 Nishigaki et al. Sep 1997
5664119 Jeffries et al. Sep 1997
5666538 DeNicola Sep 1997
5668943 Attanasio et al. Sep 1997
5668992 Hammer et al. Sep 1997
5669009 Buktenica et al. Sep 1997
5675723 Ekrot et al. Oct 1997
5680288 Carey et al. Oct 1997
5682328 Roeber et al. Oct 1997
5684671 Hobbs et al. Nov 1997
5689637 Johnson et al. Nov 1997
5696895 Hemphill et al. Dec 1997
5696899 Kalwitz Dec 1997
5696949 Young Dec 1997
5696970 Sandage et al. Dec 1997
5701417 Lewis et al. Dec 1997
5704031 Mikami et al. Dec 1997
5708775 Nakamura Jan 1998
5708776 Kikinis Jan 1998
5712754 Sides et al. Jan 1998
5715456 Bennett et al. Feb 1998
5717570 Kikinis Feb 1998
5721935 DeSchepper et al. Feb 1998
5724529 Smith et al. Mar 1998
5726506 Wood Mar 1998
5727207 Gates et al. Mar 1998
5732266 Moore et al. Mar 1998
5737747 Vishlitzky et al. Apr 1998
5740378 Rehl et al. Apr 1998
5742514 Bonola Apr 1998
5747889 Raynham et al. May 1998
5748426 Bedingfield et al. May 1998
5754449 Hoshal et al. May 1998
5754797 Takahashi May 1998
5758165 Shuff May 1998
5758352 Reynolds et al. May 1998
5761033 Wilhelm Jun 1998
5761045 Olson et al. Jun 1998
5761085 Giorgio Jun 1998
5761462 Neal et al. Jun 1998
5761707 Aiken et al. Jun 1998
5764924 Hong Jun 1998
5764968 Ninomiya Jun 1998
5765008 Desai et al. Jun 1998
5765198 McCrocklin et al. Jun 1998
5767844 Stoye Jun 1998
5768541 Pan-Ratzlaff Jun 1998
5768542 Enstrom et al. Jun 1998
5771343 Hafner et al. Jun 1998
5774640 Kurio Jun 1998
5774645 Beaujard et al. Jun 1998
5774741 Choi Jun 1998
5777897 Giorgio Jul 1998
5781703 Desai et al. Jul 1998
5781716 Hemphill et al. Jul 1998
5781744 Johnson et al. Jul 1998
5781767 Inoue et al. Jul 1998
5781798 Beatty et al. Jul 1998
5784555 Stone Jul 1998
5784576 Guthrie et al. Jul 1998
5787019 Knight et al. Jul 1998
5787491 Merkin et al. Jul 1998
5790831 Lin et al. Aug 1998
5793948 Asahi et al. Aug 1998
5793987 Quackenbush et al. Aug 1998
5794035 Golub et al. Aug 1998
5796185 Takata et al. Aug 1998
5796934 Bhanot et al. Aug 1998
5796981 Abudayyeh et al. Aug 1998
5797023 Berman et al. Aug 1998
5798828 Thomas et al. Aug 1998
5799036 Staples Aug 1998
5799196 Flannery Aug 1998
5801921 Miller Sep 1998
5802269 Poisner et al. Sep 1998
5802298 Imai et al. Sep 1998
5802393 Begun et al. Sep 1998
5802552 Fandrich et al. Sep 1998
5802592 Chess et al. Sep 1998
5805804 Laursen et al. Sep 1998
5805834 McKinley et al. Sep 1998
5809224 Schultz et al. Sep 1998
5809256 Najemy Sep 1998
5809287 Stupek, Jr. et al. Sep 1998
5809311 Jones Sep 1998
5809555 Hobson Sep 1998
5812750 Dev et al. Sep 1998
5812757 Okamoto et al. Sep 1998
5812858 Nookala et al. Sep 1998
5815117 Kolanek Sep 1998
5815647 Buckland et al. Sep 1998
5815651 Litt Sep 1998
5815652 Ote et al. Sep 1998
5819054 Ninomiya et al. Oct 1998
5822547 Boesch et al. Oct 1998
5826043 Smith et al. Oct 1998
5829046 Tzelnic et al. Oct 1998
5835738 Blackledge, Jr. et al. Nov 1998
5841964 Yamaguchi Nov 1998
5841991 Russell Nov 1998
5845061 Miyamoto et al. Dec 1998
5845095 Reed et al. Dec 1998
5850546 Kim Dec 1998
5852720 Gready et al. Dec 1998
5852724 Glenn, II et al. Dec 1998
5857074 Johnson Jan 1999
5857102 McChesney et al. Jan 1999
5864654 Marchant Jan 1999
5864713 Terry Jan 1999
5867730 Leyda Feb 1999
5875307 Ma et al. Feb 1999
5875308 Egan et al. Feb 1999
5878237 Olarig Mar 1999
5884027 Garbus et al. Mar 1999
5884049 Atkinson Mar 1999
5886424 Kim Mar 1999
5889965 Wallach et al. Mar 1999
5892898 Fujii et al. Apr 1999
5892915 Duso et al. Apr 1999
5892928 Wallach et al. Apr 1999
5893140 Vahalia et al. Apr 1999
5898846 Kelly Apr 1999
5898888 Guthrie et al. Apr 1999
5905867 Giorgio May 1999
5907672 Matze et al. May 1999
5909568 Nason Jun 1999
5911779 Stallmo et al. Jun 1999
5913034 Malcolm Jun 1999
5922060 Goodrum Jul 1999
5930358 Rao Jul 1999
5935262 Barrett et al. Aug 1999
5936960 Stewart Aug 1999
5938751 Tavallaei et al. Aug 1999
5941996 Smith et al. Aug 1999
5964855 Bass et al. Oct 1999
5983349 Kodama et al. Nov 1999
5987554 Liu et al. Nov 1999
5987621 Duso et al. Nov 1999
5987627 Rawlings, III Nov 1999
6012130 Beyda et al. Jan 2000
6038624 Chan et al. Mar 2000
Foreign Referenced Citations (5)
Number Date Country
0 866 403 A1 Sep 1998 EP
04 333 118 A Nov 1992 JP
07 261 874 A Oct 1995 JP
07 093 064 A Apr 1995 JP
05 233 110 A Sep 1993 JP
Non-Patent Literature Citations (35)
Entry
ftp.cdrom.com/pub/os2/diskutil/, PHDX software, phdx.zip download, Mar. 1995, “Parallel Hard Disk Xfer.”
Cmaster, Usenet post to microsoft.public.windowsnt.setup, Aug. 1997, “Re: FDISK switches.”
Hildebrand, N., Usenet post to comp.msdos.programmer, May 1995, “Re: Structure of disk partition info.”
Lewis, L., Usenet post to alt.msdos.batch, Apr. 1997, “Re: Need help with automating FDISK and Format.”
Netframe, http://www.netframe-support.com/technology/datasheets/data.htm, before Mar. 1997, “Netframe ClusterSystem 9008 Data Sheet.”
Simos, M., Usenet post to comp.os.msdos.misc, Apr. 1997, “Re: Auto FDISK and Format.”
Wood, M. H., Usenet post to comp.os.netware.misc, Aug. 1996, “Re: Workstation duplication method for WIN95.”
Lyons, Computer Reseller News, Issue 721, pp. 61-62, Feb. 3, 1997, “ACC Releases Low-Cost Solution for ISPs.”
M2 Communications, M2 Presswire, 2 pages, Dec. 19, 1996, “Novell IntranetWare Supports Hot Pluggable PCI from NetFRAME.”
Rigney, PC Magazine, 14(17):375-379, Oct. 10, 1995, “The One for the Road (Mobile-aware capabilities in Windows 95).”
Shanley, and Anderson, PCI System Architecture, Third Edition, p. 382, Copyright 1995.
Standard Overview, http://www.pc-card.com/stand-overview.html#1, 9 pages, Jun. 1990, “Detailed Overview of the PC Card Standard.”
Digital Equipment Corporation, datasheet, 140 pages, 1993, “DECchip 21050 PCI-TO-PCI Bridge.”
NetFrame Systems Incorporated, News Release, 3 pages, referring to May 9, 1994, “NetFrames's New High-Availability ClusterServer Systems Avoid Scheduled as well as Unscheduled Downtime.”
Compaq Computer Corporation, Phoenix Technologies, Ltd. and Intel Corporation, specification, 55 pages, May 5, 1995, “Plug & Play BIOS Specification.”
NetFrame Systems Incorporated, datasheet, Feb. 1996, “NF450FT Network Mainframe.”
NetFrame Systems Incorporated, datasheet, Mar. 1996, “NetFrame Cluster Server 8000.”
Joint work by Intel Corporation, Compaq, Adeptec, Hewlett Packard, and Novell, Presentation, 22 pages, Jun. 1996, “Intelligent I/O Architecture.”
Lockareff, M., HTINews, http://www.hometoys.com/htinews/dec96/articles/loneworks.htm, Dec. 1996, “Loneworks—An Introduction.”
Schofield, M.J., http://www.omegas.co.uk/CAN/canworks,htm, Copyright 1996, 1997, “Controller Area Network—How CAN Works.”
NRTT, Ltd., http://www.nrtt.demon.co.uk/cantech,html, 5 pages, May 28, 1997, “CAN: Technical Overview.”
Herr, et al., Linear Technology Magazine, Design Features, pp. 21-23, Jun. 1997, “Hot Swapping the PCI Bus.”
PCI Special Interest Group, specification, 35 pages, Draft For Review Only, Jun. 15, 1997, “PCI Bus Hot Plug Specification.”
Microsoft Corporation, file:///A ¦/Rem-devs.htm, 4 pages, Copyright 1997, updated Aug. 13, 1997, “Supporting Removable Devices Under Windows and Windows NT.”
Davis, T, Usenet post to alt.msdos.programmer, Apr. 1997, “Re: How do I create an FDISK batch file?”
Davis, T., Usenet post to alt.msdos.batch, Apr. 1997, “Re: Need help with automating FDISK and FORMAT . . . ”
NetFrame Systems Incorporated, Doc. No. 78-1000226-01, pp. 1-2, 5-8, 359-404, and 471-512, Apr. 1996, “NetFrame Clustered Multiprocessing Software: NW0496 DC-ROM for Novell® NetWare® 4.1 SMP, 4.1, and 3.12.”
Shanley, and Anderson, PCI System Architecture, Third Edition, Chapter 15, pp. 297-302, Copyright 1995, “Intro To Configuration Address Space.”
Shanley, and Anderson, PCI System Architecture, Third Edition, Chapter 16, pp. 303-328, Copyright 1995, “Configuration Transactions.”
Sun Microsystems Computer Company, Part No. 802-5355-10, Rev. A, May 1996, “Solstice SyMON User's Guide.”
Sun Microsystems, Part No. 802-6569-11, Release 1.0.1, Nov. 1996, “Remote Systems Diagnostics Installation & User Guide.”
Haban, D. & D. Wybranietz, IEEE Transactions on Software Engineering, 16(2):197-211, Feb. 1990, “A Hybrid Monitor for Behavior and Performance Analysis of Distributed Systems.”
Gorlick, M., Conf. Proceedings: ACM/ONR Workshop on Parallel and Distributed Debugging, pp. 175-181, 1991, “The Flight Recorder: An Architectural Aid for System Monitoring.”
IBM Technical Disclosure Bulletin, 92A+62947, pp. 391-394, Oct. 1992, Method for Card Hot Plug Detection and Control.
NetFrame ClusterSystem 9008 Data Sheet, company dated product to before Mar. 1997.
Provisional Applications (6)
Number Date Country
60/047016 May 1997 US
60/046416 May 1997 US
60/047003 May 1997 US
60/046490 May 1997 US
60/046398 May 1997 US
60/046312 May 1997 US