The present invention relates in general to a server architecture intended to emulate and replace various legacy functions and chips, providing a plurality of simulated, remotely managed functions such as Keyboard Video Mouse (KVM) over IP for remote management and BIOS.
Modern servers are a critical component of any modern IT system. Enterprise-class servers are rarely used today as conventional computers connected locally to a display, keyboard and mouse. A typical application for such a server requires remote access for the administrator through a KVM (Keyboard Video Mouse). This access is typically limited to initial installation, maintenance, monitoring and trouble-shooting of the server.
In larger sites, where one administrator needs to manage many servers, or when servers are co-located at remote sites, a KVM over IP function can be added to the servers to enable remote management of multiple servers from a remote computer.
This type of use requires double conversion of various functions from the digital domain to the physical domain (or User Interface) and back to the digital domain. For example: a digital video image is generated in an on-board video controller, and the image is then converted into an analog video signal. This analog video signal is sampled by an Analog to Digital converter in the attached KVM device or management card. The sampled digital stream is then compressed and routed to the remote administrator location, where it is converted again to an analog signal for the administrator's display.
This double conversion process suffers from several significant disadvantages:
Remote management of the BIOS (Basic Input Output System) is another problem existing in current servers. In the early days of the PC, the BIOS was extensively used to initialize the platform and its connected peripherals. Later, the value of the BIOS diminished as PC operating systems took over many of its early functions. In recent years, as multi-core servers and blade architectures evolved, the BIOS role has become significant again. In current enterprise servers the BIOS is responsible for complex platform initialization, security, health monitoring, power management, thermal management, multi-processor configuration, processor and bus initialization and many other roles.
Hence BIOS settings and upgrades have become more important for servers, and centralized BIOS management is becoming more challenging.
The current invention is intended to replace at least two of the above mentioned real functions by providing a server function that emulates the exact or similar behavior of these functions to the server cores, but at the same time is designed in a way that enables simple remote management over LAN or WAN. These functions are defined hereafter as Remote Manageable Emulated Functions or RMEF.
One significant advantage of the architecture of the server of the present invention is that, since the emulated functions mimic the behavior of similar functions in a standard x86 or 64-bit PC architecture, this architecture enables minimal or no changes in the operating system and the applications installed on such a server.
The design of such RMEF follows the following general guidelines:
RMEF design aspects from the front side (server cores):
RMEF design aspects from the back side (management LAN/WAN):
The server of the present invention may be implemented using emulated functions (RMEF) to replace all real functions other than the cores and the physical functions (power, cooling, etc.).
For example, the following real functions may be replaced by Remote Manageable Emulated Functions (RMEF) as part of the server apparatus of the present invention:
The benefits of such a server architecture include:
No need for management cards, modules or external KVMs
The present invention provides a server architecture and apparatus suitable for virtualization and remote management, having one or more emulated functions substituting one or more real functions.
With the latest advancements in server virtualization and client virtualization, the role of the enterprise server is shifting toward a highly replicated computational resource. The installation of the virtual server is typically done remotely and in many cases is automated. The need for a local user interface is diminishing, and the need for detailed remote management is becoming more apparent.
The popularity of high-end multiprocessor servers and their use for virtual applications reduce the usability of the platform in a direct connection mode; emulating these direct connection functions therefore makes the present invention even more valuable.
A better understanding of the present invention can be obtained when the following detailed description of the preferred embodiment is considered in conjunction with the following drawings.
The same reference numbers are used to designate the same or related features on different drawings. The drawings are generally not drawn to scale.
In the following detailed description, numerous details are set forth in order to provide a thorough understanding of the present disclosed subject matter. However, it will be understood by those skilled in the art that the disclosed subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as to not obscure the disclosed subject matter.
Coherent HyperTransport buses enable shared cache content access between the 4 nodes.
Non-coherent HyperTransport bus 19 extends from Node A 14a through HyperTransport bridge 22 and from there through HyperTransport bus 26 to the HyperTransport cave at the I/O Hub 55. The said bridge 22 bridges between the passing HyperTransport bus and PCI-X or PCI Express busses 23a and 23b connected to open PCI-X or PCI Express slots 24a and 24b respectively. This arrangement enables assembly of various standard PCI-X or PCI Express cards for data communications with the cores (for example Fiber Channel, SCSI, RAID, Video and LAN cards).
The PCI-X or PCI Express Buses 23a and 23b are 64 bit buses running at clock rates that may vary from 66 MHz single clock to 133 MHz quad clock (533 MHz) delivering data throughputs of 533 MB/s to 4.26 GB/s.
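For clarity, the quoted throughput figures follow directly from the bus width and the effective clock rate. A minimal arithmetic check (not part of the original disclosure, provided only to make the figures concrete):

```c
#include <stdio.h>

/* Peak PCI-X throughput = bus width in bytes x effective clock rate.
 * A 64-bit bus moves 8 bytes per effective clock. */
int main(void) {
    const double bus_bytes = 8.0;                        /* 64-bit bus */
    printf("66 MHz single clock: %.0f MB/s\n",
           bus_bytes * 66.6e6 / 1e6);                    /* ~533 MB/s  */
    printf("133 MHz quad clock (533 MHz): %.2f GB/s\n",
           bus_bytes * 533e6 / 1e9);                     /* ~4.26 GB/s */
    return 0;
}
```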
This type of HyperTransport structure is called an I/O Chain. This I/O Chain originates at the HyperTransport bridge at Node A 14a, passes through the PCI-X or PCI Express bridge 22 and terminates at the HyperTransport Cave in the I/O Hub 55.
A similar non-coherent structure extends from Node B 14b through HyperTransport bus 20 to HyperTransport bridge 24, where this I/O chain terminates.
The said bridge 24 bridges between the HyperTransport bus and PCI-X or PCI Express busses 27a and 27b connected to PCI-X or PCI Express slots 29a and 29b respectively. This arrangement enables assembly of various standard PCI-X or PCI Express cards for data communications with the cores (for example storage interface cards such as Fiber Channel, SCSI and RAID, video and LAN cards, etc.).
In this particular example, PCI-X or PCI Express slots 29a are populated by a dual Local Area Network PCI-X or PCI Express card to enable connection of the server 12 to two LANs 33a and 33b. These LAN connections may be Gigabit per second or faster, and they extend from the server enclosure or blade through a set of connectors 36a.
Similarly, PCI-X slots 29b are populated by a PCI-X or PCI Express storage interface card 30 to enable connection of a storage disk, array or other storage appliance 42 to the server 12. This storage interface is connected via bus 40, which may be SCSI, IDE, SATA, Fiber Channel or any other storage communications protocol. It may be extended outside the server 12 enclosure or blade through a set of connectors 36a to interface with external storage resources.
The following text further explains the server 12 South side sub-system 50.
This sub-system is responsible for various secondary functions such as BIOS, Real-Time-Clock, slower peripherals interface and power management. Hyper-Transport bus 26 terminates in the I/O Hub block 55 where certain slower functions are interfaced. These functions may include:
Legacy PCI bus 44 having legacy PCI slots 45. In this example one of these PCI slots 45 is populated by a PCI video card 48. Generated video signals 49 extend from the server 12 enclosure or blade through a set of connectors 36b. The legacy PCI bus is 32 bits wide and may run at 33 or 66 MHz. The PCI bus arbiter function is contained in the said I/O Hub block 55. As servers typically do not require high video performance, a legacy PCI card 48 may be sufficient and cost effective compared to a faster PCI-X or PCI Express card needed for multimedia and gaming PC applications.
Audio CODEC 52 is coupled to the I/O Hub 55 by a serial AC-Link bus 51. The audio out signals 53 and the audio in signals 54 extend from the server 12 enclosure or blade through a set of connectors 36b. The CODEC may be AC '97 2.2 or any other standard with 2, 4, 6 or more audio channels. Typically, enterprise servers do not require a high-quality audio CODEC as they are rarely used for local multimedia applications.
USB host controllers reside inside the said I/O Hub function 55 (not shown here), and USB signals 58 and 59 extend from the server 12 enclosure or blade through a set of connectors 36b. USB is typically used to connect a keyboard and mouse to the server, though the USB interface is typically USB 2.0 standard to enable connection of faster USB storage devices as well. The USB host controllers typically interface with the system through the internal PCI bus. These USB controllers typically comprise an Enhanced Host Controller (EHC) to handle the USB 2.0 high speed traffic, and one or more OHCI (Open Host Controller Interface) compliant host controllers to handle all USB 1.1 compliant traffic and the legacy keyboard emulation (for non-USB aware environments). They may also comprise a Port Router block to route the various host controller ports to the proper external USB ports.
The I/O Hub 55 typically contains a System Management Bus (SMBus) host controller function to communicate with external system resources. SMBus 82 enables monitoring and control of various system resources such as power supplies 83, system clock source 84 and various cooling functions 85. The supported SMBus protocol may be 1.0, 2.0 or any other usable protocol.
In addition to the SMBus interface described above, certain discrete/PWM system functions are managed or monitored by the I/O Hub 55 through General Purpose Input/Output (GPIO) and Pulse Width Modulation (PWM) signals 61. Discrete signals may be used to monitor server AC power availability, CPUSLEEP and CPUSTOP signals to command certain CPU low power states, FANRPM and FANCONTROL to monitor and control cooling fans, PWROK signals from power supplies, a thermal warning detect input, a thermal trip input, etc. PWM signals can be used to drive cooling fans and pumps at various speeds as needed by the system.
Discrete I/Os 61 may also include legacy PC signals such as legacy interrupt inputs (IRQ1 to IRQ15), Serial IRQs, Non-Maskable Interrupt (NMI), Power Management Interrupts, the Keyboard A20 Gate signal, etc.
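As an illustration of the monitoring and control loop described above, the following sketch shows how a thermal reading taken over the SMBus might drive a fan PWM output. The helper functions, device address and register are hypothetical placeholders; real SMBus host controller programming is chip specific:

```c
#include <stdint.h>

/* Hypothetical low-level helpers -- the real SMBus host controller
   registers and the PWM interface are implementation specific. */
extern uint8_t smbus_read_byte(uint8_t dev_addr, uint8_t reg);   /* SMBus Read Byte protocol */
extern void    pwm_set_duty(int channel, uint8_t duty_percent);  /* FANCONTROL PWM output    */

#define TEMP_SENSOR_ADDR  0x4C  /* example address of an SMBus thermal sensor */
#define TEMP_REG          0x00  /* example local-temperature register         */

/* Simple policy: scale fan speed with temperature, as the I/O Hub 55
   might do through SMBus 82 and PWM signals 61. */
void fan_control_step(void) {
    uint8_t temp_c = smbus_read_byte(TEMP_SENSOR_ADDR, TEMP_REG);

    if (temp_c > 75)
        pwm_set_duty(0, 100);                     /* thermal warning region: full speed */
    else if (temp_c > 50)
        pwm_set_duty(0, 50 + (temp_c - 50) * 2);  /* ramp 50%..100% between 50C and 75C */
    else
        pwm_set_duty(0, 30);                      /* quiet baseline */
}
```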
Low Pin Count (LPC) bus 60 extends from the said I/O Hub 55 to interface with slower and legacy peripheral functions such as:
BIOS ROM/Flash 62 that is used for server booting. The code residing in this memory space is used to initialize and boot the boot node (Node A in this example). After boot node initialization, the same code is used to boot the other three nodes through the coherent HyperTransport busses.
RTC (Real Time Clock) function 64 that provides the server with accurate date and time information. The RTC function is typically backed up by a small battery to maintain accurate time even when the server is powered off. The RTC function also typically includes 256 bytes of battery-powered CMOS RAM to store legacy BIOS settings and ACPI-compliant extensions (a port-level access sketch follows this list).
Legacy Logic functions 68 comprising various legacy x86 functions to assure compatibility with software and operating systems. These functions typically include:
Legacy Logic 68 circuitry may be physically located in the I/O Hub function 55. Super I/O 66 is used to add external legacy interfaces such as serial port 67, parallel port 69 or PS/2. These interfaces are rarely used today in typical enterprise server applications.
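The RTC/CMOS interface mentioned in the list above is conventionally reached through the legacy x86 index/data port pair at 0x70/0x71. A minimal sketch of reading the battery-backed registers from user space (Linux x86 shown; BCD decoding and error handling are left out):

```c
#include <stdint.h>
#include <sys/io.h>   /* x86 port I/O (Linux glibc); requires ioperm privileges */

#define CMOS_INDEX 0x70
#define CMOS_DATA  0x71

/* Read one byte from the battery-backed CMOS RAM through the legacy
   index/data port pair used by the RTC function. */
static uint8_t cmos_read(uint8_t reg) {
    outb(reg, CMOS_INDEX);   /* select register */
    return inb(CMOS_DATA);   /* read its value  */
}

int main(void) {
    if (ioperm(CMOS_INDEX, 2, 1) != 0) return 1;  /* need port access rights */
    uint8_t seconds  = cmos_read(0x00);  /* RTC seconds register (often BCD) */
    uint8_t status_b = cmos_read(0x0B);  /* RTC status B: data mode, 12/24h  */
    (void)seconds; (void)status_b;
    return 0;
}
```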
Optional Baseboard Management Card (BMC) 70 to enable remote server management and monitoring through the standard LAN ports 33a and 33b. BMC 70 is coupled to the LAN card 32 through a “side band” connection using an I2C or SMBus link 74.
BMC management functions typically conform to industry standards such as IPMI 2.0 and the Open Platform Management Architecture (OPMA). Such a BMC 70 supports hardware monitoring of the host CPUs, various system temperatures, system and CPU fan status, and system voltages. It usually supports Event Log information for hardware monitor events. As this function is typically powered by the always-on standby power plane, it supports remote management when the system is dead or in power standby mode. It enables remote power control through the OS to perform Shutdown, Reboot and Power cycle. It may also directly control, through buttons on the system chassis, functions such as Reset, Power down, Power up and Power cycle. The BMC also supports SNMP traps (multiple destinations), Console Redirection (text only) through LAN (SOL—Serial Over LAN), and user/password security control.
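Since the BMC conforms to IPMI 2.0, remote power control is ultimately expressed as IPMI requests. A sketch of assembling the Chassis Control request payload (NetFn 0x00, command 0x02, per the IPMI specification); the LAN session framing and authentication layers are deliberately omitted:

```c
#include <stdint.h>
#include <stddef.h>

/* Chassis Control action values from the IPMI 2.0 specification. */
enum { CHASSIS_POWER_DOWN = 0x00, CHASSIS_POWER_UP = 0x01,
       CHASSIS_POWER_CYCLE = 0x02, CHASSIS_HARD_RESET = 0x03 };

/* Assemble the Chassis Control request (NetFn 0x00 = Chassis, Cmd 0x02).
   RMCP/LAN session framing and authentication are omitted -- this is a
   sketch of the payload only, not a full IPMI stack. */
size_t build_chassis_control(uint8_t *buf, uint8_t action)
{
    buf[0] = 0x00 << 2;  /* NetFn Chassis in upper 6 bits, LUN 0 */
    buf[1] = 0x02;       /* Chassis Control command              */
    buf[2] = action;     /* e.g. CHASSIS_POWER_CYCLE             */
    return 3;            /* bytes written                        */
}
```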
The I/O Hub 55 may also include an Ethernet controller function to interface with external LAN 72 and an integrated EIDE or SATA storage interface to enable direct connection of a hard-disk through the connected bus 75. The LAN MAC typically interfaces with the system through the internal PCI bus. Typically it requires external Physical Layer circuitry to interface with the LAN cable 72.
The I/O Hub 55 functions may include a System Power State Controller (SPSC) to enable power plane and CPU management at various power states. Through thermal sensors and cooling system 85 monitoring, the operating system and BIOS may regulate various CPU power states and change power supply 83 and clock source 84 settings accordingly.
It should be noted that this typical implementation is popular as it does not require any internal installation or hardware and does not depend on server software or power. Still, it suffers from many disadvantages such as added cost and size, and from degraded monitoring and control functionality compared to an internal BMC module.
This figure illustrates a block diagram of a similar 4-way server 100 as in
In this embodiment of the present invention, the functions contained in the conventional South side sub-system are fully replaced by Remotely Manageable Emulated Functions (RMEF) that emulate the same operational functions to the host side and therefore enable operating system and software commonality with prior-art servers.
The only user interaction in this implementation is done remotely through the management LAN port 110 extending from the server enclosure or blade 100. Internal interfaces such as the HyperTransport bus 26 interface, SMBus 82 interface and discrete/PWM I/O 61 are identical to the prior-art server implementation shown in
The Hyper-Transport to PCI bridge 152 serves as an interface between the faster HyperTransport bus 26 and the slower internal PCI bus 155. In a typical implementation the HyperTransport to PCI bridge 152 is capable of linking with the host at rates of 400 Mbps in each direction (aggregated bandwidth of 800 Mbps). This bandwidth cannot be sustained over time, as the connected busses are much slower and therefore cannot handle traffic to or from the host at such high bandwidth. The HyperTransport to PCI bridge 152 is also connected to the Internal Management Bus (IMB) 156 to enable BIOS and remote management functionality. Typical functionality includes the configuration and assignment of HyperTransport protocol Unit IDs for the various emulated functions connected to the PCI bus 155.
Video Controller RMEF 157 is similar to the real (prior art) video controller 48 shown in
The PCI to LPC bridge 160 function is similar to the PCI to LPC bridge 60 shown in
The USB Host Controller RMEF 162 is connected to the internal PCI bus 155, similar to the real USB Host controller function in the I/O Hub 55 of
The Audio Codec RMEF 165 is connected to the internal PCI bus 155, similar to the real Audio Codec function in the I/O Hub 55 of
The Storage Interface RMEF 166 is connected to the internal PCI bus 155, similar to the real Storage controller function in the I/O Hub 55 of
The SMBus host function 190 is similar to the SMBus host function in the I/O Hub 55 of
The GPIO/PWM Controller RMEF 192 is similar to the real GPIO/PWM Controller function in the I/O Hub 55 of
The BIOS RMEF 188 is connected to the internal LPC bus 179, similar to the real BIOS function 62 shown in
System setup and BIOS configuration can be loaded and manipulated remotely through a centralized management application. Interfaces with LDAP structures enable policy-based management of server platforms.
The RTC RMEF 186 is connected to the internal LPC bus 179, similar to the real RTC function 64 shown in
The Legacy Logic (LL) RMEF 185 is connected to the internal LPC bus 179, similar to the real LL function 68 shown in
The Super I/O RMEF 184 is connected to the internal LPC bus 179, similar to the real Super I/O function 66 shown in
The Management CPU function 178 is a low power, typically RISC architecture, processor that manages and controls the whole Emulated south-side sub-system 150. This processor runs code initially stored in a non-volatile memory space (typically flash based) 182. It also uses a volatile memory space such as RAM or SRAM 180. The Management CPU function 178 typically comprises an integrated bus interface to enable interfacing with the IMB 156 and an integrated memory controller to interface with the memory 180. Depending on the implementation details, the flash function 182 may interface with the Management CPU 178 through the IMB 156 or through a dedicated interface bus and signals.
Typically the Management CPU 178 and the entire Emulated south-side sub-system 150 components are designed for low power operation to enable efficient use when the host is off, failed or malfunctioning. The local power supply function 176 is responsible for providing power to the Emulated south-side sub-system 150, typically with the host power supplies 83 as the primary power sources. When the host power supplies 83 are off, an always-on power plane may still power the Emulated south-side sub-system 150 circuitry through the local power supplies 176. If host power is not available at all, the local power supply may still operate through the Power over Ethernet function 174 or through a local battery (not shown here). The Power over Ethernet (PoE) function 174 extracts power from the management LAN to enable Emulated south-side sub-system 150 operation during power-out states. It typically connects to the management LAN interface 172 magnetics to receive power carried between the TX and RX LAN wires or through unused wire pairs.
The Management LAN interface function 172 interfaces between the IMB 156 and the management LAN 110. It typically comprises a 10/100 Mbps Media Access Controller (MAC), Physical Layer circuitry and LAN magnetics to provide matching and isolation.
An optional Crypto Processor function 183 may be added on the IMB 156 to augment the CPU function 178 in complex operations that may be needed for management traffic encryption/decryption and authentication. This function may be adapted to accelerate SSL, IPSEC, DES, 3DES, AES, RNG etc.
Components that are not suitable for integration in the digital core 196 may be connected as external components to reduce core complexity and cost. These external components typically include the optional OS flash 170, the Flash 182, the RAM 180, the analog parts of the management LAN interface—the LAN PHY 172a and LAN magnetics 172b, the power supplies 176 and the Power Over Ethernet function 174. Additional support circuitry such as reset generation and main clock 197 may or may not be integrated in the core 196 depending on the specific design.
This type of implementation reduces the complexity, size and cost of the server embodiment and provides secured remote access to that server without the need for KVM-specific components.
The I/O Chain non-coherent Hyper-Transport bus 19 couples the Hyper-Transport Tunnel 202 of the integrated Emulated server south side sub-system to the host. The Hyper-Transport Tunnel 202 is further connected to two symmetric Hyper-Transport to PCI-X or PCI Express bridges 206 and 208 to interface the two connected PCI-X or PCI Express busses 205 and 207 respectively.
The PCI-X or PCI Express bus 205 couples with an integrated storage interface function 204 to enable connection of external or internal disks 24. Disk interface may be IDE, EIDE, PATA, SATA, SCSI, RAID, Fiber Channel or any other suitable disk interface technology.
The second PCI-X or PCI Express bus 207 is coupled with two integrated LAN MACs 209a and 210a. These MACs are typically Gigabit LAN or faster.
MAC 209a is coupled with external LAN PHY 209b and the PHY is coupled to the LAN magnetics 209c that interfaces with external LAN cabling 209d.
MAC 210a is coupled with external LAN PHY 210b and the PHY is coupled to the LAN magnetics 210c that interfaces with external LAN cabling 210d.
The PHY functions may be implemented internally on the core chip 198 to reduce external components needed.
Hyper-Transport tunnel 202 is also coupled with a downstream Hyper-Transport bus segment 26 that interfaces with the Emulated South-side function Hyper-Transport cave 152 shown in
This higher integration further reduces the server chipset parts count, reduces the cost and size of this server and improves its manageability and reliability. The integration of the emulated functions into a single chip 198 with the real functions such as LAN and storage interfaces is efficient in terms of I/O pin count as emulated functions are narrowed into a single management LAN interface with just a few I/O lines.
In this embodiment there are two Hyper-Transport links 19 and 20 connected to the integrated core chip 222.
Hyper-Transport tunnel 202 is coupled to one host node through Hyper-Transport link 19 to connect PCI-X or PCI Express bridges 206 and 208.
Hyper-Transport link 26, connected internally to the South-side emulator sub-system, is coupled to another host node through Hyper-Transport link 20.
This arrangement enables LAN and storage accesses to pass through Hyper-Transport link 19 while slower I/O such as BIOS, USB, video etc. passes through the other Hyper-Transport link 20.
This implementation is particularly suitable for large multi-processor systems having a “ladder” topology with 6 or 8 processors with coherent Hyper-Transport cross links.
Processor (CPU) 302 processes the data to perform useful server tasks and run various programs. The CPU 302 typically contains on-chip L2 cache to improve memory usage performance. The Front Side Bus (FSB) 303 interconnects the said processor (CPU) 302 and the Memory Control Hub (MCH) 305 that serves as the North Bridge. The FSB 303 typically carries 64 data bits and 32 address bits running at a 400 MHz to 1.2 GHz clock. The MCH 305 interfaces between the CPU and the two channels of memory—Memory channel A 303a and Memory channel B 303b—to enable access to memory banks Memory A 304a and Memory B 304b respectively. Each memory channel typically has 64 data bits with 8-bit Error Correction (ECC), 13 bits of address and various other control and clock signals to interface with Double Data Rate (DDR) memory chips. The MCH 305 also bridges between the CPU 302 and the PCI-X or PCI Express graphics video card bus 310 to interface with the PCI-X or PCI Express video card 312. The said video card 312 is connected to a local display via an analog or digital video interface 320 that extends through the server system chassis 321.
Direct Media Interface (DMI) or Enterprise South Bridge Interface (ESI) 311 interconnects the MCH 305 and the I/O and control functions at the I/O Controller Hub (ICH) 323. The DMI/ESI was developed by Intel to meet the I/O device-to-memory bandwidth requirements of PCI-X, PCI Express, SATA, USB 2.0, High Definition (HD) Audio and others. It is a proprietary serial interface, based on PCIe, and offers 2 GB/s maximum bandwidth. The DMI/ESI integrates priority-based servicing to allow concurrent traffic and isochronous data transfer capabilities for an HD Audio Codec. DMI comprises 4 differential transmit pairs, 4 differential receive pairs and various control signals. ESI is based on similar transmit/receive pairs as DMI but with an additional clock pair.
The I/O functions attached to the DMI/ESI link 311 include:
In this server implementation 301, various real functions were replaced by the South Side emulator block 333 to enable secured remote management and monitoring through the management LAN 347.
USB Host controller 322 of server 300 in
Similarly Audio CODEC 52 of server 300 in
Similarly PCI-X or PCI Express video controller card 312 of server 300 in
Similarly BIOS 62 of server 300 in
Similarly RTC function 64 of server 300 in
Similarly the Super I/O function 66 of server 300 in
The IMB 156, located in the South Side emulator block 333, connects the various RMEFs to the management CPU 178 and the management LAN interface 172 to enable remote management functions through a management LAN 347 remote console/application.
FIG. 12—illustrates a block diagram of a server 395 according to the present invention, similar to the single CPU server shown in
The ICH USB Host controller function 322 of
This single chip integration offers a reduced number of server components and therefore reduced cost and size. It also does not require any additional management cards or KVM to enable full remote management.
PCI or PCI-X or PCI Express Interface 233 couples the RMEF 157 to the system PCI bus 155 to enable host access to the video controller resources. PCI or PCI-X or PCI Express Interface 233 may be configured as a PCI device or as a PCI bus master as necessary.
PCI or PCI-X or PCI Express Interface 233 is also coupled to the internal Video Controller RMEF bus 231. This bus is typically 64 or 128 bits wide to maximize video memory bandwidth. The internal RMEF bus 231 accepts video commands and data from the host through the PCI or PCI-X or PCI Express interface 233 and from the 128 bit graphics engine 234. The 128 bit graphics engine 234 is similar to a standard server graphics engine with standard video BIOS registers, 2D video operations, windowing, text and drawing engines. The 128 bit graphics engine 234 accesses and manipulates the video (frame) memory 158 through the memory controller 232 coupled to the internal RMEF bus 231.
Two sets of registers are coupled together to maintain the various RMEF function settings. The External control registers 244 are accessible to the host through the internal bus 231 and PCI or PCI-X or PCI Express interface 233. These registers may be monitored and manipulated remotely by the management console/application through the IMB 156. The Internal control registers 246 are accessible only to the management processor through the IMB and not to the host. This set of control registers may be used to define emulation-specific parameters such as compression mode, compression quality etc.
The two sets of control registers may be connected to all other RMEF modules to control and monitor required functions.
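A hypothetical illustration of the two register sets and their visibility split; the actual register maps of blocks 244 and 246 are implementation specific, and every field name here is an assumption:

```c
#include <stdint.h>

/* External control registers 244: host-visible through the PCI(-X/Express)
   interface 233, and also observable remotely via the IMB 156. */
struct ext_ctrl_regs {
    uint32_t display_mode;     /* resolution / color depth selected by host */
    uint32_t cursor_x;         /* hardware cursor position                  */
    uint32_t cursor_y;
    uint32_t frame_base;       /* frame buffer base in video memory 158    */
};

/* Internal control registers 246: reachable only over the IMB 156 and
   invisible to the host -- emulation-specific knobs. */
struct int_ctrl_regs {
    uint32_t compression_mode;    /* e.g. off / lossless / lossy     */
    uint32_t compression_quality; /* lossy quality setting           */
    uint32_t max_frame_rate;      /* throttle toward slow WAN links  */
};
```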
A Video data FIFO (First In First Out) buffer 236 is used to temporarily store frame data read from the video memory 158. The FIFO is needed to assure constant and continuous flow of video frame data out of the RMEF through the optional Video Compression module 240 and IMB 156. As high resolution, high color depth and high frame rates may generate a very large amount of raw video data, a video compression function may be added to compress this data on the fly. This module 240 gets the uncompressed video data stream from the Video Data FIFO 236 and applies a predefined industry-standard compression algorithm such as VNC or any other non-standard compression algorithm as necessary. A matching decompression algorithm needs to be installed in the remote management console/application to enable video reconstruction and playback. Compression may be lossless or lossy as needed, depending on the actual network bandwidth available at the site.
Hardware cursor function 238 generates the required cursor graphic pattern (for example the mouse pointer) to be superimposed on the video frame stored in the Video data FIFO 236. Cursor location and characteristics may be controlled by the host through the internal bus 231 and the PCI or PCI-X or PCI Express bus interface 233. Cursor graphics may be stored locally at the Hardware cursor function 238 or at a pre-defined area in the video memory 158. The video stream flowing out of the video data FIFO 236 and the hardware cursor video moving out of the Hardware cursor function 238 are synchronized and combined at the Video compression function input to create the required superposition effect.
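The following sketch illustrates the data path just described: a scanline drawn from the video data FIFO has the cursor pattern superimposed and is then compressed. A trivial run-length encoder stands in for the real codec (VNC-style or otherwise), and all buffer shapes are assumptions:

```c
#include <stdint.h>
#include <stddef.h>

#define CURSOR_W 16  /* assumed cursor width in pixels */

/* Superimpose the opaque cursor pixels onto one scanline if the cursor
   intersects it (the superposition effect performed at the compressor input). */
void overlay_cursor(uint32_t *line, int width, int line_y,
                    int cur_x, int cur_y, int cur_h,
                    const uint32_t *cursor, const uint8_t *mask) {
    if (line_y < cur_y || line_y >= cur_y + cur_h) return;
    const int row = line_y - cur_y;
    for (int i = 0; i < CURSOR_W && cur_x + i < width; i++)
        if (mask[row * CURSOR_W + i])               /* 1 = opaque cursor pixel */
            line[cur_x + i] = cursor[row * CURSOR_W + i];
}

/* Trivial run-length encoder: emits (count, pixel) pairs.
   Returns the number of 32-bit words written to `out`. */
size_t rle_encode(const uint32_t *line, int width, uint32_t *out) {
    size_t n = 0;
    for (int i = 0; i < width; ) {
        int run = 1;
        while (i + run < width && run < 255 && line[i + run] == line[i]) run++;
        out[n++] = (uint32_t)run;
        out[n++] = line[i];
        i += run;
    }
    return n;
}
```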
Video clock and timing function 242 generates all synchronized clock outputs required for the operation of all other Video Controller RMEF functions. This function is controlled by the various External control registers 244 and Internal control registers 246. The Video clock and timing function 242 is also used to generate the horizontal and vertical sync signals required for compression along with the video streams.
Using industry-standard protocols such as VESA DDC, the remote console display type can be detected and transferred to the Video Controller RMEF to enable automatic adjustment of the transmitted video settings to match the remotely connected display.
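DDC display detection ultimately yields an EDID block from which the preferred video mode can be read. A sketch of extracting that mode from a standard EDID 1.x structure, assuming a well-formed 128-byte block has already been retrieved:

```c
#include <stdint.h>
#include <stdio.h>

/* Extract the preferred video mode from the first detailed timing
   descriptor of a 128-byte EDID block read over VESA DDC. */
void edid_preferred_mode(const uint8_t edid[128]) {
    const uint8_t *d = &edid[54];   /* first detailed timing descriptor */
    unsigned pclk_khz = (unsigned)(d[0] | (d[1] << 8)) * 10;  /* stored in 10 kHz units */
    unsigned hactive  = d[2] | ((d[4] & 0xF0) << 4);  /* low 8 bits + upper nibble */
    unsigned vactive  = d[5] | ((d[7] & 0xF0) << 4);
    printf("preferred mode: %ux%u @ %u kHz pixel clock\n",
           hactive, vactive, pclk_khz);
}
```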
FIG. 14—illustrates a more detailed block diagram of the BIOS RMEF 188 of the present invention shown in
This boot process will be better illustrated by referring to both
When power is first applied to the management power supply 176, it powers up the management CPU function 178 of
As management CPU 178 of
Upon successful test completion, or if desired during the self-test, the management CPU 178 initializes the management LAN interface 172 and waits to receive an IP address or sets a static IP as configured. This step is shown as 1004 in
If the management CPU function 178 fails to initialize the management LAN interface 172, does not have LINK, or fails to receive an IP address, it switches to a local mode. This is shown in
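The LAN initialization and fallback logic of step 1004 might be structured as below; every helper function is a hypothetical placeholder for the management firmware's actual services:

```c
#include <stdbool.h>

/* Hypothetical helpers -- actual management firmware APIs will differ. */
extern bool lan_link_up(void);
extern bool dhcp_acquire(unsigned timeout_ms);
extern bool static_ip_configured(void);
extern void apply_static_ip(void);
extern void enter_local_mode(void);  /* boot without a remote console */

/* Step 1004: bring up the management LAN, falling back to local mode
   when no link or no address can be obtained. */
void mgmt_net_init(void) {
    if (!lan_link_up()) { enter_local_mode(); return; }

    if (static_ip_configured())
        apply_static_ip();
    else if (!dhcp_acquire(10000)) {   /* wait up to 10 s for an IP address */
        enter_local_mode();
        return;
    }
    /* network is up -- continue to authentication (step 1005) */
}
```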
Upon successful establishment of the management network connection, the management CPU attempts to authenticate itself against the management console or management application. This two-sided (mutual) authentication is necessary to confirm that the managed server belongs to the group of manageable servers for that management console and application. From the other side, this authentication is essential for the managed server to assure that the connected management platform or application is authorized to configure or monitor it. This authentication step is shown as 1005 in
This authentication process may be assisted by the management crypto processor function 183 (shown in
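One plausible shape for the two-sided authentication is a shared-key challenge-response, sketched below. The primitives are placeholders; a production design would rely on the crypto processor 183 and an established protocol rather than this outline:

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* Hypothetical primitives -- in a real design these would be backed by
   the crypto processor 183 and a vetted protocol (e.g. TLS). */
extern void random_bytes(uint8_t *out, size_t n);
extern void hmac_sha256(const uint8_t *key, size_t keylen,
                        const uint8_t *msg, size_t msglen, uint8_t mac[32]);

/* Server side: prove knowledge of the shared provisioning key by
   answering the console's challenge. The console runs the mirror-image
   exchange, making the authentication two-sided. */
void respond_to_challenge(const uint8_t *key, size_t keylen,
                          const uint8_t challenge[16], uint8_t response[32]) {
    hmac_sha256(key, keylen, challenge, 16, response);
}

/* Either side: verify the peer's answer to a challenge we issued. */
bool verify_peer(const uint8_t *key, size_t keylen,
                 const uint8_t challenge[16], const uint8_t peer_mac[32]) {
    uint8_t expect[32];
    hmac_sha256(key, keylen, challenge, 16, expect);
    return memcmp(expect, peer_mac, 32) == 0;  /* use a constant-time compare in practice */
}
```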
At the next step (1007 in
If the management console or application does not release or authorize the server boot, the management program will enter a hold state shown as step 1008 in
At any time after communication with the remote management console/application is established, the management CPU can monitor and report various server states and parameters through the coupled real functions such as the SMBus host controller 190 of
Once the server boot is released by the management console/application, or if in local mode, the management CPU 178 will initialize and load the latest content into the various RMEFs and real functions coupled to the IMB 156. This step is shown as 1010 in
At the next step 1012 of
During the host boot (step 1014 of
If the remote console is connected and set for remote shadowing, at the next step (1018 of
The said remote USB connection may also enable remote connection of USB storage devices in order to load and install various software applications on the host server.
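Remote shadowing of the BIOS/POST screens (step 1018) can be done cheaply while the host is still in legacy text mode, since that screen is a conventional 80x25 array of character/attribute byte pairs at the legacy VGA text address (0xB8000). A sketch that forwards only changed cells to the remote console; the transport function is hypothetical:

```c
#include <stdint.h>
#include <stddef.h>

#define TEXT_COLS 80
#define TEXT_ROWS 25

/* Hypothetical transport to the remote console over the management LAN. */
extern void console_send_cell(int row, int col, uint8_t ch, uint8_t attr);

/* Shadow the legacy text screen that BIOS/POST writes: each cell is a
   character byte followed by an attribute byte. `vram` is the emulated
   text-mode region the Video Controller RMEF already holds; `prev`
   remembers the last state sent so only changed cells cross the LAN. */
void shadow_text_screen(const volatile uint8_t *vram,
                        uint8_t prev[TEXT_ROWS][TEXT_COLS][2]) {
    for (int r = 0; r < TEXT_ROWS; r++)
        for (int c = 0; c < TEXT_COLS; c++) {
            size_t off = ((size_t)r * TEXT_COLS + c) * 2;
            uint8_t ch = vram[off], attr = vram[off + 1];
            if (ch != prev[r][c][0] || attr != prev[r][c][1]) {
                console_send_cell(r, c, ch, attr);  /* send only changed cells */
                prev[r][c][0] = ch;
                prev[r][c][1] = attr;
            }
        }
}
```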
As the first host core finishes executing POST and BIOS, the host is ready for the next core boot (if multi-core), or the cores will start loading the Operating System in parallel, as shown in
At the end of this step the server will be ready to load the various installed applications—step 1020 in
It should be noted that at any time during the boot process or after the boot, the remote management console/application may command host reset if needed.
Router, Firewall, VPN or modem 420a couples the management LAN switch 410a to the Wide Area Network (WAN) 422. This WAN is connected to remote site 402 through another Router, Firewall, VPN or modem 420b and a LAN switch or router 410b. The administrator's management console computer 426 enables remote connection to each one of the 3 servers 100a, 100b and 100c located remotely at site 401. Using the management function of the managed servers, the administrator at site 402 can see video images transmitted from the said servers' Video Controller RMEFs on the management console display 430. The management console may be a local application installed on the computer 426 or a web browser that connects to the said servers using loadable viewers such as Active-X or JAVA components. The administrator's keyboard 434 may be linked to the managed server using the server management LAN and the USB Host Controller RMEF. Similarly, the administrator's mouse 432 may be linked to the managed server to enable interaction, installation and monitoring.
Storage device 436 may be connected by the administrator to the management console computer 426 to enable data upload or download to the managed remote server. Such a storage device 436 may be a CD drive, DVD drive, hard-disk, flash drive etc.
Management server 415b and management database 418b located at site 402 are similar to server 415a and database 418a located locally at site 401. These may serve as a centralized management function for multiple remote sites like 401.
Different security schemes may be used to enable secured management functions. These may include multiple administrative permission levels centrally defined in a management tree, administrator authentication, session encryption, use of firewalls and VPNs, server authentication and many other security options to protect this critical function from internal or external threats.
Optional management blade 505 comprises a similar managed server 415c or a degraded special-purpose core. An internal database 418c, in the form of local disk or flash storage, may be installed to store local software components and settings of the managed blades or 3DMC cells.
The administrator may interact with the managed servers using a similar management console computer 426 as shown in
While the invention has been described with reference to certain exemplary embodiments, various modifications will be readily apparent to and may be readily accomplished by persons skilled in the art without departing from the spirit and scope of the above teachings.
It should be understood that features and/or steps described with respect to one embodiment may be used with other embodiments and that not all embodiments of the invention have all of the features and/or steps shown in a particular figure or described with respect to one of the embodiments. Variations of embodiments described will occur to persons of the art.
It is noted that some of the above described embodiments may describe the best mode contemplated by the inventors and therefore include structure, acts or details of structures and acts that may not be essential to the invention and which are described as examples. Structure and acts described herein are replaceable by equivalents which perform the same function, even if the structure or acts are different, as known in the art. Therefore, the scope of the invention is limited only by the elements and limitations as used in the claims. The terms “comprise”, “include” and their conjugates as used herein mean “include but are not necessarily limited to”.
Number | Date | Country
---|---|---
60957521 | Aug 2007 | US