1. Technical Field
The present invention relates in general to network communication and, in particular, to a system for managing the update of software images for computer networks.
2. Description of the Related Art
As is known in the art, a communications network is a collection of terminals, links, and nodes connected together to enable communication between users of the terminals. Each terminal in the network must have a unique address so messages or connections can be routed to the correct recipients. Messages are generated by a sending or source terminal, then pass through the intermediate network of links and nodes until they arrive at the receiving or destination terminal. The intermediate network nodes handle these messages and route them down the correct network link towards their final destination terminal.
A large communications network typically includes many switches, which operate independently at the management, control and data planes. Consequently, in conventional networks, each switch must be individually configured, since each switch implements its own means of handling data, control, and management traffic. Moreover, each switch forwards data, control, and management traffic independently of similar traffic handled by any other of the switches.
To maintain and/or improve network communication, software or firmware updates to installed network infrastructure (including network switches) are occasionally required. Further, network capacity and functionality are enhanced by installing new switches and/or replacing older switches.
In accordance with at least one embodiment, methods, systems and program products for updating system image(s) in a heterogeneous packet-switched network are disclosed.
In at least one embodiment of a switching network, the switching network has a plurality of switches including at least a switch and a managing master switch. At the managing master switch, a first capability vector (CV) is received from the switch. The managing master switch determines whether the first CV is compatible with at least a second CV in a network membership data structure that records CVs of multiple switches in the switching network. In response to detecting an incompatibility, the managing master switch initiates an image update to an image of the switch. In response to a failure of the image update at the switch, the switch boots utilizing a mini-DC module that reestablishes communication between the switch and the managing master switch and retries the image update.
Disclosed herein are methods, systems and program products for updating system image(s) in a heterogeneous packet-switched network, which may include switches from multiple vendors and/or switches with differing hardware and/or software. The update(s) of switch image(s) is/are preferably centrally managed by a managing master switch in the packet-switched network. By updating the system images of one or more switches in the packet-switched network, the managing master switch brings the packet-switched network into a consistent state in which all member switches of the packet-switched network are running the same or compatible switch images.
With reference now to the figures and with particular reference to
Referring now to
DFP switching network 200 includes two or more tiers of switches, which in the instant embodiment includes a lower tier having a plurality of follower switches, including follower switches 202a-202d, and an upper tier having a plurality of master switches, including master switches 204a-204b. In an embodiment with two tiers as shown, a port of each master switch 204 is directly connected by one of inter-tier links 206 to one of the ports of each follower switch 202, and a port of each master switch 204 is coupled directly or indirectly to a port of at least one other master switch 204 by a master link 208. A port of each master switch 204a-204b and follower switch 202a-202d is coupled directly or indirectly to a port of File Transfer Protocol (FTP) server 209 by server-switch links 211 and 213. When such distinctions are relevant, ports supporting switch-to-switch communication via inter-tier links 206 are referred to herein as “inter-switch ports,” and other ports (e.g., of follower switches 202a-202d and FTP server 209) are referred to as “data ports.”
In a preferred embodiment, follower switches 202 are configured to operate on the data plane in a pass-through mode, meaning that all ingress data traffic received at data ports 210 of follower switches 202 (e.g., from host platforms) is forwarded by follower switches 202 via inter-switch ports and inter-tier links 206 to one of master switches 204. Master switches 204 in turn serve as the fabric for the data traffic (hence the notion of a distributed fabric) and implement all packet switching and routing for the data traffic. With this arrangement, data traffic may be forwarded, for example, in the first exemplary flow indicated by arrows 212a-212d and the second exemplary flow indicated by arrows 214a-214e.
As will be appreciated, the centralization of data plane switching and routing for follower switches 202 in master switches 204 implies that master switches 204 have knowledge of the ingress data ports of follower switches 202 on which data traffic was received. In a preferred embodiment, switch-to-switch communication via links 206, 208 employs a Layer 2 protocol, such as the Inter-Switch Link (ISL) protocol developed by Cisco Systems or IEEE 802.1QinQ, that utilizes explicit tagging to establish multiple Layer 2 virtual local area networks (VLANs) over DFP switching network 200. Each follower switch 202 preferably applies VLAN tags (also known as service tags (S-tags)) to data frames to communicate to the recipient master switch 204 the ingress data port 210 on the follower switch 202 on which the data frame was received. In alternative embodiments, the ingress data port can be communicated by another identifier, for example, a MAC-in-MAC header, a unique MAC address, an IP-in-IP header, etc. As discussed further below, each data port 210 on each follower switch 202 has a corresponding virtual port (or vport) on each master switch 204, and data frames ingressing on the data port 210 of a follower switch 202 are handled as if ingressing on the corresponding vport of the recipient master switch 204.
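As a minimal illustrative sketch (not from the specification; the table layout, switch identifiers, and the `resolve_vport` helper are assumptions), the S-tag-to-vport correspondence described above might be modeled as:

```python
# Hypothetical sketch of the vport lookup described above: each data port
# on a follower switch has a corresponding virtual port (vport) on each
# master switch, and the S-tag applied by the follower identifies the
# ingress data port. Key shape and names are illustrative assumptions.

vport_table = {
    # (follower switch ID, S-tag in the frame) -> vport on the master
    ("follower-202a", 10): "vport-0",
    ("follower-202a", 11): "vport-1",
    ("follower-202b", 10): "vport-2",
}

def resolve_vport(follower_id: str, s_tag: int) -> str:
    # The master switch handles the frame as if it had ingressed on
    # the returned vport.
    return vport_table[(follower_id, s_tag)]

print(resolve_vport("follower-202a", 11))  # vport-1
```

A real implementation would populate such a table as follower switches register with the master; here it is static purely for illustration.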
Management of DFP switching network 200 is preferably implemented by a single master switch 204, for example, master switch 204a, herein referred to as the managing master switch. In the event of a failure of managing master switch 204a (as detected by the loss of heartbeat messaging with managing master switch 204a via master link 208), another master switch 204b, which may be predetermined or elected from among the remaining operative master switches 204, preferably automatically assumes the role of the managing master switch and implements centralized management and control of DFP switching network 200. In preparation for a failover operation, managing master switch 204a pushes its image information to the other master switches 204, thus enabling seamless failover.
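The failover behavior above can be sketched minimally as follows, assuming a fixed heartbeat timeout and a lowest-ID election rule (both assumptions; the specification permits either a predetermined or an elected successor):

```python
# Hypothetical sketch of master failover: a master whose heartbeat has not
# been seen within the timeout is treated as failed, and a deterministic
# election (lowest switch ID here, an assumed rule) picks the new managing
# master from the remaining operative masters.

HEARTBEAT_TIMEOUT = 3.0  # seconds; an assumed value

def elect_manager(last_heartbeat: dict, now: float) -> str:
    # last_heartbeat maps master switch ID -> timestamp of its last heartbeat.
    alive = [switch_id for switch_id, ts in last_heartbeat.items()
             if now - ts < HEARTBEAT_TIMEOUT]
    return min(alive)

# 204a last sent a heartbeat 10 s ago, so 204b assumes the managing role.
print(elect_manager({"204a": 0.0, "204b": 9.0}, now=10.0))  # 204b
```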
With reference now to
To switch data frames, each member switch 202, 204 within DFP switching network 200 generally includes a plurality of data ports, a switching fabric and a switch controller, which can be implemented with one or more centralized or distributed, special-purpose or general-purpose processing elements or logic devices that implement control entirely in hardware, or more commonly, through the execution of firmware and/or software by a processing element. In master switches 204, the switch controller 302 includes a management module 304 for managing DFP network 200. In a preferred embodiment, only the management module 304 of the managing master switch (i.e., managing master switch 204a or another master switch 204b operating in its stead) is operative at any given time.
Management module 304 preferably includes a management interface 306, for example, an XML or HTML interface accessible to an administrator stationed at a network-connected administrator console (e.g., one of clients 110a-110c) in response to login and entry of administrative credentials. Management module 304, which permits the administrator to centrally manage and control all member switches of DFP switching network 200, preferably presents via management interface 306 a global view of all ports residing on all switches (e.g., master switches 204 and follower switches 202) in a DFP switching network 200.
As further shown in
Referring again to
As further illustrated in
With reference now to
The process of
In response to acquiring the capability vectors of the network switches to which it is connected, managing master switch 204a determines a set of the network switches running compatible images and records the identities of the compatible network switches and their capability vectors in network membership table 400 (block 504). In addition, managing master switch 204a initializes the identified set of compatible network switches as member switches 202, 204 of DFP switching network 200 (block 506). Switches running under incompatible images, if any, are not permitted to immediately join DFP switching network 200, and while capable of communication with master switches 204, remain under independent management and control until these excluded switches are updated to run under a compatible image, as described below with reference to
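The initialization at blocks 504-506 can be sketched as follows (a hedged illustration; the feature-set comparison and the data shapes are assumptions, not the specification's capability-vector encoding):

```python
# Hypothetical sketch of membership initialization: switches reporting a
# capability vector compatible with the master's join the membership table;
# incompatible switches are excluded until their images are updated.

def initialize_membership(master_features: frozenset, reported: dict):
    members, excluded = {}, []
    for switch_id, features in reported.items():
        if features == master_features:
            members[switch_id] = features  # recorded in membership table 400
        else:
            excluded.append(switch_id)     # remains independently managed
    return members, excluded

master = frozenset({"isl", "s-tag"})
reported = {
    "202a": frozenset({"isl", "s-tag"}),
    "202b": frozenset({"isl"}),  # incompatible image
}
members, excluded = initialize_membership(master, reported)
print(sorted(members), excluded)  # ['202a'] ['202b']
```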
With reference now to
The illustrated process begins at block 600 and then proceeds to block 602, which depicts managing master switch 204a of DFP switching network 200 receiving a capability vector from a network switch to which it is directly connected by an inter-switch link 211, 213. The capability vector preferably reports the current version of the image running on the network switch. In response, managing master switch 204a determines, via its management module 304, whether the image version reported by the network switch is the same as that contained in the combined image 314. If the image versions match, no image update is necessary, and the process proceeds through page connector A to block 620, which is described below.
If, however, managing master switch 204a detects a difference in image versions at block 604, managing master switch 204a determines whether the difference in image versions merits an update of the member switch's image (block 606). In this regard, it should be noted that it is not always necessary that managing master switch 204a and member switches 202, 204 have the same image version. For example, a follower switch 202 may have a higher release number than managing master switch 204a and still share the same capability vector. For this reason, in one embodiment, decision block 606 represents a comparison between the capability vector acquired from the network switch and the capability vector of the corresponding entry 402 in membership table 400 to determine whether the difference in versions causes an incompatibility in capabilities between the images.
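The point that release numbers may differ while capabilities still match can be sketched as follows (a hypothetical encoding; the specification does not fix a capability-vector format, so the version string and feature set here are assumptions):

```python
# Hypothetical sketch of the block-606 comparison: compatibility is decided
# on the capability vector's feature set, not the raw release number, so a
# follower on a newer release can still be compatible with the master.

from dataclasses import dataclass

@dataclass(frozen=True)
class CapabilityVector:
    image_version: str   # release number, e.g. "7.2.1"
    features: frozenset  # feature identifiers the image supports

def compatible(a: CapabilityVector, b: CapabilityVector) -> bool:
    # Versions may differ; only the supported feature sets must agree.
    return a.features == b.features

master_cv = CapabilityVector("7.2.1", frozenset({"isl", "s-tag", "vport"}))
newer_cv = CapabilityVector("7.2.3", frozenset({"isl", "s-tag", "vport"}))
stale_cv = CapabilityVector("6.9.0", frozenset({"isl"}))

print(compatible(master_cv, newer_cv))  # True: higher release, same CV
print(compatible(master_cv, stale_cv))  # False: merits an image update
```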
Incompatibility between an installed image and a more recent image within combined image 314 can arise for a number of reasons. For example, one source of incompatibility is a hardware or software update of some, but not all of member switches 202, 204. Such an update can lead to an installed image version not supporting a feature that the image version in combined image 314 requires. Other causes of incompatibility include, but are not limited to, protocol updates and changes in management and control data. It should therefore be appreciated that incompatibilities between switch images are not limited to those caused by data plane changes, but can be caused by changes along any of the network planes, including the management plane, control plane, and/or data plane.
If no incompatibility is detected at block 606, the process can return to block 604, and no switch image update is required. However, if an incompatibility is detected at block 606 (or if managing master switch 204a optionally determines to update the image despite its compatibility), managing master switch 204a automatically selects a compatible image version to which the network switch will be updated (block 608). Typically, managing master switch 204a initially searches FTP server 209 to locate a compatible image with which to perform the image update. If FTP server 209 is not configured or is unavailable, managing master switch 204a searches its own local file system (e.g., RAM disk 316) to locate the compatible image.
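The search order above (FTP server first, then the master's local file system) can be sketched as a simple fallback loop; the catalog dictionaries and `locate_image` name are assumptions for illustration:

```python
# Hypothetical sketch of the block-608 image selection: search the FTP
# server's catalog first and fall back to the master's local file system
# (e.g., RAM disk 316). Catalogs are plain dicts here for illustration.

def locate_image(required_features: frozenset, ftp_catalog, local_catalog):
    # ftp_catalog is None when FTP server 209 is unconfigured or unavailable.
    for catalog in (ftp_catalog, local_catalog):
        if catalog is None:
            continue
        for image_name, features in catalog.items():
            if features == required_features:
                return image_name
    raise LookupError("no compatible image available")

required = frozenset({"isl", "s-tag"})
local = {"image-7.2.bin": required}
print(locate_image(required, None, local))  # image-7.2.bin (local fallback)
```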
Upon locating the compatible image, managing master switch 204a initiates the update of the incompatible network switch (block 610). In one preferred embodiment, managing master switch 204a communicates a push request to FTP server 209 to push the updated switch image to the incompatible network switch. Alternatively, managing master switch 204a can communicate a download command to the incompatible member switch, which would in turn download the image directly from FTP server 209. In another alternative embodiment, managing master switch 204a may push the compatible image from its local file system (e.g., RAM disk 316).
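The three delivery options above can be sketched as a simple dispatch (purely illustrative; the mode names and description strings are assumptions, and actual transport is not simulated):

```python
# Hypothetical sketch of the block-610 update initiation: the managing
# master may ask the FTP server to push the image, command the switch to
# pull it, or push it from its own local file system (e.g., RAM disk 316).

def initiate_update(mode: str, image: str, switch_id: str) -> str:
    if mode == "ftp-push":
        return f"FTP server pushes {image} to {switch_id}"
    if mode == "switch-pull":
        return f"{switch_id} downloads {image} from FTP server"
    if mode == "local-push":
        return f"master pushes {image} to {switch_id} from RAM disk"
    raise ValueError(f"unknown update mode: {mode}")

print(initiate_update("switch-pull", "image-7.2.bin", "202b"))
```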
Next, at block 612, managing master switch 204a receives a revised capability vector from the previously incompatible network switch and updates network membership table 400. As depicted in decision block 614, managing master switch 204a determines whether the newly received capability vector indicates a successful update of the previously incompatible network switch. If the update was successful, that is, if the capability vector reported the image selected by managing master switch 204a at block 608, the process passes to block 620, which depicts managing master switch 204a updating network membership table 400 with the switch ID and feature information from the capability vector. The process thereafter returns to decision block 604. However, if managing master switch 204a determines at block 614 that the update was not successful, managing master switch 204a decides at block 616 whether to retry the update to the image of the incompatible network switch. If so, the process returns to block 608, which depicts managing master switch 204a selecting a possibly different compatible image with which to update the network switch. However, if managing master switch 204a does not elect to retry the image update, the process terminates at block 618.
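The verify-and-retry decision spanning blocks 612-618 can be condensed into a small sketch (the retry budget and the return labels are illustrative assumptions, not the specification's encoding):

```python
# Hypothetical sketch of the verify-and-retry decision (blocks 612-618):
# the newly reported capability vector is checked against the image the
# master selected; on mismatch the master may retry with another image.

def confirm_or_retry(selected_image: str, reported_image: str,
                     retries_left: int) -> str:
    if reported_image == selected_image:
        return "record-in-membership-table"  # block 620: update succeeded
    if retries_left > 0:
        return "retry-with-new-image"        # back to block 608
    return "terminate"                       # block 618: give up

print(confirm_or_retry("image-7.2.bin", "image-7.2.bin", 2))
print(confirm_or_retry("image-7.2.bin", "image-6.9.bin", 0))
```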
With reference now to
The depicted process begins at block 700 and thereafter proceeds to block 702, which depicts a network switch that is directly connected to managing master switch 204a determining whether an image update has been received (e.g., due to managing master switch 204a pushing an updated image or commanding the network switch to pull the updated image). If not, the process iterates at block 702. If, however, an image update has been received, the network switch attempts to install the image update (block 704). As indicated at block 706, if the installation is successful, the process passes to block 712, which depicts the network switch transmitting a new capability vector to managing master switch 204a, as discussed above with reference to block 612 of
Returning to block 706, if the installation of the updated image fails, meaning that the network switch has crashed, the network switch boots with mini-DC module 318 (block 710). Mini-DC module 318 is pre-loaded when the network switch is first initialized and serves as a backup/default OS that loads in the event of an image update failure. While mini-DC module 318 contains all the basic hardware and configuration-related information, mini-DC module 318 has a fixed capability vector, which the network switch reports to managing master switch 204a at block 712. In response to receipt of this fixed capability vector, managing master switch 204a will discover an incompatibility when the fixed capability vector is compared to that listed in network membership table 400, which will trigger managing master switch 204a to initiate an update to the incompatible image (as discussed above with reference to
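The mini-DC fallback can be sketched as follows (names and data shapes are assumptions; the specification only requires that the mini-DC module's capability vector be fixed so the mismatch re-triggers the update):

```python
# Hypothetical sketch of the mini-DC fallback: a failed install boots the
# pre-loaded mini-DC module, whose fixed (deliberately minimal) capability
# vector is then reported to the master; the resulting mismatch against
# membership table 400 re-triggers the image update.

MINI_DC_CV = frozenset({"basic-boot"})  # fixed CV of mini-DC module 318

def boot_after_install(install_ok: bool, installed_cv: frozenset):
    # Returns the capability vector the switch reports after booting.
    return installed_cv if install_ok else MINI_DC_CV

expected = frozenset({"isl", "s-tag"})
reported = boot_after_install(False, expected)
print(reported == expected)  # False: mismatch re-triggers the image update
```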
As has been described, a switching network has a plurality of switches including at least a switch and a managing master switch. At the managing master switch, a first capability vector (CV) is received from the switch. The managing master switch determines whether the first CV is compatible with at least a second CV in a network membership data structure that records CVs of multiple switches in the switching network. In response to detecting an incompatibility, the managing master switch initiates an image update to an image of the switch. In response to a failure of the image update at the switch, the switch boots utilizing a mini-DC module that reestablishes communication between the switch and the managing master switch and retries the image update.
While the present invention has been particularly shown and described with reference to one or more preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention. For example, although aspects have been described with respect to one or more machines (e.g., FTP server and/or network switches) executing program code (e.g., software, firmware or a combination thereof) that direct the functions described herein, it should be understood that embodiments may alternatively be implemented as a program product including a tangible machine-readable storage medium or storage device (e.g., an optical storage medium, memory storage medium, disk storage medium, etc.) storing program code that can be processed by a machine to cause the machine to perform one or more of the described functions.
This application is a continuation of U.S. patent application Ser. No. 13/229,867 entitled “UPDATING A SWITCH SOFTWARE IMAGE IN A DISTRIBUTED FABRIC PROTOCOL (DFP) SWITCHING NETWORK,” by Nirapada Ghosh et al., filed on Sep. 12, 2011, the disclosure of which is incorporated herein by reference in its entirety for all purposes.
Number | Date | Country
---|---|---
20130067049 A1 | Mar 2013 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 13229867 | Sep 2011 | US
Child | 13595047 | | US