This invention relates generally to in-vehicle communication networks and, more particularly, to a system and method for streaming sequential data through an automotive switch fabric network.
The commonly assigned United States patent application entitled “Vehicle Active Network,” Ser. No. 09/945,581, filed Aug. 31, 2001, Publication No. US 20030043793, the disclosure of which is hereby expressly incorporated herein by reference, introduces the concept of an active network that includes a switch fabric. The switch fabric is a web of interconnected switching devices or nodes. The switching devices or nodes are joined by communication links for the transmission of data packets between them. Control devices, sensors, actuators and the like are coupled to the switch fabric, and the switch fabric facilitates communication between these coupled devices.
The coupled devices may be indicator lights, vehicle control systems, vehicle safety systems, and comfort and convenience systems. A command to actuate a device or devices may be generated by a control element coupled to the switch fabric and communicated to the device or devices via the switch fabric nodes.
In the context of vehicular switch fabric networks, a challenge is presented in terms of how relatively large data records and messages are transported across the switch fabric network. In particular, when sending large data records and messages across the switch fabric network, the size of the data packets may be constrained by the physical layer on which the communication links that join the switching devices or nodes are built. A need exists for the ability to transmit large records and messages across the switch fabric when size restrictions for the communication links exist.
It is, therefore, desirable to provide a system and method to overcome or minimize most, if not all, of the preceding problems especially in the area of transmitting large data records and messages across the nodes in an automotive switch fabric network. This would help in several areas including the reprogramming of switch fabric nodes where large records need to be downloaded.
While the invention is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
What is described is a system and method for streaming sequential data through a vehicle switch fabric network. This is particularly useful in areas such as reprogramming nodes in the automotive switch fabric network, where relatively large records or messages need to be transmitted through the switch fabric, although the invention may be used in other areas. In sum, the system and method described herein take large data records and break them down into smaller units (data packets) that fit within the constraints of the physical layer on which the communication links in the switch fabric network are built. Each of the smaller data packets is assigned a message identification and a sequence number. Data packets associated with the same data record or message are assigned the same message identification but may differ in their sequence numbers. Each data packet is transmitted over the vehicle switch fabric network to a destination node. At the destination node, the data packets may be reassembled into their original data format based on the message identification and sequence numbers. The reassembled message may then be presented to an application in the node for processing.
Now, turning to the drawings,
The interfaces 26a-d are any suitable interfaces for coupling the particular vehicle devices 24a-d to the network 22, and may be wire, optical, wireless or combinations thereof. Each vehicle device 24a-d is particularly adapted to provide one or more functions associated with the vehicle 20. These vehicle devices 24a-d may be data producing, such as a sensor; data consuming, such as an actuator; or processing, which both produces and consumes data. In one embodiment, the external device 24a is a diagnostic device that permits a user to exchange data with the network of the vehicle, as will be explained further below. Data produced by or provided to a vehicle device 24a-d, and carried by the network 22, is independent of the function of the vehicle device 24a-d itself. That is, the interfaces 26a-d provide independent data exchange between the coupled devices 24a-d and the network 22.
The connection between the devices 24a-d and the interfaces 26a-d may be a wired or wireless connection.
The network 22 may include a switch fabric 28 defining a plurality of communication paths between the vehicle devices 24a-d. The communication paths permit multiple simultaneous peer-to-peer, one-to-many, many-to-many, etc. communications between the vehicle devices 24a-d. During operation of the vehicle 20, data exchanged, for example, between devices 24a and 24d may utilize any available path or paths between the vehicle devices 24a, 24d. In operation, a single path through the switch fabric 28 may carry all of a single data communication between one vehicle device 24a and another vehicle device 24d, or several communication paths may carry portions of the data communication. Subsequent communications may use the same path or other paths as dictated by the then-current state of the network 22. This provides reliability and speed advantages over bus architectures, which provide a single communication path between devices and hence are subject to failure when that single path fails. Moreover, communications between others of the devices 24b, 24c may occur simultaneously using the communication paths within the switch fabric 28.
The network 22 may comply with transmission control protocol/Internet protocol (TCP/IP), asynchronous transfer mode (ATM), InfiniBand, RapidIO, or other packet data protocols. As such, the network 22 utilizes data packets, having fixed or variable length, defined by the applicable protocol. For example, if the network 22 uses the asynchronous transfer mode (ATM) communication protocol, ATM standard data cells are used.
The internal vehicle devices 24b-d need not be discrete devices. Instead, the devices may be systems or subsystems of the vehicle and may include one or more legacy communication media, i.e., legacy bus architectures such as the Controller Area Network (CAN) Protocol, the SAE J1850 Communication Standard, the Local Interconnect Network (LIN) Protocol, the FLEXRAY Communications System Standard, the Media Oriented Systems Transport or MOST Protocol, or similar bus structures. In such embodiments, the respective interface 26b-d may be configured as a proxy or gateway to permit communication between the network 22 and the legacy device.
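Although the gateway logic is not detailed here, a minimal sketch of how an interface such as 26b might proxy a fabric packet payload onto a legacy CAN bus is shown below. The fabric-side packet layout and the `can_transmit` driver call are assumptions; only the frame shape (an 11-bit identifier and up to eight data bytes) follows the CAN standard.

```c
#include <stdint.h>
#include <string.h>

/* Classic CAN frame: 11-bit identifier, up to 8 data bytes. */
typedef struct {
    uint16_t can_id;   /* 11-bit CAN identifier */
    uint8_t  dlc;      /* data length code, 0..8 */
    uint8_t  data[8];
} can_frame_t;

void can_transmit(const can_frame_t *f);  /* assumed legacy-bus driver */

/* Interface 26b acting as a gateway: forward the payload of a fabric
 * data packet onto the CAN bus, truncating to the CAN frame limit. */
void proxy_to_can(uint16_t can_id, const uint8_t *payload, uint8_t len)
{
    can_frame_t f = { .can_id = can_id, .dlc = (uint8_t)(len > 8 ? 8 : len) };
    memcpy(f.data, payload, f.dlc);
    can_transmit(&f);
}
```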
Referring to
The cooperation of the nodes 30a-h and the connection media 32 define a plurality of communication paths between the devices 24a-d that are communicatively coupled to the network 22. For example, a route 34 defines a communication path from the gateway node 30a to a target node 30g. If there is a disruption along the route 34 inhibiting communication of the data packets from the gateway node 30a to the target node 30g, for example, if one or more nodes are at capacity or have become disabled or there is a disruption in the connection media joining the nodes along route 34, a new route, illustrated as route 36, can be used. The route 36 may be dynamically generated or previously defined as a possible communication path, to ensure the communication between the gateway node 30a and the target node 30g.
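Route selection itself is not specified here, but a minimal C sketch of this fallback behavior, with hypothetical names such as `route_t` and `link_is_up`, might read as follows.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical route: an ordered list of node IDs from the gateway
 * node (e.g., 30a) to a target node (e.g., 30g). */
typedef struct {
    const int *hops;  /* node IDs along the path */
    size_t     count; /* number of nodes in the path */
} route_t;

/* Assumed link-health query; in practice this state might come from
 * a management component or the link layer. */
bool link_is_up(int from_node, int to_node);

/* True if every link along the route is currently usable. */
static bool route_available(const route_t *r)
{
    for (size_t i = 0; i + 1 < r->count; i++) {
        if (!link_is_up(r->hops[i], r->hops[i + 1]))
            return false;
    }
    return true;
}

/* Use the primary route (e.g., route 34) when intact; otherwise fall
 * back to a previously defined alternate (e.g., route 36). Dynamic
 * generation of a new route is also possible but not shown. */
const route_t *select_route(const route_t *primary, const route_t *alternate)
{
    return route_available(primary) ? primary : alternate;
}
```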
Some applications may require reprogramming of one or more nodes 30a-h in the switch fabric 28. The embodiment and topology shown in
Each of the nodes 30a-h in the switch fabric 28 contains software components to enable data communications between the nodes 30a-h and the devices 24a-d. A user 42 may use the diagnostic device 24a and the system manager 40 to send commands to upgrade or replace software and code in the switch fabric 28, including reprogramming software and code residing in the nodes 30a-h. For purposes of illustrating the present invention, assume that a user 42 desires to reprogram software components residing in a target node 30g.
To illustrate the functionality and the adaptability of the target node 30g, it is shown to include a plurality of input/output ports 50a-d, although separate input and output ports could also be used. Various configurations of the target node 30g having more or fewer ports may be used in the network 22 depending on the application. The target node 30g includes a processor 52, at least one transceiver 54, and a memory 56. The memory 56 includes an erasable memory portion 62 and a protected memory portion 64. The processor 52 is configured to transfer control to, and execute instructions from, software components residing in either the erasable memory portion 62 or the protected memory portion 64. The erasable memory portion 62 contains a set of software components (code block) to operate the target node 30g for normal data communications and operation within the switch fabric 28. In one embodiment, as shown in
The protected memory portion 64 contains a set of software components (boot block) that includes functions to load software components safely and securely into the erasable memory portion 62. In one embodiment, as shown in
Upon startup of the target node 30g, control should go directly to the software components residing on the protected memory portion 64, including the flash memory loader module 80 mentioned above. If the flash memory loader module 80 fails to initialize hardware in the target node 30g, the target node 30g may be configured to enter a low-power standby state. In one embodiment, the flash memory loader 80, upon node startup, will determine whether valid software components reside (and are available) in the erasable memory portion 62. This will ensure that corrupted or partial software components in the erasable memory portion 62 do not deadlock the target node 30g. This determination may be made by checking a key number stored in a prescribed location in the erasable memory portion 62. If the key number is stored in the prescribed location, the processor 52 may be configured to switch control of the target node 30g from executing the software components residing on its protected memory portion 64 to the software components residing on its erasable memory portion 62.
If, however, the key number is not stored in the prescribed location, the flash memory loader 80 may assume that the software components in the erasable memory portion 62 are not valid and send a notification that the target node 30g needs to be reprogrammed. This notification may be sent to the gateway node 30a, which will then forward the request to the system manager 40 residing on the diagnostic device 24a. The flash memory loader 80 should then remain in an idle state to await instructions from the system manager 40 to initiate reprogramming of the software components in the erasable memory portion 62, as will be explained in more detail below.
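As a minimal sketch of this startup decision, the following C fragment checks the key number and branches accordingly. The key value, its address, and the helper functions are hypothetical; the description prescribes only that a key number at a prescribed location in the erasable memory portion 62 be checked.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical prescribed location and key number; the actual values
 * are implementation-specific and not defined by this description. */
#define KEY_ADDR   0x0001FFFCu
#define KEY_VALUE  0xA5C3E817u

static bool erasable_image_valid(void)
{
    /* A valid code block stores KEY_VALUE at the prescribed location. */
    return *(volatile const uint32_t *)KEY_ADDR == KEY_VALUE;
}

/* Assumed primitives provided by the boot block. */
void jump_to_code_block(void);      /* run software in erasable portion 62 */
void notify_needs_reprogram(void);  /* routed via gateway node 30a to the
                                       system manager 40 */
void idle_await_instructions(void);

void flash_loader_boot(void)
{
    if (erasable_image_valid()) {
        jump_to_code_block();        /* normal startup path */
    } else {
        notify_needs_reprogram();    /* corrupted or partial code block */
        idle_await_instructions();   /* wait for reprogramming commands */
    }
}
```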
Additionally, the diagnostic system may be configured to allow the system manager 40 to query each node 30a-h in the switch fabric 28 to determine whether a node needs to be reprogrammed. In one embodiment, the system manager 40 may initiate a status dialogue with a target node 30g by sending a status request message to the gateway node 30a. The gateway node 30a will then route the status request message to the target node 30g. The target node 30g may then be configured to respond to the status request message by transmitting a status response message to the gateway node 30a, which may then forward the message back to the system manager 40. Depending on the content of the status response message, a user 42 may decide to reprogram a specific target node 30g.
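On the system manager side, the status dialogue might look like the following C sketch. The message encoding and the `gateway_exchange` transport helper are assumptions; only the request/response pattern comes from the description above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical message types; the encoding is not specified here. */
enum { MSG_STATUS_REQUEST = 0x01, MSG_STATUS_RESPONSE = 0x02 };

typedef struct {
    uint8_t type;            /* MSG_STATUS_* */
    uint8_t node_id;         /* addressed node, e.g., target node 30g */
    uint8_t needs_reprogram; /* filled in by the responding node */
} status_msg_t;

/* Assumed transport: routes the request through gateway node 30a and
 * blocks until the matching status response message returns. */
bool gateway_exchange(const status_msg_t *req, status_msg_t *resp);

/* System manager 40: ask one node whether it needs reprogramming. */
bool node_needs_reprogramming(uint8_t node_id)
{
    status_msg_t req = { MSG_STATUS_REQUEST, node_id, 0 };
    status_msg_t resp;

    if (!gateway_exchange(&req, &resp))
        return false;  /* no response: a separate fault to diagnose */
    return resp.needs_reprogram != 0;
}
```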
The system manager 40, residing on the diagnostic device 24a, will then initiate a download session with the target node 30g. In one embodiment, the system manager 40 may send an initiate download session message through the diagnostic interface 26a to the gateway node 30a (arrow 106). The gateway node 30a will then route the initiate download session message to the target node 30g (arrow 108).
In response to receiving an initiate download session message, the target node 30g, including the processor 52, may be configured to switch from executing the software components residing on its erasable memory portion 62 to the software components residing on its protected memory portion 64. As mentioned above, it is preferred that the software components in both the erasable memory portion 62 and the protected memory portion 64 include at least standard software components for the network layer 74, the Distributed System Management (DSM) component 76, and the link (or bus) layer 78. This will allow normal network functions to continue uninterrupted. However, any applications running on the target node 30g will not be available. After switching control from the software components residing on its erasable memory portion 62 to the software components residing on its protected memory portion 64, the target node 30g may then send an acknowledge download session message to the gateway node 30a (arrow 110), which will then forward the message to the system manager 40 (arrow 112).
After receiving the acknowledgement from the target node 30g, the system manager 40 will then send an erase flash command to the gateway node 30a for each block of memory that needs to be erased (arrow 114). The diagnostic device 24a may be configured to analyze the current software components and send one or more commands to erase some or all of the memory blocks in the erasable memory portion 62. The gateway node 30a will route each erase flash command to the target node 30g (arrow 116). Upon receipt of an erase flash command, the target node 30g will erase the memory locations specified in the command. The target node 30g may then send an acknowledge erase flash command to the gateway node 30a (arrow 118), which will then forward the message to the system manager 40 (arrow 120).
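A sketch of the per-block erase loop on the system manager side follows; the command layout and the `send_erase_flash` helper (assumed to route the command via gateway node 30a and wait for the acknowledge erase flash command) are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical erase flash command: one per memory block to erase. */
typedef struct {
    uint32_t base;   /* first address of the block */
    uint32_t length; /* block length in bytes */
} erase_cmd_t;

/* Assumed helper: routes the command through gateway node 30a to the
 * target node and blocks for the acknowledge erase flash command. */
bool send_erase_flash(uint8_t target_node, const erase_cmd_t *cmd);

/* System manager 40: erase every block that analysis of the current
 * software components marked for replacement. */
bool erase_selected_blocks(uint8_t target_node,
                           const erase_cmd_t *blocks, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!send_erase_flash(target_node, &blocks[i]))
            return false;  /* a missing acknowledgement aborts the session */
    }
    return true;
}
```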
The system manager 40 may then send a new set of compiled software components or records to the gateway node 30a (arrow 122). The software components or records are included in the build file loaded into the system manager 40 (arrow 104). The downloadable build file for reprogramming the target node 30g may contain thousands of records. Each record may be relatively large in size compared to the physical constraints of the data packets that can be transmitted over the communication links 32. In that case, the records should be broken down as described further below in relation to
The system manager 40 may then send a check data message to the gateway node 30a (arrow 130). In one embodiment, the check data message includes a checksum for the newly downloaded software components. The gateway node 30a will route the check data message to the target node 30g (arrow 132). The target node 30g will then calculate the checksum for the new set of software components and compare it against the checksum received from the system manager 40. Assuming that the checksums match, the target node 30g will then write the new set of software components into its erasable memory portion 62. The target node 30g may then send an acknowledge check data message to the gateway node 30a (arrow 134), which will then forward the message to the system manager 40 (arrow 136).
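One way the target node might implement this step is sketched below; the additive checksum, the buffering of the download before it is committed, and the `flash_write` primitive are all assumptions, since the checksum algorithm is not specified here.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed additive checksum over the downloaded image. */
static uint32_t checksum32(const uint8_t *data, uint32_t len)
{
    uint32_t sum = 0;
    while (len--)
        sum += *data++;
    return sum;
}

/* Assumed flash-programming primitive for erasable memory portion 62. */
bool flash_write(uint32_t dest_addr, const uint8_t *data, uint32_t len);

/* Target node 30g: verify the buffered download against the checksum in
 * the check data message; only on a match is the new set of software
 * components written into the erasable memory portion 62, after which
 * the acknowledge check data message is sent (arrow 134). */
bool handle_check_data(const uint8_t *image, uint32_t len,
                       uint32_t expected, uint32_t dest_addr)
{
    if (checksum32(image, len) != expected)
        return false;  /* no acknowledgement: download must be retried */
    return flash_write(dest_addr, image, len);
}
```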
The system manager 40 may then send an entry point message to the gateway node 30a (arrow 138). In one embodiment, the entry point message includes an entry point for the code block. The gateway node 30a will route the entry point message to the target node 30g (arrow 140). In response, the target node 30g sends an acknowledge entry point message to the gateway node 30a (arrow 142), who will then forward the message to the system manager 40 (arrow 144).
Upon receiving the acknowledgement for the entry point message, the system manager 40 may then inform the user 42 about the successful completion of the download operation and provide the user 42 with an option to restore or reset the target node 30g (arrow 146). The user 42 may wish to postpone the restoration of the node until diagnosis of other nodes is complete. However, when the user 42 desires to restore the node, the user 42 may select a restore option in the system manager 40 (arrow 148). At this point, the system manager 40 may then send a restore operation message to the gateway node 30a (arrow 150). The gateway node 30a will then route the restore operation message to the target node 30g (arrow 152).
After receiving the restore operation message, the target node 30g, including the processor 52, will then switch from executing the software components residing on its protected memory portion 64 to the software components residing on its erasable memory portion 62. This will allow applications to run normally again on the target node 30g. The target node 30g may then send an acknowledge restore operation message to the gateway node 30a (arrow 154), which will then forward the message to the system manager 40 (arrow 156). The system manager 40 may then alert the user 42 that the acknowledgement was received from the target node 30g (arrow 158).
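In its simplest form, the switch back to the code block might reduce to a jump through the entry point delivered earlier by the entry point message, as in the hedged C sketch below; quiescing the hardware before the jump is an additional assumption a real boot block would need to handle.

```c
#include <stdint.h>

typedef void (*entry_fn)(void);

/* Entry point for the code block, stored when the entry point message
 * (arrow 140) was received; storage and representation are assumptions. */
static uint32_t g_code_block_entry;

/* Target node 30g, boot block side: hand control to the newly
 * downloaded software components in erasable memory portion 62. */
void restore_operation(void)
{
    /* A production loader would first quiesce hardware (mask interrupts,
     * stop DMA) before transferring control. */
    entry_fn entry = (entry_fn)(uintptr_t)g_code_block_entry;
    entry();  /* does not return: the code block now runs the node */
}
```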
The active portion of the data packet may represent a packet state. For example, the active portion may reflect a priority of the data packet based on aging time. That is, a packet initially generated may have a normal state but, for various reasons, may not be promptly delivered. As the packet ages while being routed through the active network, the active portion can monitor the time since the data packet was generated or the time when the packet is required, and change the priority of the data packet accordingly. The packet state may also represent an error state, either of the data packet or of one or more nodes of the network 22. The active portion may also be used to carry data unrelated to the payload within the network 22, track the communication path taken by the data packet through the network 22, provide configuration information (route, timing, etc.) to nodes 30a-h of the network 22, provide functional data to one or more devices 24a-d coupled to the network 22, or provide receipt acknowledgement.
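A toy illustration of priority aging via the active portion is given below. The field names, time units, and escalation thresholds are all hypothetical; the description above states only that priority may change with aging time or with an approaching required-delivery time.

```c
#include <stdint.h>

/* Hypothetical layout for the active portion of a data packet. */
typedef struct {
    uint32_t created_ms;   /* time the packet was generated */
    uint32_t deadline_ms;  /* time the packet is required; 0 = none */
    uint8_t  priority;     /* 0 = normal ... 255 = most urgent */
} active_portion_t;

/* Called by a node as the packet passes through, to escalate priority
 * for packets that are aging or nearing their required delivery time. */
void age_priority(active_portion_t *ap, uint32_t now_ms)
{
    uint32_t age_ms = now_ms - ap->created_ms;

    if (ap->deadline_ms != 0 && now_ms + 10 >= ap->deadline_ms)
        ap->priority = 255;                 /* about to miss its deadline */
    else if (age_ms > 100 && ap->priority <= 205)
        ap->priority += 50;                 /* stale packet, bump priority */
}
```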
The payload portion of the data packets carries data and other information relating to the message being transmitted through the network 22. The size of the data packet (including the payload portion) will be constrained by the physical layer on which the switch fabric 28 is built. There are situations where the message size at the application layer will be larger than the packet size allowed to be transmitted over the network 22. One situation, as described above, is where software components or records need to be downloaded to a node 30a-h. Accordingly, in one embodiment of the present invention, a message in the application layer that is larger than the packet size of the network 22 will be broken into smaller units to fit the packet size limitation. Each unit is placed into an individual data packet and transmitted independently over the switch fabric 28 to a destination node (such as the target node 30g receiving downloaded software components or records as described above). At the destination node, the individual data packets are reassembled into their original form and passed to the application that receives and processes the message.
Assume for purposes of illustration that the payload portion 204 of network data packets 200 is limited to 8 bytes. Also assume for purposes of illustration that the message 300 that needs to be transmitted through the switch fabric 28 is larger than the network limitation. For instance, the downloadable build file for reprogramming node software components may contain thousands of build records. In one embodiment, where the size of each build record is up to 38 bytes, each build record may include: the message type field 302 (1 byte); the message length field 304 (1 byte); the address field 306 (3 bytes); the message data field 308 (32 bytes); and the checksum field 310 (1 byte). In one embodiment of the present invention, the message 300 is divided into smaller data packets 200, where each data packet is assigned the same message identification but a different sequence number. This is shown further in
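A segmentation sketch consistent with these field sizes follows. With 8-byte payload portions whose first two bytes hold the message identification field 322 and the command or sequence field 324 (both described below), a 38-byte build record fits in seven payload portions (204a-g). The byte ordering within each payload, the command code, and the `fabric_send` primitive are assumptions.

```c
#include <stdint.h>
#include <string.h>

#define PAYLOAD_BYTES 8                    /* assumed physical-layer limit */
#define DATA_PER_PKT  (PAYLOAD_BYTES - 2)  /* minus fields 322 and 324 */

/* One 38-byte build record (message 300), per the field sizes above. */
typedef struct {
    uint8_t msg_type;  /* field 302, 1 byte */
    uint8_t msg_len;   /* field 304, 1 byte */
    uint8_t addr[3];   /* field 306, 3 bytes */
    uint8_t data[32];  /* field 308, 32 bytes */
    uint8_t checksum;  /* field 310, 1 byte */
} build_record_t;

void fabric_send(uint8_t dest_node, const uint8_t p[PAYLOAD_BYTES]); /* assumed */

void send_record(uint8_t dest, uint8_t msg_id, uint8_t rid,
                 const build_record_t *r)
{
    uint8_t p[PAYLOAD_BYTES];

    /* First payload portion (204a): command plus the record header. */
    p[0] = msg_id;   /* field 322: same for every portion of this record */
    p[1] = 0xF0;     /* field 324: assumed command code, distinct by
                        design from any RID/sequence value */
    memcpy(&p[2], r->addr, 3);
    p[5] = r->msg_len;
    p[6] = r->msg_type;
    p[7] = rid;
    fabric_send(dest, p);

    /* Remaining portions (204b-g): message data plus trailing checksum. */
    uint8_t body[33];
    memcpy(body, r->data, 32);
    body[32] = r->checksum;
    for (uint8_t seq = 0; seq * DATA_PER_PKT < sizeof body; seq++) {
        uint8_t off = (uint8_t)(seq * DATA_PER_PKT);
        uint8_t n = (uint8_t)(sizeof body - off < DATA_PER_PKT
                                  ? sizeof body - off : DATA_PER_PKT);
        memset(p, 0, sizeof p);
        p[0] = msg_id;
        p[1] = seq;  /* field 324: RID/sequence value */
        memcpy(&p[2], &body[off], n);
        fabric_send(dest, p);
    }
}
```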
In
The message identification field 322 for each of the payload portions 204a-g will contain a unique message identification assigned to the particular record or message 300 being transmitted. The message identification within the field 322 will be the same for all payload portions 204a-g that are common to the same record or message 300. In the reprogramming example described above, the message identification is used by the flash loader module 80 to track the received data packets so that it can associate different payload portions 204a-g with the same record or message 300.
The command or sequence field 324 contains either a command or a sequence number associated with the payload portion 204a-g. The command will indicate to the receiving node how to use the data carried by the following payload portions 204b-g. The command value should be different from the record identification (RID)/sequence value by design. Each payload portion 204a-g may have a record identification (RID)/sequence value except for the first payload portion 204a, which contains a command. In the reprogramming example described above, the record identification (RID)/sequence values may be used by the flash loader module 80 to group the received data packets so that it can re-assemble the record or message in the right order at the receiving node.
In one embodiment, the first payload portion 204a may include the values for the address field 306 (divided into 1-byte segments), the message length field 304 (1 byte), and the message type field 302 (1 byte) of the original record or message 300. The first payload portion 204a may also include a record identification (RID) (1 byte). The remaining payload portions 204b-g may include the values found in the message data field 308 (divided into segments sized to fit the payload portions) and the checksum field 310 of the original record or message 300. The value in the checksum field 310 may be used to protect against possible data corruption. After the original build record is reassembled at the receiving node, the build record's checksum is calculated. If the checksum does not match the received value, the whole record should be discarded and a negative response sent.
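At the receiving node, reassembly under the same assumptions might look like the following sketch; the additive checksum, the sequence-completeness bitmask, and the buffer layout are hypothetical details of the flash loader module 80's bookkeeping.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAYLOAD_BYTES 8
#define DATA_PER_PKT  (PAYLOAD_BYTES - 2)
#define BODY_BYTES    33  /* 32 data bytes + 1 checksum byte */

/* Per-record reassembly state, keyed by message identification. */
typedef struct {
    uint8_t header[6];        /* addr(3), length, type, RID from 204a */
    uint8_t body[BODY_BYTES]; /* message data + checksum from 204b-g */
    uint8_t seen;             /* bitmask of sequence values received */
} reasm_t;

static uint8_t checksum8(const uint8_t *d, uint8_t n)
{
    uint8_t s = 0;
    while (n--)
        s += *d++;
    return s;  /* assumed additive checksum */
}

/* Feed one 8-byte payload portion; returns true once the record is
 * complete and verified. is_command marks the first portion (204a). */
bool reassemble(reasm_t *r, const uint8_t p[PAYLOAD_BYTES], bool is_command)
{
    if (is_command) {            /* record header: keep for later use */
        memcpy(r->header, &p[2], sizeof r->header);
        return false;
    }

    uint8_t seq = p[1];
    if (seq > 5)
        return false;            /* malformed: sequences run 0..5 here */

    uint8_t off = (uint8_t)(seq * DATA_PER_PKT);
    uint8_t n = (uint8_t)(BODY_BYTES - off < DATA_PER_PKT
                              ? BODY_BYTES - off : DATA_PER_PKT);
    memcpy(&r->body[off], &p[2], n);
    r->seen |= (uint8_t)(1u << seq);

    if (r->seen != 0x3F)         /* all six data portions present? */
        return false;

    /* On a checksum mismatch the whole record is discarded and a
     * negative response is sent, per the description above. */
    return checksum8(r->body, 32) == r->body[32];
}
```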
What has been described is a system and method for streaming sequential data through a vehicle switch fabric network. This is particularly useful in areas such as reprogramming nodes in the automotive switch fabric network, where relatively large records or messages need to be transmitted through the switch fabric, although the invention may be used in other areas. In sum, the system and method described herein take large data records and break them down into smaller units (data packets) that fit within the constraints of the physical layer on which the communication links in the switch fabric network are built. Each of the smaller data packets is assigned a message identification and a sequence number. Data packets associated with the same data record or message are assigned the same message identification but may differ in their sequence numbers. Each data packet is transmitted over the vehicle switch fabric network to a destination node. At the destination node, the data packets may be reassembled into their original data format based on the message identification and sequence numbers. The reassembled message may then be presented to an application in the node for processing. The above description of the present invention is intended to be exemplary only and is not intended to limit the scope of any patent issuing from this application. The present invention is intended to be limited only by the scope and spirit of the following claims.
The present application claims priority from provisional application, Ser. No. 60/619,669, entitled “System and Method for Streaming Sequential Data Through an Automotive Switch Fabric Network,” filed Oct. 18, 2004, which is commonly owned and incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4816989 | Finn et al. | Mar 1989 | A |
5151899 | Thomas et al. | Sep 1992 | A |
5195091 | Farwell et al. | Mar 1993 | A |
5321689 | Suzuki et al. | Jun 1994 | A |
5566180 | Eidson et al. | Oct 1996 | A |
5612953 | Olnowich | Mar 1997 | A |
5802052 | Venkataraman | Sep 1998 | A |
6356823 | Iannotti et al. | Mar 2002 | B1 |
6373834 | Lundh et al. | Apr 2002 | B1 |
6420797 | Steele et al. | Jul 2002 | B1 |
6430164 | Jones et al. | Aug 2002 | B1 |
6477453 | Oi et al. | Nov 2002 | B2 |
6559783 | Stoneking | May 2003 | B1 |
6611519 | Howe | Aug 2003 | B1 |
6611537 | Edens et al. | Aug 2003 | B1 |
6643465 | Bosinger et al. | Nov 2003 | B1 |
6732031 | Lightner et al. | May 2004 | B1 |
6747365 | Reinold et al. | Jun 2004 | B2 |
6757521 | Ying | Jun 2004 | B1 |
6845416 | Chasmawala et al. | Jan 2005 | B1 |
7027773 | McMillin | Apr 2006 | B1 |
7210063 | Holcroft et al. | Apr 2007 | B2 |
7272496 | Remboski et al. | Sep 2007 | B2 |
20020077739 | Augsburger et al. | Jun 2002 | A1 |
20020080829 | Ofek et al. | Jun 2002 | A1 |
20020087891 | Little et al. | Jul 2002 | A1 |
20030043739 | Reinold et al. | Mar 2003 | A1 |
20030043750 | Remboski et al. | Mar 2003 | A1 |
20030043779 | Remboski et al. | Mar 2003 | A1 |
20030043793 | Reinold et al. | Mar 2003 | A1 |
20030043799 | Reinold et al. | Mar 2003 | A1 |
20030043824 | Remboski et al. | Mar 2003 | A1 |
20030045234 | Remboski et al. | Mar 2003 | A1 |
20030045971 | Reinold et al. | Mar 2003 | A1 |
20030045972 | Remboski et al. | Mar 2003 | A1 |
20030046327 | Reinold et al. | Mar 2003 | A1 |
20030051131 | Reinold et al. | Mar 2003 | A1 |
20030065630 | Brown et al. | Apr 2003 | A1 |
20030091035 | Roy et al. | May 2003 | A1 |
20030185201 | Dorgan | Oct 2003 | A1 |
20030188303 | Barman et al. | Oct 2003 | A1 |
20040001593 | Reinold et al. | Jan 2004 | A1 |
20040002799 | Dabbish et al. | Jan 2004 | A1 |
20040003227 | Reinhold et al. | Jan 2004 | A1 |
20040003228 | Fehr et al. | Jan 2004 | A1 |
20040003229 | Reinold et al. | Jan 2004 | A1 |
20040003230 | Puhl et al. | Jan 2004 | A1 |
20040003231 | Levenson et al. | Jan 2004 | A1 |
20040003232 | Levenson et al. | Jan 2004 | A1 |
20040003233 | Reinold et al. | Jan 2004 | A1 |
20040003234 | Reinold et al. | Jan 2004 | A1 |
20040003237 | Puhl et al. | Jan 2004 | A1 |
20040003242 | Fehr et al. | Jan 2004 | A1 |
20040003243 | Fehr et al. | Jan 2004 | A1 |
20040003245 | Dabbish et al. | Jan 2004 | A1 |
20040003249 | Dabbish et al. | Jan 2004 | A1 |
20040003252 | Dabbish et al. | Jan 2004 | A1 |
20040042469 | Clark et al. | Mar 2004 | A1 |
20040043739 | Jordanger et al. | Mar 2004 | A1 |
20040043750 | Kim | Mar 2004 | A1 |
20040043824 | Uzelac | Mar 2004 | A1 |
20040045234 | Morgan et al. | Mar 2004 | A1 |
20040045971 | Lothe | Mar 2004 | A1 |
20040131014 | Thompson et al. | Jul 2004 | A1 |
20040148460 | Steinmetz et al. | Jul 2004 | A1 |
20040213295 | Fehr | Oct 2004 | A1 |
20040227402 | Fehr et al. | Nov 2004 | A1 |
20040254700 | Fehr et al. | Dec 2004 | A1 |
20040258001 | Remboski et al. | Dec 2004 | A1 |
20050004727 | Remboski et al. | Jan 2005 | A1 |
20050038583 | Fehr et al. | Feb 2005 | A1 |
20050160285 | Evans | Jul 2005 | A1 |
20050251604 | Gerig | Nov 2005 | A1 |
20050251608 | Fehr et al. | Nov 2005 | A1 |
20060013263 | Fellman | Jan 2006 | A1 |
20060013565 | Baumgartner | Jan 2006 | A1 |
20060020717 | Remboski et al. | Jan 2006 | A1 |
20060083172 | Jordan et al. | Apr 2006 | A1 |
20060083173 | Jordan et al. | Apr 2006 | A1 |
20060083250 | Jordan et al. | Apr 2006 | A1 |
20060083264 | Jordan et al. | Apr 2006 | A1 |
20060083265 | Jordan et al. | Apr 2006 | A1 |
20060282549 | Vinnemann | Dec 2006 | A1 |
Number | Date | Country | |
---|---|---|---|
20060083229 A1 | Apr 2006 | US |
Number | Date | Country | |
---|---|---|---|
60619669 | Oct 2004 | US |