The present disclosure pertains in general to automated driving assistance systems and in particular to technologies to facilitate automated driving assistance based on objects that have been sensed and reported by remote senders.
A vehicle may include a driving assistance system that includes an electronic control unit (ECU) and various sensors in communication with the ECU. Based on data from the sensors, the ECU senses objects around the vehicle and responds accordingly. For instance, in a subject vehicle with a driving assistance system that provides for adaptive cruise control, the ECU may monitor the distance between the subject vehicle and another vehicle in front of the subject vehicle, and the ECU may automatically reduce the speed of the subject vehicle if that distance becomes too small. Thus, a conventional driving assistance system may provide automated driving assistance for a vehicle based on objects sensed by that vehicle.
In addition, a conventional driving assistance system in a subject vehicle may broadcast messages to other vehicles, and each of those messages may describe certain characteristics of the subject vehicle, such as the current location, heading, speed, and acceleration of the subject vehicle. However, when other vehicles receive such messages, the content of those messages may not be reliable. For instance, if the driving assistance system of the subject vehicle has been compromised with malicious software (“malware”), the driving assistance system may broadcast false information to the other vehicles.
Features and advantages of the present invention will become apparent from the appended claims, the following detailed description of one or more example embodiments, and the corresponding figures, in which:
As indicated above, a conventional driving assistance system may provide automated driving assistance for a subject vehicle based on objects sensed by that vehicle. The driving assistance system may also broadcast messages to other vehicles, to describe certain characteristics of the subject vehicle. For instance, standards have been developed in the U.S. and in Europe calling for each vehicle to periodically send messages that describe the current location and speed of the sending vehicle. In the U.S., for example, on Jan. 12, 2017, the National Highway Traffic Safety Administration (NHTSA) of the U.S. Department of Transportation (DOT) published Federal Motor Vehicle Safety Standard (FMVSS) No. 150 (“FMVSS 150”) in the Notice of Proposed Rulemaking that starts on page 3854 of the Federal Register, Vol. 82, No. 8. FMVSS 150 proposes to mandate vehicle-to-vehicle (V2V) communications for new vehicles and to standardize the message and format of V2V transmissions. In particular, FMVSS 150 proposes to “require all new light vehicles to be capable of [V2V] communications, such that they will send and receive Basic Safety Messages” (BSMs) to and from other vehicles. More specifically, FMVSS 150 “contains V2V communication performance requirements predicated on the use of on-board dedicated short-range radio communication (DSRC) devices to transmit [BSMs] about a vehicle's speed, heading, brake status, and other vehicle information to surrounding vehicles, and receive the same information from them.” FMVSS 150 also mentions various standards, including standards from SAE International, such as the “Dedicated Short Range Communications (DSRC) Message Set Dictionary J2735_201603” (“SAE J2735”).
More information on DSRC standards and on the related topics of wireless access in vehicular networks (WAVE) and Institute of Electrical and Electronics Engineers (IEEE) standards 1609.1/.2/.3/.4 may also be found in the article entitled “Notes on DSRC & WAVE Standards Suite: Its Architecture, Design, and Characteristics” by Y. L. Morgan in the publication IEEE Communications Surveys & Tutorials, Vol. 12, No. 4, Fourth Quarter 2010. Similarly, in Europe, the Intelligent Transport Systems (ITS) Committee of the European Telecommunications Standards Institute (ETSI) has promulgated European Standard (EN) 302 637-2, entitled “Specification of Cooperative Awareness Basic Service.” That standard provides for messages known as “Cooperative Awareness Messages” or “CAMs.” In particular, according to version 1.3.2 of EN 302 637-2, “Cooperative awareness [(CA)] means that road users and roadside infrastructure are informed about each other's position, dynamics and attributes. It is achieved by regular exchange of information among vehicles (V2V, in general all kind of road users) and between vehicles and road side infrastructure . . . based on wireless networks, called V2X network.”
For purposes of this disclosure, the following terms have the following meanings:
Conventional standards such as FMVSS 150, SAE J2735, and EN 302 637-2 provide for basic TSMs. For instance, SAE J2735 prescribes a two part structure for BSMs, with “Part 1” listing various mandatory fields and “Part 2” listing various optional extensions. In particular, Part 1 is for “Basic Vehicle State,” and it lists the following mandatory fields:
By contrast, the present disclosure introduces multi-object TSMs. As indicated above, a multi-object TSM is structured in such a way as to enable the TSM to describe multiple objects. Those objects include the TSN node that generates the multi-object TSM, as well as the objects detected by that node. As indicated above, the data describing the detected objects may be referred to as a DOL. The DOL may identify various different types of objects, and it may describe various aspects or attributes for each detected object. For instance, the DOL may identify the following types of objects, among others:
In addition, the standard may require each TSM to include descriptions only for objects detected by the sender. Alternatively, the standard may allow or require each TSM to also include descriptions for objects reported to the sender by other nodes; and the standard may require the TSM to indicate, for each object, whether that object was (a) detected by the sender, (b) reported to the sender by another node, or (c) both detected by the sender and reported to the sender by another node. In addition or alternatively, the standard may require each object description to include a numerical confidence score.
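For illustration, a multi-object TSM such as the one just described might be modeled as follows. This is a minimal sketch, not a standardized message layout; the class and field names (e.g., `MultiObjectTSM`, `dol`, `provenance`) are assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple

class Provenance(Enum):
    """How the sender came to know about an object (per the standard above)."""
    DETECTED = "detected"            # (a) detected by the sender itself
    REPORTED = "reported"            # (b) reported to the sender by another node
    DETECTED_AND_REPORTED = "both"   # (c) both detected and reported

@dataclass
class ObjectDescription:
    object_id: str
    object_type: str                  # e.g., "vehicle", "pedestrian", "debris"
    position: Tuple[float, float]     # (x, y) in some agreed-upon frame
    provenance: Provenance
    confidence: Optional[float] = None  # numerical confidence score, if required

@dataclass
class MultiObjectTSM:
    # Attributes of the sending node itself.
    sender_id: str
    sender_position: Tuple[float, float]
    sender_speed: float
    sender_heading: float
    # The DOL: descriptions of objects known to the sender.
    dol: List[ObjectDescription] = field(default_factory=list)
```

A recipient could then iterate over `tsm.dol` and treat each entry according to its provenance flag and confidence score.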
TSN nodes typically communicate via at least one wireless link. A sender of a TSM may include a digital certificate in the TSM to provide for security. The recipient of a TSM with a digital certificate may use the digital certificate to verify the authenticity and integrity of the message. In other words, the recipient may use the digital certificate (a) to verify the identity of the sender and (b) to determine whether or not the message was modified in transit.
However, digital certificates alone are not sufficient to guarantee the reliability of the data in TSMs. For instance, the source of the data (e.g., the sender) could be compromised by malware, or the source could be an attacker that has obtained a digital certificate and that then uses that digital certificate in TSMs with false data. In a conventional TSN, a TSM with a valid certificate but false data may be taken as legitimate by the receiving nodes. Consequently, it may be dangerous for vehicles to rely on TSMs from other nodes, as those TSMs may contain false information.
As indicated above, the present disclosure describes technology to facilitate automated driving assistance based on objects that have been sensed and reported by remote senders. This technology may be promoted, for instance, by modifying standards for TSMs to allow for or to require multi-object TSMs which include locally sourced observations about other objects within the perception range of the transmitting car. In addition, the present disclosure describes technology for determining, at a recipient node, whether the data in TSMs from other nodes is trustworthy. For instance, the present disclosure describes a mechanism to determine a confidence level for the data received from TSMs sent by multiple independent sources. For example, as described in greater detail below, a driving assistance system in a vehicle may process object lists received within TSMs from multiple independent sources and assign a confidence score to each of the objects, with the confidence score reflecting the degree of consistency of the object's information across multiple independent TSMs. The driving assistance system then uses the confidence score to filter out spoofed or erroneous data. The driving assistance system thus determines whether the data in received TSMs are trustworthy, to prevent rogue senders from fooling the subject vehicle into taking unsafe actions.
In particular, as described in greater detail below, a driving assistance system in a subject vehicle may enable that vehicle to participate in a TSN by receiving TSMs from other nodes in the TSN. Those other nodes (remote senders) may include other vehicles, as well as stationary structures such as roadside units. Those TSMs may describe objects sensed by the remote senders. The driving assistance system that receives those TSMs may then provide automated driving assistance for the subject vehicle, based on the objects reported by the remote senders. Likewise, the driving assistance system in the subject vehicle may send reports to other vehicles in the TSN to describe objects sensed by the subject vehicle. Driving assistance systems in the other vehicles may provide driving assistance for those vehicles based on the objects reported by the subject vehicle.
Also, in the illustrated scenario, the driving assistance system in compromised vehicle 16A has been infected with malware which causes compromised vehicle 16A to include false data in its TSMs. In particular, compromised vehicle 16A sends a multi-object TSM 32 to subject vehicle 12, and TSM 32 falsely reports that compromised vehicle 16A has detected another vehicle in the middle lane, in the location depicted as simulated vehicle 16B. In other words, compromised vehicle 16A falsely reports the existence and position of simulated vehicle 16B. Also, in the illustrated scenario, simulated vehicle 16B is reported as being outside of object detection range 20 and outside of network range 22 of subject vehicle 12. However, the object detection range 24 for trustworthy vehicle 14 encompasses at least part of the space purportedly occupied by simulated vehicle 16B.
In another scenario, the TSM that compromised vehicle 16A sends to subject vehicle 12 is a basic TSM that falsely reports the location of compromised vehicle 16A as being in the middle lane, in the location depicted as simulated vehicle 16B. In other words, compromised vehicle 16A may, in effect, represent itself as being simulated vehicle 16B.
If driving assistance system 40 were to treat either of those TSMs from compromised vehicle 16A as trustworthy, driving assistance system 40 might adversely affect the operation of subject vehicle 12, based on the falsely reported existence and location of simulated vehicle 16B. However, as described in greater detail below, driving assistance system 40 includes technology for determining whether or not the data from compromised vehicle 16A (and from other nodes) is trustworthy.
The figure also depicts trustworthy vehicle 14 sending a TSM 30 to subject vehicle 12. TSM 30 includes data describing the location, speed, and heading of trustworthy vehicle 14. As described in greater detail below, TSM 30 may also include additional data, preferably including data describing objects detected by trustworthy vehicle 14. Accordingly, as described in greater detail below, TSM 30 may enable subject vehicle 12 to determine whether or not the data from compromised vehicle 16A is trustworthy.
NVS 52 includes driving assistance system software 54. ECU 42 may copy driving assistance system software 54 from NVS 52 into RAM 56 for execution. As described in greater detail below, when driving assistance system software 54 is executing, it may create, obtain, and/or use a system object list (SOL) 60, a detected object list (DOL) 62, and a reported object list (ROL) 64. In fact, driving assistance system 40 may receive reported object lists from multiple other nodes, and driving assistance system 40 may accumulate those reported object lists into an ROL collection 66.
As shown at block 124, driving assistance system 40 may also receive and collect TSMs from other TSN nodes. In one scenario, the nodes in the TSN follow a standard that allows for basic TSMs and for multi-object TSMs. In another scenario, the nodes in the TSN follow a standard that requires all TSMs to be multi-object TSMs. As indicated above, each TSM includes data describing attributes of the sending node, and each multi-object TSM also includes data describing objects detected by the sending node. As indicated above, the data in a multi-object TSM that describes objects detected by the sending node may be referred to as an ROL.
As shown at block 126, driving assistance system 40 may extract the ROL from each multi-object TSM it receives, and driving assistance system 40 may save each extracted ROL to ROL collection 66. As described in greater detail below, driving assistance system 40 may then use the ROLs in ROL collection 66, together with other data, to make decisions affecting the operation of subject vehicle 12.
In the object recognition phase, driving assistance system 40 may collect data from local data sources, such as sensing unit 48, and from remote data sources, such as other nodes in TSN 10. As indicated above, sensing unit 48 represents sensing components such as a camera, etc. The data from remote data sources may include TSMs from other vehicles (i.e., V2V messages) and TSMs from roadside units (i.e., F2V messages). As described in greater detail below, driving assistance system 40 may then process the data from remote sources using object scoring and filtering, and driving assistance system 40 may process the data from local sources using object detection and classification. Driving assistance system 40 may then use object fusion to merge or combine those results into a unified list of objects. For instance, as described in greater detail below, in the object fusion stage, driving assistance system 40 may reject data from one node that describes an object purportedly detected by that node, based on inconsistent or contrary data from one or more other nodes. In one embodiment, SOL 60 is the unified list of objects that is produced using object fusion. Driving assistance system 40 may then use SOL 60 for the path planning and actuation phases.
Referring again to
Accordingly, block 130 shows that driving assistance system 40 determines whether it is time to update SOL 60. If it is time, the process passes through page connector A to block 132 of
As shown at block 134, driving assistance system 40 then performs time alignment for the objects described in SOL 60, DOL 62, and ROL collection 66 by generating an adjusted SOL, an adjusted DOL, and adjusted ROLs for the current time slice. In particular, driving assistance system 40 determines or predicts the current state of the objects in those lists, and saves data describing the predicted current state in the adjusted lists. For each object, the prediction of the current state is based on factors such as (a) how much time has elapsed (relative to the current time) since the object was last reported, (b) where the object was when it was last reported, (c) the speed and acceleration of the object when it was last reported, and (d) the heading of the object when it was last reported.
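For illustration, the time alignment step for a single object might be sketched as follows, using simple dead reckoning under constant acceleration. This is a minimal sketch, not the disclosed implementation; the dict keys (`'t'`, `'pos'`, `'speed'`, `'accel'`, `'heading'`) are illustrative assumptions.

```python
import math

def align_to_current_time(obj, t_now):
    """Predict an object's state at t_now from its last reported state.

    `obj` carries the last report: 't' (timestamp, s), 'pos' (x, y in m),
    'speed' (m/s), 'accel' (m/s^2), and 'heading' (radians, 0 = +x axis).
    """
    dt = t_now - obj['t']
    # Distance covered under constant acceleration since the last report.
    dist = obj['speed'] * dt + 0.5 * obj['accel'] * dt * dt
    x, y = obj['pos']
    adjusted = dict(obj)
    adjusted['pos'] = (x + dist * math.cos(obj['heading']),
                       y + dist * math.sin(obj['heading']))
    adjusted['speed'] = obj['speed'] + obj['accel'] * dt
    adjusted['t'] = t_now
    return adjusted
```

Applying this function to every object in the SOL, the DOL, and each ROL would yield the adjusted lists for the current time slice.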
The ROLs in ROL collection 66 for the current time slice may be referred to as a snapshot. As part of time alignment, driving assistance system 40 may create an adjusted ROL for each ROL in ROL collection 66 that falls within the current time slice or snapshot. However, if the current time slice includes a sequence of ROLs from the same node, driving assistance system 40 may either drop all but the most current ROL from that sequence or consolidate that sequence of ROLs into one adjusted ROL, to prevent an individual sending node from having inordinate influence.
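The drop-all-but-the-most-current variant might look like the following sketch, which keeps at most one ROL per sender in the snapshot (the `'sender'` and `'t'` keys are illustrative assumptions):

```python
def consolidate_snapshot(rols):
    """Keep only the most recent ROL from each sender in the time slice.

    `rols` is a list of dicts, each with a 'sender' identifier and a
    timestamp 't'. Only the latest ROL per sender survives, so no single
    node contributes more than one ROL to the snapshot.
    """
    latest = {}
    for rol in rols:
        prev = latest.get(rol['sender'])
        if prev is None or rol['t'] > prev['t']:
            latest[rol['sender']] = rol
    return list(latest.values())
```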
Driving assistance system 40 may use any suitable technique or combination of techniques to generate the adjusted SOL, the adjusted DOL, and the adjusted ROLs. For instance, in one embodiment or scenario, driving assistance system 40 may use data synchronization techniques such as those described in the article from June of 2012 entitled “A Track-To-Track Association Method for Automotive Perception Systems” by Adam Houenou et al. from the IEEE Intelligent Vehicle Symposium (IV 2012) (hereinafter “the Track-to-Track report”).
Then, as shown at block 136, to determine whether reported objects from different ROLs likely refer to the same physical object, driving assistance system 40 performs object clustering, based on the adjusted ROLs, to generate a clustered list of reported objects. In other words, driving assistance system 40 uses object clustering over the adjusted ROLs to associate reported objects with physical objects. For instance, if ROL collection 66 includes multiple different ROLs from multiple different nodes, the object clustering operation generates a unified list of reported objects (i.e., the clustered list of reported objects), based on the adjusted ROLs. Thus, driving assistance system 40 groups similar reported objects, for subsequent fusion. Driving assistance system 40 may use any suitable technique or combination of techniques to perform object clustering, including without limitation techniques such as those described in the Track-to-Track report.
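As a rough stand-in for the track-to-track association techniques cited above, the clustering step might be sketched as a greedy, position-based grouping. This is an illustrative simplification, not the disclosed algorithm; the `'pos'` key and the `max_dist` parameter are assumptions.

```python
import math

def cluster_reported_objects(objects, max_dist=2.0):
    """Group reported objects that likely refer to the same physical object.

    Objects whose adjusted positions lie within `max_dist` meters of a
    cluster's first member are placed in that cluster; otherwise a new
    cluster is started. `objects` is a list of dicts with a 'pos' key.
    """
    clusters = []
    for obj in objects:
        for cluster in clusters:
            cx, cy = cluster[0]['pos']
            ox, oy = obj['pos']
            if math.hypot(ox - cx, oy - cy) <= max_dist:
                cluster.append(obj)
                break
        else:
            # No existing cluster is close enough; start a new one.
            clusters.append([obj])
    return clusters
```

The resulting clusters are the input to the fusion step described next, with each cluster nominally corresponding to one physical object.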
As shown at block 138, driving assistance system 40 then performs object fusion within each cluster of reported objects to generate a fused list of reported objects. That list includes a redundancy metric and a fusion error estimate for each object. The redundancy metric indicates how many different nodes or independent sources reported that object.
The fusion error estimate for a fused object is based on the error metrics for the objects that were fused. And the error metric for an object is based on the perception abilities of the sensing unit(s) that sensed the object and on the actual data collected by the sensing unit(s). For instance, when driving assistance system 40 detects an object based on data from a depth camera, the data for that object in DOL 62 may include (a) a value to describe the distance from subject vehicle 12 to that object and (b) an error metric to indicate an expected degree of accuracy or precision for the distance value. Such error metrics propagate to the fusion error metric.
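For illustration, fusion within one cluster might be sketched with scalar error metrics and inverse-error weighting, so that more precise measurements pull the fused position harder and the fusion error shrinks as independent measurements accumulate. This is a simplified stand-in for the fusion described above (e.g., covariance-based fusion); the dict keys are assumptions.

```python
def fuse_cluster(cluster):
    """Fuse one cluster of reported objects into a single fused object.

    Each member is a dict with 'sender', 'pos' (x, y), and 'err' (a scalar
    error metric; smaller means more precise). The redundancy metric counts
    distinct senders, and the fusion error estimate (1 / sum of inverse
    errors) decreases as consistent independent measurements accumulate.
    """
    inv_sum = sum(1.0 / m['err'] for m in cluster)
    x = sum(m['pos'][0] / m['err'] for m in cluster) / inv_sum
    y = sum(m['pos'][1] / m['err'] for m in cluster) / inv_sum
    return {
        'pos': (x, y),
        'redundancy': len({m['sender'] for m in cluster}),
        'fusion_error': 1.0 / inv_sum,
    }
```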
In one embodiment or scenario, driving assistance system 40 uses an integer for the redundancy metric and a value between 0 and 1 for the fusion error estimate. However, other types of values may be used in other embodiments or scenarios. For instance, a driving assistance system may use a covariance matrix for the fusion error estimate for an object, instead of a single value. Such a covariance matrix may be referred to as an error covariance matrix. In addition or alternatively, a driving assistance system may derive the fusion error estimate as a value between 0 and 1, based on an error covariance matrix.
Driving assistance system 40 may use any suitable technique or combination of techniques to generate the fused list of reported objects. For instance, driving assistance system 40 may use a covariance intersection (CI) algorithm to determine whether reported objects should be combined, based on the error covariance matrixes for those objects. As shown at block 140, driving assistance system 40 then calculates a confidence metric for each object in the fused list of reported objects, based on that object's redundancy metric and fusion error estimate. For instance, in one embodiment or scenario, the confidence metric is a number within the range from 0 to 1, and the calculation algorithm uses as input the redundancy metric and the fusion error estimate, which is a metric that reflects the degree of consistency of the object across multiple sources. In addition, if driving assistance system 40 has previously computed one or more confidence metrics for the object, the algorithm also uses the last N confidence metrics for the object when computing the current confidence metric. Driving assistance system 40 may use any suitable formula to compute the current confidence metric based on the redundancy metric, the fusion error estimate, and the previous confidence metrics (if any). For example, the formula may use concepts from the recommendation systems literature, and the existence of an object may be interpreted as an opinion expressed by an independent entity; the more opinions on the same object, the higher the confidence in that object.
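One possible formula of the kind just described might be sketched as follows. The specific functional form is an illustrative assumption (not mandated by this disclosure): it saturates toward 1 as the redundancy grows, is scaled down by the fusion error estimate, and is smoothed against recent confidence metrics.

```python
def confidence_metric(redundancy, fusion_error, history=(), alpha=0.5):
    """Compute a confidence score in [0, 1] for a fused object.

    `redundancy` counts independent sources that reported the object,
    `fusion_error` is the fusion error estimate in [0, 1], and `history`
    holds up to the last N previously computed confidence metrics.
    """
    # Instantaneous confidence: more independent "opinions" raise it,
    # a larger fusion error lowers it.
    instant = (1.0 - 1.0 / (1.0 + redundancy)) * (1.0 - fusion_error)
    if not history:
        return instant
    # Smooth against the previous confidence metrics with weight alpha.
    prior = sum(history) / len(history)
    return alpha * instant + (1.0 - alpha) * prior
```

Note that with this particular form, an object reported by a single source cannot score above 0.5, consistent with the idea that filtering (described below) should reject objects lacking independent corroboration.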
As shown at block 142, driving assistance system 40 then generates a filtered list of reported objects, based on the fused list of reported objects, the confidence metrics for those objects, and a confidence threshold. For instance, in one embodiment or scenario, driving assistance system 40 uses a confidence threshold of 0.9. Consequently, when generating the filtered list of reported objects, driving assistance system 40 will include each object with a confidence metric of at least 0.9 and reject each object with a confidence metric less than 0.9. Additionally, driving assistance system 40 may compute the current confidence metric for each object using a formula that generates a result of less than 0.9 if an object is not detected by at least two nodes. For instance, if an object has been reported by only one remote node, and that object has not been detected by subject vehicle 12, the formula or algorithm for computing confidence metrics may generate a result of less than 0.9 for that object. Driving assistance system 40 may therefore omit that object from the filtered list of reported objects, and consequently, when updating the SOL (as described in greater detail below), driving assistance system 40 will not add that object to the SOL. Thus, driving assistance system 40 filters out reported objects from rogue nodes.
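The filtering step itself reduces to a simple threshold test, as in the following sketch (the `'confidence'` key is an illustrative assumption):

```python
def filter_reported_objects(fused_objects, threshold=0.9):
    """Keep only fused objects whose confidence meets the threshold.

    `fused_objects` is a list of dicts, each with a 'confidence' key.
    Anything below the threshold (e.g., an object reported by only one
    node) is rejected before the SOL update.
    """
    return [obj for obj in fused_objects if obj['confidence'] >= threshold]
```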
For instance, in the scenario depicted in
Referring again to
As shown at block 146, driving assistance system 40 then affects driving operations, based on updated SOL 60. In particular, referring again to
For instance, if subject vehicle 12 has a driver who is using adaptive cruise control, driving assistance system 40 may automatically reduce the speed of subject vehicle 12 based on data from SOL 60 indicating that there is a vehicle within a certain distance ahead of subject vehicle 12. As another example, if subject vehicle 12 has a driver and SOL 60 indicates that there is debris on the road ahead, driving assistance system 40 may sound a warning beep and display a suitable visual warning for the driver to see. As another example, if subject vehicle 12 is operating autonomously, driving assistance system 40 may automatically adjust the speed and/or direction of subject vehicle 12, based on SOL 60.
Driving assistance system 40 may also periodically broadcast multi-object TSMs to other nodes in TSN 10 according to a predetermined time interval. Accordingly, as shown at block 150 of
However, if it is time to report detected objects, driving assistance system 40 may generate an outgoing ROL, based on DOL 62, as shown at block 152. For instance, driving assistance system 40 may create a multi-object TSM with an ROL that describes all of the objects in DOL 62. As shown at block 154, driving assistance system 40 may then broadcast that multi-object TSM to the other nodes in TSN 10. The process may then return to block 110 through page connector C, and driving assistance system 40 may continue to collect sensor data, etc., as indicated above.
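The periodic-broadcast logic of blocks 150 through 154 might be sketched as follows. The message layout and the `send` callback are illustrative assumptions standing in for the actual network primitive.

```python
def maybe_broadcast(t_now, last_broadcast_t, interval, sender_state, dol, send):
    """Broadcast a multi-object TSM if the reporting interval has elapsed.

    `sender_state` carries the sending node's own attributes, `dol` is the
    list of locally detected objects, and `send` is a callback that
    transmits one message to the other TSN nodes. Returns the time of the
    last broadcast (updated if a broadcast occurred).
    """
    if t_now - last_broadcast_t < interval:
        return last_broadcast_t          # not yet time to report
    # Build the outgoing multi-object TSM: the ROL describes all of the
    # objects in the DOL.
    tsm = {'sender': sender_state, 'rol': list(dol)}
    send(tsm)
    return t_now
```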
As has been described, a TSN includes vehicles with driving assistance systems that share multi-object TSMs, and the driving assistance systems use those multi-object TSMs to detect and filter out false information from malfunctioning or rogue senders. Such driving assistance systems may provide for greater safety and reliability, compared to driving assistance systems which do not use multi-object TSMs. Thus, multi-object TSMs may allow autonomous systems, for instance, to make decisions based on a higher degree of redundancy, thereby increasing security and the overall system safety. Each participating vehicle may use multi-object TSMs to inform other vehicles not only about that subject vehicle itself but also about objects perceived in the environment by the subject vehicle from the perspective of the subject vehicle. When an object is detected and reported by multiple participants, the credibility of that information dramatically increases. Accordingly, as indicated above, vehicles can cross-check information from multiple sources, and derive confidence scores based on the received information. For instance, referring again to
In addition, multi-object TSMs facilitate more precise and fault tolerant identification of obstacles which are outside of the perception range of a subject vehicle. For instance, with regard to
Although certain example embodiments are described herein, one of ordinary skill in the art will understand that those example embodiments may easily be divided, combined, or otherwise altered to implement additional embodiments. For instance, according to the process described above, a driving assistance system processes its ROL collection on a periodic basis; but in an alternative embodiment, the driving assistance system may process each ROL as it is received. Also, the above description focuses on a driving assistance system in a subject vehicle. However, a roadside unit may perform the same or similar types of operations. A TSN may thereby leverage the broader coverage and the extended sensing capabilities that may be provided by stationary transportation facilities. For instance, a subject roadside unit may generate an ROL collection based on multi-object TSMs received from vehicles and/or other roadside units within network range of the subject roadside unit. Moreover, that network range may be global, since a roadside unit may include wireless and wired networking connectivity, with access, for example, to the Internet. The roadside unit may then process the ROLs using techniques such as those described above. For instance, the roadside unit may match reported objects with objects detected by the roadside unit using its perception layer (e.g., cameras and/or radars deployed on highways and/or at intersections), and the roadside unit may filter out objects with a low confidence metric. The roadside unit may then broadcast (with a tunable periodicity) the whole list of high-confidence objects detected under the coverage of that roadside unit. In addition or alternatively, in one embodiment, vehicles only include directly detected objects in their TSMs, while roadside units include both directly detected objects and reported objects that have a sufficiently high confidence metric.
In the present disclosure, expressions such as “an embodiment,” “one embodiment,” and “another embodiment” are meant to generally reference embodiment possibilities. Those expressions are not intended to limit the invention to particular embodiment configurations. As used herein, those expressions may reference the same embodiment or different embodiments, and those embodiments are combinable into other embodiments. In light of the principles and example embodiments described and illustrated herein, it will be recognized that the illustrated embodiments can be modified in arrangement and detail without departing from such principles.
Also, as described above, a device may include instructions and other data which, when accessed by a processor, cause the device to perform particular operations. For purposes of this disclosure, instructions which cause a device to perform operations may be referred to in general as software. Software and the like may also be referred to as control logic. Software that is used during a boot process may be referred to as firmware. Software that is stored in nonvolatile memory may also be referred to as firmware. Software may be organized using any suitable structure or combination of structures. Accordingly, terms like program and module may be used in general to cover a broad range of software constructs, including without limitation application programs, subprograms, routines, functions, procedures, drivers, libraries, data structures, processes, microcode, and other types of software components. Also, it should be understood that a software module may include more than one component, and those components may cooperate to complete the operations of the module. Also, the operations which the software causes a device to perform may include creating an operating context, instantiating a particular data structure, etc. Any suitable operating environment and programming language (or combination of operating environments and programming languages) may be used to implement software components described herein.
A medium which contains data and which allows another component to obtain that data may be referred to as a machine-accessible medium or a machine-readable medium. In one embodiment, software for multiple components is stored in one machine-readable medium. In other embodiments, two or more machine-readable media may be used to store the software for one or more components. For instance, instructions for one component may be stored in one medium, and instructions for another component may be stored in another medium. Or a portion of the instructions for one component may be stored in one medium, and the rest of the instructions for that component (as well as instructions for other components) may be stored in one or more other media. Similarly, software that is described above as residing on a particular device in one embodiment may, in other embodiments, reside on one or more other devices. For instance, in a distributed environment, some software may be stored locally, and some may be stored remotely. Similarly, operations that are described above as being performed on one particular device in one embodiment may, in other embodiments, be performed by one or more other devices.
Accordingly, alternative embodiments include machine-readable media containing instructions for performing the operations described herein. Such media may be referred to in general as apparatus and in particular as program products. Such media may include, without limitation, tangible non-transitory storage components such as magnetic disks, optical disks, dynamic RAM, static RAM, read-only memory (ROM), etc., as well as processors, controllers, and other components that include data storage facilities. For purposes of this disclosure, the term “ROM” may be used in general to refer to nonvolatile memory devices such as erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash ROM, flash memory, etc.
It should also be understood that the hardware and software components depicted herein represent functional elements that are reasonably self-contained so that each can be designed, constructed, or updated substantially independently of the others. In alternative embodiments, many of the components may be implemented as hardware, software, or combinations of hardware and software for providing the functionality described and illustrated herein. In some embodiments, some or all of the control logic for implementing the described operations may be implemented in hardware logic (e.g., as microcode in an integrated circuit chip, as a programmable gate array (PGA), as an application-specific integrated circuit (ASIC), etc.).
Additionally, the present teachings may be used to advantage in many different kinds of data processing systems. Such data processing systems may include, without limitation, accelerators, systems on a chip (SOCs), wearable devices, handheld devices, smartphones, telephones, entertainment devices such as audio devices, video devices, audio/video devices (e.g., televisions and set-top boxes), vehicular processing systems, personal digital assistants (PDAs), tablet computers, laptop computers, portable computers, personal computers (PCs), workstations, servers, client-server systems, distributed computing systems, supercomputers, high-performance computing systems, computing clusters, mainframe computers, mini-computers, and other devices for processing or transmitting information. Accordingly, unless explicitly specified otherwise or required by the context, references to any particular type of data processing system (e.g., a PC) should be understood as encompassing other types of data processing systems, as well. A data processing system may also be referred to as an apparatus. The components of a data processing system may also be referred to as apparatus.
Also, unless expressly specified otherwise, components that are described as being coupled to each other, in communication with each other, responsive to each other, or the like need not be in continuous communication with each other and need not be directly coupled to each other. Likewise, when one component is described as receiving data from or sending data to another component, that data may be sent or received through one or more intermediate components, unless expressly specified otherwise. In addition, some components of the data processing system may be implemented as adapter cards with interfaces (e.g., a connector) for communicating with a bus. Alternatively, devices or components may be implemented as embedded controllers, using components such as programmable or non-programmable logic devices or arrays, ASICs, embedded computers, smart cards, and the like. For purposes of this disclosure, the term “bus” includes pathways that may be shared by more than two devices, as well as point-to-point pathways. Similarly, terms such as “line,” “pin,” etc. should be understood as referring to a wire, a set of wires, or any other suitable conductor or set of conductors. For instance, a bus may include one or more serial links, a serial link may include one or more lanes, a lane may be composed of one or more differential signaling pairs, and the changing characteristics of the electricity that those conductors are carrying may be referred to as signals on a line. Also, for purpose of this disclosure, the term “processor” denotes a hardware component that is capable of executing software. For instance, a processor may be implemented as a central processing unit (CPU), a processing core, or as any other suitable type of processing element. A CPU may include one or more processing cores, and a device may include one or more CPUs.
Also, although one or more example processes have been described with regard to particular operations performed in a particular sequence, numerous modifications could be applied to those processes to derive numerous alternative embodiments of the present invention. For example, alternative embodiments may include processes that use fewer than all of the disclosed operations, processes that use additional operations, and processes in which the individual operations disclosed herein are combined, subdivided, rearranged, or otherwise altered.
In view of the wide variety of useful permutations that may be readily derived from the example embodiments described herein, this detailed description is intended to be illustrative only, and should not be taken as limiting the scope of coverage.