The present disclosure relates generally to Ethernet communication, and particularly to methods and systems for sensor synchronization over Ethernet networks.
Various systems and applications, such as automotive systems and industrial control systems, employ sensors that are connected to an Ethernet network. The sensors send the data they acquire, and receive control commands, over the network. Ethernet networks in such systems sometimes use Energy-Efficient Ethernet (EEE) protocols to reduce power consumption. EEE is specified, for example, in IEEE Standard 802.3ch-2020, entitled “IEEE Standard for Ethernet—Amendment 8: Physical Layer Specifications and Management Parameters for 2.5 Gb/s, 5 Gb/s, and 10 Gb/s Automotive Electrical Ethernet,” July, 2020, which is incorporated herein by reference.
The description above is presented as a general overview of related art in this field and should not be construed as an admission that any of the information it contains constitutes prior art against the present patent application.
An embodiment that is described herein provides an apparatus for controlling a sensor over a network. The apparatus includes a transceiver and a processor. The transceiver is configured to communicate over a network. The processor is configured to receive or generate control data for controlling a sensor connected to the network, to generate a packet including (i) the control data and (ii) a trigger timestamp indicative of a future time at which the control data is to be provided to the sensor, and to transmit the packet using the transceiver over the network.
In some embodiments, the processor is configured to set the future time to exceed a maximal latency of the network between the apparatus and the sensor.
In an example embodiment, the processor is further configured to wake up a peer device in accordance with a schedule, to include in the packet a transport timestamp, the transport timestamp indicative of an additional future time at which, in accordance with the schedule, a link with the peer device will be awake, and to send the packet to the peer device at a transmission time corresponding to the additional future time.
In an embodiment the network includes an Ethernet network.
There is additionally provided, in accordance with an embodiment that is described herein, an apparatus for controlling a sensor over a network. The apparatus includes a transceiver, a sensor interface, a memory and a processor. The transceiver is configured to communicate over a network. The sensor interface is configured to communicate with a sensor over a local link that does not traverse the network. The processor is configured to receive from the network, using the transceiver, a packet including (i) control data for controlling the sensor and (ii) a trigger timestamp indicative of a future time at which the control data is to be provided to the sensor, to buffer at least the control data in the memory until a delivery time corresponding to the future time indicated in the trigger timestamp, and, at the delivery time, to retrieve the control data from the memory and to send the control data to the sensor over the local link.
In some embodiments the network includes an Ethernet network.
There is also provided, in accordance with an embodiment that is described herein, an apparatus for controlling a sensor over a network. The apparatus includes a transceiver and a processor. The transceiver is configured to communicate over a network. The processor is configured to receive or generate control data for controlling a sensor, the sensor being connected to the network by a peer device, to wake up a link with the peer device in accordance with a schedule, and, during a time period in the schedule in which the link with the peer device is awake, to send to the peer device a packet including the control data.
In some embodiments, the processor is configured to embed a transport timestamp in the packet, the transport timestamp indicative of a future time at which, in accordance with the schedule, the link with the peer device will be awake, and to send the packet to the peer device at a transmission time corresponding to the future time. In an example embodiment, the apparatus further includes a memory, and the processor is configured to buffer the control data in the memory, and, at the transmission time, to retrieve the control data from the memory and to send the control data to the peer device.
In another embodiment, the processor is configured to schedule sending of the packet based on a Quality-of-Service (QoS) metric associated with the control data. In yet another embodiment, the processor is configured to embed a trigger timestamp in the packet, the trigger timestamp indicative of a future time at which the control data is to be provided to the sensor. In some embodiments, the network includes an Ethernet network.
There is further provided, in accordance with an embodiment that is described herein, an apparatus for communicating sensor data over a network. The apparatus includes a transceiver, a sensor interface and a processor. The transceiver is configured to communicate over a network. The sensor interface is configured to communicate with a sensor over a local link that does not traverse the network. The processor is configured to receive from the sensor, over the local link, sensor data for sending over the network to a peer device, to generate a packet including (i) the sensor data, (ii) a capture timestamp indicative of a time at which the sensor captured the sensor data, and (iii) a presentation timestamp indicative of a future time at which the sensor data is to be presented by the peer device for subsequent processing, and to transmit the packet using the transceiver over the network to the peer device.
In an embodiment, the processor is configured to set the future time, indicated in the presentation timestamp, to exceed a maximal latency of the network between the apparatus and the peer device. In some embodiments the network includes an Ethernet network.
There is additionally provided, in accordance with an embodiment that is described herein, an apparatus for controlling a sensor over a network. The apparatus includes a transceiver, a memory and a processor. The transceiver is configured to communicate over a network. The processor is configured to receive multiple packets from the network using the transceiver, each of the packets including (i) sensor data captured by a sensor, (ii) a capture timestamp indicative of a time at which the sensor captured the sensor data, and (iii) a presentation timestamp indicative of a future time at which the sensor data is to be presented by the apparatus for subsequent processing, to buffer each of the packets in the memory until a respective retrieval time corresponding to the future time indicated in the presentation timestamp of the packet, and, at the respective retrieval time of each packet, to retrieve the packet from the memory and present the sensor data and the capture timestamp of the packet for subsequent processing.
In some embodiments, the network includes an Ethernet network.
There is also provided, in accordance with an embodiment that is described herein, a method for controlling a sensor over a network. The method includes receiving or generating control data for controlling a sensor connected to the network. A packet is generated, the packet including (i) the control data and (ii) a trigger timestamp indicative of a future time at which the control data is to be provided to the sensor. The packet is transmitted over the network.
There is further provided, in accordance with an embodiment that is described herein, a method for controlling a sensor over a network. The method includes communicating packets over a network, and communicating with a sensor over a local link that does not traverse the network. A packet is received from the network, the packet including (i) control data for controlling the sensor and (ii) a trigger timestamp indicative of a future time at which the control data is to be provided to the sensor. At least the control data is buffered in a memory until a delivery time corresponding to the future time indicated in the trigger timestamp. At the delivery time, the control data is retrieved from the memory and the control data is sent to the sensor over the local link.
There is also provided, in accordance with an embodiment that is described herein, a method for controlling a sensor over a network. The method includes receiving or generating control data for controlling a sensor, the sensor being connected to the network by a peer device. A link with the peer device is woken up in accordance with a schedule. A packet, which includes the control data, is sent to the peer device during a time period in the schedule in which the link with the peer device is awake.
There is additionally provided, in accordance with an embodiment that is described herein, a method for communicating sensor data over a network. The method includes communicating over a network, and communicating with a sensor over a local link that does not traverse the network. Sensor data, for sending over the network to a peer device, is received from the sensor over the local link. A packet is generated, the packet including (i) the sensor data, (ii) a capture timestamp indicative of a time at which the sensor captured the sensor data, and (iii) a presentation timestamp indicative of a future time at which the sensor data is to be presented by the peer device for subsequent processing. The packet is transmitted over the network to the peer device.
There is further provided, in accordance with an embodiment that is described herein, a method for controlling a sensor over a network. The method includes receiving multiple packets from the network, each of the packets including (i) sensor data captured by a sensor, (ii) a capture timestamp indicative of a time at which the sensor captured the sensor data, and (iii) a presentation timestamp indicative of a future time at which the sensor data is to be presented for subsequent processing. Each of the packets is buffered in a memory until a respective retrieval time corresponding to the future time indicated in the presentation timestamp of the packet. At the respective retrieval time of each packet, the packet is retrieved from the memory, and the sensor data and the capture timestamp of the packet are presented for subsequent processing.
The present disclosure will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings.
In systems that involve communicating with sensors over a network, it is sometimes required to control a sensor with high time accuracy. For example, in some cases it is required to deliver control data to a sensor (e.g., deliver a command for triggering the sensor to acquire sensor data) at a specific time. In other cases, it is necessary that sensor data captured by a sensor (e.g., an image captured by a camera) be presented for subsequent processing at a specific time. Controlling a sensor over a network with high time accuracy is particularly challenging when the network has a variable latency. When using Energy-Efficient Ethernet (EEE), for example, the propagation time of a packet may vary by several tens of microseconds. The time accuracy requirement, on the other hand, may be on the order of less than one microsecond.
Embodiments that are described herein provide improved systems and methods for controlling sensors over a network with high time accuracy. Embodiments are described in the context of automotive systems in which multiple sensors are connected to an in-vehicle Ethernet network. The disclosed techniques, however, are suitable for use in any other suitable system, e.g., in industrial control systems, and with various types of sensors and surveillance systems. Embodiments are described in the context of Ethernet, and EEE, by way of example. The disclosed techniques, however, are suitable for use in various other types of packet networks. Non-limiting examples of sensors used in automotive systems include video cameras, velocity sensors, accelerometers, audio sensors, infra-red sensors, radar sensors, lidar sensors, ultrasonic sensors, rangefinders or other proximity sensors, and the like.
In some embodiments, a system comprises one or more sensors, and a controller that controls the sensors over an Ethernet network. The controller, also referred to herein as an initiator, may comprise, for example, a system Central Processor Unit (CPU), a Graphics Processing Unit (GPU) or any other suitable type of processor. In some embodiments the controller or initiator may be integrated into a switch of the Ethernet network. A given sensor is typically connected to the network via a “sensor bridge”. The sensor bridge communicates with the controller over the network using Ethernet, and with the sensor using one or more local interfaces (also referred to as local links). The type of local interface may vary from one sensor type to another. The link between a sensor and a sensor bridge is considered local in the sense that the link does not traverse the Ethernet network.
In the embodiments described herein, a sensor may be controlled using any of three different mechanisms, referred to as a “trigger timestamp” mechanism, a “transport timestamp” mechanism and a “presentation timestamp” mechanism. The trigger timestamp and transport timestamp mechanisms pertain to the direction from the controller to the sensor. The presentation timestamp mechanism pertains to the direction from the sensor to the controller. The presentation timestamp mechanism is useful, for example, for synchronizing presentation (and subsequent processing) of sensor data captured by multiple different sensors. A given implementation may employ a single mechanism or any combination of mechanisms.
In an embodiment, in the “trigger timestamp” mechanism, the controller receives or generates control data for controlling a sensor. The controller generates and sends to the sensor bridge an Ethernet packet, which comprises (i) the control data and (ii) a trigger timestamp. The trigger timestamp is indicative of a future time at which the control data is to be provided to the sensor. The trigger timestamp may indicate, for example, a system-determined future time at which all image sensors are required to receive control data. The controller typically defines the future time in the packet (the “trigger time”) so as to exceed the maximal expected latency of the Ethernet network between the controller and the sensor. In an example embodiment, the controller sets the trigger time to be several milliseconds later than the time at which the packet is generated.
In this manner, the sensor bridge is able to deliver the control data to the sensor exactly at the specified future time, regardless of the variable latency of the Ethernet network. The future time can be a fixed delay after the command is sent, depending on the requirements of the sensor, or it can be adjusted to synchronize with a particular time interval. For instance, packets can be timed to the blanking interval of a video sensor by continuously tracking the timing of video-sensor frames.
The sensor bridge receives the Ethernet packet and buffers the packet, or at least the control data, in memory. Shortly before the trigger time indicated in the trigger timestamp, the sensor bridge retrieves the control data from the memory and sends the control data to the sensor over the local link. The time at which the sensor bridge retrieves the control data from the memory (referred to as “delivery time”, slightly preceding the trigger time) is derived from the trigger time indicated in the trigger timestamp, so that the sensor will actually receive the control data at the specified trigger time notwithstanding delays in the sensor bridge and the local interface.
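By way of illustration only, the sensor-bridge buffering described above can be sketched as follows. The class and parameter names, and the microsecond time values, are hypothetical and are not part of any disclosed embodiment; the sketch merely shows how a delivery time derived from the trigger timestamp absorbs the variable network latency:

```python
import heapq

class TriggerBuffer:
    """Illustrative sketch: buffers control data until a delivery time
    derived from the trigger timestamp, then releases it to the local
    sensor link. Times are microseconds on the shared system clock."""

    def __init__(self, local_link_delay_us):
        # Delivery time precedes the trigger time by the (known)
        # latency of the sensor bridge and the local interface.
        self.local_link_delay_us = local_link_delay_us
        self._heap = []  # min-heap ordered by delivery time

    def on_packet(self, trigger_time_us, control_data):
        delivery_time_us = trigger_time_us - self.local_link_delay_us
        heapq.heappush(self._heap, (delivery_time_us, control_data))

    def poll(self, now_us):
        """Returns control data whose delivery time has arrived."""
        due = []
        while self._heap and self._heap[0][0] <= now_us:
            _, data = heapq.heappop(self._heap)
            due.append(data)
        return due

# A packet arrives early; the variable network latency is absorbed by
# the buffer, and the data is released only at the delivery time.
buf = TriggerBuffer(local_link_delay_us=5)
buf.on_packet(trigger_time_us=1000, control_data=b"capture-frame")
assert buf.poll(now_us=900) == []                   # too early: stays buffered
assert buf.poll(now_us=995) == [b"capture-frame"]   # delivery time reached
```

In this sketch the local-link delay is a single constant; a practical sensor bridge would account for per-interface delays.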
The “trigger timestamp” mechanism described above governs the time at which the sensor bridge delivers the control data to the sensor. The “transport timestamp” mechanism, in contrast, governs the time at which the controller sends the packet carrying the control data to the network.
The time at which the control data is sent to the network is important, for example, when the link between the controller and the sensor bridge is awake only intermittently in accordance with a certain schedule, in order to reduce energy consumption. This is the case, for example, when the controller and the sensor bridge communicate using Energy-Efficient Ethernet (EEE). When using EEE, it is possible to reduce packet propagation latency considerably by sending a packet to the network only at times when the destination sensor bridge is awake.
In the present context, the term “the link is awake” means that the Ethernet transceivers in both the controller and the sensor bridge (referred to as a “link partner” or “peer Ethernet device” of the controller) are active. Deactivating the link by the controller comprises instructing the sensor bridge to deactivate at least its Ethernet transceiver, and may comprise deactivating the Ethernet transceiver of the controller as well. Waking-up (activating) the link by the controller comprises instructing the sensor bridge to activate at least its Ethernet transceiver, and may comprise activating the Ethernet transceiver of the controller.
In an embodiment, in the “transport timestamp” mechanism, the controller generates an Ethernet packet comprising (i) the control data and (ii) a transport timestamp. The transport timestamp is indicative of a future time at which, according to the schedule, the link between the controller and the sensor bridge will be awake. The controller buffers the packet in memory. Shortly before the future time (“transport time”) indicated in the transport timestamp, the controller retrieves the packet from the memory and sends the packet to the network, en route to the sensor bridge. The time at which the controller retrieves the packet from the memory (referred to as “transmission time”, slightly preceding the specified transport time) is derived from the transport time indicated in the transport timestamp, so that the packet is sent to the network at the specified transport time regardless of delays in the controller.
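The controller-side timing of the “transport timestamp” mechanism can be sketched, for illustration, in the same manner. All names, the schedule intervals, and the delay constants below are hypothetical; the point shown is only that packets are released at a transmission time slightly preceding the transport time, i.e., while the EEE link is awake:

```python
import heapq

class TransportScheduler:
    """Illustrative sketch: buffers packets and releases each one at a
    transmission time derived from its transport timestamp."""

    def __init__(self, controller_delay_us):
        self.controller_delay_us = controller_delay_us
        self._heap = []  # min-heap keyed by transmission time

    def buffer(self, transport_time_us, packet):
        tx_time_us = transport_time_us - self.controller_delay_us
        heapq.heappush(self._heap, (tx_time_us, packet))

    def due_packets(self, now_us):
        """Packets whose transmission time has arrived."""
        out = []
        while self._heap and self._heap[0][0] <= now_us:
            out.append(heapq.heappop(self._heap)[1])
        return out

def link_awake(now_us, schedule):
    """schedule: list of (wake_us, sleep_us) intervals during which the
    EEE link with the peer device is awake (hypothetical format)."""
    return any(w <= now_us < s for w, s in schedule)

schedule = [(1000, 1200), (2000, 2200)]   # illustrative EEE schedule
sched = TransportScheduler(controller_delay_us=10)
# The transport time is chosen inside an awake window of the schedule:
sched.buffer(transport_time_us=1050, packet=b"ctrl")
assert sched.due_packets(now_us=1000) == []          # not yet due
assert link_awake(1040, schedule)                    # link is awake
assert sched.due_packets(now_us=1040) == [b"ctrl"]   # sent while awake
```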
The “transport timestamp” mechanism enables significant power savings, particularly in asymmetric Ethernet links. This mechanism also provides a deterministic latency for the control data in spite of the use of EEE. This ability also enables synchronizing control of different sensors. The disclosed implementations of the “transport timestamp” mechanism are compatible with existing Ethernet standards.
In an embodiment, in the “presentation timestamp” mechanism, the sensor bridge receives sensor data from a sensor and generates an Ethernet packet comprising (i) the sensor data, (ii) a capture timestamp and (iii) a presentation timestamp. The capture timestamp is indicative of the time at which the sensor captured the sensor data. The presentation timestamp is indicative of a future time (“presentation time”) at which the sensor data is to be presented by the controller, e.g., to a user application, for subsequent processing. In an example embodiment, the sensor bridge sets the presentation time to be several milliseconds in the future (relative to the time the packet is generated), to account for any delays in the network.
In this embodiment, the controller receives multiple Ethernet packets from one or more sensor bridges, each packet comprising sensor data, a capture timestamp and a presentation timestamp. The controller buffers the received packets in memory. A given packet is buffered until shortly before the presentation time indicated in the presentation timestamp of that packet. Shortly before the presentation time of a given packet, the controller retrieves the packet from the memory and outputs (“presents”) the sensor data and the capture timestamp for subsequent processing. The time at which the controller retrieves the packet from the memory (referred to as “retrieval time”, slightly preceding the presentation time) is derived from the presentation time indicated in the presentation timestamp, so that the sensor data will be output from the controller at the specified presentation time notwithstanding delays in the controller. In this manner, the controller can present sensor data originating from different sensors, at times that reflect the actual capture times, even though the network latency varies over time and from one sensor bridge to another.
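The controller-side handling in the “presentation timestamp” mechanism can likewise be sketched as follows. The names and time values are hypothetical; the sketch shows only that packets from different sensor bridges, arriving with different network latencies, are presented at their common presentation time together with their capture timestamps:

```python
import heapq

class PresentationBuffer:
    """Illustrative sketch: holds each received packet until a retrieval
    time derived from its presentation timestamp, then presents the
    sensor data and capture timestamp for subsequent processing."""

    def __init__(self, controller_delay_us):
        self.controller_delay_us = controller_delay_us
        self._heap = []  # min-heap keyed by retrieval time

    def on_packet(self, presentation_time_us, capture_time_us, sensor_data):
        retrieval_us = presentation_time_us - self.controller_delay_us
        heapq.heappush(self._heap, (retrieval_us, capture_time_us, sensor_data))

    def present(self, now_us):
        """(capture_time, data) pairs due for presentation now."""
        out = []
        while self._heap and self._heap[0][0] <= now_us:
            _, cap, data = heapq.heappop(self._heap)
            out.append((cap, data))
        return out

# Two sensors: the radar packet traverses the network more slowly than
# the camera packet, yet both are presented at the presentation time.
pb = PresentationBuffer(controller_delay_us=2)
pb.on_packet(presentation_time_us=5000, capture_time_us=4000, sensor_data="camera")
pb.on_packet(presentation_time_us=5000, capture_time_us=4001, sensor_data="radar")
assert pb.present(now_us=4900) == []
assert pb.present(now_us=4998) == [(4000, "camera"), (4001, "radar")]
```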
Various implementation examples of systems, controllers and sensor bridges, which use the disclosed “trigger timestamp”, “transport timestamp” and “presentation timestamp” mechanisms are described herein.
In various embodiments, sensors 28 may comprise any suitable types of sensors. Non-limiting examples of sensors include video cameras, velocity sensors, accelerometers, audio sensors, infra-red sensors, radar sensors, lidar sensors, ultrasonic sensors, rangefinders or other proximity sensors, and the like. Controller 32 may comprise any suitable type of processor, e.g., a CPU or a GPU.
Sensors 28 and controller 32 communicate via an Ethernet network comprising multiple network links 40 and one or more Ethernet switches 44. Ethernet links 40 may comprise, for example, twisted-pair cables. Sensors 28 connect to the Ethernet network via one or more sensor bridges 36. A given sensor bridge may connect a single sensor or multiple sensors to the Ethernet network.
Sensor bridges 36, switches 44 and controller 32 may communicate over network links 40 at any suitable bit rate. Example bit rates are 2.5 Gb/s, 5 Gb/s or 10 Gb/s, in accordance with the IEEE 802.3ch-2020 standard. In some embodiments, sensor bridges 36 and controller 32 communicate using EEE, in accordance with IEEE 802.3ch-2020, cited above. Controller 32 and sensor bridges 36 are assumed to be synchronized to some central clock or time-base of system 20, for example using the Precision Time Protocol (PTP).
An inset on the left-hand side of
In the embodiment of
In a typical flow, processor 56 receives or generates control data for controlling sensor 28 via sensor bridge 36. Timestamp generator 60 generates an Ethernet packet that comprises (i) the control data, (ii) a trigger timestamp and (iii) a transport timestamp.
The trigger timestamp is indicative of a future time (denoted “trigger time”) at which the control data is to be delivered from sensor bridge 36 to sensor 28. The transport timestamp is indicative of another future time (denoted “transport time”) at which (i) the link between controller 32 and sensor bridge 36 will be awake according to the EEE schedule between the controller and the sensor bridge, and (ii) the packet is to be sent from controller 32 to network 48.
Processor 56 buffers the Ethernet packet in packet buffer 64. Shortly before the transport time specified in the transport timestamp of the buffered packet, scheduler 68 retrieves the packet from buffer 64 and sends the packet to network 48 using Ethernet transceiver 52.
In sensor bridge 36, Ethernet transceiver 72 receives the Ethernet packet from network 48 and forwards the packet to processor 80. Processor 80 buffers the packet in packet buffer 84. Shortly before the trigger time specified in the trigger timestamp of the buffered packet, scheduler 88 retrieves the packet from buffer 84 and sends the control data to sensor 28 using sensor interface 76.
In an embodiment, controller 32 may carry out the above process concurrently for multiple Ethernet packets that convey control data to various sensors 28 via various sensor bridges 36. Similarly, sensor bridge 36 may carry out the above process concurrently for multiple Ethernet packets that convey control data to various sensors 28 connected thereto.
The additional components of processor 56, at the bottom of
Time-stamper 98 receives control data to be delivered to sensors 28. Timestamp generator 60 generates suitable trigger timestamps and transport timestamps for the various control data. Time-stamper 98 encapsulates the control data in Ethernet packets that each comprise (i) control data, (ii) a respective trigger timestamp and (iii) a respective transport timestamp.
Transport-timestamp extractor 102 extracts the transport timestamps from the packets and buffers the packets in packet buffer 64. The extracted transport timestamps will be used later, to initiate transmission of the buffered packets according to the specified transport times.
Idle/wake-up module 106 activates and deactivates the links between controller 32 and the various sensor bridges 36 in accordance with EEE as specified in IEEE 802.3ch-2020, cited above. For a given sensor bridge 36 (referred to in the present context as a link partner or peer Ethernet device of controller 32), idle/wake-up module 106 activates (“wakes up”) and deactivates at least the Ethernet transceiver of the sensor bridge in accordance with a suitable time schedule in order to reduce power consumption. Idle/wake-up module 106 may also activate and deactivate other components, e.g., other components of sensor bridge 36 and/or the Ethernet transceiver of controller 32.
The activation/deactivation schedule is referred to herein as an EEE schedule. The EEE schedule may be decided internally by module 106, or provided to module 106 from an external source. In either case, processor 56 is aware of the time periods in which the link with each sensor bridge 36 is active (awake). Using this information, processor 56 (e.g., timestamp generator 60, or scheduler 68 in
Clock module 118 synchronizes processor 56 to the central clock of system 20, e.g., using PTP. Comparator 114 compares the transport times of the various buffered packets (as provided by transport-timestamp extractor 102) to the current time (as provided by clock module 118). When the current time matches the transport time of a certain packet (e.g., when the current time is equal to a “transmission time” that slightly precedes the transport time) comparator 114 retrieves the packet from buffer 64 and sends the packet via MUX 110 to Ethernet transceiver 52, for transmission to the appropriate sensor bridge 36.
In some embodiments, processor 56 further reduces power consumption, and increases communication efficiency, by jointly setting the EEE schedule and the transport times of packets. For example, processor 56 may accumulate control data destined to a certain sensor bridge 36. When a sufficient amount of control data has been accumulated, processor 56 may (i) activate the link to the sensor bridge and (ii) set the transport timestamps of the packets destined to that sensor bridge. As another example, processor 56 may wait until slightly before the specified transport time of a certain packet, and then activate the link to the sensor bridge and send the packet. Other scheduling schemes that jointly consider the EEE schedule and the packet transport times are also possible.
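The first joint scheduling example above, in which control data is accumulated and the link is woken only once per batch, can be sketched for illustration as follows. The class name, batch size, and latency constants are hypothetical and serve only to show that a single wake-up covers several transport-timestamped packets:

```python
class AccumulatingScheduler:
    """Illustrative sketch of the accumulate-then-wake scheme: control
    data destined to one sensor bridge is accumulated, and only when a
    full batch is available does the controller wake the link and
    assign transport timestamps inside the resulting awake window."""

    def __init__(self, batch_size, wake_latency_us):
        self.batch_size = batch_size            # packets per wake-up
        self.wake_latency_us = wake_latency_us  # link wake-up delay
        self._pending = []
        self.wake_requests = []                 # times the link was woken

    def submit(self, control_data, now_us):
        """Returns (transport_time, data) pairs once a batch is full,
        otherwise an empty list (the data stays accumulated)."""
        self._pending.append(control_data)
        if len(self._pending) < self.batch_size:
            return []
        # Wake the link, then stamp all pending packets with transport
        # times falling inside the awake window that follows.
        self.wake_requests.append(now_us)
        start = now_us + self.wake_latency_us
        batch = [(start + i, d) for i, d in enumerate(self._pending)]
        self._pending = []
        return batch

s = AccumulatingScheduler(batch_size=2, wake_latency_us=100)
assert s.submit(b"a", now_us=0) == []                  # accumulated
assert s.submit(b"b", now_us=10) == [(110, b"a"), (111, b"b")]
assert s.wake_requests == [10]                         # one wake-up, not two
```

Spacing the transport times by one microsecond per packet is purely illustrative; a real scheduler would space packets by their transmission durations.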
For reception, processor 56 of
A host I/O interface can be used to simplify the control interface for the sensor with bi-directional signal forwarding across the Ethernet link. Processor 56 of
For transmission, processor 80 of sensor bridge 36 comprises a sensor data RX 174 that receives sensor data from sensor 28, an encapsulator 176 that encapsulates the sensor data into Ethernet packets, and a time-stamper 180 that time-stamps the Ethernet packets before the packets are transmitted to network 48 using transceiver 72. For reception, processor 80 comprises a trigger-timestamp extractor 184, packet buffer 84, a comparator 188 and a decapsulator 196. Processor 80 further comprises a clock module 192 and an idle/wake-up module 200. Clock module 192 synchronizes processor 80 to the central clock of system 20, e.g., using PTP. Idle/wake-up module 200 activates and deactivates suitable elements of sensor bridge 36, e.g., Ethernet transceiver 72, according to the EEE schedule.
Upon reception, trigger-timestamp extractor 184 extracts the trigger timestamps from received Ethernet packets, and buffers the packets in buffer 84. Comparator 188 compares the trigger times of the various buffered packets (as provided by trigger-timestamp extractor 184) to the current time (as provided by clock module 192). When the current time matches the trigger time of a certain packet (e.g., when the current time is equal to a “delivery time” that slightly precedes the trigger time) comparator 188 retrieves the packet from buffer 84 and sends the packet to decapsulator 196. Decapsulator 196 extracts the control data from the packet, and sends the control data at the specified trigger time via sensor interface 76 to sensor 28.
In this example, processor 56 of controller 32 comprises a host I/O interface 204, which (i) receives control data from a host, for transmission to sensor bridge 36, and (ii) supports a physical signaling mechanism via which a host can activate (wake-up) and deactivate the link with sensor bridge 36 in accordance with EEE. This signaling mechanism can be implemented, for example, using pin-based signaling such as Serial Peripheral Interface (SPI), Universal Asynchronous Receiver Transmitter (UART), I2C or I3C, or any other suitable interface such as PCIe, Universal Serial Bus (USB) or Ethernet.
Processor 56 further comprises a packet encapsulator and scheduler 208 and an idle/wake-up module 212. Idle/wake-up module 212 wakes up the link with sensor bridge 36 (the link partner or peer Ethernet device of controller 32 in this context) according to the EEE schedule (as provided by the host via host I/O interface 204). Encapsulator & scheduler 208 (i) encapsulates control data received from the host in Ethernet packets with suitable data timestamps, and (ii) schedules the packets for transmission to network 48 in accordance with the EEE schedule and the signaling from host I/O interface 204. The WAKE signal is used by host I/O interface 204 to schedule EEE active data mode on network 48, and the READY signal is used to inform the host of the EEE timing of network 48.
In some embodiments, encapsulator & scheduler 208 schedules transmission of Ethernet packets using multiple Quality-of-Service (QoS) levels. For example, the host may specify different QoS levels for different control data (e.g., to control data destined to different sensors or sensor types). Encapsulator & scheduler 208 may, for example, give higher scheduling priority to packets that carry control data having a higher QoS level, and vice versa.
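The QoS-based scheduling described above can be sketched, for illustration, as a priority queue in which packets carrying control data with a higher QoS level are dequeued for transmission first. The class name and the numeric QoS levels are hypothetical (here, a higher value means higher priority):

```python
import heapq

class QosTxQueue:
    """Illustrative sketch of QoS-aware transmit scheduling: higher-QoS
    packets are dequeued first; FIFO order is kept within a level."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # preserves FIFO order within one QoS level

    def enqueue(self, qos_level, packet):
        # Negate the level so the min-heap pops the highest QoS first.
        heapq.heappush(self._heap, (-qos_level, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = QosTxQueue()
q.enqueue(qos_level=1, packet="camera-config")   # lower priority
q.enqueue(qos_level=7, packet="brake-trigger")   # higher priority
q.enqueue(qos_level=1, packet="camera-gain")
assert q.dequeue() == "brake-trigger"            # highest QoS first
assert q.dequeue() == "camera-config"            # then FIFO within level
assert q.dequeue() == "camera-gain"
```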
Host I/O interface 204 is also connected to a decapsulator 216, which decapsulates Ethernet packets received from sensor bridge 36 before transferring the received sensor data. In addition, decapsulated data can be transferred to an output of host I/O interface 204, and input data can be encapsulated into Ethernet packets by packet encapsulator & scheduler 208. The host I/O interface thus provides pin-based means to control the wake-up times, activating the link to send and receive input and output data at well-controlled times.
In some embodiments, sensor interface 76 in sensor bridge 36 also supports a physical signaling interface for activation and deactivation. Sensor bridge 36 in this embodiment comprises a packet encapsulator & scheduler 220 and a decapsulator 224, which both operate in accordance with the EEE schedule and the signaling received via sensor interface 76 in a similar manner to the host interface. Input data to sensor interface 76 can be multiplexed and encapsulated with the sensor data Rx by packet encapsulator and scheduler 220, and output data can be extracted from the packets by decapsulator 224.
In an example embodiment, processor 56 receives from the host, via SERDES 228, packets that comprise the control data and wake-up/idle signaling. In this embodiment, processor 56 further comprises a packet decoder 232, which decodes the packets received from the host, forwards the control data to packet encapsulator & scheduler 208, and forwards the wake-up/idle signaling to idle/wake-up module 212.
The method begins with an encapsulation operation 240, in which processor 56 of controller 32 encapsulates control data in an Ethernet packet together with a suitable transport timestamp and a suitable trigger timestamp. At a buffering operation 244, processor 56 buffers the packet in packet buffer 64.
At a transport-time checking operation 248, processor 56 checks whether the current time (e.g., PTP time of system 20) matches (e.g., slightly precedes) the transport time specified in the transport timestamp of the buffered packet. If so, processor 56 retrieves the packet from buffer 64 and sends the packet to network 48 using Ethernet transceiver 52, at a transmission operation 252. Using this technique, the packet is sent to the network at the specified transport time.
The operations up to this point are carried out by controller 32. The operations from this point onwards are carried out by sensor bridge 36, in an embodiment.
At a reception and buffering stage 256, Ethernet transceiver 72 of sensor bridge 36 receives the packet from network 48, and processor 80 of sensor bridge 36 buffers the packet in packet buffer 84.
At a trigger-time checking operation 260, processor 80 checks whether the current time (e.g., PTP time of system 20) matches (e.g., slightly precedes) the trigger time specified in the trigger timestamp of the buffered packet. If so, processor 80 retrieves the packet from buffer 84 and sends the control data to sensor 28 via sensor interface 76, at a delivery operation 264. Using this technique, the control data is provided from the sensor bridge to the sensor at the specified trigger time.
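Operations 248 and 260 follow the same pattern: a packet is held in a buffer until the current (PTP) time reaches the timestamp, at which point it is released. The following sketch models that gating step; here "matches" is modeled as the timestamp being at or just before the current time, and the field layout is illustrative, not from the specification.

```python
def release_due_packets(buffer, current_time):
    """Release buffered packets whose timestamp has been reached.

    Models the transport-time check (operation 248) at the controller
    and the trigger-time check (operation 260) at the sensor bridge.
    buffer: mutable list of (timestamp, packet) pairs; released packets
    are removed in place, the rest remain buffered.
    """
    due = [pkt for ts, pkt in buffer if ts <= current_time]
    buffer[:] = [(ts, pkt) for ts, pkt in buffer if ts > current_time]
    return due
```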
In the example of
In an embodiment, each sensor bridge 36 comprises a sensor data receiver (RX) 270, a data encapsulator 274, a capture-time time-stamper 278, a presentation-time time-stamper 282, an Ethernet transmitter (TX) 286, a clock module 290 and a clock offset module 294. Encapsulator 274 and time-stampers 278 and 282 are typically embodied in processor 80 of sensor bridge 36.
In each sensor bridge 36, clock module 290 synchronizes sensor bridge 36 to the central clock of system 20, e.g., using PTP. Clock offset module 294 adjusts the time provided by clock module 290, by a time offset that may vary from one sensor bridge to another.
In each sensor bridge 36, sensor data RX 270 receives sensor data from the respective sensor. Encapsulator 274 generates Ethernet packets. Capture-time time-stamper 278 adds capture timestamps to the packets. Presentation-time time-stamper 282 adds presentation timestamps to the packets. The capture timestamp in a given packet is indicative of the time at which the sensor captured the sensor data carried by that packet. The presentation timestamp in a given packet is indicative of a future time ("presentation time") at which the sensor data of that packet is to be presented by controller 32 for subsequent processing. Ethernet TX 286 transmits the packets to network 48.
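The encapsulation step can be sketched as follows. The field names and the `presentation_delay` parameter are illustrative assumptions; the only requirement implied by the scheme is that the presentation delay exceed the worst-case network latency, so the presentation time is still in the future when the packet reaches the controller.

```python
def encapsulate(sensor_data, capture_time, now, presentation_delay):
    """Build a packet carrying sensor data with both timestamps.

    capture_time: when the sensor sampled the data (capture timestamp).
    now + presentation_delay: the future time at which the controller
    is to present the data (presentation timestamp). Field names are
    illustrative, not from the specification.
    """
    return {
        "payload": sensor_data,
        "capture_ts": capture_time,
        "presentation_ts": now + presentation_delay,
    }
```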
In the embodiment of
Ethernet RX 300 receives the packets sent from the two sensor bridges 36. Extractor 304 extracts the presentation timestamps from the received packets and buffers the packets in buffer 308. Comparator 320 compares the current time (as provided by clock module 316) to the presentation timestamps of the packets buffered in buffer 308. When the presentation timestamp of a certain buffered packet matches (i.e., slightly precedes) the current time, comparator 320 retrieves the packet from buffer 308 and provides the packet to decapsulator 312. Decapsulator 312 extracts the sensor data and the capture timestamp from the packet, and outputs (“presents”) the sensor data and the capture timestamp for subsequent processing.
In the embodiment of
The method begins with processor 80 of sensor bridge 36 encapsulating received sensor data in an Ethernet packet, at an encapsulation operation 330. After encapsulation, the Ethernet packet comprises (i) the sensor data, (ii) a capture timestamp, and (iii) a presentation timestamp. At a transmission operation 334, processor 80 sends the packet to network 48 using Ethernet transceiver 72. The packet is destined to controller 32.
At a reception and buffering operation 338, processor 56 of controller 32 receives the packet using Ethernet transceiver 52, and buffers the packet in packet buffer 64.
At a presentation-time checking operation 342, processor 56 checks whether the current time (e.g., PTP time of system 20) matches (e.g., slightly precedes) the presentation time specified in the presentation timestamp of the buffered packet. If so, processor 56 retrieves the packet from buffer 64 and outputs (“presents”) the sensor data and the capture timestamp for subsequent processing, at a presentation operation 346. Using this technique, the sensor data and the corresponding capture timestamp are presented at the specified presentation time. The capture timestamp enables the subsequent processing to synchronize sensor data that was captured by different sensors 28.
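Operations 342 and 346 can be sketched as a single release step at the controller. Because all sensor bridges stamp presentation times against the same PTP-synchronized clock, data captured simultaneously by different sensors is presented simultaneously regardless of per-path network latency. The packet layout below matches the illustrative encapsulation sketch and is an assumption, not from the specification.

```python
def present_due(buffered, current_time):
    """Present sensor data whose presentation time has arrived.

    buffered: mutable list of packet dicts with "payload", "capture_ts"
    and "presentation_ts" fields (illustrative names). Returns
    (payload, capture_ts) pairs so that subsequent processing can
    align data from different sensors by capture time; presented
    packets are removed from the buffer in place.
    """
    presented = [(p["payload"], p["capture_ts"])
                 for p in buffered if p["presentation_ts"] <= current_time]
    buffered[:] = [p for p in buffered if p["presentation_ts"] > current_time]
    return presented
```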
The configurations of the various systems, controllers and sensor bridges, shown in
In some embodiments, some functions of the disclosed systems, controllers and sensor bridges, e.g., functions of processor 56 and/or processor 80, may be implemented in one or more programmable processors, e.g., one or more Central Processing Units (CPUs), microcontrollers and/or Digital Signal Processors (DSPs), which are programmed in software to carry out the functions described herein. The software may be downloaded to any of the processors in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
Although the embodiments described herein mainly address an automotive Ethernet sensor link, the methods and systems described herein can also be used in other applications, such as in communications channels in data centers and industrial applications with highly asymmetrical data.
It is noted that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art. Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
This application is a continuation of U.S. patent application Ser. No. 18/150,213, filed Jan. 5, 2023, which claims the benefit of U.S. Provisional Patent Applications 63/297,625, 63/297,632, 63/297,640 and 63/297,643, all filed Jan. 7, 2022. The disclosures of these related applications are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
8644352 | Hutchison et al. | Feb 2014 | B1 |
9042411 | Hutchison et al. | May 2015 | B1 |
9356721 | Haulin | May 2016 | B2 |
9391728 | Haulin | Jul 2016 | B2 |
11165651 | Fang et al. | Nov 2021 | B2 |
11538287 | Fang et al. | Dec 2022 | B2 |
11637713 | Razavi Majomard | Apr 2023 | B1 |
11714478 | Sedarat | Aug 2023 | B1 |
11721137 | Fang et al. | Aug 2023 | B2 |
20100183034 | Kroepfl et al. | Jul 2010 | A1 |
20140043954 | Wang | Feb 2014 | A1 |
20140121898 | Diab | May 2014 | A1 |
20170339653 | Hui | Nov 2017 | A1 |
20180295535 | Kavars | Oct 2018 | A1 |
20200081516 | Zyskind | Mar 2020 | A1 |
20200083974 | Dalmia | Mar 2020 | A1 |
20200116502 | Xu et al. | Apr 2020 | A1 |
20200126416 | Montemurro et al. | Apr 2020 | A1 |
20200186414 | Das Sharma | Jun 2020 | A1 |
20200218979 | Kwon | Jul 2020 | A1 |
20200252320 | Zemach et al. | Aug 2020 | A1 |
20200293064 | Wu | Sep 2020 | A1 |
20200319324 | Au | Oct 2020 | A1 |
20210190923 | Golomedov et al. | Jun 2021 | A1 |
20210192867 | Fang et al. | Jun 2021 | A1 |
20210297230 | Dror et al. | Sep 2021 | A1 |
20220417083 | Das Sharma | Dec 2022 | A1 |
20230112004 | Hari et al. | Apr 2023 | A1 |
Number | Date | Country |
---|---|---|
3614176 | Feb 2020 | EP |
3651429 | May 2020 | EP |
Entry |
---|
Wikipedia, “FPD-Link,” pp. 1-4, last edited Nov. 21, 2022, as downloaded from https://web.archive.org/web/20221211221359/https://en.wikipedia.org/wiki/FPD-Link. |
Wikipedia, “Energy-Efficient Ethernet (EEE),” pp. 1-4, Jun. 12, 2021, as downloaded from https://web.archive.org/web/20210623011901/https://en.wikipedia.org/wiki/Energy-Efficient_Ethernet. |
Nordman et al., “Energy Efficiency and Regulation,” IEEE 802 Tutorial, pp. 1-58, Jul. 13, 2009. |
Hearst Autos Inc, “Adas: Everything You Need to Know”, pp. 1-6, Nov. 5, 2021, as downloaded from https://www.caranddriver.com/research/a31880412/adas/. |
IEEE Std 802.3ch-2020, “IEEE Standard for Ethernet—Amendment 8: Physical Layer Specifications and Management Parameters for 2.5 Gb/s, 5 Gb/s, and 10 Gb/s Automotive Electrical Ethernet,” IEEE Computer Society, pp. 1-207, year 2020. |
“Precision Time Protocol,” PTP Clock Types, Cisco, pp. 1-52, Jul. 30, 2020, as downloaded from https://www.cisco.com/c/en/us/td/docs/dcn/aci/apic/5x/system-management-configuration/cisco-apic-system-management-configuration-guide-52x/m-precision-time-protocol.pdf. |
Thekkeettil et al., U.S. Appl. No. 18/150,213, filed Jan. 5, 2023. |
Steinbaeck et al., A Hybrid Timestamping Approach for Multi-Sensor Perception Systems, 2020 23rd Euromicro Conference on Digital System Design (DSD), IEEE, pp. 447-454, Aug. 26, 2020. |
EP Application # 23150699.9 Search Report dated Jun. 6, 2023. |
Dror, U.S. Appl. No. 17/976,658, filed Oct. 28, 2022. |
Dror, U.S. Appl. No. 17/879,587, filed Aug. 2, 2022. |
U.S. Appl. No. 18/150,213 Office Action dated Mar. 14, 2024. |
Number | Date | Country | |
---|---|---|---|
20230224140 A1 | Jul 2023 | US |
Number | Date | Country | |
---|---|---|---|
63297640 | Jan 2022 | US | |
63297643 | Jan 2022 | US | |
63297625 | Jan 2022 | US | |
63297632 | Jan 2022 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 18150213 | Jan 2023 | US |
Child | 18161065 | US |