This disclosure relates to cooperative positioning.
There is a desire for cooperative positioning technology, such as when the positions of various distance sensors are unknown or when the distance sensors have limited sensing capabilities. However, such technology does not exist. Accordingly, this disclosure enables such technology.
In an embodiment, a method comprises: receiving, by a server, in real-time, a first set of data from a first mobile phone positioned within an enclosed area, wherein the first mobile phone includes a first inertial measurement unit (IMU) and a first distance sensor, wherein the first set of data includes a first real-time inertia reading from the first IMU and a first real-time distance reading from the first distance sensor off a first vertically extending surface stationarily positioned within the enclosed area; receiving, by the server, in real-time, a second set of data from a second mobile phone positioned within the enclosed area, wherein the second mobile phone includes a second inertial measurement unit (IMU) and a second distance sensor, wherein the second set of data includes a second real-time inertia reading from the second IMU and a second real-time distance reading from the second distance sensor off a second vertically extending surface stationarily positioned within the enclosed area, wherein the first vertically extending surface is spaced apart from the second vertically extending surface within the enclosed area; performing, by the server, in real-time, a data fusion of the first real-time distance reading, the first real-time inertia reading, the second real-time distance reading, and the second real-time inertia reading while the first mobile phone and the second mobile phone are positioned within the enclosed area, wherein the data fusion is based on a specific time or a specific time range at which the first real-time distance reading, the first real-time inertia reading, the second real-time distance reading, and the second real-time inertia reading were made; determining, by the server, in real-time, a first real-time position of the first mobile phone within the enclosed area based on the data fusion and a second real-time position of the second mobile phone within the enclosed area based on the data fusion; and taking, by the server, in real-time, an action based on the first real-time position and the second real-time position.
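For illustration only, the following Python sketch shows one way a server might group the two phones' readings by a shared time window before estimating positions. The class, field names, the 50 ms window, and the simple filtering are hypothetical assumptions, not the claimed method; an actual position solver (e.g., a filter fusing IMU and distance constraints) would consume the fused set.

```python
# Hypothetical sketch of the server-side fusion step: readings from two
# phones are grouped by a shared time window before positions are estimated.
from dataclasses import dataclass

@dataclass
class Reading:
    phone_id: str
    timestamp: float        # seconds; assumed synchronized to the server clock
    distance_m: float       # distance reading off a vertically extending surface
    accel: tuple            # IMU inertia reading (ax, ay, az)

def fuse(readings, t_center, window_s=0.05):
    """Collect readings from all phones made within the same time window."""
    return [r for r in readings if abs(r.timestamp - t_center) <= window_s]

readings = [
    Reading("phone1", 10.00, 3.2, (0.0, 0.1, 9.8)),
    Reading("phone2", 10.01, 5.7, (0.2, 0.0, 9.8)),
]
fused = fuse(readings, t_center=10.0)
# A position solver would consume `fused` to produce each phone's real-time
# position, after which the server takes an action based on those positions.
print(fused)
```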
In an embodiment, a method comprises: allowing, by a server, in real-time, a first mobile phone having a first distance sensor to transmit a distance sensing wireless signal such that the distance sensing wireless signal causes an echo, wherein the distance sensing wireless signal includes an identification code, wherein the echo includes the identification code; allowing, by the server, in real-time, a second mobile phone having a second distance sensor to directly receive the distance sensing wireless signal and the echo such that the second mobile phone generates, in real-time, a set of data based on the distance sensing wireless signal including the identification code and the echo including the identification code, wherein the second distance sensor has an operational wireless sensing distance, wherein the first mobile phone is positioned within the operational wireless sensing distance; receiving, by the server, in real-time, the set of data from the second mobile phone; determining, by the server, in real-time, a first real-time position of the first mobile phone based on the set of data and a second real-time position of the second mobile phone based on the set of data; and sending, by the server, in real-time, the first real-time position to the first mobile phone or the second mobile phone or the second real-time position to the first mobile phone or the second mobile phone.
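Again for illustration only, the sketch below shows how the arrival-time difference between the ID-matched direct signal and its echo, as recorded by the second mobile phone, bounds the extra path length traveled by the echo. The function names and timing values are hypothetical assumptions; the disclosure does not specify this particular computation.

```python
# Hypothetical sketch: phone 2 hears both the direct, ID-coded signal from
# phone 1 and its echo; the arrival-time difference gives the extra distance
# the echo traveled relative to the direct path.
SPEED_OF_LIGHT = 299_792_458.0  # m/s, for an RF distance sensing wireless signal

def extra_path_m(t_direct_s, t_echo_s):
    """Extra distance traveled by the echo relative to the direct signal."""
    return (t_echo_s - t_direct_s) * SPEED_OF_LIGHT

# Phone 2 matched the identification code in both arrivals (assumed times):
print(extra_path_m(t_direct_s=1.00e-6, t_echo_s=1.02e-6))  # ~6 m extra path
```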
Generally, this disclosure enables various technologies of cooperative positioning, such as when various positions of various distance sensors are unknown or when various distance sensors have limited sensing capabilities. This disclosure is now described more fully with reference to
Note that various terminology used herein can imply direct or indirect, full or partial, temporary or permanent, action or inaction. For example, when an element is referred to as being “on,” “connected” or “coupled” to another element, then the element can be directly on, connected or coupled to the other element or intervening elements can be present, including indirect or direct variants. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present.
Likewise, as used herein, a term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, features described with respect to certain embodiments may be combined in or with various other embodiments in any permutational or combinatory manner. Different aspects or elements of example embodiments, as disclosed herein, may be combined in a similar manner. The terms “combination,” “combinatory,” or “combinations thereof” as used herein refer to all permutations and combinations of listed items preceding those terms. For example, “A, B, C, or combinations thereof” is intended to include at least one of: A, B, C, AB, AC, BC, or ABC, and if order is important in a particular context, also BA, CA, CB, CBA, BCA, ACB, BAC, or CAB. Continuing with this example, expressly included are combinations that contain repeats of one or more items or terms, such as BB, AAA, AB, BBC, AAABCCCC, CBBAAA, CABABB, and so forth. A skilled artisan will understand that typically there is no limit on a number of items or terms in any combination, unless otherwise apparent from the context.
Similarly, as used herein, various singular forms “a,” “an” and “the” are intended to include various plural forms as well, unless context clearly indicates otherwise. For example, a term “a” or “an” shall mean “one or more,” even though a phrase “one or more” is also used herein.
Moreover, terms “comprises,” “includes” or “comprising,” “including” when used in this specification, specify a presence of stated features, integers, steps, operations, elements, or components, but do not preclude a presence and/or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof. Furthermore, when this disclosure states that something is “based on” something else, then such statement refers to a basis which may be based on one or more other things as well. In other words, unless expressly indicated otherwise, as used herein “based on” inclusively means “based at least in part on” or “based at least partially on.”
Additionally, although terms first, second, and others can be used herein to describe various elements, components, regions, layers, or sections, these elements, components, regions, layers, or sections should not necessarily be limited by such terms. Rather, these terms are used to distinguish one element, component, region, layer, or section from another element, component, region, layer, or section. As such, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from this disclosure.
Also, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in an art to which this disclosure belongs. As such, terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in a context of a relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
This disclosure describes various technologies for determining distances between a sensing apparatus and a target. The distances may be determined by measuring times of flight of transmitted signals (e.g., radar, light, or other signals) that reflect off the target. As one example, a signal that includes a known or designated transmit pattern (such as waveforms that represent a sequence of bits) is transmitted and echoes of this signal are received. This transmit pattern can be referred to as a coarse stage transmit pattern. The echoes may include information representative of the pattern in the transmitted signal. For example, the echoes may be received and digitized to identify a sequence or stream of data that is representative of noise, partial reflections of the transmitted signal off one or more objects other than the target, and reflections off the target.
A coarse stage receive pattern can be compared to the digitized data stream that is based on the received echoes to determine a time of flight of the transmitted signal. The coarse stage receive pattern can be the same as the transmit pattern or differ from the transmit pattern by having a different length and/or sequence of bits (e.g., “0” and “1”). The coarse stage receive pattern is compared to different portions of the digitized data stream to determine which portion of the data stream more closely matches the receive pattern than one or more other portions. For example, the coarse stage receive pattern may be shifted (e.g., with respect to time) along the data stream to identify a portion of the data stream that matches the coarse stage receive pattern. A time delay between the start of the data stream and the matching portion of the coarse stage receive pattern may represent the time of flight of the transmitted signal. This measurement of the time of flight may be used to calculate a separation distance to the target. As described below, this process for measuring the time of flight may be referred to as coarse stage determination of the time of flight. The coarse stage determination may be performed once or several times in order to measure the time of flight. For example, a single “burst” of a transmitted signal may be used to measure the time of flight, or several “bursts” of transmitted signals (having the same or different transmit patterns) may be used.
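As a minimal sketch of the coarse stage determination just described, the Python below shifts a receive pattern along a digitized data stream, scores each shift, and converts the best-scoring shift into a time delay. The bit values, the +1/-1 scoring (one option elaborated later in this disclosure), and the 1 ns sample period are illustrative assumptions.

```python
# Slide the receive pattern along the data stream; the best-matching shift,
# times the sample period, is the time delay of interest (approximate ToF).
def coarse_time_delay(data_stream, receive_pattern, sample_period_s):
    best_shift, best_score = 0, float("-inf")
    for shift in range(len(data_stream) - len(receive_pattern) + 1):
        subset = data_stream[shift:shift + len(receive_pattern)]
        # +1 per matching bit, -1 per mismatching bit (one possible scoring)
        score = sum(+1 if a == b else -1 for a, b in zip(subset, receive_pattern))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift * sample_period_s

stream = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # illustrative digitized echoes
pattern = [1, 1, 0, 1, 0, 0]                     # illustrative receive pattern
print(coarse_time_delay(stream, pattern, sample_period_s=1e-9))  # 4e-09 s
```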
A fine stage determination may be performed in addition to or in place of the coarse stage determination. The fine stage determination can include transmitting one or more additional signals (e.g., “bursts”) toward the target and generating one or more baseband echo signals based on the received echoes of the signals. The additional signals may include a fine stage transmit pattern that is the same or different pattern as the coarse stage transmit pattern. The fine stage determination can use the time of flight measured by the coarse stage determination (or as input by an operator) and compare a fine stage receive pattern that is delayed by the measured time of flight to a corresponding portion of the data stream. For example, instead of shifting the fine stage receive pattern along all or a substantial portion of the baseband echo signal, the fine stage receive pattern (or a portion thereof) can be time shifted by an amount that is equal to or based on the time delay measured by the coarse stage determination. Alternatively, the fine stage receive pattern may be shifted along all or a substantial portion of the baseband echo signal. The time shifted fine stage receive pattern can be compared to the baseband echo signal to determine an amount of overlap or, alternatively, an amount of mismatch between the waveforms of the time-shifted fine stage receive pattern and the baseband echo signal. This amount of overlap or mismatch may be translated to an additional time delay. The additional time delay can be added with the time delay measured by the coarse stage determination to calculate a fine stage time delay. The fine stage time delay can then be used to calculate a time of flight and separation distance to the target.
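The following is a minimal sketch of that fine stage refinement: the receive pattern, already delayed by the coarse-stage time delay, is compared against the baseband echo signal at small sub-sample offsets, and the offset with the least mismatch energy is taken as the additional time delay. The synthetic waveforms, the interpolation, and the offset grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
pattern = np.repeat(rng.integers(0, 2, 13), 10).astype(float)  # 13 bits, 10 samples/bit
true_delay = 42.3                                # samples; not a whole number
t = np.arange(pattern.size + 100, dtype=float)
baseband = np.interp(t - true_delay, np.arange(pattern.size), pattern,
                     left=0.0, right=0.0)        # simulated baseband echo signal

coarse = 42.0                                    # whole-sample delay from coarse stage
candidates = np.linspace(-0.5, 0.5, 21)          # sub-sample offsets to test
mismatch = [float(np.sum((baseband - np.interp(
    t - coarse - d, np.arange(pattern.size), pattern, left=0.0, right=0.0)) ** 2))
    for d in candidates]
extra = candidates[int(np.argmin(mismatch))]     # additional time delay
print(coarse + extra)                            # refined delay in samples (~42.3)
```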
An ultrafine stage determination may be performed in addition to or in place of the coarse stage determination and/or the fine stage determination. The ultrafine stage determination can involve a similar process as the fine stage determination, but using a different component of the receive pattern and/or the data stream. For example, the fine stage determination may examine the in-phase (I) component or channel of the receive pattern and the data stream to measure the overlap or mismatch between the receive pattern and the data stream. The ultrafine stage determination can use the quadrature (Q) component or channel of the receive pattern and the data stream to measure an additional amount of overlap or mismatch between the waveforms of the receive pattern and the data stream. Alternatively, the ultrafine stage determination may separately examine the I channel and Q channel of the receive pattern and the data stream. The use of I and Q channels or components is provided as one example embodiment. Alternatively, one or more other channels or components may be used. For example, a first component or channel and a second component or channel may be used, where the first and second components or channels are phase shifted relative to each other by an amount other than ninety degrees.
The amounts of overlap or mismatch calculated by the ultrafine stage determination can be used to calculate an additional time delay that can be added to the time delays from the coarse stage and/or the fine stage to determine a time of flight and/or separation distance to the target. Alternatively or additionally, the amount of overlap or mismatch between the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to detect motion of the target.
Alternatively or additionally, the ultrafine stage determination may involve a similar process as the coarse stage determination. For example, the coarse stage determination may examine the I channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a corresponding time-of-flight, as described herein. The ultrafine stage determination can use the Q channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a time-of-flight. The times-of-flight from the I channel and Q channel can be combined (e.g., averaged) to calculate a time of flight and/or separation distance to the target. The correlation values calculated by the ultrafine stage determination can be used to calculate an additional time delay that can be added to the time delays from the coarse stage and/or the fine stage to determine a time of flight and/or separation distance to the target. Alternatively or additionally, the correlation values of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target.
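As a minimal illustration of the combining step named above (times-of-flight from the I channel and Q channel combined, e.g., averaged), the numeric values below are assumptions:

```python
# Combine per-channel times of flight into a single ultrafine estimate.
def combined_time_of_flight(tof_i_s, tof_q_s):
    return (tof_i_s + tof_q_s) / 2.0  # simple average, as one option

print(combined_time_of_flight(20.1e-9, 19.9e-9))  # 2e-08 s
```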
The coarse, fine, and ultrafine stage determinations can be performed independently (e.g., without performing one or more of the other stages) and/or together. The fine and ultrafine stage determinations can be performed in parallel (e.g., with the fine stage determination examining the I channel and the ultrafine stage determination examining the Q channel) or sequentially (e.g., with the ultrafine stage determination examining both the I and Q channels). The coarse and ultrafine stage determinations can be performed in parallel (e.g., with the coarse stage determination examining the I channel and the ultrafine stage determination examining the Q channel) or sequentially (e.g., with the ultrafine stage determination examining both the I and Q channels).
A receive pattern mask may be applied to the digitized data stream to remove (e.g., mask off) or otherwise change one or more portions or segments of the data stream. The masked data stream can then be compared to the receive pattern of the corresponding stage determination (e.g., coarse stage, fine stage, or ultrafine stage) to measure the time of flight, as described herein.
The various patterns (e.g., the coarse stage transmit pattern, the fine stage transmit pattern, the coarse stage receive pattern, the fine stage receive pattern, and/or the receive pattern mask) may be the same. Alternatively, one or more (or all) of these patterns may differ from each other. For example, different ones of the patterns may include different sequences of bits and/or lengths of the sequences. The various patterns (e.g., the coarse stage transmit pattern, the fine stage transmit pattern, the coarse stage receive pattern, the fine stage receive pattern, and/or the receive pattern mask) that are used in the ultrafine stage may also differ from those used in the coarse or fine stages alone, and from each other.
A time of flight of the transmitted signals 106 and echoes 108 represents the time delay between transmission of the transmitted signals 106 and receipt of the echoes 108 off the target object 104. The time of flight can be proportional to a distance between the sensing apparatus 102 and the target object 104. The sensing apparatus 102 can measure the time of flight of the transmitted signals 106 and echoes 108 and calculate a separation distance 110 between the sensing apparatus 102 and the target object 104 based on the time of flight.
The sensing system 100 may include a control unit 112 (“External Control Unit” in
In one embodiment, the control unit 112 can be communicatively coupled with several sensing assemblies 102 located in the same or different places. For example, several sensing assemblies 102 that are remotely located from each other may be communicatively coupled with a common control unit 112. The control unit 112 can separately send control messages to each of the sensing assemblies 102 to individually activate (e.g., turn ON) or deactivate (e.g., turn OFF) the sensing assemblies 102. In one embodiment, the control unit 112 may direct the sensing assembly 102 to take periodic measurements of the separation distance 110 and then deactivate for an idle time to conserve power.
In one embodiment, the control unit 112 can direct the sensing apparatus 102 to activate (e.g., turn ON) and/or deactivate (e.g., turn OFF) to transmit transmitted signals 106 and receive echoes 108 and/or to measure the separation distances 110. Alternatively, the control unit 112 may calculate the separation distance 110 based on the times of flight of the transmitted signals 106 and echoes 108 as measured by the sensing apparatus 102 and communicated to the control unit 112. The control unit 112 can be communicatively coupled with an input device 114, such as a keyboard, electronic mouse, touchscreen, microphone, stylus, and the like, and/or an output device 116, such as a computer monitor, touchscreen (e.g., the same touchscreen as the input device 114), speaker, light, and the like. The input device 114 may receive input data from an operator, such as commands to activate or deactivate the sensing apparatus 102. The output device 116 may present information to the operator, such as the separation distances 110 and/or times of flight of the transmitted signals 106 and echoes 108. The output device 116 may also connect to a communications network, such as the internet.
The form factor of the sensing assembly 102 may have a wide variety of different shapes, depending on the application or use of the system 100. The sensing assembly 102 may be enclosed in a single enclosure 1602, such as an outer housing. The shape of the enclosure 1602 may depend on factors including, but not limited to, needs for power supply (e.g., batteries and/or other power connections), environmental protection, and/or other communications devices (e.g., network devices to transmit measurements or transmit/receive other communications). In the illustrated embodiment, the basic shape of the sensing assembly 102 is a rectangular box. The size of the sensing assembly 102 can be relatively small, such as three inches by six inches by two inches (7.6 centimeters by 15.2 centimeters by 5.1 centimeters), 70 mm by 140 mm by 10 mm, or another size. Alternatively, the sensing assembly 102 may have one or more other dimensions.
The sensing apparatus 102 includes a front end 200 and a back end 202. The front end 200 may include the circuitry and/or other hardware that transmits the transmitted signals 106 and receives the reflected echoes 108. The back end 202 may include the circuitry and/or other hardware that forms the pulse sequences for the transmitted signals 106 or generates control signals that direct the front end 200 to form the pulse sequences for inclusion in the transmitted signals 106, and/or that processes (e.g., analyzes) the echoes 108 received by the front end 200. Both the front end 200 and the back end 202 may be included in a common housing. For example (and as described below), the front end 200 and the back end 202 may be relatively close to each other (e.g., within a few centimeters or meters) and/or contained in the same housing. Alternatively, the front end 200 may be remotely located from the back end 202. The components of the front end 200 and/or back end 202 are schematically shown as being connected by lines and/or arrows in
The front end 200 includes a transmitting antenna 204 and a receiving antenna 206. The transmitting antenna 204 transmits the transmitted signals 106 toward the target object 104 and the receiving antenna 206 receives the echoes 108 that are at least partially reflected by the target object 104. As one example, the transmitting antenna 204 may transmit radio frequency (RF) electromagnetic signals as the transmitted signals 106, such as RF signals having a frequency of 24 gigahertz (“GHz”) ±1.5 GHz. Alternatively, the transmitting antenna 204 may transmit other types of signals, such as light, and/or at another frequency. In the case of light transmission, the transmitting antenna 204 may be replaced by a laser, an LED, or another light-emitting device, and the receiving antenna 206 may be replaced by a photodetector or photodiode.
A front end transmitter 208 (“RF Front-End,” “Transmitter,” and/or “TX” in
An oscillating device 214 (“Oscillator” in
In the illustrated embodiment, the mixer 210A receives an in-phase (I) component or channel of a pattern signal 230A and mixes the I component or channel of the pattern signal 230A with the oscillating signal 216 to form an I component or channel of the transmitted signal 106. The mixer 210B receives a quadrature (Q) component or channel of a pattern signal 230B and mixes the Q component or channel of the pattern signal 230B with the oscillating signal 216 to form a Q component or channel of the transmitted signal 106.
The transmitted signal 106 (e.g., one or both of the I and Q channels) is generated when the TX baseband signal 230 flows to the mixers 210. The digital output gate 250 may be disposed between the TX pattern generator 228 and the mixers 210 for added control of the TX baseband signal 230. After a burst of one or more transmitted signals 106 is transmitted by the transmitting antenna 204, the sensing assembly 102 may switch from a transmit mode (e.g., that involves transmission of the transmitted signals 106) to a receive mode to receive the echoes 108 off the target object 104. In one embodiment, the sensing assembly 102 may not receive or sense the echoes 108 when in the transmit mode and/or may not transmit the transmitted signals 106 when in the receive mode. When the sensing assembly 102 switches from the transmit mode to the receive mode, the digital output gate 250 can reduce the strength of the transmit signal 106 generated by the transmitter 208 to the point that it is eliminated (e.g., reduced to zero strength). For example, the gate 250 can include tri-state functionality and a differential high-pass filter (which is represented by the gate 250). The baseband signal 230 passes through the filter before the baseband signal 230 reaches the upconversion mixer 210. The gate 250 can be communicatively coupled with, and controlled by, the control unit 112 (shown in
A front end receiver 218 (“RF Front-End,” “Receiver,” and/or “RX”) of the front end 200 is communicatively coupled with the receiving antenna 206. The front end receiver 218 receives an echo signal 224 representative of the echoes 108 (or data representative of the echoes 108) from the receiving antenna 206. The echo signal 224 may be an analog signal in one embodiment. The receiving antenna 206 may generate the echo signal 224 based on the received echoes 108. In the illustrated embodiment, an amplifier 238 may be disposed between the receive antenna 206 and the front end receiver 218. The front end receiver 218 can include an amplifier 220 and mixers 222A, 222B. Alternatively, one or more of the amplifiers 220, 238 may not be provided. The amplifiers 220, 238 can increase the strength (e.g., gain) of the echo signal 224. The mixers 222A, 222B may include or represent one or more mixing devices that receive different components or channels of the echo signal 224 to mix with the oscillating signal 216 (or a copy of the oscillating signal 216) from the oscillating device 214. For example, the mixer 222A can combine the analog echo signal 224 and the I component of the oscillating signal 216 to extract the I component of the echo signal 224 into a first baseband echo signal 226A that is communicated to the back end 202 of the sensing apparatus 102. The first baseband echo signal 226A may include the I component or channel of the baseband echo signal. The mixer 222B can combine the analog echo signal 224 and the Q component of the oscillating signal 216 to extract the Q component of the analog echo signal 224 into a second baseband echo signal 226B that is communicated to the back end 202 of the sensing apparatus 102. The second baseband echo signal 226B can include the Q component or channel of the baseband echo signal. In one embodiment, the echo signals 226A, 226B can be collectively referred to as a baseband echo signal 226. In one embodiment, the mixers 222A, 222B can multiply the echo signal 224 by the I and Q components of the oscillating signal 216 to form the baseband echo signals 226A, 226B.
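As a minimal illustration of this quadrature downconversion, the sketch below multiplies a simulated echo by cosine (I) and negated sine (Q) copies of a carrier, which is what the mixers 222A, 222B do with the oscillating signal 216. The carrier frequency, sample rate, test phase, and the use of a simple mean in place of low-pass filtering are illustrative assumptions.

```python
import numpy as np

fc = 24.0e9                      # carrier frequency, Hz (illustrative)
fs = 8 * fc                      # sample rate, Hz
t = np.arange(0, 2e-9, 1 / fs)   # 2 ns of samples
phase = 0.3                      # phase shift carried by the echo
echo = np.cos(2 * np.pi * fc * t + phase)        # analog echo signal (224)
i_baseband = echo * np.cos(2 * np.pi * fc * t)   # mix with I of oscillator
q_baseband = echo * -np.sin(2 * np.pi * fc * t)  # mix with Q of oscillator
# After low-pass filtering (approximated here by the mean over full cycles),
# the I and Q baseband values approximate cos(phase)/2 and sin(phase)/2:
print(i_baseband.mean(), q_baseband.mean())
```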
The back end 202 of the sensing apparatus 102 includes a transmit (TX) pattern code generator 228 that generates the pattern signal 230 for inclusion in the transmitted signal 106. The transmit pattern code generator 228 includes the transmit code generators 228A, 228B. In the illustrated embodiment, the transmit code generator 228A generates the I component or channel pattern signal 230A (“I TX Pattern” in
The transmit pattern code generator 228 creates the pattern of bits and communicates the pattern in the pattern signals 230A, 230B to the front end transmitter 208. The pattern signals 230A, 230B may be individually or collectively referred to as a pattern signal 230. In one embodiment, the pattern signal 230 may be communicated to the front end transmitter 208 at a frequency that is no greater than 3 GHz. Alternatively, the pattern signal 230 may be communicated to the front end transmitter 208 at a greater frequency. The transmit pattern code generator 228 also communicates the pattern signal 230 to a correlator device 232 (“Correlator” in
The backend section 202 includes or represents hardware (e.g., one or more processors, controllers, and the like) and/or logic of the hardware (e.g., one or more sets of instructions for directing operations of the hardware that is stored on a tangible and non-transitory computer readable storage medium, such as computer software stored on a computer memory). The RX backend section 202B receives the pattern signal 230 from the pattern code generator 228 and the baseband echo signal 226 (e.g., one or more of the signals 226A, 226B) from the front end receiver 218. The RX backend section 202B may perform one or more stages of analysis of the baseband echo signal 226 in order to determine the separation distance 110 and/or to track and/or detect movement of the target object 104.
The stages of analysis can include a coarse stage, a fine stage, and/or an ultrafine stage, as described above. In the coarse stage, the baseband processor 232 compares the pattern signal 230 with the baseband echo signal 226 to determine a coarse or estimated time of flight of the transmitted signals 106 and the echoes 108. For example, the baseband processor 232 can measure a time delay of interest between the time when a transmitted signal 106 is transmitted and a subsequent time when the pattern in the pattern signal 230 (or a portion thereof) and the baseband echo signal 226 match or substantially match each other, as described below. The time delay of interest may be used as an estimate of the time of flight of the transmitted signal 106 and corresponding echo 108.
In the fine stage, the sensing assembly 102 can compare a replicated copy of the pattern signal 230 with the baseband echo signal 226. The replicated copy of the pattern signal 230 may be a signal that includes the pattern signal 230 delayed by the time delay of interest measured during the coarse stage. The sensing assembly 102 compares the replicated copy of the pattern signal 230 with the baseband echo signal 226 to determine a temporal amount or degree of overlap or mismatch between the replicated pattern signal and the baseband echo signal 226. This temporal overlap or mismatch can represent an additional portion of the time of flight that can be added to the time of flight calculated from the coarse stage. In one embodiment, the fine stage examines I and/or Q components of the baseband echo signal 226 and the replicated pattern signal.
In the ultrafine stage, the sensing assembly 102 also can examine the I and/or Q component of the baseband echo signal 226 and the replicated pattern signal to determine a temporal overlap or mismatch between the I and/or Q components of the baseband echo signal 226 and the replicated pattern signal. The temporal overlap or mismatch of the Q components of the baseband echo signal 226 and the replicated pattern signal may represent an additional time delay that can be added to the time of flight calculated from the coarse stage and the fine stage (e.g., by examining the I and/or Q components) to determine a relatively accurate estimation of the time of flight. Alternatively or additionally, the ultrafine stage may be used to precisely track and/or detect movement of the target object 104 within the bit of interest. The terms “fine” and “ultrafine” are used to mean that the fine stage may provide a more accurate and/or precise (e.g., greater resolution) calculation of the time of flight (tF) and/or the separation distance 110 relative to the coarse stage and that the ultrafine stage may provide a more accurate and/or precise (e.g., greater resolution) calculation of the time of flight (tF) and/or the separation distance 110 relative to the fine stage and the coarse stage. Alternatively or additionally, the time lag of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target.
As described above, the ultrafine stage determination may involve a similar process as the coarse stage determination. For example, the coarse stage determination may examine the I channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a corresponding time-of-flight, as described herein. The ultrafine stage determination can use the I and/or Q channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a time-of-flight. The times-of-flight from the I channel and Q channel can be combined (e.g., averaged) to calculate a time of flight and/or separation distance to the target. The correlation values calculated by the ultrafine stage determination can be used to calculate an additional time delay that can be added to the time delays from the coarse stage and/or the fine stage to determine a time of flight and/or separation distance to the target. Alternatively or additionally, the correlation values of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target.
The backend 202 can include a first baseband processor 232A (“I Baseband Processor” in
As described below, a correlation window that also includes the pattern (e.g., the pulse sequence of bits) or a portion thereof that was transmitted in the transmitted signal 106 may be compared to the baseband echo signal 226. The correlation window may be progressively shifted or delayed from a location in the baseband echo signal 226 representative of a start of the echo signal 226 (e.g., a time that corresponds to the time at which the transmitted signal 106 is transmitted, but which may or may not be the exact beginning of the baseband echo signal) and successively, or in any other order, compared to different subsets or portions of the baseband echo signal 226. Correlation values representative of degrees of match between the pulse sequence in the correlation window and the subsets or portions of the baseband echo signal 226 can be calculated and a time delay of interest (e.g., approximately the time of flight) can be determined based on the time difference between the start of the baseband echo signal 226 and one or more maximum or relatively large correlation values. The maximum or relatively large correlation value may represent at least partial reflection of the transmitted signals 106 off the target object 104, and may be referred to as a correlation value of interest.
As used herein, the terms “maximum,” “minimum,” and forms thereof, are not limited to absolute largest and smallest values, respectively. For example, while a “maximum” correlation value can include the largest possible correlation value, the “maximum” correlation value also can include a correlation value that is larger than one or more other correlation values, but is not necessarily the largest possible correlation value that can be obtained. Similarly, while a “minimum” correlation value can include the smallest possible correlation value, the “minimum” correlation value also can include a correlation value that is smaller than one or more other correlation values, but is not necessarily the smallest possible correlation value that can be obtained.
The time delay of interest can then be used to calculate the separation distance 110 from the coarse stage. For example, in one embodiment, the separation distance 110 may be estimated or calculated as:

d = c × tF / 2 (Equation #1)
where d represents the separation distance 110, tF represents the time delay of interest (calculated from the start of the baseband echo signal 226 to the identification of the correlation value of interest), and c represents the speed of light. Alternatively, c may represent the speed at which the transmitted signals 106 and/or echoes 108 move through the medium or media between the sensing apparatus 102 and the target object 104. In another embodiment, the value of tF and/or c may be modified by a calibration factor or other factor in order to account for portions of the delay between transmission of the transmitted signals 106 and receipt of the echoes 108 that are not due to the time of flight of the transmitted signals 106 and/or echoes 108.
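For a concrete instance of Equation #1, the following sketch computes the separation distance from an assumed time delay of interest of 20 nanoseconds (an illustrative value only):

```python
c = 299_792_458.0   # speed of light, m/s
t_f = 20e-9         # time delay of interest, s (assumed value)
d = c * t_f / 2     # round trip halved gives the one-way separation distance
print(d)            # ~3.0 m
```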
With continued reference to the sensing assembly 102 shown in
The baseband echo signal 226 includes in one embodiment a sequence of square waves (e.g., low and high values 328, 330), but the waves may have other shapes. The echo signal 226 may be represented as a digital echo signal 740 (shown and described below in connection with
The baseband echo signal 226 begins at a transmission time (t0) of the axis 304. The transmission time (t0) may correspond to the time at which the transmitted signal 106 is transmitted by the sensing assembly 102. Alternatively, the transmission time (t0) may be another time that occurs prior to or after the time at which the transmitted signal 106 is transmitted.
The baseband processor 232 obtains a receive pattern signal 240 from the pattern generator 228. Similar to the transmit pattern (e.g., in the signal 230) that is included in the transmitted signal 106, the receive pattern signal 240 may include a waveform signal representing a sequence of bits, such as a digital pulse sequence receive pattern 306 shown in
The baseband processor 232 compares the receive pattern 306 to the echo signal 226. In one embodiment, the receive pattern 306 is a copy of the transmit pattern of bits that is included in the transmitted signal 106 from the pattern code generator 228, as described above. Alternatively, the receive pattern 306 may be different from the transmit pattern that is included in the transmitted signal 106. For example, the receive pattern 306 may have a different sequence of bits (e.g., have one or more different waveforms that represent a different sequence of bits) and/or have a longer or shorter sequence of bits than the transmit pattern. The receive pattern 306 may be represented by one or more of the pattern waveform segments 326, or a portion thereof, shown in
The baseband processor 232 uses all or a portion of the receive pattern 306 as a correlation window 320 that is compared to different portions of the digitized echo signal 740 in order to calculate correlation values (“CV”) at the different positions. The correlation values represent different degrees of match between the receive pattern 306 and the digitized echo signal 740 across different subsets of the bits in the digitized echo signal 740. In the example illustrated in
For example, the correlator device 731 may compare the bits in the correlation window 320 to a first subset 308 of the bits 300, 302 in the digitized echo signal 740, such as by comparing the receive pattern 306 with the first six bits 300, 302 of the digitized echo signal 740. Alternatively, the correlator device 731 can begin by comparing the receive pattern 306 with a different subset of the digitized echo signal 740. The correlator device 731 calculates a first correlation value for the first subset 308 of bits in the digitized echo signal 740 by determining how closely the sequence of bits 300, 302 in the first subset 308 matches the sequence of bits 300, 302 in the receive pattern 306.
In one embodiment, the correlator device 731 assigns a first value (e.g., +1) to those bits 300, 302 in the subset of the digitized echo signal 740 being compared to the correlation window 320 that match the sequence of bits 300, 302 in the correlation window 320 and a different, second value (e.g., -1) to those bits 300, 302 in the subset of the digitized echo signal 740 being examined that do not match the sequence of bits 300, 302 in the correlation window 320. Alternatively, other values may be used. The correlator device 731 may then sum these assigned values for the subset of the digitized echo signal 740 to derive a correlation value for the subset.
With respect to the first subset 308 of bits in the digitized echo signal, only the fourth bit (e.g., zero) and the fifth bit (e.g., one) match the fourth bit and the fifth bit in the correlation window 320. The remaining four bits in the first subset 308 do not match the corresponding bits in the correlation window 320. As a result, if +1 is assigned to the matching bits and -1 is assigned to the mismatching bits, then the correlation value for the first subset 308 of the digitized echo signal 740 is calculated to be -2. On the other hand, if +1 is assigned to the matching bits and 0 is assigned to the mismatching bits, then the correlation value for the first subset 308 of the digitized echo signal 740 is calculated to be +2. As described above, other values may be used instead of +1 and/or -1.
The correlator device 731 then shifts the correlation window 320 by comparing the sequence of bits 300, 302 in the correlation window 320 to another (e.g., later or subsequent) subset of the digitized echo signal 740. In the illustrated embodiment, the correlator device 731 compares the correlation window 320 to the second through seventh bits 300, 302 in the digitized echo signal 740 to calculate another correlation value. As shown in
The correlator device 731 may continue to compare the correlation window 320 to different subsets of the digitized echo signal 740 to calculate correlation values for the subsets. Continuing with the above example, the correlator device 731 calculates the correlation values shown in
In another embodiment, the receive pattern 306 that is included in the correlation window 320 and that is compared to the subsets of the digitized echo signal 740 may include a portion, and less than the entirety, of the transmit pattern that is included in the transmitted signal 106 (shown in
In one embodiment, the correlator device 731 can compare less than the entire receive pattern 306 to the subsets by applying a mask to the receive pattern 306 to form the correlation window 320 (also referred to as a masked receive pattern). With respect to the receive pattern 306 shown in
The correlator 731 may identify a correlation value that is largest, that is larger than one or more correlation values, and/or that is larger than a designated threshold as a correlation value of interest 312. In the illustrated example, the fifth correlation value (e.g., +6) may be the correlation value of interest 312. The subset or subsets of bits in the digitized echo signal 740 that correspond to the correlation value of interest 312 may be identified as the subset or subsets of interest 314. In the illustrated example, the subset of interest 314 includes the fifth through tenth bits 300, 302 in the digitized echo signal 740. In this example, if the start of the subset of interest 314 is used to identify the time delay, then the delay of interest would be five. Multiple subsets of interest may be identified where the transmitted signals 106 (shown in
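A minimal sketch of the scoring just walked through follows: each subset of the digitized echo signal is scored against the correlation window with +1 per matching bit and -1 (or, alternatively, 0) per mismatching bit, and the largest score marks the subset of interest. The bit sequences below are illustrative assumptions, not the sequences in the figures, though they likewise yield -2 for the first subset and a correlation value of interest of +6.

```python
# Score every subset of the data stream against the correlation window.
def correlation_values(stream, window, mismatch_value=-1):
    scores = []
    for start in range(len(stream) - len(window) + 1):
        subset = stream[start:start + len(window)]
        scores.append(sum(+1 if a == b else mismatch_value
                          for a, b in zip(subset, window)))
    return scores

stream = [0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # illustrative digitized echo bits
window = [1, 0, 1, 0, 0, 1]                      # illustrative correlation window
cv = correlation_values(stream, window)          # first value -2, maximum +6
delay_bits = cv.index(max(cv))                   # subset of interest starts at 4
print(cv, delay_bits)
print(correlation_values(stream, window, mismatch_value=0))  # +1/0 variant
```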
Each of the subsets of the digitized echo signal 740 may be associated with a time delay (td) between the start of the digitized echo signal 740 (e.g., t0) and the beginning of the first bit in each subset of the digitized echo signal 740. Alternatively, the beginning of the time delay (td) for the subset can be measured from another starting time (e.g., a time before or after the start of the digitized echo signal 740 (t0)) and/or the end of the time delay (td) may be at another location in the subset, such as the middle or at another bit of the subset.
The time delay (td) associated with the subset of interest may represent the time of flight (tF) of the transmitted signal 106 that is reflected off a target object 104. Using Equation #1 above, the time of flight can be used to calculate the separation distance 110 between the sensing assembly 102 and the target object 104. In one embodiment, the time of flight (tF) may be based on a modified time delay (td), such as a time delay that is modified by a calibration factor to obtain the time of flight (tF). As one example, the time of flight (tF) can be corrected to account for propagation of signals and/or other processing or analysis. Propagation of the echo signal 224, formation of the baseband echo signal 226, propagation of the baseband echo signal 226, and the like, through the components of the sensing assembly 102 can impact the calculation of the time of flight (tF). The time delay associated with a subset of interest in the baseband echo signal 226 may include the time of flight of the transmitted signals 106 and echoes 108, and also may include the time of propagation of various signals in the analog and digital blocks (e.g., the correlator device 731 and/or the pattern code generator 228 and/or the mixers 210 and/or the amplifier 238) of the system 100.
In order to determine the propagation time of data and signals through these components, a calibration routine can be employed. A measurement can be made to a target of known distance. For example, one or more transmitted signals 106 can be sent to the target object 104 that is at a known separation distance 110 from the transmit and/or receiving antennas 204, 206. The calculation of the time of flight for the transmitted signals 106 can be made as described above, and the time of flight can be used to determine a calculated separation distance 110. Based on the difference between the actual, known separation distance 110 and the calculated separation distance 110, a measurement error that is based on the propagation time through the components of the sensing assembly 102 may be calculated. This propagation time may then be used to correct (e.g., shorten) further times of flight that are calculated using the sensing assembly 102.
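A minimal sketch of that calibration routine follows: a measurement to a target at a known separation distance yields the internal propagation time, which is then subtracted from later time-of-flight measurements. The numeric values are illustrative assumptions.

```python
C = 299_792_458.0  # speed of light, m/s

def propagation_offset_s(known_distance_m, measured_tof_s):
    expected_tof_s = 2.0 * known_distance_m / C   # true round-trip time
    return measured_tof_s - expected_tof_s        # delay added by the hardware

def corrected_distance_m(measured_tof_s, offset_s):
    return C * (measured_tof_s - offset_s) / 2.0  # Equation #1 on corrected delay

# Calibrate against a target at a known 5 m separation distance:
offset = propagation_offset_s(known_distance_m=5.0, measured_tof_s=40e-9)
# Correct a later measurement with the calibrated offset:
print(corrected_distance_m(measured_tof_s=60e-9, offset_s=offset))  # ~8.0 m
```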
In one embodiment, the sensing assembly 102 may transmit several bursts of the transmitted signal 106 and the correlator device 731 may calculate several correlation values for the digitized echo signals 740 that are based on the reflected echoes 108 of the transmitted signals 106. The correlation values for the several transmitted signals 106 may be grouped by common time delays (td), such as by calculating the average, median, or other statistical measure of the correlation values calculated for the same or approximately the same time delays (td). The grouped correlation values that are larger than other correlation values or that are the largest may be used to more accurately calculate the time of flight (tF) and separation distance 110 relative to using only a single correlation value and/or burst.
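The multi-burst grouping described above can be sketched as follows: correlation values from several bursts are grouped by common time delay and averaged, and the largest averaged value selects the time delay of interest. The per-burst values, and the use of an average rather than another statistical measure, are illustrative assumptions.

```python
from collections import defaultdict

def delay_of_interest(burst_results):
    """burst_results: iterable of (time_delay_bits, correlation_value) pairs."""
    grouped = defaultdict(list)
    for delay, value in burst_results:
        grouped[delay].append(value)
    averaged = {d: sum(v) / len(v) for d, v in grouped.items()}
    return max(averaged, key=averaged.get)  # delay with largest averaged value

bursts = [(4, 6), (4, 5), (5, 4), (4, 6), (3, 2)]  # illustrative burst results
print(delay_of_interest(bursts))  # 4
```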
As described above, the received echo signal 224 may be conditioned by circuits 506 (e.g., by the front end receiver 218 shown in
Also as described above, the pattern code generator 228 generates the pattern (e.g., a digital pulse sequence) that is communicated in the pattern signal 230. The digital pulse sequence may be relatively high speed in order to make the pulses shorter and increase accuracy and/or precision of the system 100 (shown in
In one embodiment, the digital pulse sequence is generated by one or more digital circuits, such as a relatively low-power Field-Programmable Gate Array (FPGA) 504. The FPGA 504 may be an integrated circuit designed to be configured by the customer or designer after manufacturing to implement a digital or logical system. As shown in
The common frequency reference generator 604 may be or include the oscillator device 214 shown in
In one embodiment, the reference generator 604 emits a frequency reference signal 216 that is a sinusoidal wave at one half the frequency of the carrier frequency. The reference signal is split equally and delivered to the transmitter 600 and the receiver 602. Although the reference generator 604 may be able to vary the frequency of the reference signal 216 according to an input control voltage, the reference generator 604 can be operated at a fixed control voltage in order to cause the reference generator 604 to output a fixed frequency reference signal 216. This is acceptable since frequency coherence between the transmitter 600 and the receiver 602 may be automatically maintained. Furthermore, this arrangement can allow for coherence between the transmitter 600 and the receiver 602 without the need for a phase locked loop (PLL) or other control structure that may limit the accuracy and/or speed at which the sensing assembly 102 operates. In another embodiment, a PLL may be added for other purposes, such as stabilizing the carrier frequency or otherwise controlling the carrier frequency.
The reference signal 216 can be split and sent to the transmitter 600 and receiver 602. The reference signal 216 drives the transmitter 600 and receiver 602, as described above. The transmitter 600 may drive (e.g., activate to transmit the transmitted signal 106 shown in
In one embodiment, the transmitter 600 can take separate in-phase (I) and quadrature (Q) digital patterns or signals from the pattern generator 504 and/or the pattern code generator 228 (shown in
As described above, the receiver 602 may also receive a copy of the frequency reference signal 216 from the reference generator 604. The returning echoes 108 (shown in
The receiver 602 can down-convert a relatively wide block of frequency spectrum centered on the carrier frequency to produce the baseband signal (e.g., the baseband echo signal 226 shown in
The frequency reference signal 216 may contain or comprise two or more individual signals such as the I and Q components that are phase shifted relative to each other. The phase shifted signals can also be generated internally by the transmitter 600 and the receiver 602. For example, the signal 216 may be generated to include two or more phase shifted components (e.g., I and Q components or channels), or may be generated and later modified to include the two or more phase shifted components.
In one embodiment, the front end 200 provides relatively high isolation between the transmit signal 606 and the echo signal 224. This isolation can be achieved in one or more ways. First, the transmit and receive components (e.g., the transmitter 600 and receiver 602) can be disposed in physically separate chips, circuitry, or other hardware. Second, the reference generator 604 can operate at one half the carrier frequency so that feed-through can be reduced. Third, the transmitter 600 and the receiver 602 can have dedicated (e.g., separate) antennas 204, 206 that are also physically isolated from each other. This isolation can allow for the elimination of a TX/RX switch that may otherwise be included in the system 100. Avoiding the use of the TX/RX switch also can remove the switch-over time between the transmitting of the transmitted signals 106 and the receipt of the echoes 108 shown in
In one embodiment, the system 100 (shown in
The baseband processing system 232 receives the echo signal 226 (e.g., the I component or channel of the echo signal 226A and/or the Q component or channel of the echo signal 226B) from the front end receiver 218 (shown in
The baseband echo signal 226 that is received by the system 232 may be conditioned by signal conditioning components of the baseband processing system 232, such as by modifying the signals using a conversion amplifier 704 (e.g., an amplifier that converts the baseband echo signal 226, such as by converting current into a voltage signal). In one embodiment, the conversion amplifier 704 includes or represents a trans-impedance amplifier, or “TIA” in
The second amplifier 706 may be used to determine the sign of the input differential signal 708 and the times at which the sign changes from one value to another. For example, the second amplifier 706 may act as an analog-to-digital converter with only one bit precision in one embodiment. Alternatively, the second amplifier 706 may be a high-speed analog-to-digital converter that periodically samples the differential signal 708 at a relatively fast rate. Alternatively, the second amplifier may act as an amplitude quantizer while preserving timing information of the baseband signal 226. The use of a limiting amplifier as the second amplifier 706 can provide relatively high gain and relatively large input dynamic range. As a result, relatively small differential signals 708 that are supplied to the limiting amplifier can result in a healthy (e.g., relatively high amplitude and/or signal-to-noise ratio) output signal 710. Additionally, larger differential signals 708 (e.g., having relatively high amplitudes and/or energies) that may otherwise result in another amplifier being overdriven instead result in a controlled output condition (e.g., the limiting operation of the limiting amplifier). The second amplifier 706 may have a relatively fast or no recovery time, such that the second amplifier 706 may not go into an error or saturated state and may continue to respond to the differential signals 708 that are input into the second amplifier 706. When the input differential signal 708 returns to an acceptable level (e.g., lower amplitude and/or energy), the second amplifier 706 may avoid the time required by other amplifiers for recovery from an overdrive state (that is caused by the input differential signal 708). The second amplifier 706 may avoid losing incoming input signals during such a recovery time.
A switch device 712 (“Switch” in
The switch device 712 may alternate the direction of flow of the signals (e.g., the output differential signal 710) from the first path 716 to the second path 718. Control of the switch device 712 may be provided by the control unit 112 (shown in
The output differential signals 710 received by the switch device 712 may be communicated to a comparison device 720 in the second path 718. Alternatively, the switch device 712 (or another component) may convert the differential signals 710 into a single-ended signal that is input into the comparison device 720. The comparison device 720 also receives the receive pattern signal 728 from the pattern generator 228 (shown in
The comparison device 720 compares the signals received from the switch device 712 with the receive pattern signal 728 to identify differences between the echo signal 226 and the receive pattern signal 728.
In one embodiment, the receive pattern signal 728 includes a pattern that is delayed by the time delay (e.g., the time of flight) identified by the coarse stage determination. The comparison device 720 may then compare this time-delayed pattern in the pattern signal 728 to the echo signal 226 (e.g., as modified by the amplifiers 704, 706) to identify overlaps or mismatches between the time-delayed pattern signal 728 and the echo signal 226.
In one embodiment, the comparison device 720 may include or represent a limiting amplifier that acts as a relatively high-speed XOR gate. An “XOR gate” includes a device that receives two signals and produces a first output signal (e.g., a “high” signal) when the two signals are different and a second output signal (e.g., a “low” signal) or no signal when the two signals are not different.
In another embodiment, the system may only include the coarse baseband processing circuits 716 or the fine baseband processing circuits 718. In this case, the switch 712 may also be eliminated. For example, this may be done to reduce the cost or complexity of the overall system. As another example, the system may not need the fine accuracy, and the rapid response of the coarse section 716 may be desired. The coarse, fine, and ultrafine stages may be used in any combination at different times in order to balance various performance metrics. Intelligent control can be manually provided by an operator or automatically generated by a processor or controller (such as the control unit 112) autonomously controlling the assembly 102 based on one or more sets of instructions (such as software modules or programs) stored on a tangible computer readable storage medium (such as a computer memory). The intelligent control can manually or automatically switch between which stages are used and/or when based on feedback from one or more other stages. For example, based on the determination from the coarse stage (e.g., an estimated time of flight or separation distance), the sensing assembly 102 may manually or automatically switch to the fine and/or ultrafine stage to further refine the time of flight or separation distance and/or to monitor movement of the target object 104.
With continued reference to
In one embodiment, the comparison device 720 generates the output signal 806 based on differences between the portion 800 of the echo signal 226 and the portion 802 of the time-delayed pattern signal 728. For example, when a magnitude or amplitude of both portions 800, 802 is “high” (e.g., has a positive value) or when the magnitude or amplitude of both portions 800, 802 is “low” (e.g., has a zero or negative value), the comparison device 720 may generate the output signal 806 to have a first value. In the illustrated example, this first value is zero. When a magnitude or amplitude of both portions 800, 802 differ (e.g., one has a high value and the other has a zero or low value), the comparison device 720 may generate the output signal 806 with a second value, such as a high value.
In the example of
The output signals 806 generated by the comparison device 720 represent temporal misalignment between the baseband echo signal 226 and the pattern signal 728 that is delayed by the time of flight or time delay measured by the coarse stage determination. The temporal misalignment may be an additional portion of (e.g., an amount to be added to) the time of flight of the transmitted signals 106 (shown in
The temporal misalignment between the baseband signal 226 and the pattern signal 728 may be referred to as a time lag. The time lag can be represented by the time periods 808, 810, 904, 906. For example, the time lag of the data stream 226 in
In order to measure the temporal misalignment between the baseband signal 226 and the time-delayed pattern signal, the output signals 806 may be communicated from the comparison device 720 to one or more filters 722. In one embodiment, the filters 722 are low-pass filters. The filters 722 generate energy signals 724 that are proportional to the energy of the output signals 806. The energy of the output signals 806 is represented by the size (e.g., width) of waveforms 812, 910 in the output signals 806. As the temporal misalignment between the baseband signal 226 and the pattern signal 728 increases, the size (and energy) of the waveforms 812, 910 increases. As a result, the amplitude and/or energy conveyed or communicated by the energy signals 724 increases. Conversely, as the temporal misalignment between the baseband signal 226 and the time-delayed pattern signal 728 decreases, the size, amplitude, and/or energy of the waveforms 812, 910 also decreases. As a result, the energy conveyed or communicated by the energy signals 724 decreases.
As another example, the above system could be implemented using the opposite polarity, such as with an XNOR comparison device that produces “high” signals when the baseband signal 226 and the time-delayed pattern signal 728 are the same and “low” when they are different. In this example, as the temporal misalignment between the baseband signal 226 and the pattern signal 728 increases, the size (and energy) of the waveforms 812, 910 decreases. As a result, the amplitude and/or energy conveyed or communicated by the energy signals 724 decreases. Conversely, as the temporal misalignment between the baseband signal 226 and the time-delayed pattern signal 728 decreases, the size, amplitude, and/or energy of the waveforms 812, 910 also increases. As a result, the energy conveyed or communicated by the energy signals 724 increases.
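For illustration only, the following minimal Python sketch models the XOR-style comparison and the energy measurement described above; the sample-based model, names, and values are hypothetical simplifications of the analog circuits and are not part of any embodiment.

    # Hypothetical sketch: an XOR-style comparison of two binary waveforms,
    # with the mismatch energy approximated by counting "high" output samples
    # (the role played by the filters 722 and energy signals 724 above).

    def xor_mismatch_energy(echo_bits, pattern_bits):
        out = [e ^ p for e, p in zip(echo_bits, pattern_bits)]  # high where different
        return sum(out)  # integrated (low-pass) energy of the output

    pattern = [1, 1, 1, 0, 0, 0, 1, 1, 0, 0]
    aligned = pattern[:]                 # no temporal misalignment
    shifted = [0] + pattern[:-1]         # one sample of misalignment

    print(xor_mismatch_energy(aligned, pattern))   # 0 -> low energy
    print(xor_mismatch_energy(shifted, pattern))   # 4 -> higher energy

    # The XNOR polarity described above is the complement:
    # len(pattern) - xor_mismatch_energy(...) decreases as misalignment grows.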
The energy signals 724 may be communicated to measurement devices 726 (“ADC” in
The control unit 112 (or other component that receives the output signals 702) may examine the measured energy of the energy signals 724 and calculate the additional portion of the time of flight represented by the temporal misalignment between the baseband signal 226 and the time-delayed pattern signal 728. The control unit 112 also may calculate the additional portion of the separation distance 110 that is associated with the temporal misalignment. In one embodiment, the control unit 112 compares the measured energy to one or more energy thresholds. The different energy thresholds may be associated with different amounts of temporal misalignment. Based on the comparison, a temporal misalignment can be identified and added to the time of flight calculated using the coarse stage determination described above. The separation distance 110 may then be calculated based on the combination of the coarse stage determination of the time of flight and the additional portion of the time of flight from the fine stage determination.
The measurement devices 726 may digitize the energy signals 724 to produce the energy data output signals 702. When the output signals 702 are received from the measurement devices 726 (shown in
The different energy thresholds 1106 are associated with different temporal misalignments between the echo signal 226 and the time-delayed pattern signal 728 in one embodiment. For example, the energy threshold 1106A may represent a temporal misalignment of 100 picoseconds, the energy threshold 1106B may represent a temporal misalignment of 150 picoseconds, the energy threshold 1106C may represent a temporal misalignment of 200 picoseconds, the energy threshold 1106D may represent a temporal misalignment of 250 picoseconds, and so on. For example, 724B may be the result of the situation shown in
The measured energy of the output signal 702 can be compared to the thresholds 1106 to determine if the measured energy exceeds one or more of the thresholds 1106. The temporal misalignment associated with the largest threshold 1106 that is approached or reached or represented by the energy of the output signal 702 may be identified as the temporal misalignment between the echo signal 226 and the time-delayed pattern signal 728. In one embodiment, no temporal misalignment may be identified for output signals 702 having or representing energies that are less than the threshold 1106A.
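For illustration only, a minimal Python sketch of the threshold comparison follows; the energy values of the thresholds are hypothetical, while the picosecond figures mirror the example above.

    # Hypothetical sketch: map a measured energy to the temporal misalignment
    # of the largest threshold that the energy reaches (thresholds 1106A-D).
    THRESHOLDS = [
        (1.0, 100e-12),  # 1106A -> 100 picoseconds
        (2.0, 150e-12),  # 1106B -> 150 picoseconds
        (3.0, 200e-12),  # 1106C -> 200 picoseconds
        (4.0, 250e-12),  # 1106D -> 250 picoseconds
    ]

    def misalignment_from_energy(energy):
        result = None  # below 1106A: no temporal misalignment identified
        for threshold, misalignment in THRESHOLDS:
            if energy >= threshold:
                result = misalignment
        return result

    print(misalignment_from_energy(2.4))  # 1.5e-10 (150 ps; reaches 1106B only)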
The energy thresholds 1106 may be established by positioning target objects 104 (shown in
In addition or as an alternative to performing the fine stage determination of the time of flight, the ultrafine stage may be used to refine (e.g., increase the resolution of) the time of flight measurement, track movement, and/or detect movement of the target object 104 (shown in
As described above, the ultrafine stage determination may alternatively or additionally involve a similar process as the coarse stage determination. For example, the coarse stage determination may examine the I channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a corresponding time-of-flight, as described herein. The ultrafine stage determination can use the Q channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a time-of-flight. The times-of-flight from the I channel and Q channel can be combined (e.g., averaged) to calculate a time of flight and/or separation distance to the target. The correlation values calculated by the ultrafine stage determination can be used to calculate an additional time delay that can be added to the time delays from the coarse stage and/or the fine stage to determine a time of flight and/or separation distance to the target. Alternatively or additionally, the correlation values of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target.
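For illustration only, a minimal Python sketch of combining the I-channel and Q-channel times-of-flight follows; the averaging shown is one option named above, and the round-trip factor of two is an assumption of this sketch rather than a statement of any embodiment.

    # Hypothetical sketch: average the I and Q times-of-flight and convert
    # the result to a separation distance for a reflected (round-trip) signal.
    C = 299_792_458.0  # speed of light in meters per second

    def separation_distance(tof_i, tof_q):
        tof = (tof_i + tof_q) / 2.0   # combine (e.g., average) the two channels
        return C * tof / 2.0          # assumed round trip: halve the path length

    print(separation_distance(20.0e-9, 20.4e-9))  # roughly 3.03 meters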
In operation, the echo signal 224 is received by the front end receiver 218 and is separated into separate I and Q signals 1206, 1208 (also referred to herein as I and Q channels). Each separate I and Q signal 1206, 1208 includes the corresponding I or Q component of the echo signal 224 and can be processed and analyzed similar to the signals described above in connection with the baseband processing system 232 shown in
Similar to as described above in connection with the switch device 712 (shown in
As described above, the energies of the signals output from the comparison devices 1218 can pass through the filters 1220 and be measured by the measurement devices 1222 to determine each of the temporal misalignments associated with the I and Q components of the echo signal 226 and the receive pattern signal. These temporal misalignments can be added together and added to the time of flight determined by the coarse stage determination. The sum of the temporal misalignments and the time of flight from the coarse stage determination can be used by the baseband processor 232 to calculate the separation distance 110 (shown in
In one embodiment, the ultrafine stage determination described above can be used to determine relatively small movements that change the separation distance 110 (shown in
ϕ = tan⁻¹(Q/I) (Equation 2)

where ϕ denotes the phase, I is the I projection 1320, and Q is the Q projection 1321. The carrier phase or the change in carrier phase can be used to calculate the distance or change in distance through the equation:
where λ is the wavelength of the carrier frequency and ϕ is the phase expressed in degrees as calculated from Equation 2 above.
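For illustration only, a minimal Python sketch of these two relations follows; the four-quadrant arctangent and the optional λ/2 round-trip factor are assumptions of this sketch, since the text above does not state how quadrant ambiguity or the reflected path are handled.

    # Hypothetical sketch: phase from the I and Q projections (Equation 2),
    # then distance from the phase expressed in degrees (Equation 3).
    import math

    def carrier_phase_deg(i, q):
        return math.degrees(math.atan2(q, i))  # four-quadrant arctangent

    def distance_from_phase(phase_deg, wavelength, round_trip=False):
        # One wavelength per 360 degrees of phase; for a reflected signal
        # the scale may instead be wavelength / 2 (round-trip assumption).
        scale = wavelength / 2.0 if round_trip else wavelength
        return scale * (phase_deg / 360.0)

    wavelength = 0.0124                           # meters; hypothetical carrier
    phi = carrier_phase_deg(0.5, 0.5)             # 45 degrees
    print(distance_from_phase(phi, wavelength))   # about 0.00155 meters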
The baseband processor 232 (shown in
The coarse, fine, and/or ultrafine stage determinations described above may be used in a variety of combinations. For example, the coarse stage determination may be used to calculate the separation distance 110 (shown in
As another example, if the separation distance 110 (shown in
Returning to the discussion of the system 100 shown in
A first digitized echo signal 1400 in
A correlation window 1406 includes a sequence 1414 of bits that can be compared to either digitized echo signal 1400, 1402 to determine a subset of interest, such as the subsets of interest 1408, 1410, in order to determine times of flight to the respective target objects 104 (shown in
In one embodiment, a mask 1412 can be applied to the sequence 1414 of bits in the correlation window 1406 to modify the sequence 1414 of bits in the correlation window 1406. The mask 1412 can eliminate or otherwise change the value of one or more of the bits in the correlation window 1406. The mask 1412 can include a sequence 1416 of bits that are applied to the correlation window 1406 (e.g., by multiplying the values of the bits) to create a modified correlation window 1418 having a sequence 1420 of bits that differs from the sequence 1414 of bits in the correlation window 1406. In the illustrated example, the mask 1412 includes a first portion having the first three bits (“101”) and a second portion having the last three bits (“000”). Alternatively, another mask 1412 may be used that has a different sequence of bits and/or a different length of the sequence of bits. Applying the mask 1412 to the correlation window 1406 eliminates the last three bits (“011”) in the sequence 1414 of bits in the correlation window 1406. As a result, the sequence 1420 of bits in the modified correlation window 1418 includes only the first three bits (“101”) of the correlation window 1406. In another embodiment, the mask 1412 adds additional bits to the correlation window 1406 and/or changes values of the bits in the correlation window 1406.
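For illustration only, a minimal Python sketch of applying the example mask follows; representing the bits as a list of integers is a simplification.

    # Hypothetical sketch: apply the mask by multiplying bit values,
    # reproducing the "101000" modified correlation window above.
    def apply_mask(window, mask):
        return [w * m for w, m in zip(window, mask)]

    window = [1, 0, 1, 0, 1, 1]   # sequence 1414 ("101011")
    mask   = [1, 0, 1, 0, 0, 0]   # sequence 1416 ("101" then "000")

    print(apply_mask(window, mask))  # [1, 0, 1, 0, 0, 0]: only "101" survives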
The sequence 1420 of bits in the modified correlation window 1418 can be used to change the sequence of bits in the pattern signal 230 (shown in
The modified correlation window 1418 can then be compared with the additional digitized echo signal 1422 to identify subsets of interest associated with the different target objects 104 (shown in
In operation, when transmitted signals 106 reflect off multiple target objects 104, the pattern transmitted in the signals 106 can be modified relatively quickly between successive bursts of the transmitted signals 106 when one or more of the target objects 104 cannot be identified from examination of the digitized echo signal 226. The modified pattern can then be used to distinguish between the target objects 104 in the digitized echo signal 740 using the correlation window that includes the modified pattern.
In another embodiment, the digital pulse sequence of bits included in a transmitted signal 106 (shown in
Several series-fed arrays 1506 are conductively coupled in parallel to form the array 1502 in the illustrated embodiment. The numbers of unit cells 1504 and series-fed arrays 1506 shown in
The front end 200 of the sensing assembly 102 may be housed in an enclosure 1602, such as a metal or otherwise conductive housing, with radio transmissive windows 1604 over the antennas 1500. Alternatively, the front end 200 may be housed in a non-metallic (e.g., dielectric) enclosure. The windows 1604 over the antennas 1500 may not be cut out of the enclosure 1602, but may instead represent portions of the enclosure 1602 that allow the transmitted signals 106 and echoes 108 to pass through the windows 1604 from or to the antennas 1500.
The enclosure 1602 may wrap around the antennas 1500 so that the antennas are effectively recessed into the conducting body of the enclosure 1602, which can further improve isolation between the antennas 1500. Alternatively, in the case of a nonconducting enclosure 1602, the antennas 1500 may be completely enclosed by the enclosure 1602, and extra metal foil, absorptive materials, or other measures may be added to improve isolation between the antennas 1500. In one embodiment, if the isolation is sufficiently high, the transmitting and receiving antennas 1500 can be operated at the same time if the returning echoes 108 are sufficiently strong. This may be the case when the target is at very close range, and can allow the sensing assembly 102 to operate without a transmit/receive switch.
The antenna 1500 may be positioned on a surface of a substrate 1706 that supports the antenna 1500. A conductive ground plane 1708 may be disposed on an opposite surface of the substrate 1706, or in another location.
The cover layer 1700 may be separated from the antenna 1500 by an air gap 1704 (“Air” in
This lensing effect can permit transmitted signals 106 and/or echoes 108 to pass through additional layers 1702 of materials (e.g., insulators such as Teflon, polycarbonate, or other polymers) that are positioned between the antenna 1500 and the target object 104 (shown in
In one embodiment, the substrate 1706 may have a thickness dimension between the opposite surfaces that is thinner than a wavelength of the carrier signal of the transmitted signals 106 and/or echoes 108. For example, the thickness of the substrate 1706 may be on the order of 1/20th of a wavelength. The thicknesses of the air gap 1704 and/or superstrate 1700 may be larger, such as ⅓ of the wavelength. Either one or both of the air gap 1704 and the superstrate 1700 may also be removed altogether.
One or more embodiments of the system 100 and/or sensing assembly 102 described herein may be used for a variety of applications that use the separation distance 110 and/or time of flight that is measured by the sensing assembly 102. Several specific examples of applications of the system 100 and/or sensing assembly 102 are described herein, but not all applications or uses of the system 100 or sensing assembly 102 are limited to those set forth herein. For example, many applications that use the detection of the separation distance 110 (e.g., as a depth measurement) can use or incorporate the system 100 and/or sensing assembly 102.
Alternatively or additionally, the sensing assembly 102 may direct transmitted signals 106 toward a port (e.g., a filling port through which fluid 1806 is loaded into the containment apparatus 1802) and monitor movement of the fluid 1806 at or near the port. For example, if the separation distance 110 from the sensing assembly 102 to the port is known such that the bit of interest of the echoes 108 is known, the ultrafine stage determination described above may be used to determine if the fluid 1806 at or near the port is moving (e.g., turbulent). This movement may indicate that fluid 1806 is flowing into or out of the containment apparatus 1802. The sensing assembly 102 can use this determination as an alarm or other indicator of when fluid 1806 is flowing into or out of the containment apparatus 1802. Alternatively, the sensing assembly 102 could be positioned or aimed at other strategically important locations where the presence or absence of turbulence and/or the intensity (e.g., degree or amount of movement) could indicate various operating conditions and parameters (e.g., amounts of fluid, movement of fluid, and the like). The sensing assembly 102 could periodically switch between these measurement modes (e.g., measuring the separation distance 110 being one mode and monitoring for movement being another mode), and then report the data and measurements to the control unit 112 (shown in
For example, the sensing assembly 102 can measure separation distances 110 between the sensing assembly 102 and multiple objects 2104A-D in the vicinity of the mobile apparatus 2102. The mobile apparatus 2102 can use these separation distances 110 to determine how far the mobile apparatus 2102 can travel before needing to turn or change direction to avoid contact with the objects 2104A-D.
In one embodiment, the mobile apparatus 2102 can use multiple sensing assemblies 102 to determine a layout or map of an enclosed vicinity 2106 around the mobile apparatus 2102. The vicinity 2106 may be bounded by the walls of a room, building, tunnel, and the like. A first sensing assembly 102 on the mobile apparatus 2102 may be oriented to measure separation distances 110 to one or more boundaries (e.g., walls or surfaces) of the vicinity 2106 along a first direction, a second sensing assembly 102 may be oriented to measure separation distances 110 to one or more other boundaries of the vicinity 2106 along a different (e.g., orthogonal) direction, and the like. The separation distances 110 to the boundaries of the vicinity 2106 can provide the mobile apparatus 2102 with information on the size of the vicinity 2106 and a current location of the mobile apparatus 2102. The mobile apparatus 2102 may then move in the vicinity 2106 while one or more of the sensing assemblies 102 acquire updated separation distances 110 to one or more of the boundaries of the vicinity 2106. Based on changes in the separation distances 110, the mobile apparatus 2102 may determine where the mobile apparatus 2102 is located in the vicinity 2106. For example, if an initial separation distance 110 to a first wall of a room is measured as ten feet (three meters) and an initial separation distance 110 to a second wall of the room is measured as five feet (1.5 meters), the mobile apparatus 2102 may initially locate itself within the room. If a later separation distance 110 to the first wall is four feet (1.2 meters) and a later separation distance 110 to the second wall is seven feet (2.1 meters), then the mobile apparatus 2102 may determine that it has moved six feet (1.8 meters) toward the first wall and two feet (0.6 meters) away from the second wall.
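For illustration only, a minimal Python sketch of this bookkeeping follows; the dictionary representation and wall names are hypothetical.

    # Hypothetical sketch: infer movement from changes in the separation
    # distances to two walls, matching the example above (distances in feet).
    def displacement(initial, later):
        # Positive result: moved toward that wall; negative: moved away.
        return {wall: initial[wall] - later[wall] for wall in initial}

    initial = {"first_wall": 10.0, "second_wall": 5.0}
    later   = {"first_wall": 4.0,  "second_wall": 7.0}

    print(displacement(initial, later))
    # {'first_wall': 6.0, 'second_wall': -2.0}
    # six feet toward the first wall, two feet away from the second wall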
In one embodiment, the mobile apparatus 2102 can use information generated by the sensing assembly 102 to distinguish between immobile and mobile objects 2104 in the vicinity 2106. Some of the objects 2104A, 2104B, and 2104D may be stationary objects, such as walls, furniture, and the like. Other objects 2104C may be mobile objects, such as humans walking through the vicinity 2106, other mobile apparatuses, and the like. The mobile apparatus 2102 can track changes in separation distances 110 between the mobile apparatus 2102 and the objects 2104A, 2104B, 2104C, 2104D as the mobile apparatus 2102 moves. Because the separation distances 110 between the mobile apparatus 2102 and the objects 2104 may change as the mobile apparatus 2102 moves, both the stationary objects 2104A, 2104B, 2104D and the mobile objects 2104C may appear to move to the mobile apparatus 2102. This perceived motion of the stationary objects 2104A, 2104B, 2104D that is observed by the sensing assembly 102 and the mobile apparatus 2102 is due to the motion of the sensing assembly 102 and the mobile apparatus 2102. To compute the motion (e.g., speed) of the mobile apparatus 2102, the mobile apparatus 2102 can track changes in separation distances 110 to the objects 2104 and generate object motion vectors associated with the objects 2104 based on the changes in the separation distances 110.
The mobile apparatus 2102 can learn (e.g., store) which objects are part of the environment and that can be used for tracking movement of the mobile apparatus 2102 and may be referred to as persistent objects. Other objects that are observed that do not agree with the known persistent objects are called transient objects. Object motion vectors of the transient objects will have varying trajectories and may not agree well with each other or the persistent objects. The transient objects can be identified by their trajectories as well as their radial distance from the mobile apparatus 2102, e.g. the walls of the tunnel will remain at their distance, whereas transient objects will pass closer to the mobile apparatus 2102.
In another embodiment, multiple mobile apparatuses 2102 may include the sensing system 100 and/or sensing assemblies 102 to communicate information between each other. For example, the mobile apparatuses 2102 may each use the sensing assemblies 102 to detect when the mobile apparatuses 2102 are within a threshold distance from each other. The mobile apparatuses 2102 may then switch from transmitting the transmitted signals 106 in order to measure separation distances 110 and/or detect motion to transmitting the transmitted signals 106 to communicate other information. For example, instead of generating the digital pulse sequence to measure separation distances 110, at least one of the mobile apparatuses 2102 may use the binary code sequence (e.g., of ones and zeros) in a pattern signal that is transmitted toward another mobile apparatus 2102 to communicate information. The other mobile apparatus 2102 may receive the transmitted signal 106 in order to identify the transmitted pattern signal and interpret the information that is encoded in the pattern signal.
As another example, the sensing assembly 102 may communicate transmitted signals 106 that penetrate into the body of the patient 2300 and sense the motion or absolute position of various internal structures, such as the heart. Many of these positions or motions can be relatively small and subtle, and the sensing assembly 102 can use the ultrafine stage determination of motion or the separation distance 110 to sense the motion or absolute position of the internal structures.
Using the non-contact sensing assembly 102 also may be useful for situations where it is impossible or inconvenient to use wired sensors on the patient 2300 (e.g., sensors mounted directly to the test subject, connected by wires back to a medical monitor). For example, in high-activity situations where conventional wired sensors may get in the way, the sensing assembly 102 may monitor the separation distance 110 and/or motion of the patient 2300 from afar.
In another example, the sensing assembly 102 can be used for posture recognition and overall motion or activity sensing. This can be used for long-term observation of the patient 2300 for the diagnosis of chronic conditions, such as depression or fatigue, and for monitoring the overall health of at-risk individuals, such as the elderly. In the case of diseases with relatively slow onset, such as depression, the long-term observation by the sensing assembly 102 may be used for early detection of the diseases. Also, since the sensing assembly 102 can detect the medical parameters or quantities without anything being mounted on the patient 2300, the sensing assembly 102 may be used to make measurements of the patient 2300 without the knowledge or cooperation of the patient 2300. This could be useful in many situations, such as when dealing with children who could become upset if sensors were attached to them. It may also give an indication of the mental state of the patient 2300, such as the breath of the patient 2300 becoming rapid and shallow when the patient 2300 becomes nervous. This could give rise to a remote lie-detector functionality.
In another embodiment, data generated by the sensing assembly 102 may be combined with data generated or obtained by one or more other sensors. For example, calculation of the separation distance 110 by the sensing assembly 102 may be used as a depth measurement that is combined with other sensor data. Such combination of data from different sensors is referred to herein as sensor fusion, and includes the fusing of two or more separate streams of sensor data to form a more complete picture of the phenomena or object or environment that is being sensed.
As one example, separation distances 110 calculated using the sensing assembly 102 may be combined with two-dimensional image data acquired by a camera. For example, without the separation distances 110, a computer or other machine may not be able to determine the actual physical size of the objects in a two-dimensional image.
The sensing assembly 102 (shown in
With this separation distance 110 (shown in
For example, the sensing systems 2500 can acquire or measure information (e.g., light levels, radiation, moisture, heat, and the like) from the target objects 104A, 104B and the separation distances 110A, 110B to the target objects 104A, 104B. The separation distances 110A, 110B can be used to correct or calibrate the measured information. For example, if the target objects 104A, 104B both provide the same light level, radiation, moisture, heat, and the like, the different separation distances 110A, 110B may result in the sensing systems 2500A, 2500B measuring different light levels, radiation, moisture, heat, and the like. With the sensing assembly 102 (shown in
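For illustration only, a minimal Python sketch of such a correction follows; the inverse-square model is an assumption of this sketch and may not apply to every measured quantity (e.g., guided or focused emissions).

    # Hypothetical sketch: normalize a remotely measured intensity to a
    # common reference distance so readings acquired at different
    # separation distances become comparable.
    def normalize_reading(measured, distance_m, reference_m=1.0):
        return measured * (distance_m / reference_m) ** 2  # inverse-square

    # Two identical sources measured at different separation distances:
    print(normalize_reading(100.0, 2.0))  # 400.0 when referred to 1 meter
    print(normalize_reading(25.0, 4.0))   # 400.0 -> same corrected level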
As another example, the sensing system 2500 may include a reflective pulse oximetry sensor and the sensing assembly 102. Two or more different wavelengths of light are directed at the surface of the target object 104 by the system 2500, and a photodetector of the system 2500 examines the scattered light. The ratio of the reflected power can be used to determine the oxygenation level of the blood in the target object 104. Instead of being directly mounted on (e.g., engaged to) the body of the patient that is the target object 104, the sensing system 2500 may be spaced apart from the body of the patient.
The surface of the patient body can be illuminated with light sources and the sensing assembly 102 (shown in
In another embodiment, the sensing assembly 102 and/or system 100 shown in
The examples of sensor fusion described herein are not limited to just the combination of the sensing assembly 102 and one other sensor. Additional sensors may be used to aggregate the separation distances 110 and/or motion detected by the sensing assembly 102 with the data streams acquired by two or more additional sensors. For example, audio data (from a microphone), video data (from a camera), and the separation distances 110 and/or motion from the sensing assembly 102 can be aggregated to give a more complete understanding of a physical environment.
The assembly 2602 includes a transmitting antenna 2604 that may be similar to the transmitting antenna 204 (shown in
The antennas 2604, 2606 may be moved to provide for pseudo-bistatic operation of the system 2600. For example, the antennas 2604, 2606 can be moved around to various or arbitrary locations to capture echoes 108 that may otherwise be lost if the antennas 2604, 2606 were fixed in position. In one embodiment, the antennas 2604, 2606 could be positioned on opposite sides of the target object 104 in order to test for the transmission of the transmitted signals 106 through the target object 104. Changes in the transmission of the transmitted signals 106 through the target object 104 can indicate physical changes in the target object 104 being sensed.
This scheme can be used with greater numbers of antennas 2604 and/or 2606. For example, multiple receiving antennas 2606 can be used to detect target objects 104 that may otherwise be difficult to detect. Multiple transmitting antennas 2604 may be used to illuminate target objects 104 with transmitted signals 106 that may otherwise not be detected. Multiple transmitting antennas 2604 and multiple receiving antennas 2606 can be used at the same time. The transmitting antennas 2604 and/or receiving antennas 2606 can be used at the same time, transmitting copies of the transmitted signal 106 or receiving multiple echoes 108, or the sensing assembly 2602 can be switched among the transmitting antennas 2604 and/or among the receiving antennas 2606, with the observations (e.g., separation distances 110 and/or detected motion) built up over time.
At 2702, a determination is made as to whether to use the coarse stage determination of the time of flight and/or separation distance. For example, an operator of the system 100 (shown in
At 2704, an oscillating signal is mixed with a coarse transmit pattern to create a transmitted signal. For example, the oscillating signal 216 (shown in
At 2706, the transmitted signal is transmitted toward a target object. For example, the transmitting antenna 204 (shown in
At 2708, echoes of the transmitted signal that are reflected off the target object are received. For example, the echoes 108 (shown in
At 2710, the received echoes are down converted to obtain a baseband signal. For example, the echoes 108 (shown in
At 2712, the baseband signal is digitized to obtain the coarse receive data stream. For example, the baseband signal 226 may pass through the baseband processor 232, including the digitizer 730, to produce the digitized echo signal 740.
At 2714, a correlation window (e.g., a coarse correlation window) and a coarse mask are compared to the data stream to identify a subset of interest. Alternatively, the mask (e.g., a mask to eliminate or change one or more portions of the data stream) may not be used. In one embodiment, the coarse correlation window 320 (shown in
At 2716, a time of flight of the transmitted signal and echo is calculated based on a time delay of the subset of interest. This time of flight can be referred to as a coarse time of flight. As described above, the subset of interest can be associated with a time lag (td) between transmission of the transmitted signal 106 (shown in
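For illustration only, a minimal Python sketch of the correlation at 2714 and the time-of-flight conversion at 2716 follows; the bit-counting score and the one-nanosecond bit period are hypothetical.

    # Hypothetical sketch: slide the correlation window across the coarse
    # receive data stream, score each subset by matching bits, and convert
    # the delay of the subset of interest into a coarse time of flight.
    def coarse_time_of_flight(stream, window, bit_period_s):
        best_index, best_score = 0, -1
        for i in range(len(stream) - len(window) + 1):
            subset = stream[i:i + len(window)]
            score = sum(1 for s, w in zip(subset, window) if s == w)
            if score > best_score:              # new subset of interest
                best_index, best_score = i, score
        return best_index * bit_period_s        # time lag t_d of the subset

    stream = [0, 0, 0, 1, 0, 1, 1, 0, 0]
    window = [1, 0, 1, 1]
    print(coarse_time_of_flight(stream, window, 1e-9))  # 3e-09 seconds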
At 2718, a determination is made as to whether the fine stage determination of the separation distance is to be used. For example, a determination may be made automatically or manually to use the fine stage determination to further refine the measurement of the separation distance 110 (shown in
At 2720, an oscillating signal is mixed with a digital pulse sequence to create a transmitted signal. As described above, the transmit pattern that is used in the fine stage may be different from the transmit pattern used in the coarse stage. Alternatively, the transmit pattern may be the same for the coarse stage and the fine stage.
At 2722, the transmitted signal is communicated toward the target object, similar to as described above in connection with 2706.
At 2724, echoes of the transmitted signal that are reflected off the target object are received, similar to as described above in connection with 2708.
At 2726, the received echoes are down converted to obtain a baseband signal. For example, the echoes 108 (shown in
At 2728, the baseband signal 226 is compared to a fine receive pattern. The fine receive pattern may be delayed by the coarse time of flight, as described above. For example, instead of comparing the baseband signal with the receive pattern with both the baseband signal and the receive pattern having the same starting or initial time reference, the receive pattern may be delayed by the same time as the time delay measured by the coarse stage determination. This delayed receive pattern also may be referred to as a “coarse delayed fine extraction pattern” 728.
At 2730, a time lag between the fine data stream and the time delayed receive pattern is calculated. This time lag may represent the temporal overlap or mismatch between the waveforms in the fine data stream and the time delayed receive pattern, as described above in connection with
At 2732, the time of flight measured by the coarse stage (e.g., the “time of flight estimate”) is refined by the time lag. For example, the time lag calculated at 2730 can be added to the time of flight calculated at 2716. Alternatively, the time lag may be added to a designated time of flight, such as a time of flight associated with or calculated from a designated or known separation distance 110 (shown in
At 2734, the time of flight (that includes the time lag calculated at 2732) is used to calculate the separation distance from the target object, as described above. Flow of the method 2700 may then return to 2702 in a loop-wise manner. The above methods can be repeated for the I and Q channels separately or in parallel using parallel paths as in
In one embodiment, the fine stage determination (e.g., as described in connection with 2720 through 2732) is performed on one of the I or Q components or channels of the transmit signal and the echo signal, as described above. For example, the I channel of the echo signal 226 (shown in
As described above, the ultrafine stage determination may alternatively or additionally involve a similar process as the coarse stage determination. For example, the coarse stage determination may examine the I channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a corresponding time-of-flight, as described herein. The ultrafine stage determination can use the Q channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a time-of-flight, as described above. The times-of-flight from the I channel and Q channel can be combined (e.g., averaged) to calculate a time of flight and/or separation distance to the target. The correlation values calculated by the ultrafine stage determination can be used to calculate an additional time delay that can be added to the time delays from the coarse stage and/or the fine stage to determine a time of flight and/or separation distance to the target. Alternatively or additionally, the correlation values of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target.
In another embodiment, another method (e.g., a method for measuring a separation distance to a target object) is provided. The method includes transmitting an electromagnetic first transmitted signal from a transmitting antenna toward a target object that is separated from the transmitting antenna by a separation distance. The first transmitted signal includes a first transmit pattern representative of a first sequence of digital bits. The method also includes receiving a first echo of the first transmitted signal that is reflected off the target object, converting the first echo into a first digitized echo signal, and comparing a first receive pattern representative of a second sequence of digital bits to the first digitized echo signal to determine a time of flight of the first transmitted signal and the echo.
In another aspect, the method also includes calculating the separation distance to the target object based on the time of flight.
In another aspect, the method also includes generating an oscillating signal and mixing at least a first portion of the oscillating signal with the first transmit pattern to form the first transmitted signal.
In another aspect, converting the first echo into the first digitized echo signal includes mixing at least a second portion of the oscillating signal with an echo signal that is based on the first echo received off the target object.
In another aspect, comparing the first receive pattern includes matching the sequence of digital bits of the first receive pattern to subsets of the first digitized echo signal to calculate correlation values for the subsets. The correlation values are representative of degrees of match between the sequence of digital bits in the first receive pattern and the subsets of the first digitized echo signal.
In another aspect, at least one of the subsets of the digitized echo signal is identified as a subset of interest based on the correlation values. The time of flight can be determined based on a time delay between transmission of the transmitted signals and occurrence of the subset of interest.
In another aspect, the method also includes transmitting an electromagnetic second transmitted signal toward the target object. The second transmitted signal includes a second transmit pattern representative of a second sequence of digital bits. The method also includes receiving a second echo of the second transmitted signal that is reflected off the target object, converting the second echo into a second baseband echo signal, and comparing a second receive pattern representative of a third sequence of digital bits to the second baseband echo signal to determine temporal misalignment between one or more waveforms of the second baseband echo signal and one or more waveforms of the second receive pattern. The temporal misalignment, which is representative of a time lag between the second receive pattern and the second baseband echo signal, is extracted, and the time lag is then calculated.
In another aspect, the method also includes adding the time lag to the time of flight.
In another aspect, converting the second echo into the second digitized echo signal includes forming an in-phase (I) channel of the second baseband echo signal and a quadrature (Q) channel of the second baseband echo signal. Comparing the second receive pattern includes comparing an I channel of the second receive pattern to the I channel of the second digitized echo signal to determine an I component of the temporal misalignment and comparing a Q channel of the second receive pattern to the Q channel of the second digitized echo signal to determine a Q component of the temporal misalignment.
In another aspect, the time lag that is added to the time of flight includes the I component of the temporal misalignment and the Q component of the temporal misalignment.
In another aspect, the method also includes resolving phases of the first echo and the second echo by examining the I component of the temporal misalignment and the Q component of the temporal misalignment, where the time of flight is calculated based on the phases that are resolved.
In another aspect, at least two of the first transmit pattern, the first receive pattern, the second transmit pattern, or the second receive pattern differ from each other.
In another aspect, at least two of the first transmit pattern, the first receive pattern, the second transmit pattern, or the second receive pattern include a common sequence of digital bits.
In another embodiment, a system (e.g., a sensing system) is provided that includes a transmitter, a receiver, a correlator device, and a baseband processor. The transmitter is configured to generate an electromagnetic first transmitted signal that is communicated from a transmitting antenna toward a target object that is separated from the transmitting antenna by a separation distance. The first transmitted signal includes a first transmit pattern representative of a sequence of digital bits. The receiver is configured to generate a first digitized echo signal that is based on an echo of the first transmitted signal that is reflected off the target object. The correlator device is configured to compare a first receive pattern representative of a second sequence of digital bits to the first digitized echo signal to determine a time of flight of the first transmitted signal and the echo.
In another aspect, the baseband processor is configured to calculate the separation distance to the target object based on the time of flight.
In another aspect, the system also includes an oscillating device configured to generate an oscillating signal. The transmitter is configured to mix at least a first portion of the oscillating signal with the first transmit pattern to form the first transmitted signal.
In another aspect, the receiver is configured to receive at least a second portion of the oscillating signal and to mix the at least the second portion of the oscillating signal with an echo signal that is representative of the echo to create the first baseband echo signal.
In another aspect, the baseband echo signal may be digitized into a first digitized echo signal and the correlator device is configured to compare the sequence of digital bits of the first receive pattern to subsets of the first digitized echo signal to calculate correlation values for the subsets. The correlation values are representative of degrees of match between the first receive pattern and the digital bits of the digitized echo signal.
In another aspect, at least one of the subsets of the digitized echo signal is identified by the correlator device as a subset of interest based on the correlation values. The time of flight is determined based on a time delay between transmission of the first transmitted signal and occurrence of the subset of interest in the first digitized echo signal.
In another aspect, the transmitter is configured to transmit an electromagnetic second transmitted signal toward the target object. The second transmitted signal includes a second transmit pattern representative of a second sequence of digital bits. The receiver is configured to create a second digitized echo signal based on a second echo of the second transmitted signal that is reflected off the target object. The baseband processor is configured to compare a second receive pattern representative of a third sequence of digital bits to the second digitized echo signal to determine temporal misalignment between one or more waveforms of the second digitized echo signal and one or more waveforms of the second receive pattern. The temporal misalignment is representative of a time lag between the second receive pattern and the second baseband echo signal that is added to the time of flight.
In another aspect, the receiver is configured to form an in-phase (I) channel of the second digitized echo signal and a quadrature (Q) channel of the second digitized echo signal. The system can also include a baseband processing system configured to compare an I channel of the second receive pattern to the I channel of the second digitized echo signal to determine an I component of the temporal misalignment. The baseband processing system also is configured to compare a Q channel of the second receive pattern to the Q channel of the second digitized echo signal to determine a Q component of the temporal misalignment.
In another aspect, the time lag that is added to the time of flight includes the I component of the temporal misalignment and the Q component of the temporal misalignment.
In another aspect, the baseband processing system is configured to resolve phases of the first echo and the second echo based on the I component of the temporal misalignment and the Q component of the temporal misalignment. The time of flight is calculated based on the phases that are resolved. For example, the time of flight may be increased or decreased by a predetermined or designated amount based on an identified or measured difference in the phases that are resolved.
In another embodiment, another method (e.g., for measuring a separation distance to a target object) is provided. The method includes transmitting a first transmitted signal having waveforms representative of a first transmit pattern of digital bits and generating a first digitized echo signal based on a first received echo of the first transmitted signal. The first digitized echo signal includes waveforms representative of a data stream of digital bits. The method also includes comparing a first receive pattern of digital bits to plural different subsets of the data stream of digital bits in the first digitized echo signal to identify a subset of interest that indicates the presence and/or temporal location of the first receive pattern more strongly than one or more other subsets. The method further includes identifying a time of flight of the first transmitted signal and the first received echo based on a time delay between a start of the data stream in the first digitized echo signal and the subset of interest.
In another aspect, the method also includes transmitting a second transmitted signal having waveforms representative of a second transmit pattern of digital bits and generating an in-phase (I) component of a second baseband echo signal and a quadrature (Q) component of the second baseband echo signal that is based on a second received echo of the second transmitted signal. The second baseband echo signal includes waveforms representative of a data stream of digital bits. The method also includes comparing a time-delayed second receive pattern of waveforms that are representative of a sequence of digital bits to the second baseband echo signal. The second receive pattern is delayed from a time of transmission of the second transmitted signal by the time delay of the subset of interest. An in-phase (I) component of the second receive pattern is compared to an I component of the second baseband echo signal to identify a first temporal misalignment between the second receive pattern and the second baseband echo signal. A quadrature (Q) component of the second receive pattern is compared to a Q component of the second baseband echo signal to identify a second temporal misalignment between the second receive pattern and the second baseband echo signal. The method also includes increasing the time of flight by the first and second temporal misalignments.
In another aspect, the method also includes identifying motion of the target object based on changes in one or more of the first or second temporal misalignments.
In another aspect, the first transmit pattern differs from the first receive pattern.
The network 3102 can include a wired, wireless, or waveguide network. The network 3102 can include a local area network (LAN), a wide area network (WAN), a personal area network (PAN), a satellite network, or a cellular network. The network 3102 can be a P2P network between the first client 3110, the second client 3112, or the third client 3114.
The server 3104 can include a web server, an application server, or a database server. The server 3104 can be stationary or in motion. The server 3104 can be a physical server or virtual server. The server 3104 can include or be included in a server farm or a data center. The server 3104 may reside inside of or be hosted in connection with the first client 3110, the second client 3112, or the third client 3114.
The defined area 3106 can be stationary or in motion. The defined area 3106 can be indoors or outdoors. The defined area 3106 can include an enclosed area, a physically fenced area, a digitally fenced area, a geo-fenced area, a building (e.g. residential, industrial, commercial), a garage, a room, a bunker, a basement, a vehicle (e.g. land, marine, aerial, space), a mall, a school, a cubicle grid, a utility room, a walk-in refrigerator, a restaurant, a coffee shop, a station (e.g. subway, bus, train), an airport, a barracks, a camp site, a house of worship, a gas station, an oil field, a refinery, a warehouse, a farm, a laboratory, a library, a logistical warehouse (e.g. packaging, shipping, sorting), a long term storage facility, an industrial facility, a post office, a shipping hub or station, a supermarket, a retail store, a home improvement center, a parking lot, a toy store, a manufacturing plant, a processing plant, a pool, a hospital, a medical facility, a medical procedure room, an energy plant, a nuclear reactor, or others including any permutational combinations thereof.
The object 3108 can be stationary or in motion. The object 3108 can include a wall, a floor, a ceiling, a furniture item, an item resting or coupled to a furniture item, a machine, a vehicle, a client, a wearable, a mammal, a human, an animal, a bird, a fish, or others. The object 3108 may be one of the other clients, such as the first client 3110, the second client 3112, or the third client 3114 that can measure at least some distance to each other. For example, the first client 3110, the second client 3112, or the third client 3114 can include, be physically or electrically coupled to, be a component of, or embodied as the object 3108.
The first client 3110, the second client 3112, or the third client 3114 can be stationary or in motion. The first client 3110, the second client 3112, or the third client 3114 can include, be physically or electrically coupled to, be a component of, or embodied as a desktop, a laptop, a tablet, a smartphone, a joystick, a videogame console, a camera, a microphone, a speaker, a keyboard, a mouse, a touchpad, a trackpad, a sensor, a display, a printer, an additive or subtractive manufacturing machine, a wearable, a vehicle, a furniture item, a plumbing tool, a construction tool, a mat, a firearm/rifle, a laser pointer, a scope, a binocular, an electrical tool, a drill, an impact driver, a flashlight, an engine, an actuator, a solenoid, a toy, a pump, or others including any permutational combinations thereof. The wearable includes a hat, a helmet, an earbud, a hearing aid, a headphone, an eyewear frame, an eye lens, a band, a garment (e.g. outer or inner or others including any permutations thereof), a shoe, a jewelry item, a medical device, an activity tracker, a swimsuit, a bathing suit, a snorkel, a scuba breathing apparatus, a swimming leg fin, a handcuff, an implant, or any other device that can be worn on or in a body (including hair) of an animal, such as a human, a dog, a cat, a bird, a fish, or any other, whether domesticated or undomesticated, whether male or female, whether elderly, adult, teen, toddler, infant, or others including any permutational combinations thereof. The garment can include a jacket, a shirt, a tie, a belt, a band, a pair of shorts, a pair of pants, a sock, an undershirt, an underwear item, a bra, a jersey, a skirt, a dress, a blouse, a sweater, a scarf, a glove, a bandana, an elbow pad, a kneepad, a pajama, a robe, or others including any permutational combinations thereof. The jewelry item can include an earring, a necklace, a ring, a bracelet, a pin, a brooch, or others including any permutational combinations thereof, whether worn on a body or clothing. The shoe can include a dress shoe, a sneaker, a boot, a heeled shoe, a roller skate, a rollerblade, or others including any permutational combinations thereof.
The host 3202 can host externally, internally, or others including any permutational combinations thereof, such as when the host 3202 is at least physically coupled to such components, such as via fastening, mating, interlocking, adhering, magnetizing, suctioning, stitching, stapling, nailing, or other forms of physical coupling. The processor 3204, the memory 3206, the IMU 3208, the DSU 3210, and the networking unit 3212 are supported or hosted via a housing, an enclosure, a platform, a frame, a chassis, or others including any permutational combinations thereof. The housing, the enclosure, the platform, the frame, the chassis, or others including any permutational combinations thereof can support or host externally, internally, or others including any permutational combinations thereof, such as when the housing, the enclosure, the platform, the frame, the chassis, or others including any permutational combinations thereof is at least physically coupled to such components, such as via fastening, mating, interlocking, adhering, magnetizing, suctioning, stitching, stapling, nailing, or other forms of physical coupling including any permutational combinations thereof. The housing, the enclosure, the platform, the frame, the chassis, or others including any permutational combinations thereof can be rigid, flexible, elastic, solid, perforated, hollow, or others including any permutational combinations thereof. For example, the housing, the enclosure, the platform, the frame, the chassis, or others including any permutational combinations thereof can include a plastic, a metal, a rubber, a wood, a precious metal, a precious stone, a fabric, a rare-earth element, or others including any permutational combinations thereof.
The processor 3204 is in communication with the memory 3206, the IMU 3208, the DSU 3210, and the networking unit 3212. The processor 3204 can include a single core or a multicore processor. The processor 3204 can include a system-on-chip (SOC) or an application-specific-integrated-circuit (ASIC). The processor 3204 is powered via an accumulator, such as a battery or others including any permutational combinations thereof, whether the accumulator is housed or is not housed via the host 3202.
The memory 3206 can include a read-only-memory (ROM), a random-access-memory (RAM), a hard disk drive, a flash memory, or others including any permutational combinations thereof. The memory 3206 is powered via an accumulator, such as a battery or others including any permutational combinations thereof, whether the accumulator is housed or is not housed via the host 3202.
The IMU 3208 is optional and can be a micro-electro-mechanical system (MEMS) or others. The IMU 3208 can include an accelerometer, a magnetometer, or a gyroscope. For example, the accelerometer can be configured to sense and then output an amount of force (acceleration) the accelerometer is experiencing along an X-axis, a Y-axis, and a Z-axis. The gyroscope can be configured to sense and then output an angular velocity the gyroscope is experiencing along an X-axis, a Y-axis, and a Z-axis. The magnetometer is configured to sense and then output a magnetism (magnetic field intensity) value the magnetometer is experiencing along an X-axis, a Y-axis, and a Z-axis. As such, the IMU 3208 can be configured to output a roll value, a pitch value, and a yaw value. Consequently, the IMU 3208 can measure and report a body’s specific force, an angular rate, and a magnetic field surrounding the body using a combination of the accelerometer, the gyroscope, and the magnetometer. The IMU 3208 can output a linear acceleration of the body. Alternatively or additionally, the IMU 3208 may be replaced by or include a hardware device (e.g. housing, frame, wearable, chip, transceiver) to determine a general position of the host 3202. This hardware device may be a GPS unit, a GLONASS unit, or a terrestrial signal triangulation geolocation unit, for example. The hardware device may determine a geolocation of the host 3202 with less accuracy than is possible using the DSU 3210 or some techniques described herein. The hardware device may be used to determine that two or more hosts 3202 are in a general vicinity of each other (e.g. within about 10 meters, about 9 meters, about 8 meters, about 7 meters, about 6 meters, about 5 meters, about 4 meters, about 3 meters, about 2 meters, about 1 meter), and the hosts 3202 may therefore observe (e.g. radio, light, sound) each other using at least some techniques, as described herein. In another example, other wireless networks, such as WiFi, Li-Fi, Zigbee, cellular, satellite, Bluetooth, or others, may be used to determine the general position of one or more hosts 3202, or to refine a position estimate from the hardware device, such as a GPS unit.
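For illustration only, a minimal Python sketch of such general-vicinity gating follows; the flat-earth approximation, function names, and coordinates are hypothetical, and the approximation is only reasonable over distances of tens of meters.

    # Hypothetical sketch: decide whether two hosts are within a general
    # vicinity (e.g., about 10 meters) from coarse geolocation estimates,
    # before the hosts observe each other via radio, light, or sound.
    import math

    def roughly_within(lat1, lon1, lat2, lon2, limit_m=10.0):
        meters_per_deg = 111_320.0  # approximate meters per degree of latitude
        dy = (lat2 - lat1) * meters_per_deg
        dx = (lon2 - lon1) * meters_per_deg * math.cos(math.radians(lat1))
        return math.hypot(dx, dy) <= limit_m

    # Two hosts roughly 5.6 meters apart in latitude:
    print(roughly_within(40.712800, -74.006000, 40.712850, -74.006000))  # True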
The DSU 3210 is optional and can include a radar unit, a lidar unit, a sonar unit, or others, whether wired or wireless. For example, the radar unit can include a digital radar unit (DRU), as disclosed in U.S. Pat. 9,019,150, which is incorporated by reference herein for all purposes including any DSU or DRU systems, structures, environments, configurations, techniques, algorithms, or others. For example, the DRU can be implemented as a software technique via a DSU, which can avoid extra hardware. Note that the system 3100 can include more than one DSU 3210 up to DSU n, which can be hosted via the host 3202 or distributed among the host 3202 and other structures, whether local to or remote from each other. For example, the system 3100 can include at least two, three, four, five, six, seven, eight, nine, ten, scores, tens, fifties, hundreds, thousands, millions, or more DSU 3210 embodied as DSU n. For example, the system 3100 can include a cluster of DSU 3210. Therefore, in such configurations, the DSUs 3210 are not in sync with each other (but could also be in sync) and do not interfere with each other, but are able to receive echoes or signals from each other, as explained in U.S. Pat. 9,019,150 referenced above and incorporated by reference herein for all purposes including any DSU 3210 or DRU systems, structures, environments, configurations, techniques, algorithms, or others. For example, in such configurations, the DSU 3210 can be identical to or different from each other in structure, function, operation, modality, positioning, materials, or others.
The networking unit 3212 can send or receive a communication packet (e.g. radio, acoustic, light) in a wired, wireless, or waveguide manner. The networking unit 3212 can include a transmitter, a receiver, or a transceiver. The communication packet can include text, audio, video, files, or streams. Alternatively, the networking unit 3212 may transmit non-packetized data or signals in many other ways. For example, the networking unit 3212 may output an analog frequency in a certain range that is proportional to at least some data that is being transmitted.
In block 3302, a first distance reading(s) and a first inertia reading(s) are generated at a first client. For example, the first client can generate a plurality of first distance readings or a plurality of first inertia readings, either in real-time or non-real-time. For example, the first distance readings can be generated concurrently or non-concurrently, such as spaced apart by one or more nanoseconds, microseconds, milliseconds, seconds, minutes, hours, or more. The first distance reading(s) can be generated by a DSU in real-time, as described herein (e.g., off a vertically extending surface, a wall, a column). The first inertia reading(s) can be generated via an IMU in real-time, as described herein. The first client can host the DSU and the IMU, as described herein. The first client is positioned within a defined area, as described herein. The first client can be a mobile device, a wearable, or a vehicle. For example, this first distance reading(s) may include a distance(s) to one or more objects 3108 that may include the second host 3202 or at least one of the first client 3110, the second client 3112, or the third client 3114.
In block 3304, a second distance reading(s) and a second inertia reading(s) are generated at a second client. For example, the second client can generate a plurality of second distance readings or a plurality of second inertia readings, either in real-time or non-real-time. For example, the second distance readings can be generated concurrently or non-concurrently, such as spaced apart by one or more nanoseconds, microseconds, milliseconds, seconds, minutes, hours, or more. The second distance reading(s) can be generated by a DSU in real-time, as described herein (e.g., off a vertically extending surface, a wall, a column). For example, the first distance reading and the second distance reading can be off different sections of a same wall (e.g., spaced apart) or other vertically extending surfaces or two different walls (e.g., spaced apart) or other vertically extending surfaces. The second inertia reading(s) can be generated via an IMU in real-time, as described herein. The second client can host the DSU and the IMU, as described herein. The second client is positioned within the defined area, as described herein. The second client can be a mobile device, a wearable, or a vehicle. For example, the second distance reading(s) may include a distance(s) to one or more objects 3108 that may include the first host 3202 or at least one of the first client 3110, the second client 3112, or the third client 3114.
The first distance reading(s) or the second distance reading(s) can be based on the second client or the first client (e.g. the first client senses distance to the second client or vice versa). The first distance reading(s) or the second distance reading(s) can be based on an object (e.g. building feature, furniture) other than the first client and the second client (e.g. the first client or the second client distance senses the object). The object can be stationary or in motion within the defined area. The object can be stationary or in motion outside the defined area.
In block 3306, the first distance reading(s) and the first inertia reading(s) are sent from the first client to a server. The first client sends the first distance reading(s) and the first inertia reading(s) via a first set of data in real-time over a network, as described herein. The server receives the first set of data in real-time from the first client positioned within the defined area.
In block 3308, the second distance reading(s) and the second inertia reading(s) are sent from the second client to the server. The second client sends the second distance reading(s) and the second inertia reading(s) via a second set of data in real-time over the network, as described herein. The server receives the second set of data in real-time from the second client positioned within the defined area.
In block 3310, the server performs a data fusion of the first distance reading(s), the first inertia reading(s), the second distance reading(s), and the second inertia reading(s). The data fusion (e.g. association, linking, relating) can be performed in real-time based on a specific time or a specific time range (e.g. relative to frequency of scene changing as measured in microseconds or milliseconds) at which the first distance reading(s), the first inertia reading(s), the second distance reading(s), and the second inertia reading(s) were collected. For example, if the first distance reading(s), the first inertia reading(s), the second distance reading(s), and the second inertia reading(s) are generated, as disclosed herein, and sent to the server 3104 within a short amount of time (e.g. under about 60 seconds, about 45 seconds, about 30 seconds, about 15 seconds, about 10 seconds, about 5 seconds, about 1 second, about 0.1 seconds, about 0.001 seconds, about 0.0001 seconds), then the server 3104 can associate (e.g. relate, link) the first distance reading(s), the first inertia reading(s), the second distance reading(s), and the second inertia reading(s) with each other. For example, a short amount of time would be considered an amount of time where at least one of the first client 3110, the second client 3112, or the third client 3114 or other objects 3108 within at least some observation distance (e.g. radio, light, sound) from the first client 3110, the second client 3112, or the third client 3114 do not move substantially (e.g. less than about 10 meters, about 9 meters, about 8 meters, about 7 meters, about 6 meters, about 5 meters, about 4 meters, about 3 meters, about 2 meters, about 1 meter, about 0.5 meter, about 0.1 meter), for example, microseconds, milliseconds, or even seconds, depending on use case. For example, suppose the first client 3110 uses an omnidirectional antenna (or another type of a suitable antenna), and measures a target (e.g., the object 3108) at a distance of 400 cm (called target AA for this example), at an unknown azimuthal angle, and the first client 3110 also measures a target (e.g., another object 3108) at a distance of 300 cm (called target BB for this example), at an unknown azimuthal angle, whether similarly or dissimilarly. Also, the first client 3110 makes a measurement of its inertia (e.g., via an onboard accelerometer, gyroscope, or compass) and determines its bearing and motion; for this example, suppose the first client 3110 measures no acceleration and that the first client 3110 is pointed North. The first client 3110 has an identification code that may be unique thereto (e.g., relative to other clients in communication with the server 3104), either locally or globally. The first client 3110 creates an internal data structure (e.g., an array, a vector, a tree, a linked-list, a hash, a queue, a deque, a stack, a graph) to contain this observed data. The first client 3110 inserts (e.g., writes) the individual observations into the internal data structure as pairs of values, one containing one piece of observed data and one containing the timestamp at the first client 3110 corresponding to the one piece of observed data. Each client 3110 or 3112 may have a local time standard or clock, and these may or may not be synchronized to each other or to the server. At a later time, the first client 3110 then repeats its omnidirectional distance measurement and finds one target at a range of 395 cm, and another at 300 cm.
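A minimal sketch of such an internal data structure follows (Python; the field names and layout are assumptions for illustration, not a prescribed format): each observation is stored as a value paired with the client's local timestamp.

```python
import time

def make_observation_record(client_id, distances_cm, bearing_deg, accel_mps2):
    """Pack one round of observations into (name, value, local-timestamp)
    entries, per the internal data structure described above."""
    now = time.time()  # local clock; may not be synchronized to the server
    entries = [("distance_cm", d, now) for d in distances_cm]
    entries.append(("bearing_deg", bearing_deg, now))
    entries.append(("accel_mps2", accel_mps2, now))
    return {"client_id": client_id, "observations": entries}

# First round: targets AA (400 cm) and BB (300 cm), pointed North, at rest.
round_1 = make_observation_record("client-3110", [400, 300], 0.0, 0.0)
# Later round: the AA-like target now appears at 395 cm.
round_2 = make_observation_record("client-3110", [395, 300], 0.0, 0.0)
```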
The first client 3110 again creates another data structure (or populates the previous one) and inserts the observed data into the data structure in a manner similar to that described above. At substantially the same time as the first client 3110 is making its measurements of distance, direction, and acceleration, the second client 3112 also makes one or more measurements of at least its inertia and bearing, whether similarly or dissimilarly. For this example, the second client 3112 determines that the second client 3112 is pointing South and has travelled 5 cm to its right, or West. The second client 3112 also measures the distance to various targets that are within range and finds two targets, one at 400 cm and another at 550 cm, whether similarly or dissimilarly. The second client 3112 repeats its distance measurements and finds two objects at 395 cm and 547 cm, whether similarly or dissimilarly. The second client 3112 creates a data structure and inserts its observed data into the data structure in a manner similar to that described above. The first client 3110 and the second client 3112 may transfer their data to a server or to each other, or one may collect data from the other. Prior to the transfer, each of the first client 3110 and the second client 3112 also inserts its identification code into another record in its respective data structure, along with its local timestamp when performing its respective transmission. For this example, the server 3104 receives the data structures created by the first client 3110 and the second client 3112 and records its local timestamp when that data is received. The server examines the identification codes that are a part of a data structure to separate the data according to which client created the data. The server then examines the timestamp of the data relative to its local timestamp when the server received its respective transmission to determine which pieces of data were collected by the clients at substantially similar times. The times are deemed substantially similar if those times are within an amount of time less than a threshold, which can be static or dynamic. The threshold can be set according to various criteria. For example, the threshold can be set such that no physical objects under possible observation could have moved more than a certain amount, such as tens of microns, a millimeter, a centimeter, a meter, etc. For example, if a maximum speed of any object 3108 is assumed to be 50 meters per second (e.g., preprogrammed) and the timestamps are found to be within 1 millisecond of each other, then no object could have moved more than 5 cm between the measurements, as explained above.
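A sketch of this thresholding logic (Python; the speed cap and motion budget are the values assumed in the example above):

```python
def substantially_similar(t1_s, t2_s, max_speed_mps=50.0, motion_budget_m=0.05):
    """Deem two observation timestamps fusable if, under an assumed
    (e.g., preprogrammed) maximum object speed, nothing could have
    moved more than the motion budget between them."""
    worst_case_motion_m = max_speed_mps * abs(t1_s - t2_s)
    return worst_case_motion_m <= motion_budget_m

# Timestamps 1 ms apart at a 50 m/s speed cap: at most 5 cm of motion.
print(substantially_similar(10.000, 10.001))  # True
print(substantially_similar(10.000, 10.500))  # False
```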
The server determines that Client 1 (e.g., the first client 3110) observed two separate objects (e.g., the objects 3108), one that is stationary and another that moved 5 cm closer. The server also determines that Client 2 (e.g., the second client 3112) observed two stationary clients or objects while Client 2 moved 5 cm. With these observations and the measured orientations of Client 1 and Client 2, the server is then able to deduce the relative positions of the clients: Client 2 is 395 cm to the East of Client 1. There is another stationary object that is observed by Client 1 and Client 2, called BB in this example, and the server is able to use triangulation to determine two possible locations for the object, one generally North of Client 1, and one generally South of Client 1. At this point, the server may or may not be able to determine which is the actual location of the object with the information gathered so far. At a later time, Client 1 measures two targets at 300 cm and 396 cm via a radar, as disclosed herein, and its bearing as North and no motion. Client 2 measures two objects, at 545 cm and 396 cm, via a radar, as disclosed herein, and determines that Client 2 moved forward a few centimeters and that its bearing is South. All these observations are again transmitted to the server. The server examines the timestamps of all the newly collected pieces of data, which can include differencing, and determines that the differences between them (e.g., the timestamps) are less than the threshold that was set previously. The server updates its model of the relative positions of the clients and is now able to determine the location of the other stationary target, BB, unambiguously. The server determines that target BB is in the position to the South of the first client 3110.
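The North/South ambiguity for target BB can be seen in a simple circle-intersection sketch (Python; the coordinate frame and helper names are illustrative):

```python
import math

def range_intersections(p1, r1, p2, r2):
    """Return the 0, 1, or 2 points at distance r1 from p1 and r2
    from p2 (2-D), i.e., the candidate target locations."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # no intersection (or circles coincide)
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    if h == 0:
        return [(mx, my)]
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

# Client 1 at the origin, Client 2 395 cm to the East; target BB is
# 300 cm from Client 1 and 547 cm from Client 2: two mirror solutions,
# one North and one South of the Client 1/Client 2 baseline.
print(range_intersections((0, 0), 300, (395, 0), 547))
```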
The threshold can be set in various ways. In the above example, each piece of data receives three timestamps, namely, (1) the time it was collected by the client, (2) the time at which the client transmitted the data to the server, and (3) the time at which the server received the data. But due to the lack of synchronization between the clocks at the client and the server and the time for reception and transmission of the data, in some embodiments, the time threshold cannot be made very tight and allows for more possible motion or uncertainty between measurements. In another embodiment, when Client 1 transmits its signal for radar measurements, Client 1 embeds its local timestamp and its identification number as well as a unique identifier for that particular transmission, such as a UUID. Client 1 performs its radar operation normally, as described herein, but now Client 2 may also receive the radar transmission of Client 1. Client 2 decodes the signal and extracts the identification code of Client 1 and the UUID of the packet and the timestamp of the packet. Client 2 also records its local timestamp when the packet is received. This reception observation is also recorded by Client 2 and transmitted to the server. This observation can be used to refine the estimate of the relative observation times of the pieces of data and therefore reduce the threshold time for determining that observations are close enough in time. This allows for less uncertainty between measurements. Since the distance between Client 1 and Client 2 may also be measured, along with the relative timestamps, the travel time from Client 1 to Client 2 can also be calculated using the speed of light.
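Under these assumptions, a sketch of the refinement arithmetic (Python; names are illustrative): once the inter-client distance is known, the radio travel time can be backed out of the embedded and local timestamps, leaving the residual clock offset. The sketch ignores receiver processing latency.

```python
SPEED_OF_LIGHT_CM_PER_NS = 30.0  # approximate

def clock_offset_ns(embedded_tx_ns, local_rx_ns, inter_client_distance_cm):
    """Residual offset between Client 1's and Client 2's clocks after
    removing the direct-path travel time from the measured delay."""
    travel_ns = inter_client_distance_cm / SPEED_OF_LIGHT_CM_PER_NS
    return local_rx_ns - embedded_tx_ns - travel_ns

# Clients 395 cm apart -> ~13.2 ns of flight; the remainder is clock skew.
print(clock_offset_ns(1_000_000, 1_000_250, 395.0))  # ~236.8 ns of skew
```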
In block 3312, the server determines a first position of the first client based on the data fusion or a second position of the second client based on the data fusion or the position of both the first and second clients, or the relative positions of the first and second clients (e.g. to each other). The server can determine the first position of the first client in real-time within the defined area based on the data fusion and the second position of the second client in real-time within the defined area based on the data fusion. The first position or the second position can be expressed as a set of coordinates. For example, the server 3104 can compare an inertia reading (e.g. first inertia reading(s)) to a plurality of distance measurements (e.g. first distance reading(s) and second distance reading(s)) in order to determine a relative motion of the second client 3112 based on the second distance reading(s) to or from the first client 3110. The server 3104 can then compare the relative motion of the second client 3112 to the distance measurements of another client (e.g. the first client 3110). For example, the server 3104 can determine from the first and second inertia readings that the second client 3112 moved straight ahead a distance of 1 meter, and the server 3104 can determine that the first client 3110 measured an object to its right that moved from 4 meters away to 3 meters away (e.g. a distance moved of 1 meter). The server 3104 can then determine that the second client 3112 is positioned to the right of the first client 3110, and that the first client 3110 and the second client 3112 are 3 meters apart. The server 3104 may also be able to determine at least some relative orientations of the clients 3110 or 3112, based on the data. Continuing the above example, the server 3104 may determine that the second client 3112 is facing the first client 3110, and the second client 3112 is to the right side of the first client 3110. For example, as described above, suppose the first client 3110 uses an omnidirectional antenna (or another suitable type of antenna), and measures a target (e.g., the object 3108) at a distance of 400 cm (called target AA for this example), at an unknown azimuthal angle, and the first client 3110 also measures a target (e.g., another object 3108) at a distance of 500 cm (called target BB for this example), at an unknown azimuthal angle. Also, the first client 3110 makes a measurement of its inertia (e.g., via an on-board accelerometer, gyroscope, or compass) and determines its bearing and motion; for this example, suppose the first client 3110 measures no acceleration and that the first client 3110 is pointed North. At a later time, the first client 3110 then repeats its omnidirectional distance measurement and finds one target (e.g., object 3108) at a range of 395 cm, and another target (e.g., object 3108) at 500 cm. At substantially the same time as the first client 3110 is making its measurements of distance, direction, and acceleration, the second client 3112 also makes one or more measurements of at least its inertia and bearing. For this example, the second client 3112 determines that the second client 3112 is pointing South from its compass reading and has travelled 5 cm to its right, or West. The second client 3112 also measures the distance to various targets (e.g., objects 3108) that are within range and finds two targets (e.g. objects 3108), one at 400 cm and another at 850 cm.
The second client 3112 repeats its distance measurements and finds two objects (e.g., objects 3108) at 395 cm and 847 cm. The first client 3110 and the second client 3112 may send their data to a server or to each other, or one may collect data from the other. For this example, the server receives all the data and fuses the data, as described above; the server has other information that the first client 3110 and the second client 3112 are in the same area, and the server has also stored the identification codes of the first client 3110 and the second client 3112. The server extracts the observations from the data structures and matches the observations to each other, performing data fusion, as described above. Since the server knows that the first client 3110 and the second client 3112 are within range of each other, the server looks for symmetric observations that are within range of each other and trajectories that match as well. In this example, both clients 3110 and 3112 measure another target at 400 cm and the first client 3110 observes that the target moves 5 cm at the next measurement, while the first client 3110 was still. The second client 3112 measured its own motion as 5 cm over the same time as the first client 3110 and the ending distance observed by both clients 3110 and 3112 is the same, namely 395 cm. The server performs a motion compensation for the second client 3112 and then provides all these observations to a Kalman filter to determine the locations of all the objects 3108 that are observed, and how many unique objects 3108 there are. Since the first client 3110 and the second client 3112 are also targets observed by the other, the server also attempts to assign identities to the targets. The Kalman filter ultimately tries to assign a physical state vector to each object and client in the scene; this state vector may include components for the three-dimensional position, orientation, relative or absolute velocity, rotational rate, or others. In this example, the server determines that the second client 3112 was actually target AA that was observed by the first client 3110, and that the second client 3112 also observed the first client 3110 as one of its targets, since the measured distances to the targets match and the trajectory measured by the IMU of the second client 3112 matches the trajectory measured by the first client 3110. The other target that was observed by both clients 3110 and 3112 is also determined to be in the area and the distance measurements to the target are associated with each other, along with the inertial readings of each client 3110 and 3112. The server also determines that the second client 3112 is to the right, or East, of the first client 3110.
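As one possible realization of such a filter, a deliberately minimal one-dimensional constant-velocity Kalman filter over fused range measurements is sketched below (Python with NumPy; the noise values are placeholders). The full server-side filter described above would carry a richer state vector (three-dimensional position, orientation, velocity, rotational rate) per object.

```python
import numpy as np

def kalman_track(ranges_cm, dt=1.0, meas_var=4.0, accel_var=1.0):
    """Track one target's range and range-rate from a sequence of
    fused range measurements; returns the final [range, rate]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])             # state transition
    H = np.array([[1.0, 0.0]])                        # measure range only
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])    # process noise
    R = np.array([[meas_var]])                        # measurement noise
    x = np.array([[ranges_cm[0]], [0.0]])             # initial state
    P = np.eye(2) * 100.0                             # initial uncertainty
    for z in ranges_cm[1:]:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        y = np.array([[z]]) - H @ x                   # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x, P = x + K @ y, (np.eye(2) - K @ H) @ P     # update
    return x.ravel()

print(kalman_track([400.0, 395.0, 390.0]))  # range closing ~5 cm per step
```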
In block 3314, the server takes an action based on the first position and the second position. The action can be taken in real-time. The action can be over a network, as described herein. The action can be local or remote. When the first client is unaware of the second client before the data fusion and the second client is unaware of the first client before the data fusion, the action can include making the first client or the second client aware of the second client or the first client, respectively. The action can include refining the first position or the second position in real-time based on receiving, via the server, a set of data in real-time from the first client or the second client. The set of data can include an inertia reading in real-time from the first IMU or the second IMU. The set of data can include a distance reading(s) in real-time from the first distance sensor or the second distance sensor.
The action can include enabling a content to be output, such as via an output device. The content can be based on the first position or the second position. The content can include an audio containing a warning message, a direction message, a navigational content, an instructional content, or others including any permutational combinations thereof.
The action can include enabling a content to be modified, such as the content stored on a memory or the content output via an output device. The content can be based on the first position or the second position. The content can include a graphic containing a warning message, a direction message, a navigational content, an instructional content, or others including any permutational combinations thereof.
The content that is output via the output device can include an augmented reality content based on the first position or the second position. The augmented reality content can include at least one of images or sound based on the first position or the second position. The augmented reality content can be a navigational content, a warning content, a directional content, an instructional content, a videogame content, an immersive experience content, an educational content, a shopping content, or others including any permutational combinations thereof. The augmented reality content can be modifiable based on the first position or the second position in real-time.
The content that is output via the output device can be a virtual reality content based on the first position or the second position. The virtual reality content can include at least one of images or sound based on the first position or the second position. The virtual reality content can be a navigational content, a warning content, a directional content, an instructional content, a videogame content, an immersive experience content, an educational content, a shopping content, or others including any permutational combinations thereof. The virtual reality content can be modifiable based on the first position or the second position in real-time. When the first client or the second client is an eyewear unit, the virtual reality content can help a wearer of the eyewear unit to avoid obstacles, such as by minimizing walking into an obstacle.
In block 3316, the action can include the server sending the first position and the second position to the first client and the second client over a network, as described herein. The action can include the server sending the first position or the second position to the first client or the second client over a network, as described herein.
The action can include sending the second position to the first client or the second client. The action can include sending the first position and the second position to the first client and the second client.
In block 3318, the action can include the server requesting a third client to take an action. The third client can take the action locally or remotely. The third client can take the action over a network, as described herein. The third client is other than the first client and the second client. The third client can be in motion or stationary within the defined area. The third client can be in motion or stationary outside the defined area. The third client can be embodied as the device 3200. The data fusion can be performed involving a third set of data received from a third client other than the first client and the second client when the third client is stationary or in motion within the defined area or outside the defined area. The third set of data can include a third position, and at least one of the first position or the second position can be determined based on the third position. The third position can be of the third client or another client or object.
In block 3320, the action can include the server requesting an input device or an output device to take an action. The input device can include a camera, a microphone, a user input interface, a touch-enabled display, a receiver, a transceiver, a sensor, a motor, a valve, a hardware server, or others including any permutational combinations thereof. The output device can include a display, a speaker, a vibrator, an actuator, a valve, a pump, a motor, a transmitter, a transceiver, a hardware server, or others including any permutational combinations thereof.
In block 3322, the action can include the server creating or modifying a data structure. The data structure can include an array, a linked list, a queue, a stack, a deque, a tree, a file, a database record, a digital map, a log, or others. The data structure can be modified via an add operation, a remove operation, an edit operation, a deletion operation, a sort operation, an update operation, or others. The data structure can be local or remote from the server. The data structure can be modified to include information about the first position or the second position.
In block 3324, the action can include the server informing a client of an area out of distance sensing range. The area can be outside the defined area or within the defined area. For example, the server may determine that the first client 3110 or the second client 3112 may not be able to observe the entire defined area 3106, which could be due to a limited sensing range of the client, occlusion by obstacles, objects with low reflectivity, a limited scanning angle, or other reasons. The first client 3110 or the second client 3112 may then not be able to observe the defined area 3106 in full. As such, the action can include informing the first client 3110 or the second client 3112 about the portion that the first client 3110 or the second client 3112 cannot observe. For example, if the defined area 3106 is a long, rectangular (although other shapes are possible) room (or another defined area), then the first client 3110 and the second client 3112 may be located near the opposite ends of the room. The first client 3110 may not be able to observe the far wall of the room behind the second client 3112. When the first client 3110 and the second client 3112 share their observations with the server 3104, the server 3104 can determine that the first client 3110 is not able to observe the far wall of the room. The server 3104 can then inform the first client 3110 of the relevant observations from the second client 3112.
In block 3326, the action can include the server requesting a client to move. The client can be a vehicle (e.g. land, aerial, marine, space). The movement can be rectilinear, curved, arcuate, uniform, accelerating, decelerating, turning, rotating, tilting, pivoting, or others. The movement can involve an electric motor (e.g. brushed, brushless), a combustion engine, a turbine, an actuator, a pulley, a gear, or others.
Note that the first distance sensor can encode generalized digital data in its transmitted radar signal. The generalized digital data may be a timestamp or an identification code of a client. The generalized digital data can be a unique coded signal that identifies that particular transmission packet, such as a universally unique identifier (UUID) code. The second distance sensor can receive and decode the radar signal transmitted by the first distance sensor to retrieve, decode, or read the generalized digital data and an echo based on the radar signal. The second set of data includes a set of information formed based on the decoded radar signal and the generalized digital data and the echo. The receiver (e.g. DSU 3210, IMU 3208) may apply a mask (e.g. logic that removes or changes bits in order to control signal processing) to the received signal to select the various parts of the signal for different types of processing, as described herein. The data fusion involves the set of information.
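A sketch of embedding such generalized digital data in a transmission payload (Python; the wire format, field widths, and byte order here are assumptions for illustration, not a format required by this disclosure):

```python
import struct
import time
import uuid

def encode_payload(client_id: int) -> bytes:
    """Client ID + local transmit timestamp (ns) + per-packet UUID,
    as the generalized digital data carried by the radar signal."""
    return struct.pack(">IQ", client_id, time.time_ns()) + uuid.uuid4().bytes

def decode_payload(payload: bytes):
    """What a receiving distance sensor recovers from the direct
    signal or its echo; both carry the same embedded data, which is
    how an echo is associated with its originating transmission."""
    client_id, tx_ns = struct.unpack(">IQ", payload[:12])
    return client_id, tx_ns, uuid.UUID(bytes=payload[12:28])

packet = encode_payload(3110)
print(decode_payload(packet))
```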
Note that although these configurations are disclosed here in the context of clients 3110 and 3112 each having both a DSU and an IMU, other configurations are possible. For example, the first client 3110 can generate at least two distance readings for matching those to at least one inertia reading generated by the second client 3112. For example, in one configuration, the first client 3110 can have at least a DSU 3210 and no IMU 3208 and the second client 3112 can have at least an IMU 3208 and no DSU 3210.
In block 3402, a first distance sensor of a first client transmits a wireless signal inclusive of a content. For example, the content includes a transmit radar signal. The wireless signal may encode extra information, such as a timestamp or an identification code for a distance sensor emitting the wireless signal, or a unique coded signal that identifies that particular packet, such as a UUID code. Also, the wireless signal can enable performance of distance measurement at the first distance sensor. The wireless signal can be a unique coded signal. For example, a server may be programmed to allow, in real-time, the first client 3110 having a first distance sensor 3210 to transmit a distance sensing wireless signal such that the distance sensing wireless signal causes an echo, where the distance sensing wireless signal includes an identification code and the echo includes the identification code.
In block 3404, a second distance sensor of a second client receives and decodes the wireless signal and an echo based on the wireless signal. The echo can be off an object in operational proximity of the first client or the second client. For example, the echo is based on the wireless signal transmitted by the first client 3110. The object 3108 can be other than the first client 3110 and the second client 3112. The first client 3110 or the second client 3112 can be a mobile device, a wearable, or a vehicle. The object 3108 can be stationary or in motion. The echo signal is associated with the transmitted signal from the first client 3110 due to the digital information that the echo contains, such as the identification code or UUID. For example, the server may be programmed to allow, in real-time, the second client 3112 having a second distance sensor 3210 to directly receive the distance sensing wireless signal and the echo such that the second client 3112 real-time generates a set of data based on the distance sensing wireless signal including the identification code and the echo including the identification code, where the second distance sensor has an operational wireless sensing distance and the first client 3110 is positioned within the wireless operational distance.
In block 3406, the second client forms a set of information based on the wireless signal and the echo. The second client can form the set of information in real-time. The second client may receive a direct-path transmission from the first client or the second client may receive an echo from an object, or both. Any of the direct-path transmission or the echo from the object signal that is received by the second client can be associated with a particular transmission from the first client due to the embedded digital data, such as a timestamp or a UUID code or others. The second client (e.g. via a receiver) may decode the received signal to retrieve the encoded bitstream. The second client (e.g., via a receiver) may also record the time according to its own clock, when the signals were received. The second client (e.g., via a receiver) may send the decoded bits directly to the server for further processing. The second client may process the bitstream, such as through a correlation to a known bit sequence, to determine the time at which the second client received the signals, according to its own clock. The second client may also determine the relative difference in the time of flight between the direct path and the echo. The second client may transmit any of the decoded information to the server as well. For example, assume that the first client 3110 transmits a packet that contains a unique identification code, such as a UUID code, as well as a timestamp corresponding to the clock of the first client 3110. The direct transmission is received by the second client 3112 as well as another echo that is delayed by, for example, 10 ns from the direct path. The second client 3112 forms a set of information based on the timestamps and UUID code embedded in the received packets.
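A sketch of the set of information the second client might form (Python; the record fields are illustrative): only the difference between the direct-path and echo arrival times matters, so no clock synchronization is needed to obtain the excess delay.

```python
def form_information_set(packet_uuid, embedded_tx_ns, direct_rx_ns, echo_rx_ns):
    """Bundle the decoded packet data with local reception times and
    the relative (direct-vs-echo) time of flight for the server."""
    return {
        "uuid": packet_uuid,
        "embedded_tx_ns": embedded_tx_ns,   # Client 1's clock, as embedded
        "direct_rx_ns": direct_rx_ns,       # Client 2's local clock
        "echo_rx_ns": echo_rx_ns,           # Client 2's local clock
        "excess_delay_ns": echo_rx_ns - direct_rx_ns,
    }

# As in the example above: the echo trails the direct path by 10 ns.
info = form_information_set("example-uuid", 1_000_000, 1_000_040, 1_000_050)
print(info["excess_delay_ns"])  # 10
```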
In block 3408, the second client sends the set of information to a server. The set of information can be sent over a network, as described herein. The second client can send the set of information in real-time. The second client can send the set of information in an encrypted manner. The set of information can include alphanumerics, image, video, audio, or others. The server receives the set of information from the second client. The first client can generate a set of information based on the wireless signal and send the set of information to the server, as well. The server may be contained within the first or second client. The first client may also send its record of transmitted data sets to the server. The first client may also send its observations of echo signals to the server. For example, the server may be programmed, in real-time, to receive the set of data from the second client 3112, to determine, in real-time, a first real-time position of the first client 3110 based on the set of data and a second real-time position of the second client 3112 based on the set of data.
In block 3410, the server determines a first position of the first client, a second position of the second client, or a third position of an object based on the set of information. The first position or the second position or the third position can be within or outside a defined area, as described herein. The server may also determine the position of the object associated with the echo signals. The server may have a priori knowledge (e.g. read from memory) of the positions or relative positions of the first and second clients. For example, the DSU of the first or second clients may measure a range to an object, but not measure a relative angle of a location of the object. By comparing the received signals of the first and second clients, the server may determine the relative angle of the position of the object. For example, the first client 3110 and the second client 3112 may be located close to one another (e.g., within distance sensing distance of each other). The first client 3110 wirelessly transmits a packet with an embedded timestamp and a UUID code. The second client 3112 is located very close to the first client (e.g., about 30 cm away, about 50 cm away), but does not have direct communication with the first client 3110. The second client 3112 receives a direct path transmission from the first client 3110, delayed only about 1 ns (or another time instance) from its transmission by the first client 3110. The transmission also reflects from an object 3108 and the receiver receives an echo delayed by 21 ns (or another time instance). The receiver then determines that the distance to the target is approximately ((21 ns - 1 ns)/2)*(30 cm/ns)=300 cm. In another example, the first client 3110 wirelessly transmits a signal with an encoded timestamp and a UUID code. The first client 3110 determines that the range to a target (e.g., object 3108) is about 2 meters. The second client 3112, which in this example is about 2 meters from the first client 3110, receives a direct path transmission as well as an echo that is delayed by about 8 ns (or another time instance) relative to the direct path transmission, corresponding to (8 ns)*(30 cm/ns)=240 cm of excess path length. The first client 3110 and the second client 3112 transmit their data and observations to a server, which fuses or associates the sets of data in a manner similar to that described above. The server then determines that the reflected path (first client to target to second client) is about (200 cm + 240 cm)=440 cm long, so the target that is about 2 meters from the first client 3110 is also (440 cm - 200 cm)=2.4 m from the second client 3112. Using triangulation, the server can calculate two points that are about 2 meters from the first client 3110 and about 2.4 meters from the second client 3112, so the target should be at one of these two locations. The server may use other information, such as readings that indicate the orientation of the clients 3110 and 3112 and the directionality of the antennas on the clients 3110 and 3112 to determine which of the two possible locations is the actual location of the target. For example, consider that both clients 3110 and 3112 are facing North and have antennas with half-hemispherical antenna patterns; then only one of the two possible points for the target is within the field of view of the clients 3110 and 3112, so the server is able to assign a unique location for the target. For example, the server may be programmed to send, in real-time, the first real-time position to the first client 3110 or the second client 3112 or the second real-time position to the first client 3110 or the second client 3112.
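Two pieces of the above arithmetic as a sketch (Python; helper names are illustrative): the co-located case halves the excess delay because the signal travels out and back, and a half-plane field-of-view test stands in for the half-hemispherical antenna pattern used to pick between the two triangulated candidates.

```python
def colocated_range_cm(excess_delay_ns, c_cm_per_ns=30.0):
    """Clients side by side: the echo path is out-and-back, so the
    one-way range is half the excess path length."""
    return (excess_delay_ns / 2.0) * c_cm_per_ns

print(colocated_range_cm(20.0))  # 300 cm, as in the first example above

def in_field_of_view(point_xy, facing_north=True):
    """Half-plane stand-in for a half-hemispherical antenna pattern."""
    return point_xy[1] >= 0 if facing_north else point_xy[1] <= 0

# Of two triangulated candidates, keep the one both clients can see.
candidates = [(120.0, 160.0), (120.0, -160.0)]
print([p for p in candidates if in_field_of_view(p)])  # [(120.0, 160.0)]
```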
For example, the server can determine the first position of the first client, the second position of the second client, or the third position of the object based on the set of information, as disclosed in block 3310 and block 3312.
In block 3412, the server sends the first position or the second position to the first client or the second client. The first position can be sent to the second client or the first client. The second position can be sent to the first client or the second client. The server may send the position or angle of the object to the first client or the second client. The server can send the first position or the second position over a network, as described herein. The server can request a third client to take an action. The third client can be stationary or in motion within or outside a defined area. The server can request an input device or an output device to take an action. The input device can include a camera, a microphone, a user input interface, a touch-enabled display, a receiver, a transceiver, a sensor, a motor, a valve, a hardware server, or others including any permutational combinations thereof. The output device can include a display, a speaker, a vibrator, an actuator, a valve, a pump, a motor, a transmitter, a transceiver, a hardware server, or others including any permutational combinations thereof. The server can create or modify a data structure. The data structure can include an array, a linked list, a queue, a stack, a deque, a tree, a file, a database record, a digital map, or others. The data structure can be modified via an add operation, a remove operation, an edit operation, a deletion operation, a sort operation, an update operation, or others. The data structure can be local or remote from the server.
The first client or the second client can include a land vehicle, such as an automobile, a motorcycle, a bus, a truck, a skateboard, a moped, a scooter, a bicycle, a tank, a tractor, a rail car, a locomotive, a wheelchair, a vacuum cleaner, or others including any permutational combinations thereof, where the land vehicle hosts a distance sensor as described above. The land vehicle can collect a set of data from the distance sensor and share the set of data, which can be in real-time, with a land vehicle infrastructure item, such as a gas station, a charging station, a toll station, a parking meter, a drive-through-commerce station, an emergency service vehicle, a vehicle, which can be via a V2V protocol, a garage, a parking spot, a hydrant, a street sign, a traffic light, a load cell, a road-based wireless induction charger, a fence, a sprinkler, a beacon, or others including any permutational combinations thereof. When the land vehicle infrastructure item also hosts a distance sensor, then that distance sensor can also collect a set of data and share that set of data, which can be in real-time, with the land vehicle, as explained above. Such configurations can detect discrepancies, such as objects that the land vehicle infrastructure item is not aware of or does not know enough about. Also, as explained above, the land vehicle with the distance sensor can detect and thereby track consumer communication units, whether internal or external to the land vehicle, such as Wi-Fi enabled devices, such as smartphones, tablets, wearables, infotainment units, video gaming consoles, toys, or others including any permutational combinations thereof, in order to determine its position or a position of a consumer communication unit. For example, the land vehicle with the distance sensor can track its position relative to a plurality of consumer communication units based on where the consumer communication units are typically positioned. As such, when a density or frequency of the consumer communication units is increased or decreased from a typical amount, then the land vehicle with the distance sensor can take an action or avoid taking an action, such as changing speed, slowing down, accelerating, stopping, operating a component of the land vehicle (e.g. a window, an infotainment system), sounding a horn, a siren, or an alarm, opening/closing a door, a trunk, or a hood, turning on windshield wipers, turning on regular or high beam lights, activating/deactivating a parking brake, navigating on a road, swerving, turning, or others including any permutational combinations thereof.
Some embodiments of cooperative positioning can include a distance estimation using visible light communications for LTE 5G systems (or other radio or light communication networks). As such, these embodiments are described in an attached disclosure marked as Exhibit A.
In some embodiments, there can be a weak reflection (e.g., radar, optical, sound) to Client 1 and a better reflection to Client 2, and based on such a state the server can enable a determination of a location of an object (e.g., the object 3108). For example, in context of the method 3400, in block 3410, the server determines a first position of the first client, a second position of the second client, or a third position of an object based on the set of information. The first position or the second position or the third position can be within or outside a defined area, as described herein. The server may also determine the position of the object associated with the echo signals. The server may have a priori knowledge (e.g. read from memory) of the positions or relative positions of the first and second clients. For example, the DSU of the first or second clients may measure a range to an object, but not measure a relative angle of a location of the object. By comparing the received signals of the first and second clients, the server may determine the relative angle of the position of the object. For example, the first client 3110 and the second client 3112 may be located far from one another but still within distance sensing range of each other, or at least within range to receive each other's wireless transmissions. The first client 3110 wirelessly transmits a packet with an embedded timestamp and a UUID code. The first client measures the distance to two targets, at 1000 cm and 600 cm, but the reflection from the target at 600 cm is weak. The second client 3112 is located relatively remotely in the defined area; in this example, it is 1000 cm away. The second client 3112 receives a direct path transmission from the first client 3110, delayed by about (1000 cm/(30 cm/ns))=33.3 ns (or another time instance) from its transmission by the first client 3110, as determined by comparing the timestamp embedded in the packet to its own timestamp. The transmission also reflects from an object 3108 and the receiver receives an echo delayed by 53.3 ns (or another time instance), and due to the material composition and geometry of the object, this reflected signal is stronger than the echo signal received by the first client. The receiver then determines that the excess path length to the target is approximately (53.3 ns - 33.3 ns)*(30 cm/ns)=600 cm, so the reflected path (first client to object to second client) is (1000 cm + 600 cm)=1600 cm long; unlike the co-located example above, no halving is applied because the two legs of the reflected path are distinct. Note that the excess path length to the target can be determined accurately by the second client even if its local clock is not synchronized with the first client, since it is only the difference between the direct path and reflected signals that needs to be measured. The first client 3110 and the second client 3112 transmit their data and observations to a server, which fuses or associates the sets of data in a manner similar to that described above. The server deduces that the target is 600 cm away from the first client, and (1600 cm - 600 cm)=1000 cm away from the second client. Using triangulation, the server determines that there are two possible locations for the target. The server may use other information, such as readings that indicate the orientation of the clients 3110 and 3112 and the directionality of the antennas on the clients 3110 and 3112 to determine which of the two possible locations is the actual location of the target.
For example, consider that both clients 3110 and 3112 are facing North and have antennas with half-hemispherical antenna patterns; then only one of the two possible points for the target is within the field of view of the clients 3110 and 3112, so the server is able to assign a unique location for the target. For example, the server may be programmed to send, in real-time, the first real-time position to the first client 3110 or the second client 3112 or the second real-time position to the first client 3110 or the second client 3112.
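The bistatic bookkeeping from this example, as a sketch (Python; names are illustrative, and the geometry assumes the corrected arithmetic above): the reflected path length is the direct path plus the excess path, and it splits into a first-client-to-target leg and a target-to-second-client leg.

```python
def range_from_second_cm(direct_path_cm, excess_delay_ns,
                         range_from_first_cm, c_cm_per_ns=30.0):
    """Bistatic case: (first->target) + (target->second) equals the
    direct path plus the excess path; solve for the second leg."""
    reflected_path_cm = direct_path_cm + excess_delay_ns * c_cm_per_ns
    return reflected_path_cm - range_from_first_cm

# 20 ns of excess delay over a 1000 cm direct path, with the target
# 600 cm from the first client, as in the example above:
print(range_from_second_cm(1000.0, 20.0, 600.0))  # 1000.0 cm
```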
Various embodiments of the present disclosure may be implemented in a data processing system suitable for storing and/or executing program code that includes at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements include, for instance, local memory employed during actual execution of the program code, bulk storage, and cache memory which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
I/O devices (including, but not limited to, keyboards, displays, pointing devices, DASD, tape, CDs, DVDs, thumb drives and other memory media, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the available types of network adapters.
This disclosure may be embodied in a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, among others. The computer readable program instructions may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Words such as “then,” “next,” etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Although process flow diagrams may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function or the main function.
Features or functionality described with respect to certain example embodiments may be combined and sub-combined in and/or with various other example embodiments. Also, different aspects and/or elements of example embodiments, as disclosed herein, may be combined and sub-combined in a similar manner as well. Further, some example embodiments, whether individually and/or collectively, may be components of a larger system, wherein other procedures may take precedence over and/or otherwise modify their application. Additionally, a number of steps may be required before, after, and/or concurrently with example embodiments, as disclosed herein. Note that any and/or all methods and/or processes, at least as disclosed herein, can be at least partially performed via at least one entity or actor in any manner.
Although preferred embodiments have been depicted and described in detail herein, skilled artisans know that various modifications, additions, substitutions, and the like can be made without departing from the spirit of this disclosure. As such, these are considered to be within the scope of the disclosure, as defined in the following claims.
This patent application claims a benefit of U.S. Provisional Pat. Application 62/881,303 filed 31 Jul. 2019; which is incorporated by reference herein for all purposes.
Filing Document: PCT/US2020/044217; Filing Date: 30 Jul. 2020; Country: WO.