With the advent of cheap, bright light emitting diodes (LEDs), LED light arrays may be deployed as overhead lights in buildings, such as stores. LED light arrays have the capability to provide adequate area lighting, while being intensity modulated to communicate information, such as shopping information and the like, in a manner that is virtually imperceptible to humans. Conventional smartphones with built-in cameras provide Internet browsing and offer shopper friendly applications, such as global positioning system (GPS) store locator services. However, such applications fall short when it comes to guiding shoppers inside large superstores, for example, because GPS coverage may be lost indoors. While smartphones can capture pictures and videos, the smartphones are limited in their ability to process modulated light from overhead LED light arrays in a manner that supports intelligent applications, such as indoor position determination and guidance that may augment GPS positioning.
In the drawings, the leftmost digit(s) of a reference number identifies the drawing in which the reference number first appears.
Described below are embodiments directed to light identifier (ID) error detection and correction used in photogrammetric position determination of a light receiver relative to a light transmitter. The light ID error detection and correction embodiments are described most specifically in connection with
A light receiver, such as a smartphone equipped with a camera, records images of the light beams originating from the neighborhood of lights. The receiver determines positions of the recorded light beams in the images, and demodulates an ID from each of the recorded light beams at its determined image position. The light receiver retrieves (i) a set of neighbor IDs for each demodulated ID, i.e., a list of the neighboring lights in the neighborhood of lights, and (ii) a real-world position of the light corresponding to the demodulated ID. The light receiver cross-references the demodulated IDs against the retrieved sets of neighbor IDs to generate statistics that reveal valid demodulated IDs, and any errors in the demodulated IDs, including invalid demodulated IDs, and missing IDs, if any. The light receiver corrects the errors to produce correct IDs each indexing a real-world position that is correctly matched to one of the determined light beam positions. The light receiver may use Euclidean distance calculations to facilitate correction of the errors. Then the light receiver photogrammetrically determines a position of the receiver relative to the light transmitter based on the correctly matched real-world and determined light beam positions.
The ensuing description is divided into the following sections:
Light Arrays
Light Beam Diagram
Light Communication System Using FSOOK
Light Transmitter
Light Receiver and UFSOOK
Multi-light Transmitter
Implicit Photogrammetric Position Determination
Flowchart
Light ID Error Detection and Correction
Light Neighborhoods
Conceptual Approach based on Light Neighborhoods
Set Notation to Describe Light Neighborhoods
Generalized Treatment
Calculating Euclidean Distance
Illustrative Examples
Summary Flowcharts
Computer Processor System
Wireless Communication Receiver System
General Treatment of Photogrammetric Positioning
Computer Program, Apparatus, and Method Embodiments
Light Arrays
Light Beam Diagram
Light imager 208 may include a multi-dimensional charge coupled device (CCD) array including many sensor pixels or light detectors, as is known in the art. Light beams 206a-206d are sufficiently spatially-separated from one another as to form corresponding beam images 212a-212d, or light spots, on spatially-separated areas of light imager 208. Each of light spots/areas 212i occupies a position, e.g., an x-y position on a light sensor plane of the light imager, corresponding to a cluster of sensor pixels. Over time, light imager 208 repetitively captures, or records, the simultaneous light beams 206i impinging on areas 212i to produce a time-ordered sequence 214 of recorded images 216 of light array 202.
Light imager 208 captures the images at a predetermined frame rate of, e.g., approximately 30 frames/second, i.e., every 1/30 seconds. Therefore, sequential images 216 are spaced in time by a frame period equal to an inverse of the frame rate. Sequential images 216 may be processed in accordance with methods described herein.
Light Communication System Using FSOOK
Light Transmitter
Light transmitter 304 includes a light modulator 309 to intensity modulate a light source 310, a data source 312, and a controller 314 to control the transmitter. Data source 312 provides data 316, such as a message in the form of data bits, to controller 314. Controller 314 includes a memory 318 to store protocol control logic, protocol light packet definitions, and a frame rate Ffps in frames per second, which is equal to the inverse of a frame period Tframe in seconds (i.e., Ffps=1/Tframe). The frame rate Ffps is an anticipated rate at which light receiver 308 will sample received light, as will be described more fully below in connection with
Controller 314 also includes a clock and timer module 319 to generate a master timing signal, and to derive from the master timing signal the timing outputs used by controller 314 to control transmit light packet start times and durations. Based on data 316, the contents of memory 318, and the timing outputs from clock and timer module 319, controller 314 generates commands 320 to cause modulator 309 to modulate light source 310 in accordance with examples described herein.
Modulator 309 includes an FSK modulator 326 and an intensity modulator 327 that together generate a modulation signal 330 to FSOOK modulate light source 310. Controller commands 320 include commands that specify (i) a selected frequency at which FSK modulator 326 is to operate, (ii) a start time at which FSK modulator 326 is to begin generating and outputting the selected frequency, and (iii) a duration (or time period) over which the selected frequency is to be generated. The start time and duration may be graduated in fractions of time period Tframe, such as 1/1000 of Tframe. In response to controller commands 320, FSK modulator 326 outputs the selected frequency as an FSK signal 332 beginning at the specified start time and lasting for the specified duration, such as an integer number of frame periods, which facilitates detection and demodulation of the frequency at receiver 308. The selected frequencies may include:
a first frequency 328a F0 (e.g., 120 Hz) indicative of a logic 0 of a data bit 316 to be transmitted;
a second frequency 328b F1 (e.g., 105 Hz) indicative of a logic 1 of the data bit to be transmitted;
a third frequency 328c “HiRate” indicative of a first start-frame-delimiter to be transmitted. The HiRate frequency is orders of magnitude greater than frequencies F0, F1, e.g., many KHz or above. An exemplary HiRate frequency is 25 KHz; and
a fourth frequency 328d “Illegal” (e.g., 112.5 Hz, i.e., half-way between frequencies F0, F1) indicative of a second start frame delimiter to be transmitted.
FSK modulator 326 may include a voltage-controlled or digitally controlled oscillator that generates the above frequencies responsive to commands 320. The terms “tone” or “tones” and “frequency” or “frequencies” are used equivalently and interchangeably herein.
FSK modulator 326 may generate each of the frequencies F0, F1, HiRate, and Illegal of FSK signal 332 as a substantially rectangular, or ON-OFF keying, waveform, where ON represents a logic 1 of the FSK waveform, and OFF represents a logic 0 of the FSK waveform. Also, to transmit a data bit, each of frequencies F0 and F1 may extend over multiple frame periods, and may be harmonically related to the frame period such that an integer number, k, of ½ cycles or periods of the rectangular FSK waveform matches the frame period, as is depicted in
representing a logic 0, frequency F0=N×Ffps; and
representing a logic 1, frequency F1=(N±0.5)×Ffps, where N is an integer.
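The following sketch in Python (illustrative only, and not part of the described transmitter or receiver) shows the effect of this harmonic relationship when the light is later sampled once per frame period: a tone at F0=N×Ffps is caught in the same ON/OFF state in every frame, while a tone at F1=(N±0.5)×Ffps alternates state from frame to frame. The frame rate of 30 frames/second and N=4 follow the example values above; sampling at the start of each frame period is a simplifying assumption of the sketch.

```python
# Minimal sketch (not from the specification): sampling an ON-OFF keyed tone
# once per frame period to illustrate the harmonic relationship between the
# FSK frequencies and the frame rate. Values follow the examples above
# (Ffps = 30 frames/s, N = 4 -> F0 = 120 Hz, F1 = (4 - 0.5) * 30 = 105 Hz).

F_FPS = 30.0                 # anticipated receiver frame rate, frames/second
N = 4                        # integer harmonic factor
F0 = N * F_FPS               # logic 0 tone, 120 Hz
F1 = (N - 0.5) * F_FPS       # logic 1 tone, 105 Hz

def sample_state(freq_hz, frame_index, phase=0.0):
    """Return the ON/OFF state of a 50%-duty rectangular tone at the instant
    the given frame is sampled (one sample per frame period)."""
    t = frame_index / F_FPS
    cycle_position = (freq_hz * t + phase) % 1.0
    return 1 if cycle_position < 0.5 else 0

# Consecutive frame samples during a transmitted bit, as in the UFSOOK description.
print([sample_state(F0, k) for k in range(4)])  # e.g. [1, 1, 1, 1] -> same state
print([sample_state(F1, k) for k in range(4)])  # e.g. [1, 0, 1, 0] -> toggling
```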
Each of the frequencies F0, F1, HiRate, and Illegal, together with the respective number of frames over which they are transmitted, form a light protocol. More specifically, transmitter 304 combines these parameters into the above mentioned modulated light packets formatted in accordance with the light protocol, and then transmits the light packets.
Intensity modulator 327 intensity modulates light source 310 based on the modulation signal 330, to produce modulated light beam 306. Light source 310 may be an LED that emits light in any of the visible, infrared, or ultraviolet light spectrums. In an embodiment, modulation signal 330 follows the shape of FSK signal 332 and adjusts a current through light source 310 to proportionally adjust an intensity of light 306 emitted by the light source. In this manner, ON-OFF keying of modulation signal 330 causes corresponding ON-OFF keying of the intensity of light 306, such that the intensity closely follows ON-OFF keying waveforms 404, 406 depicted in
Transmitter 304 is depicted with one light 310 for simplicity only. Other embodiments include many lights each driven by a corresponding light modulator, as will be described later in connection with
Transmit Light Packet Definition
Light Receiver
Imager 350 includes a light sensor 356, e.g., including a 2-dimensional array of light detectors, that repetitively samples light impinging on the light sensor at a predetermined receive sample rate equal to the frame rate, Ffps=1/Tframe, of imager 350 to produce a signal 358. Signal 358 includes a time-ordered sequence of 1-dimensional, or alternatively, 2-dimensional light samples, which form images of an image sequence IS (similar to images 216 depicted in
Light Detector Array
Light sensor 356 may include a 2-dimensional light detector array 359, such as a CCD array, including multiple individual light detectors 360 (also referred to as sensor pixels 360) spatially arranged in M rows by N columns, where M and N may each be in the hundreds or thousands. For convenience, exemplary light detector array 359 is depicted in
An exemplary individual light detector 360(i, j) is depicted in expanded view in
IAH circuit 364 operates as an approximated matched filter to recover samples of the FSK light waveform pulses, such as the pulses of waveforms 406, 408, in the light packets of light beam 306. IAH circuit 364 integrates electrical signal 365 for an integration time tint according to enable signal 361(i, j), to produce a peak integrated signal, also referred to herein as light sample 358(i, j) or sampled light 358(i, j), which is held at the output of the IAH circuit. The process of enabling light detector 360(i, j) to sample light 306 in accordance with enable signal 361(i, j), to produce light sample 358(i, j), is also referred to herein as “exposing light detector 360(i, j), to produce light sample 358(i, j).” Integration time tint may be approximately a half-period or less of the waveforms of frequencies F0, F1, so that light detector 360(i, j) approximately maximally samples light that is intensity modulated at frequencies F0, F1 of FSK waveforms 406, 408 (for logic levels 0, 1).
An exemplary enable signal waveform “ES” of enable signal 361(i, j) is depicted at the bottom of
Global and Line Array Exposure Modes
Exposure controller 362 generates enable signals 361 in any number of ways to implement different exposure modes of light detector array 359, as is now described.
Exposure controller 362 may expose array 359 (i.e., enable light detectors 360 to sample light 306 in accordance with enable signals 361, to produce light samples 358) in either a global exposure mode or, alternatively, in a sequential line exposure mode. In the global exposure mode, exposure controller 362 generates enable signals 361 so that their respective series of enable pulses 368, i.e., respective integration periods tint, coincide in time with each other, i.e., occur at the same time. The result is that all of light detectors 360 are exposed at the same time, i.e., they all sample light 306 at the same time, once every frame period Tframe, to produce a time-spaced sequence of 2-D images 358 (which represent all light samples 358(i, j), i=1 . . . M, j=1 . . . N), as depicted in
In the line exposure mode, exposure controller 362 may generate enable signals 361 to expose spatially-successive lines, e.g., successive rows or successive columns, of light detectors 360 one after the other, e.g., one at a time, in a time sequence. For example, exposure controller 362 may generate enable signals 361 so as to expose:
all of light detectors 360 across row i−1 (i.e., all of the N light detectors 360(i−1, 1-N)) at a same time t−τ; then
all of light detectors 360 across row i at a same time t; then
all of light detectors 360 across row i+1 at a same time t+τ, and so on.
This produces spatially-successive lines of sampled light, spaced in time at sequential times t−τ, t, t+τ, corresponding to light detector rows i−1, i, i+1, and so on. This type of exposure is also referred to as “rolling shutter exposure” because the exposure may be thought of as being implemented using a camera shutter one line of light detectors wide (i.e., that is only wide enough to expose one line of light detectors at a time), that “rolls” or scans sequentially across spatially-successive lines (e.g., the rows or columns) of light detectors in a given direction (e.g., up/down, left/right), to thereby sequentially expose the spatially-successive lines of light detectors. In an embodiment, exposure controller 362 sequentially exposes the spatially-successive lines of light detectors at a rate (referred to as a “line exposure rate”) that is greater than both frequencies F0, F1 of the FSK waveforms representing logic levels 0, 1 in transmitted light packets. The line exposure rate is equal to 1/τ.
In a variation of the above-described line exposure mode, the enable signals 361 may be generated to be slightly offset in time but overlapping, so that the exposure of each line time-overlaps the exposure of the spatially-successive line. For example, row i−1 begins its exposure at a time ti-1, and while being exposed (e.g., before time tint expires for row i−1), row i begins its exposure, and while being exposed (e.g., before time tint expires for row i), row i+1 begins its exposure, and so on. This variation of the line exposure mode results in time spaced lines of sampled light corresponding to light detector rows i−1, i, i+1, but with overlapping exposure times for successive rows.
Detector
Detector 352 includes a beam position determiner module 370a, and a SFD detector/demodulator module 370b (collectively referred to as “modules 370” and “modules 370a, 370b”), which cooperate to process the sequence of images stored in memory 355, namely to:
determine a position of each beam recorded in the images, such as an x, y center coordinate of the beam in each image (using beam position determiner 370a); and
from the modulated light recorded at the determined beam positions, both detect any delimiters (SFDs) and demodulate any data bits conveyed by that recorded light (using detector/demodulator 370b).
As described above, light detectors 360 sample FSK waveform pulses in light 306, such as the pulses of waveforms 406, 408 at frequencies F0, F1 (representing logic levels 0, 1), and provide the resulting samples 358 to modules 370a, 370b, e.g., in a sequence of 1-dimensional or 2-dimensional images IS.
To detect a beam position, beam position determiner 370a raster scans the full area of each image in the sequence of images (e.g., in image sequence IS) stored in memory 355, e.g., first, second, third, and fourth sequential images, and so on, in search of recorded light energy that has a correlated position across the sequence of images. In other words, a beam position is determined when beam position determiner 370a detects a spot of modulated light, i.e., modulated light energy, centered on the same position, e.g., an x, y position corresponding to a row, column position, in each of the sequential images. Beam positions for multiple, spatially-separated, simultaneously recorded beams may be determined in this manner.
From each determined position, SFD detector/demodulator 370b associates corresponding light samples 358, over multiple recorded images, to one of: a demodulated data bit level, i.e., logic 0 or logic 1; a demodulated data delimiter; and a detected SFD.
During the first bit, or logic 0, period, the frequency/timing relationship between the 120 Hz ON-OFF keying of light 306 and the light sample spacing, i.e., the frame period Tframe, causes consecutive light samples S1 and S2 to be in the same intensity state, i.e., at the same level (either both ON/HIGH or both OFF/LOW). In the example of
During the second bit, or logic 1, period, the frequency/timing relationship between the 105 Hz ON-OFF keying of light 306 and the light sample spacing causes successive light samples S3 and S4 to toggle between states (either ON then OFF, or OFF then ON). In the example of
The above-described exemplary demodulation of FSOOK modulated light is based on under-sampling the FSK waveform. Therefore, such demodulation is referred to herein as under-sampled FSOOK (UFSOOK) demodulation.
Modules 370a, 370b also monitor light samples (i.e., images) 358 to detect light modulated with the Illegal frequency, as an indicator of a SFD associated with a light packet. As mentioned above in connection with demodulated data bits, the relationships between the frame period and the frequencies F0, F1 cause detected light in two consecutive images to be either always in the same state or always in different states, respectively. However, the relationship between the frame period and the Illegal frequency causes detected light to toggle ON and OFF over four consecutive images in an ON-OFF pattern that cannot occur when the light is modulated at frequencies F0, F1. More specifically, if the light samples indicate any of patterns ON-ON-OFF-OFF, OFF-OFF-ON-ON, ON-OFF-OFF-ON, and OFF-ON-ON-OFF over four consecutive images, then modules 370a, 370b detect the Illegal frequency associated with the data delimiter.
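A minimal sketch of the per-position decision logic just described is given below. The function names and the use of binarized (ON/OFF) samples are assumptions of the sketch rather than details of detector/demodulator 370b: two consecutive samples in the same state demodulate to logic 0, toggling samples demodulate to logic 1, and the four-image patterns listed above flag the Illegal-frequency delimiter. Detection of the HiRate frequency, which relies on average sample amplitude rather than ON/OFF patterns, is addressed in the following paragraphs and omitted from the sketch.

```python
# Minimal sketch (names and representations are assumptions, not the patent's
# implementation) of the per-position decision logic described above.

ILLEGAL_PATTERNS = {
    (1, 1, 0, 0), (0, 0, 1, 1),   # ON-ON-OFF-OFF, OFF-OFF-ON-ON
    (1, 0, 0, 1), (0, 1, 1, 0),   # ON-OFF-OFF-ON, OFF-ON-ON-OFF
}

def demodulate_bit(s_a, s_b):
    """UFSOOK bit decision from two consecutive binarized samples:
    same state -> logic 0 (F0), toggled state -> logic 1 (F1)."""
    return 0 if s_a == s_b else 1

def is_illegal_sfd(four_samples):
    """Detect the Illegal-frequency delimiter over four consecutive images."""
    return tuple(four_samples) in ILLEGAL_PATTERNS

# Example: binarized samples recorded at one determined beam position.
print(demodulate_bit(1, 1))          # -> 0 (same state)
print(demodulate_bit(1, 0))          # -> 1 (toggled)
print(is_illegal_sfd([1, 0, 0, 1]))  # -> True (delimiter pattern)
```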
Modules 370a, 370b also monitor light samples 358 to detect light modulated with the HiRate frequency, as an indicator associated with the SFD. An SFD modulated at the HiRate frequency may be more readily detected relative to an SFD modulated at the Illegal frequency when embedded with message data bits (e.g., logic 0, 1) because the HiRate frequency is more easily distinguished from the logic 0, 1 FSK frequencies than the Illegal frequency, which falls between those frequencies.
While light detectors approximately maximally detect frequencies F0, F1 in the modulated light, i.e., produce a near maximum amplitude output in response to the matched frequency, the integration time of the light detectors is too long to respond fully to the much greater HiRate frequency. Therefore, light detectors 360 are suboptimal energy detectors/samplers of the HiRate frequency, and provide an average, e.g., approximately ½ maximum, amplitude output (i.e., sampled output) in response to the HiRate frequency. Therefore, modules 370a, 370b detect the SFD in modulated light beam 306 when light detectors 360 provide the average, lesser amplitude outputs in response to sequential images. Similarly, in a transmit embodiment in which a reduced light intensity serves as an alternative for the HiRate frequency, light detectors 360 provide an average, lesser amplitude indicative of the reduced light intensity.
From recorded sampled light at a determined position in a sequence of images, modules 370a, 370b demodulate frequencies F0, F1 into data bit logic levels, detect the HiRate frequency, and detect the Illegal frequency associated with the SFD. Modules 370a, 370b also detect the number of frames over which each of the above mentioned frequencies extend. In this way, detector 352 deconstructs or determines the modulated light packets conveyed in the recorded light beam(s). Modules 370a, 370b pass such information to controller 354 over a bidirectional interface 374. For example, over interface 374, modules 370a, 370b indicate detected SFDs from recorded light packets to controller 354, and provide demodulated data bits from the light packets to the controller.
Controller
Controller 354 (also referred to herein as a “protocol processor”) includes a memory 376 to store control logic, protocol light packet definitions, and a frame period. Controller 354 provides light packet protocol definitions to detector 352 over interface 374. Based on the information from detector 352 and the contents of memory 376, controller 354 operates and controls receiver 308. Controller 354 also controls imager 350 over interface 374, e.g., the controller may command exposure controller 362 to operate in either of the global exposure mode or the line exposure mode.
Multi-Light Transmitter
Transmitter 640 includes light modulators 648, which may be implemented similarly to modulator 309 in
In response to commands 651, modulators 648 modulate their corresponding lights 642 to transmit their respective light packets in spatially-separated light beams 652 according to the light packet definition of
In an alternative embodiment, some of lights 642 may modulate their respective light beams, while others may transmit unmodulated light beams.
Implicit Photogrammetric Position Determination
Implicit photogrammetric position determination of a light receiver relative to a light transmitter is now described.
Photogrammetric position determination requires knowledge of both the real-world position and the corresponding image positions of the lights upon which the determination is based. Each light is associated with two positions, namely, its real-world position and its corresponding image position. The real-world positions may be ascertained explicitly in explicit photogrammetric positioning, or implicitly in implicit photogrammetric positioning. In the explicit approach, each light transmits modulated light to indicate a unique light identifier. The light receiver recovers the IDs from the modulated light, and then retrieves real-world positions of the lights from a database of light positions, e.g., <x, y, z> coordinates, indexed by the IDs. In this way, the real-world light positions are said to be explicitly determined because all of the lights provide their IDs explicitly, from whence their positions in the database may be accessed/determined.
In the implicit approach, while some of the lights transmit their IDs, others do not. For example, some of the lights may transmit constant intensity, unmodulated light. Such lights do not explicitly provide their IDs. Therefore, their IDs, and associated real-world positions, must be inferred implicitly.
Returning to
Light receiver 804 samples and records spatially-separated anchor light beams 810 and non-anchor light beams 812 in a sequence of recorded images representing lights A, N of light transmitter 802. Light receiver 804 determines positions (i.e., image positions) of the recorded anchor light beams and the non-anchor light beams in the recorded images. Light receiver 804 detects the unique light IDs from each of the recorded anchor light beams 810 using, e.g., UFSOOK demodulation. Using the detected light IDs as an index into light map database 808, light receiver 804 accesses/retrieves the light map that depicts anchor lights A associated with the detected anchor light IDs, i.e., a light map of lights A, N as positionally arranged in transmitter 802. In an alternative embodiment, light map database 808 may be stored in a local memory of light receiver 804, i.e., the light maps are collocated with the light receiver. In such an embodiment, the light receiver simply accesses its local memory for the relevant light map.
Light receiver 804 rotates and scales the retrieved light map as necessary so as to align the map anchor lights with their counterpart recorded anchor lights (i.e., recorded anchor light beams) in the recorded images. The map anchor lights are aligned with the recorded anchor lights having the same light ID. This also aligns the light map non-anchor lights with their counterpart non-anchor lights (i.e., non-anchor light beams) in the recorded images. The result is aligned pairs of map lights and recorded lights (i.e., light beams), each pair associated with a unique light ID and corresponding real-world position linked to the light map. Therefore, the aligned light map indicates the real-world positions of the lights A, N.
Light receiver 804 photogrammetrically determines a 3-dimensional position of the light receiver relative to light transmitter 802 based on (i) the real-world positions of the lights A, N ascertained from the aligned light map, and (ii) the already known positions of the recorded light beams in the recorded images. This is referred to as implicit photogrammetry because the light IDs and real-world positions of the non-anchor lights N were inferred from the aligned light map. The photogrammetric position determination may be performed in accordance with the equations described below in connection with
Light map database 808 stores the light map depicted in
In connection with the light map request, light receiver 804 transmits a message to light map database 808 of the form:
Observed_Anchor_IDs,[number observed (3)],[Anchor IDs (1,5,6)]
In response, the server storing light map database 808 returns light map 1010 along with the following information:
Many different positional arrangements of anchor lights are possible. Preferably, the anchor lights are arranged in the light array and corresponding light map so as to be rotation invariant, which avoids alignment ambiguities.
Flowchart for Implicit Photogrammetric Position Determination
1105 includes, in a light receiver, sampling and recording spatially-separated anchor (i.e., modulated) light beams from anchor (i.e., modulated) lights and non-anchor (i.e., unmodulated) light beams from non-anchor (i.e., unmodulated) lights of a light array, to produce a sequence of images of the light array. The light receiver may be a camera that “shoots” a short video of the light array, to produce the sequence of images. In an embodiment, each of the anchor light beams comprises light modulated to indicate an SFD, followed by a unique light ID that is a series of bits, such as “0110,” etc., each bit represented as light that is intensity modulated, e.g., FSOOK modulated, over a bit period at one of multiple FSK frequencies indicative of the bit. The non-anchor light beams are unmodulated.
1110 includes determining positions in the images where the modulated anchor light beams are recorded, and then demodulating, from the determined positions, the light IDs from the recorded anchor light beams. The demodulating may include UFSOOK demodulating the recorded anchor light beams.
1115 includes accessing a predetermined light map of the light array based on the demodulated light IDs. Such accessing may include transmitting, to a light map database residing in a network, a request for the light map of the light array containing lights having the demodulated light IDs, and receiving the requested light map and real-world light positions (e.g., in a table) associated with the lights in the light map.
In an embodiment, the light map defines a spatial arrangement of map anchor lights and map non-anchor lights that matches a reduced-scale spatial arrangement of the anchor lights and the non-anchor lights in the light array. The map anchor lights may be specifically annotated in a manner detectable by the light receiver to facilitate alignment therewith, as described below. Associated with the light map is a table listing light IDs of the map lights in association with their corresponding real-world positions, e.g., <x, y, z> coordinates, in the light array, as deployed.
1120 includes positionally aligning the light map with the recorded anchor light beams. That is, positionally aligning the map anchor lights with recorded anchor lights having the same IDs (i.e., where the detected IDs match the map light IDs returned from the map light database). Positionally aligning may include rotating and scaling the retrieved light map so as to positionally align the map anchor lights with their corresponding recorded anchor lights.
1125 includes accessing real-world positions of the anchor and the non-anchor lights of the light array based on the aligned map, which implicitly indicates the IDs and corresponding real-world positions of the recorded non-anchor light beams.
1130 includes photogrammetrically determining a 3-dimensional position of the light receiver relative to the light array based on the real-world light positions accessed in 1125 and the determined positions of the light beams in the recorded images. The photogrammetrically determining may include determining the position according to the photogrammetric technique described below in connection with
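One way to realize the alignment of 1120 and the implicit identification of 1125 is sketched below as a simplified 2-dimensional example with hypothetical coordinates; it is not the specification's implementation. A least-squares similarity transform (rotation, uniform scale, and translation) is fit from the map anchor lights to the recorded anchor beam positions having the same demodulated IDs, and the transform is then applied to the map non-anchor lights so that each recorded non-anchor beam can be matched to, and thereby implicitly identified by, the nearest transformed map light.

```python
# Minimal sketch (hypothetical coordinates) of rotate/scale/translate alignment
# of a light map to recorded anchor beam positions, using a complex-valued
# least-squares similarity fit.
import numpy as np

def fit_similarity(map_pts, image_pts):
    """Least-squares complex a, b such that a*map + b ~= image
    (a encodes rotation and uniform scale, b encodes translation)."""
    m = np.asarray([complex(x, y) for x, y in map_pts])
    p = np.asarray([complex(x, y) for x, y in image_pts])
    A = np.column_stack([m, np.ones_like(m)])
    (a, b), *_ = np.linalg.lstsq(A, p, rcond=None)
    return a, b

def apply_similarity(a, b, pts):
    """Map light-map coordinates into predicted image coordinates."""
    z = np.asarray([complex(x, y) for x, y in pts])
    w = a * z + b
    return [(c.real, c.imag) for c in w]

# Hypothetical example: three anchor lights matched by their demodulated IDs.
map_anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]              # light-map coordinates
img_anchors = [(100.0, 200.0), (180.0, 200.0), (100.0, 280.0)]  # recorded pixel positions
a, b = fit_similarity(map_anchors, img_anchors)
# Predicted pixel position of a map non-anchor light, to be matched to the
# nearest recorded non-anchor beam.
print(apply_similarity(a, b, [(2.0, 2.0)]))  # -> [(140.0, 240.0)]
```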
Light ID Error Detection and Correction in Photogrammetric Position Determination
Light ID error detection and correction related to photogrammetric position determination is now described. Embodiments include lights that transmit their light IDs using FSOOK modulated light and a light receiver that demodulates the light IDs using UFSOOK. Other types of modulation and demodulation are possible.
As described above in connection with implicit photogrammetric determination, lights transmit their respective light IDs, which may be recovered in a light receiver and then used to index corresponding real-world positions, e.g., <x,y,z> coordinates, of the lights. The lights are imaged, i.e., recorded in images, in the light receiver. Therefore, the real-world positions of the lights have corresponding, or matching, image positions in the light receiver. The real-world positions of the lights and their correctly matching image positions are used to determine a position of the receiver, photogrammetrically.
If, however, the receiver recovers (e.g., demodulates) a particular ID incorrectly, then the incorrect ID indexes an incorrect real-world position, i.e., a real-world position of the wrong light. The incorrect real-world position does not match the image position because the image position is matched to the real-world position of a different light, namely, the light having the ID that should have been demodulated at the image position from which the incorrect ID was demodulated. The incorrectly matched real-world and image position pair results in an incorrectly determined photogrammetric position. Accordingly, the light ID error detection and correction embodiments described herein detect such demodulation errors and then correct those errors so that, as a result, the indexed real-world position of each imaged light correctly matches the image position for that light. A correct photogrammetric receiver position may then be determined based on the correctly matched pairs of real-world and image positions.
Light ID error detection and correction is based on establishing predetermined light neighborhoods in which each light has a predetermined set of neighboring or “neighbor” lights. Because a given ID identifies/represents a light, it follows that the given ID has predetermined neighbor IDs, i.e., IDs of lights in the same neighborhood as the light having the given ID. In other words, it is assumed that a “light” and its “ID” both represent the light. Light ID error detection and correction may be performed in system 800 of
Light Neighborhoods
Examples of light neighborhoods and their effectiveness in correcting demodulated ID errors are now described.
A neighborhood of lights may be considered to be an intuitive, or natural, partitioning of lights that may be associated with, e.g., LED lighting. Examples of a neighborhood are an LED light panel and an LED light bar. An exemplary LED light panel may include a 15×15 LED rectangular grid, in which the 225 LEDs comprise a single neighborhood. Each LED may transmit a unique ID, resulting in 225 different IDs all belonging to the same neighborhood. An alternative arrangement segments the same 15×15 LED rectangular grid into 4 quadrants, each quadrant assigned to a single ID, such that the light panel is represented by 4 IDs. In this alternative arrangement, the 4 IDs collectively form a single neighborhood.
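A small sketch of the two arrangements described above follows; the ID numbering and the quadrant split used here are hypothetical, since no particular assignment is prescribed.

```python
# Minimal sketch (hypothetical numbering) of the two neighborhood arrangements
# described above for a 15 x 15 LED panel: one ID per LED with all 225 IDs in
# one neighborhood, or one ID per quadrant with 4 IDs forming one neighborhood.

GRID = 15

def per_led_id(row, col):
    """Unique ID per LED; all 225 IDs belong to the same neighborhood."""
    return row * GRID + col

def quadrant_id(row, col, split=8):
    """One ID per quadrant (hypothetical split at row/column 8)."""
    return 2 * (row >= split) + (col >= split)

per_led_neighborhood = {per_led_id(r, c) for r in range(GRID) for c in range(GRID)}
quadrant_neighborhood = {quadrant_id(r, c) for r in range(GRID) for c in range(GRID)}
print(len(per_led_neighborhood), len(quadrant_neighborhood))  # 225 4
```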
Conceptual Approach for Error Detection and Correction Based on Light Neighborhoods
As mentioned above, for the light ID error detection and correction described herein, each demodulated ID indexes (and retrieves) a real-world position for the ID and a set or list of IDs in the same neighborhood (i.e. its neighbors). In the error example of
With reference to Position Table 1, cross-referencing all of the lists or sets of neighbor IDs (3rd column) against all of the indexing demodulated IDs (1st column) indicates an error as a statistical inconsistency, namely, the rows indexed by demodulated ID1 and ID2 do not list ID9 in their 3rd-column neighbor lists. Likewise, the neighbor list for index ID9 indicates a similar error in that ID1 and ID2 are not listed in it.
Position Table 2 below indicates how the Neighbor List above would have looked without a demodulation error, i.e., how one would expect it to look. Position Table 2 is referred to as the Expected Position Table.
Without the error, each index ID (in the 1st column) appears in the neighbor list twice. But, as seen in position Table 1, while ID3 appears twice as a neighbor ID in the neighbor lists, it is not listed as an index (demodulated) ID; hence, this reveals that demodulated ID9 is an error; ID3 should have been demodulated instead. The statistical inconsistency reveals an invalid demodulated ID9, and a missing ID3. The error is corrected with a retrieval of the following information set (row) for index ID3, which replaces the erroneous information set (row) for index ID9:
ID3 corresponds to an expected neighbor list (i.e., ID1 and ID2).
Set Notation to Describe Light Neighborhoods
First consider a set of k objects {1 . . . k}. A matrix with dimensions (k by k−1) can be formed by eliminating, from the set, each row's index. A set with 5 objects will be used as an example; that is, the set {1 2 3 4 5} forms the following matrix.
There are 5 sets, and each element of the original set is used k−1=4 times across the sets, as expected. This represents statistical consistency between the row indexes and the objects listed in the sets. Now, eliminate the 3rd row and modify the matrix accordingly.
As a result, each remaining set still references the missing row index 3, but because row 3 and its set {1 2 4 5} are absent, there is no corresponding reference to the other row indexes; that is, set elements 1, 2, 4, 5 occur k−2=3 times while set element 3 occurs k−1=4 times. It can be concluded that the set with row index 3 is missing. The missing row index is revealed as a statistical inconsistency between the row indexes and their corresponding sets. However, because all the remaining row indexes (1, 2, 4, 5) occur k−2=3 times, one can also conclude that the remaining row indexes and related sets are valid (consistent); that is, they belong to the original set. This represents detecting a missing set index, but there are no erroneous set members.
Now consider the original set matrix.
Replace the third row with an index and set not related to the original set; e.g., row index 6 from the set {6 7 8 9 10}.
A cross-reference of the row indexes (IDs) against the set elements (sets of neighbor IDs) in the above table may take the form of a histogram.
Accordingly, with reference to histogram 1300, an absence of any element at ID6 indicates that ID6 is an invalid demodulated ID. The peak at element 3 indicates ID3 is a missing ID. Therefore, if row index 6 is replaced with row index 3 and its related set of neighbor IDs, then the error is corrected.
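The cross-referencing and histogram classification just illustrated may be sketched as follows. The sketch follows the example above, in which row index 3 is replaced by index 6 and the unrelated set {6 7 8 9 10}; its classification rules cover the simple cases shown here, while the generalized method below distinguishes additional cases.

```python
# Minimal sketch (following the 5-light example above; the generalized method
# distinguishes further cases) of cross-referencing demodulated IDs against
# their retrieved neighbor sets with a histogram.
from collections import Counter

def cross_reference(demodulated_ids, neighbor_sets):
    """demodulated_ids: IDs demodulated from the recorded beams.
    neighbor_sets: dict mapping each demodulated ID to its retrieved set of
    neighbor IDs. Returns (valid, missing, invalid) per the histogram rules."""
    counts = Counter(n for s in neighbor_sets.values() for n in s)
    invalid = sorted(d for d in demodulated_ids if counts[d] == 0)   # absence of any element
    valid = sorted(d for d in demodulated_ids if counts[d] > 0)
    peak = max(counts.values())
    missing = sorted(i for i, c in counts.items()
                     if c == peak and i not in demodulated_ids)      # first (tallest) peaks
    return valid, missing, invalid

# The error example above: neighborhood {1,2,3,4,5}, row index 3 demodulated as 6.
demod = [1, 2, 6, 4, 5]
sets = {1: {2, 3, 4, 5}, 2: {1, 3, 4, 5}, 6: {7, 8, 9, 10},
        4: {1, 2, 3, 5}, 5: {1, 2, 3, 4}}
print(cross_reference(demod, sets))  # -> ([1, 2, 4, 5], [3], [6])
```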
Generalized Method and Corresponding Flowcharts for Error Detection and Correction
Let k be the number of elements in the set, n be the number of errors in the set, and m be the number of missing set members (i.e., unobserved or unrecorded members) (flowchart block 1404).
First, determine which of two cases to use: 1) a full set; or 2) less than a full set.
To determine which case to use, perform the following:
The Euclidean distance between two objects located at <x1,y1,z1> and <x2,y2,z2> is given as
D=√((x1−x2)²+(y1−y2)²+(z1−z2)²).
The smaller the Euclidean distance D, the closer together are the objects. Euclidean distance is used to determine the ID of an observed (i.e., imaged) light given a position table of IDs and their related real-world locations.
Minimum Euclidean Distance Example
In
Using the aforementioned light ID error detection techniques, it is determined that the demodulated ID of light 1510 at position <4,10> is not a part of the neighborhood set 1505. Furthermore, it is determined that IDs for lights at <3,3> and <2,2> are missing from the demodulated IDs for the neighborhood set 1505. It is also determined that m=1 (one light is missing—burnt out) and n=1 (one light has been erroneously decoded). By examining the image depicted in
Position estimation of unidentified lights can be done by interpolation with respect to correctly decoded ID positions (which are known upon the return of the positions from database 808). For example, in
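One possible realization of such interpolation is sketched below with hypothetical coordinates (the example figure is not reproduced here): the unidentified beam's fractional position along the image segment between two correctly decoded beams is reused to interpolate between the corresponding known real-world positions.

```python
# Minimal sketch (hypothetical values) of interpolating the real-world
# position of an unidentified light from two correctly decoded lights.
import numpy as np

def interpolate_position(img_unknown, img_a, img_b, world_a, world_b):
    """Interpolate a real-world position for an unidentified beam from two
    correctly decoded beams, reusing its fractional image-plane offset."""
    img_unknown, img_a, img_b = map(np.asarray, (img_unknown, img_a, img_b))
    world_a, world_b = np.asarray(world_a), np.asarray(world_b)
    seg = img_b - img_a
    t = float(np.dot(img_unknown - img_a, seg) / np.dot(seg, seg))  # 0..1 along the segment
    return world_a + t * (world_b - world_a)

# Hypothetical values: the unknown beam lies midway between two decoded beams.
print(interpolate_position((150, 200), (100, 200), (200, 200),
                           (0.0, 0.0, 3.0), (2.0, 0.0, 3.0)))  # -> [1. 0. 3.]
```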
Illustrative Examples
Each illustrative example includes:
One or more position tables, including an Expected Position Table that one would expect a light receiver to construct in the absence of errors, from which the real-world positions are to be used for photogrammetric determination;
A Constructed Position Table actually constructed based on the presence of the errors stipulated in the given example scenario. The Constructed Position Table may reveal that the IDs and their corresponding real-world positions may need to be updated in accordance with error detection and correction as described herein to result in a set of (i) correct IDs, (ii) real-world positions, and (iii) IDs that are properly placed in the light receiver image, i.e., assigned to their appropriate image positions. Such assigning is referred to as place-identifying the IDs in the image;
Cross-reference statistics, including a Histogram and statistics variables k, L, j, m, n, and p;
Actions to be performed to correct determined errors; and
Subsequently retrieved position information to correct errors.
Examples 1-11 are summarized in the following:
Example 1 (FIG. 16A)—Case 1 with 5 lights, no errors;
Example 2 (FIG. 16B)—Case 1 with 5 lights, 1 error. First peak for ID4 indicates ID4 is a valid missing ID and needs to be retrieved (along with its real-world position) to update the Constructed Position Table used for photogrammetric determination. Second peaks for IDs 1, 2, 3, 5 indicate these IDs are valid demodulated IDs. The absence of any peak at ID9 indicates ID9 is an invalid demodulated ID;
Example 3 (FIG. 16C)—Case 1 with 5 lights, 2 errors. First peaks indicate IDs 2, 3 are valid missing IDs and need to be retrieved to update the position table. Second peaks indicate IDs 1, 4, 5 are valid demodulated IDs. Absences indicate invalid demodulated IDs 7, 14. Invalid IDs 7, 14 were demodulated from image positions in the light receiver. Therefore, Euclidean distances based on real-world positions of retrieved IDs 2, 3 are used to assign the retrieved IDs 2, 3 to correct ones of the image positions from which the invalid IDs 7, 14 were demodulated, so that the image positions for IDs 2, 3 correctly match their real-world positions;
Example 4 (FIG. 16D)—Case 1 with 5 lights, 3 errors;
Example 5 (FIG. 16E)—Case 1 with 4 lights, 1 error;
Example 6 (FIG. 16F)—Case 1 with 4 lights, 2 errors;
Example 7 (FIG. 16G)—Case 2 with 6 lights, 1 not observed;
Example 8 (FIG. 16H)—Case 2 with 5 lights, 1 error;
Example 9 (FIG. 16I)—Case 2 with 5 lights, 2 errors;
Example 10 (FIG. 16J)—Case 2 with 5 lights, 3 errors; and
Example 11 (FIG. 16K)—Case 2 with 6 lights, 2 missing.
1705 includes, in a light imager, recording images of spatially-separated light beams originating from a neighborhood of lights, each light beam modulated to indicate an identifier (ID) that identifies, and indexes a real-world position of, its originating light.
1710 includes determining positions of the recorded light beams in the images.
1715 includes demodulating the ID from each recorded light beam at the determined positions.
1720 includes retrieving a set of neighbor IDs for each demodulated ID. This results in a constructed light position table including an indexing ID, a corresponding real-world position, and the corresponding set of neighbor IDs.
1725 includes cross-referencing the demodulated IDs against the sets of neighbor IDs to reveal errors in the demodulated IDs. The cross-referencing reveals statistical consistencies between the demodulated IDs and the sets of the neighbor IDs that indicate valid IDs, and statistical inconsistencies that indicate valid missing IDs and invalid demodulated IDs. The cross-referencing may include generating a histogram having
a first axis to represent a union of the demodulated IDs and the sets of neighbor IDs, and
a second axis to represent how many times each ID on the first axis occurs in the sets of neighbor IDs, wherein histogram first peaks and second peaks, in the number of times each ID occurs, indicate missing IDs and valid IDs, respectively, and any absences of peaks indicate invalid demodulated IDs.
1730 includes correcting the errors to produce correct IDs each indexing a real-world position that is correctly matched to one of the determined light beam positions.
1735 includes photogrammetrically determining a position of the receiver based on the correctly matched real-world and determined light beam positions.
1805 includes retrieving any missing IDs, and corresponding real-world positions, identified by the cross-referencing in 1725. This includes updating the constructed position table constructed in 1720.
Together, 1810 and 1815 represent assigning each retrieved missing ID to one of the determined light beam positions, from which one of the invalid IDs was demodulated, that is a best fit for the retrieved missing ID. Such assigning is also referred to as place-identifying the retrieved IDs in the image based on the real-world Euclidean distance calculations.
More specifically, 1810 includes calculating a set of Euclidean distances for each retrieved missing ID relative to all of the valid IDs based on real-world positions corresponding to the retrieved missing ID and the valid IDs.
1815 includes assigning each retrieved missing ID to one of the determined light beam positions in 1710, from which one of the invalid IDs was demodulated in 1715, which minimizes the set of Euclidean distances to give the best fit.
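The best-fit assignment of 1810 and 1815 may be sketched as follows. Because real-world and image coordinates have different scales, the sketch normalizes each distance profile before comparing them; that normalization is an assumption made for the sketch, as the exact fit criterion is left to the worked examples above.

```python
# Minimal sketch of the best-fit assignment in 1810-1815. The per-profile
# normalization is an assumption of the sketch, used to cancel the unknown
# scale between real-world and image coordinates.
import numpy as np

def euclidean(p, q):
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def distance_profile(pos, reference_positions):
    """Distances from pos to each reference, normalized to remove scale."""
    d = np.array([euclidean(pos, r) for r in reference_positions])
    return d / d.sum()

def assign_missing_ids(missing_world, candidate_image_positions,
                       valid_world, valid_image):
    """missing_world: {missing_id: real-world position}.
    candidate_image_positions: image positions from which invalid IDs were demodulated.
    valid_world / valid_image: real-world / image positions of the valid IDs.
    Returns {missing_id: assigned image position}."""
    valid_ids = sorted(valid_world)
    world_refs = [valid_world[v] for v in valid_ids]
    image_refs = [valid_image[v] for v in valid_ids]
    free = list(candidate_image_positions)
    assignments = {}
    for mid, wpos in missing_world.items():
        target = distance_profile(wpos, world_refs)
        best = min(free, key=lambda ipos: float(
            np.sum((distance_profile(ipos, image_refs) - target) ** 2)))
        assignments[mid] = best
        free.remove(best)
    return assignments

# Hypothetical example: one missing ID (3) and one candidate image position.
missing_world = {3: (2.0, 0.0, 3.0)}
valid_world = {1: (0.0, 0.0, 3.0), 2: (1.0, 0.0, 3.0)}
valid_image = {1: (100.0, 200.0), 2: (150.0, 200.0)}
print(assign_missing_ids(missing_world, [(200.0, 200.0)], valid_world, valid_image))
# -> {3: (200.0, 200.0)}
```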
Computer Processor System
Computer system 1900 may include one or more instruction processing units, illustrated here as a processor 1902, which may include a processor, one or more processor cores, or a micro-controller.
Computer system 1900 may include memory, cache, registers, and/or storage, illustrated here as memory 1904.
Memory 1904 may include one or more non-transitory computer readable mediums encoded with a computer program, including instructions 1906.
Memory 1904 may include data 1908 to be used by processor 1902 in executing instructions 1906, and/or generated by processor 1902 during execution of instructions 1906. Data 1908 includes protocol information 1911, including light packet protocol definitions, frame periods, and so on, and recorded images 1913a from an imager, such as a camera, which may be received through the I/O interface. Data 1908 may also include data 1913b, including: position table information, such as indexing IDs, real-world positions, and neighbor lists; and cross-reference statistics, including histograms and statistical variables.
Instructions 1906 include instructions 1910a for light receiver (RX) processing of recorded images as described in one of the examples above, including photogrammetric position determination. Instructions 1910a include instructions for implementing a detector 1914, a receiver control/protocol processor 1916, and an exposure controller 1924, as described in one or more examples above. Detector instructions 1914 further include instructions for implementing a detector/demodulator 1922, such as a FSOOK or UFSOOK detector/demodulator, and a beam position determiner 1926, as described in one or more examples above. Instructions for implementing controller/processor 1916 include photogrammetric position determiner instructions 1916a to determine receiver positions in accordance with photogrammetric equations, light ID error detector and corrector instructions 1916b to detect and correct light ID errors, and light interface instructions 1916c to request and receive light information from a light database, as described in one or more examples above.
Instructions 1906 may also include instructions 1910b for a light transmitter operating in accordance with one or more multiphase sampling embodiments described above. Instructions 1910b include instructions 1917 for controlling the transmitter, and 1918 for implementing a modulator, such as a FSOOK modulator, as described in one or more examples above.
The instructions described above and depicted in
Wireless Communication Receiver System
System 2002 may be implemented as described in one or more examples herein, including a light receiver. System 2000 may include a processor 2004.
System 2000 may include a communication system, including a transceiver, 2006 to interface between system 2002, processor system 2004, and a communication network over a channel 2008. Communication system 2006 may include a wired and/or wireless communication system. System 2002, such as a light receiver, may retrieve map light information from a remote light map database (not shown in
System 2000 or portions thereof may be implemented within one or more integrated circuit dies, and may be implemented as a system-on-a-chip (SoC).
System 2000 may include a user interface system 2010 to interface between a user and processor 2004.
User interface system 2010 may include a monitor or display 2032 to display information from processor 2004.
User interface system 2010 may include a human interface device (HID) 2034 to provide user input to processor 2004. HID 2034 may include, for example and without limitation, one or more of a keyboard, a cursor device, a touch-sensitive device, and/or a motion sensor and/or an imager. HID 2034 may include a physical device and/or a virtual device, such as a monitor-displayed or virtual keyboard.
User interface system 2010 may include an audio system 2036 to receive and/or output audible sound.
System 2000 may further include a transmitter system to transmit signals from system 2000.
System 2000 may correspond to, for example, a computer system, a personal communication device, and/or a television set-top box.
System 2000 may include a housing, and one or more of communication system 2002, digital processor system 2004, user interface system 2010, or portions thereof may be positioned within the housing. The housing may include, without limitation, a rack-mountable housing, a desk-top housing, a lap-top housing, a notebook housing, a net-book housing, a tablet housing, a set-top box housing, a portable housing, and/or other conventional electronic housing and/or future-developed housing. For example, communication system 2002 may be implemented to receive a digital television broadcast signal, and system 2000 may include a set-top box housing or a portable housing, such as a mobile telephone housing. System 2000 may be implemented in a camera-equipped smartphone, or may be implemented as part of a wireless router.
General Treatment of Photogrammetric Positioning
The principle of photogrammetric positioning is observing multiple visual features, assumed to be lights, such as LEDs in an LED constellation or array, with known positions, such that the observer can ascertain the observer's own position relative to the LED constellation.
With reference to
2-D sensor coordinates
3-D camera coordinates
3-D “world” or “real-world” coordinates.
The basic process is as follows:
map the LED images into sensor coordinates described by vector <u,v>
map the sensor coordinate points into camera coordinates described by vector tcw
translate the origin of the camera coordinate system to real world coordinates described by vector twc.
The mapping of the light features onto the image sensor plane is based upon the collinearity condition given below.
We introduce the notation of
to rewrite equations 1 and 2 as
The si values are related to the rotational inclination matrix, which is obtained as a decomposition of the general rotational matrix into its azimuth and inclination components
Rwc=Rwca·Rwci. Eq. 5
Each element of Rwci is directly determined by reading the inclination sensor which is assumed to be embedded within the image sensor. Because the viewing transformation from the point xw (world coordinates) to point xc (camera coordinates) is given by xc=(Rwci)−1·(Rwca)·xw+tcw, further equation manipulation will require that we utilize the inverses of the compound rotational matrix.
The components of the inverse azimuth rotational matrix, which need to be determined as part of the positioning calculations, are given by
The si values are given by the relationship
where the [rmni] values are determined by the inverse of the inclination matrix as
Equations 3 and 4 can be manipulated into a system of linear equations as
Equations 9 and 10 can be put into matrix form as
For the ith light feature we define
such that Ai·p=bi.
When multiple features are detected, a system of linear simultaneous equations describing p can be obtained that performs a least mean square estimate as
where i>=3 (i.e., at least 3 features), with at least 3 of the features being non-collinear, and the superscript + notation indicates the pseudo-inverse operation.
The camera origin is then translated and rotated such that its location is in terms of world coordinates, which yields the desired solution of
twc=−Rwc·tcw. Eq. 16
The camera azimuth orientation angle is derived from Eq. 13 as
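The least-mean-square step around Eq. 15 and the translation of Eq. 16 may be sketched as follows. The entries of the per-feature matrices Ai and vectors bi are given by the equations above and are assumed here to be supplied by code not shown; the parameter-vector length used in the synthetic check is arbitrary.

```python
# Minimal sketch of the stacked least-mean-square estimate (Eq. 15 style) and
# the camera-origin translation of Eq. 16. Per-feature A_i and b_i are assumed
# to be built elsewhere from the collinearity equations above.
import numpy as np

def solve_position(A_blocks, b_blocks):
    """Stack per-feature A_i (each 2 x n) and b_i (each length 2), then return
    p = pinv(A) @ b. Requires at least 3 features, not all collinear."""
    A = np.vstack(A_blocks)
    b = np.concatenate(b_blocks)
    return np.linalg.pinv(A) @ b

def camera_origin_in_world(R_wc, t_cw):
    """Eq. 16: express the camera origin in world coordinates, t_wc = -R_wc * t_cw."""
    return -np.asarray(R_wc) @ np.asarray(t_cw)

# Synthetic check (illustrative only; parameter length 6 is arbitrary) that the
# stacked pseudo-inverse recovers a known parameter vector.
rng = np.random.default_rng(0)
p_true = rng.standard_normal(6)
A_blocks = [rng.standard_normal((2, 6)) for _ in range(4)]
b_blocks = [A @ p_true for A in A_blocks]
print(np.allclose(solve_position(A_blocks, b_blocks), p_true))  # True
```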
Methods and systems disclosed herein may be implemented in hardware, software, firmware, and combinations thereof, including discrete and integrated circuit logic, application specific integrated circuit (ASIC) logic, and microcontrollers, and may be implemented as part of a domain-specific integrated circuit package, and/or a combination of integrated circuit packages. Software may include a computer readable medium encoded with a computer program including instructions to cause a processor to perform one or more functions in response thereto. The computer readable medium may include one or more non-transitory mediums. The processor may include a general purpose instruction processor, a controller, a microcontroller, and/or other instruction-based processor.
Methods and systems are disclosed herein with the aid of functional building blocks illustrating functions, features, and relationships thereof. At least some of the boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed.
Various computer program, method, apparatus, and system embodiments are described herein.
A CPP embodiment includes a non-transitory computer readable medium encoded with a computer program, including instructions to cause a processor to:
access a recording of images of spatially-separated light beams originating from a neighborhood of lights, each light beam modulated to indicate an identifier (ID) that identifies, and indexes a real-world position of, its originating light;
determine positions of the recorded light beams in the images;
demodulate the ID from each recorded light beam;
retrieve a set of neighbor IDs for each demodulated ID;
cross-reference the demodulated IDs against the sets of neighbor IDs to reveal errors in the demodulated IDs;
correct the errors to produce correct IDs each indexing a real-world position that is correctly matched to one of the determined light beam positions; and
photogrammetrically determine a position of the receiver based on the correctly matched real-world and the determined light beam positions.
The cross-reference may reveal:
valid demodulated IDs and any missing IDs which together comprise the correct IDs; and
invalid demodulated IDs.
The cross-reference may also reveal:
statistical consistencies between the demodulated IDs and the sets of the neighbor IDs that indicate the valid IDs; and
statistical inconsistencies that indicate the missing IDs and invalid demodulated IDs.
The instructions to cross-reference may include instructions to cause the processor to:
generate a histogram having
a first axis to represent a union of the demodulated IDs and the sets of neighbor IDs, and
a second axis to represent how many times each ID on the first axis occurs in the sets of neighbor IDs,
wherein first peaks and second peaks in the number of times each ID occurs indicate missing IDs and valid IDs, respectively.
The instructions to correct may include instructions to cause the processor to:
retrieve the missing IDs; and
assign each retrieved missing ID to one of the determined light beam positions, from which one of the invalid IDs was demodulated, that is a best fit for the retrieved missing ID.
The instructions to assign may include instructions to cause the processor to:
calculate a set of Euclidean distances for each retrieved missing ID relative to all of the valid IDs based on real-world positions corresponding to the retrieved missing ID and the valid IDs; and
assign each retrieved missing ID to one of the determined light beam positions, from which one of the invalid IDs was demodulated, that minimizes the set of Euclidean distances to give the best fit.
The instructions to cause the processor to retrieve may include instructions to cause the processor to retrieve a real-world position, in addition to the set of neighbor IDs, for each demodulated ID.
An apparatus embodiment comprises:
a light imager to record images of spatially-separated light beams originating from a neighborhood of lights, each light beam modulated to indicate an identifier (ID) that identifies, and indexes a real-world position of, its originating light;
and
processing modules to:
determine positions of the recorded light beams in the images;
demodulate the ID from each recorded light beam;
retrieve a set of neighbor IDs for each demodulated ID;
cross-reference the demodulated IDs against the sets of neighbor IDs to reveal errors in the demodulated IDs;
correct the errors to produce correct IDs each indexing a real-world position that is correctly matched to one of the determined light beam positions; and
photogrammetrically determine a position of the receiver based on the correctly matched real-world and the determined light beam positions.
The cross-reference may reveal:
valid demodulated IDs and any missing IDs which together comprise the correct IDs; and
invalid demodulated IDs.
The cross-reference may further reveal:
statistical consistencies between the demodulated IDs and the sets of the neighbor IDs that indicate the valid IDs; and
statistical inconsistencies that indicate the missing IDs and invalid demodulated IDs.
The processing modules to cross-reference may be configured to generate a histogram having
a first axis to represent a union of the demodulated IDs and the sets of neighbor IDs, and
a second axis to represent how many times each ID on the first axis occurs in the sets of neighbor IDs,
wherein first peaks and second peaks in the number of times each ID occurs indicate missing IDs and valid IDs, respectively.
The processing modules to correct may be configured to:
retrieve the missing IDs; and
assign each retrieved missing ID to one of the determined light beam positions, from which one of the invalid IDs was demodulated, that is a best fit for the retrieved missing ID.
The processing modules to assign may be configured to:
calculate a set of Euclidean distances for each retrieved missing ID relative to all of the valid IDs based on real-world positions corresponding to the retrieved missing ID and the valid IDs; and
assign each retrieved missing ID to one of the determined light beam positions, from which one of the invalid IDs was demodulated, that minimizes the set of Euclidean distances to give the best fit.
The processing modules to retrieve may further include processing modules to retrieve a real-world position, in addition to the set of neighbor IDs, for each demodulated ID.
The apparatus may further comprise:
a communication system to communicate with a network;
a processor to interface between the communication system and a user interface system; and
a housing,
wherein the processor, the communication system, and the light imager are positioned within the housing.
The communication system may include a wireless communication system, and the housing may include a mobile hand-held housing to house the communication system, the processor, the user interface system, and a battery.
A method embodiment comprises:
in a receiver, recording images of spatially-separated light beams originating from a neighborhood of lights, each light beam modulated to indicate an identifier (ID) that identifies, and indexes a real-world position of, its originating light;
determining positions of the recorded light beams in the images;
demodulating the ID from each recorded light beam;
retrieving a set of neighbor IDs for each demodulated ID;
cross-referencing the demodulated IDs against the sets of neighbor IDs to reveal errors in the demodulated IDs;
correcting the errors to produce correct IDs each indexing a real-world position that is correctly matched to one of the determined light beam positions; and
photogrammetrically determining a position of the receiver based on the correctly matched real-world and determined light beam positions.
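The following Python outline is provided only to show the ordering of the recited steps; every helper it calls (find_beam_positions, demodulate_id, correct_ids, photogrammetric_solve) is a hypothetical placeholder, and light_database stands in for whatever source supplies neighbor IDs and real-world positions.

def locate_receiver(images, light_database):
    # Determine positions of the recorded light beams in the images.
    beam_positions = find_beam_positions(images)
    # Demodulate the ID carried by each recorded light beam.
    demodulated = [demodulate_id(images, pos) for pos in beam_positions]
    # Retrieve a set of neighbor IDs for each demodulated ID.
    neighbor_sets = [light_database.neighbors(i) for i in demodulated]
    # Cross-reference to reveal valid, missing, and invalid IDs.
    valid, missing, invalid = cross_reference(demodulated, neighbor_sets)
    # Correct errors, e.g., via the Euclidean best-fit assignment sketched earlier.
    corrected = correct_ids(demodulated, beam_positions,
                            valid, missing, invalid, light_database)
    # Match each corrected ID's real-world position to its beam position and solve.
    real_world = [light_database.position(i) for i in corrected]
    return photogrammetric_solve(real_world, beam_positions)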
The cross-referencing may reveal:
valid demodulated IDs and any missing IDs which together comprise the correct IDs; and
invalid demodulated IDs.
The cross-referencing may further reveal:
statistical consistencies between the demodulated IDs and the sets of neighbor IDs that indicate the valid IDs; and
statistical inconsistencies that indicate the missing IDs and invalid demodulated IDs.
The cross-referencing may include:
generating a histogram having
a first axis to represent a union of the demodulated IDs and the sets of neighbor IDs, and
a second axis to represent how many times each ID on the first axis occurs in the sets of neighbor IDs,
wherein first peaks and second peaks in the number of times each ID occurs indicate missing IDs and valid IDs, respectively.
The correcting may include:
retrieving the missing IDs; and
assigning each retrieved missing ID to one of the determined light beam positions, from which one of the invalid IDs was demodulated, that is a best fit for the retrieved missing ID.
The assigning may include:
calculating a set of Euclidean distances for each retrieved missing ID relative to all of the valid IDs based on real-world positions corresponding to the retrieved missing ID and the valid IDs; and
assigning each retrieved missing ID to one of the determined image positions, from which one of the invalid IDs was demodulated, that minimizes the set of Euclidean distances to give the best fit.
The retrieving may include retrieving a real-world position, in addition to the set of neighbor IDs, for each demodulated ID.
While various embodiments are disclosed herein, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail may be made therein without departing from the spirit and scope of the methods and systems disclosed herein. Thus, the breadth and scope of the claims should not be limited by any of the examples disclosed herein.