Embodiments described herein generally relate to processing techniques of data from light communication sources, and in particular, to the use of authentication and data interpretation techniques for data obtained from visible light via optical camera communication sources.
Visible light communications are embodied in a variety of emerging wireless communication techniques, including techniques that utilize light sources such as light-emitting diode (LED) signage and LED lamps to broadcast messages. A variety of applications have been proposed in the area of visible light communication, including specialized deployments of wireless data networks that serve as a high-speed last-mile link for a network connection. In many uses of visible light communications, the brightness of the light source is modulated faster than the human eye can perceive, allowing a light source to transmit messages without a perceivable flicker.
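As a simplified illustration of such flicker-free modulation, the following sketch encodes bits as two "on" brightness levels and decodes them by thresholding the mean of each bit period. The sample rate, brightness levels, and threshold here are illustrative assumptions, not parameters of any described embodiment or standard:

```python
def encode_ook(bits, samples_per_bit=4, hi=255, lo=128):
    """Map each bit to a run of brightness samples. Both levels are
    'on', so average brightness stays high and, at a sufficient rate,
    no flicker is perceivable to the human eye."""
    samples = []
    for bit in bits:
        level = hi if bit else lo
        samples.extend([level] * samples_per_bit)
    return samples

def decode_ook(samples, samples_per_bit=4, threshold=192):
    """Recover bits by thresholding the mean brightness of each
    bit period captured by the image sensor."""
    bits = []
    for i in range(0, len(samples), samples_per_bit):
        chunk = samples[i:i + samples_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    return bits
```

A round trip (`decode_ook(encode_ook(bits))`) recovers the original bit sequence under these idealized, noise-free conditions.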
One implementation of visible light communications, optical camera communications, also known as “CamCom”, uses an image sensor within a camera for receiving and processing visible (human- or camera-visible) light data. One proposal for the standardization of optical camera communications is currently being developed by the Short-Range Optical Wireless Communications Task Group for a revision of the IEEE 802.15.7-2011 specification. For example, this task group is developing enhanced standards for the use of optical camera communications to enable scalable data rate, positioning/localization, and message broadcasting, using optical devices such as a flash, display, and image sensor as a transmitting or receiving device.
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings in which:
In the following description, methods, configurations, and related apparatuses are disclosed for the processing and authentication of image data detected from camera image object sources, for image data that indicates modulated light communicated using visible light communications. In particular, the techniques discussed herein are relevant to the application of visible light communication commonly referred to as optical camera communications, which utilizes light emitting objects such as LED signage and LED lights to output (transmit) data to be captured (received) via an image sensor in a camera. Various device-based and system-based techniques for analyzing such image data that includes modulated light data and authenticating the source of modulated light data from the image data are disclosed herein.
Authentication, as used in the contexts discussed herein, refers to providing or determining a proof of identity before a data source associates with (e.g., provides data to) a data sink. As a similar example of authentication, in IEEE 802.11 (Wi-Fi) wireless communication networks, authentication frame exchanges are used to ensure that a station has the correct authentication information (e.g., a pre-shared WEP/WPA encryption key) before being able to establish a connection with the wireless network. In this setting, the assumption is that if the encryption key is known, then the station is authorized to associate with the network. In the field of optical camera communications, there is a similar technical challenge to ensure that a received data stream is provided from an authenticated source before allowing that data to initiate further actions on a receiving device. Because many types of visible light communications are openly broadcasted to any listener in observable range of the light, the ability to obtain data only from desired or trusted locations becomes a complex yet important issue.
In the examples of optical camera communications discussed herein, authentication is performed at a lower layer of processing, by visually identifying a data source in image data to confirm that the data sink desires to receive data from the visually observed data source. The identification of a desired data source may be used to locate, select, access, and process modulated light data from a desired light emitting object, while disregarding modulated light data detected from other light emitting objects. Thus, light sources that are not authenticated may be ignored and disregarded, preventing the use of unknown, unwanted, unverified, unauthorized, or rogue data.
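The selection behavior described above may be sketched as a simple partition of detected light sources into those to be decoded and those to be disregarded. The source records and the authentication predicate shown here are hypothetical placeholders for whatever visual identification result an implementation produces:

```python
def partition_sources(sources, is_authenticated):
    """Split detected modulated-light sources into those passing
    visual authentication (to be decoded) and those to be ignored,
    preventing use of unknown, unwanted, or rogue data."""
    accepted, ignored = [], []
    for source in sources:
        (accepted if is_authenticated(source) else ignored).append(source)
    return accepted, ignored
```

For example, a predicate checking an assumed `trusted` flag would keep a recognized sign while discarding an unrecognized lamp detected in the same scene.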
As discussed herein, optical camera communication authentication techniques may include the identification and selection of a modulated light data source, performed using either human input or automated object recognition upon image data of the light emitting object. The use of image data for authentication enables proper verification of modulated light data from the desired source, because the image data obtained by a camera sensor captures both the light used to visually recognize the object and the light used to transmit the modulated data. Accordingly, the optical camera communication authentication techniques discussed herein provide significant operational and security benefits over existing approaches that consume and process all available modulated light data sources without authentication.
As shown, in
The motor vehicle 110 includes a number of processing components 130 to obtain, process, and evaluate a scene in the field of view observed in front of the motor vehicle. Such processing capabilities operate to capture image data for real-world objects (such as still RGB images of the LED signage) and the modulated light data provided in the visible light communication 120 (such as the modulated light data provided from operation of the LED signage). For example, the processing components 130 may include: a camera sensor 132 (e.g., CMOS/CCD sensor) to capture image data of a scene; camera data processing components 134 (e.g., implemented with programmed circuitry) to process, store, and extract data from the captured image data; and visible light communication processing components 136 (e.g., implemented with programmed circuitry) to detect and interpret modulated light data emitted from an object in the scene.
The processing components 130 may also include: authentication data processing components 138 (e.g., implemented with programmed circuitry) to implement user-interactive or automated authentication of light modulation data from a light emitting source (an object); user interface display processing components 140 (e.g., implemented with programmed circuitry) to receive user-interactive controls, including the generation of an augmented display of the image data; and an interactive display unit 142 (e.g., a touchscreen display hardware) to output a display of the image data and receive user input and commands for the display of the image data.
The processing components 130 or another component integrated with the motor vehicle 110 may also be used to access an external network source 150 (e.g., via the Internet), to obtain supplemental data 160 for use in the authentication or processing of data with the visible light communication 120. For example, the external network source 150 may provide a network-connected data processing server 152 (e.g., a web server) and data-hosting system 154 (e.g., a database) to serve the supplemental data in response to a request or a query from the processing components 130. For example, the visible light communication 120 may include data indicating a uniform resource locator (URL) of the external network source 150, with the data processing server 152 and data-hosting system 154 adapted to serve the supplemental data 160 in response to the request or query.
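Extracting such a URL from a decoded light payload may be sketched as follows. The "URL:" prefix is an assumed framing convention introduced purely for illustration; the described embodiments do not define a payload format:

```python
def extract_supplemental_url(payload):
    """Return a URL carried in a decoded modulated-light payload,
    or None if the payload carries no URL. The 'URL:' prefix is an
    illustrative convention, not a defined format."""
    text = payload.decode("utf-8", errors="replace")
    if text.startswith("URL:"):
        return text[len("URL:"):].strip()
    return None
```

A processing component could then issue an ordinary HTTP request to the extracted URL to retrieve the supplemental data.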
Thus, in the stylized representation 200A of
The information on identified light sources is used in the authentication process, to determine which of the identified light sources provide a data stream available to be consumed by an associated processing system. A manual or automated authentication process then may be performed to select data from an available (identified) light source. For example, as shown in
In an example, the information being sent by the modulated light data may include encoded information in the form of graphical, textual, or other software-interpretable content. As discussed above for
In a manual authentication operation, a human user may provide an indication, such as through an input into a graphical user interface, to indicate which data source the user wishes to authenticate with and download data from. For example, the user may provide touch input 220 at a representation of the light emitting source (the display of the ice cream shop sign 202) to trigger a user interface command for authentication, as shown in the stylized representation 200B. In response to the touch input 220, the modulated light data from the ice cream shop sign 202 may be parsed and interpreted to obtain content. In this scenario, a set of content to populate an available contextual menu (a food menu) of the ice cream shop establishment is received from optical camera communications, and is overlaid on the image data (as a contextual message 212) next to the representation of the object that transmitted the data. Thus, the content obtained from a light emitting source may be displayed and overlaid to a user in the form of augmented reality in the stylized representation 200B; it will be understood that the content obtained from the light emitting source may be output with other types of devices and output formats in response to authentication.
In an automatic authentication operation, the authentication may be automatically conducted to access and parse data from a particular data source. Such automatic authentication may occur through an image recognition algorithm that selects the data source for the user, on the basis of the shape, classification, characteristics, or identification of an object or type of object (such as a particular sign, a type of business associated with the sign, etc.). For example, in a controlled mode, image recognition algorithms may be used to allow data to be downloaded and processed only from objects that are previously known, such as a pedestrian control light or a traffic signal. As another example, an automatic mode to authenticate with and process data from all identified sources (referred to as a “promiscuous mode”) may be used to obtain a larger set of data from available sources. However, the selection of data from all available sources may be further limited based on the location of the objects in the field of view (such as is further described below with reference to
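The controlled and promiscuous modes described above may be sketched as a mode-dependent filter over recognized sources. The object classes and record fields are illustrative assumptions standing in for the output of an image recognition algorithm:

```python
# Illustrative set of previously known object classes; a real system
# would derive this from its image recognition training or policy.
KNOWN_OBJECT_CLASSES = {"traffic_signal", "pedestrian_control_light"}

def authenticate_automatically(sources, mode="controlled"):
    """Controlled mode admits only sources whose recognized object
    class is previously known; promiscuous mode admits every
    identified source."""
    if mode == "promiscuous":
        return list(sources)
    return [s for s in sources
            if s.get("object_class") in KNOWN_OBJECT_CLASSES]
```

In controlled mode, a recognized traffic signal would be admitted while an unrecognized billboard would be disregarded; in promiscuous mode, both would be processed.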
In certain examples, the type, format, or characteristics of the content that is overlaid in a graphical display may be adapted based on the perspective of the field of view captured by an image. This change to the graphical display may occur when the size and observable characteristics of respective light sources varies, especially when the image of the scene is captured from various distances. In an example, the generation of the overlaid content for graphical display may be adapted to handle scenarios where a light emitting object such as signage is in the field of view but is mixed with other light sources (e.g., when observed at a long distance); when a light emitting object such as signage is visible and separated from other objects in the field of view (e.g., as depicted in
For example, as a motor vehicle travels on a roadway and is a large distance from a light source, an image of a scene may depict multiple light sources to be overlapping and concentrated in an area of the image. (The modulated light data may be detected and processed from these different sources, however.) At a closer location, the respective lights are distinguishable and separated from one another in the field of view. At an even closer location, when an observer is very close or has partially passed the light emitting object, the object may become distorted or not be fully visible. In cases where the light source is obscured, the graphical display may provide alternative graphics, a listing of detected light sources, contextual menus, and other forms of augmented views to allow obscured light sources and objects to be identified and distinguished.
The stylized representation 300 depicts the selection of desired sources based upon the elevation angle of a camera field of view, as shown in respective areas of view 310, 320, 330. In the camera field of view, a first area of view 310 is adapted to identify an elevation that is too high, and a second area of view 330 is adapted to identify an elevation that is too low; whereas a third area of view 320 is adapted to identify an elevation of objects most likely to provide modulated light data. For example, the third area of view 320 may be the area that is most likely to provide modulated light data that the vehicle is interested in (such as brake system data or other vehicle-to-vehicle communication). In other examples, other elevations or areas of view may also provide modulated light data. In the scenario depicted by the stylized representation 300, lights from other motor vehicles in the field of view in front of the camera (e.g., lights 322A, 322B, 322C, 322D, 322E, 322F, 322G, 322H) convey modulated light data using the respective vehicles' rear-facing lights (tail lights), with the modulated light data indicating data such as motor vehicle speeds, system events, roadway conditions, and the like.
In an example, authentication of respective light communication sources is based upon angle of arrival. In this fashion, the camera may automatically authenticate with lights that are within ±5 degrees of elevation, relative to the camera position. For example, in a field of view captured while driving a motor vehicle, this narrowed area eliminates many overhead street lights and reflections from the field of view. Thus, in the area of view 310, the overhead lights 312A, 312B, 312C, 312D, 312E are disregarded; likewise, in the area of view 330, the light reflections 332A, 332B, 332C, 332D, 332E are disregarded.
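The elevation-band filtering described above may be sketched as follows, under a simple pinhole-camera assumption with a linear row-to-angle mapping; the 40-degree vertical field of view is an assumed value, not one stated in the described embodiments:

```python
def elevation_angle_deg(pixel_row, image_height, vertical_fov_deg):
    """Approximate the elevation of an image row relative to the
    optical axis, assuming a pinhole camera with a linear
    row-to-angle mapping (an illustrative simplification)."""
    center = image_height / 2.0
    fraction = (center - pixel_row) / center  # +1 at top row, -1 at bottom
    return fraction * (vertical_fov_deg / 2.0)

def auto_authenticate_by_elevation(pixel_row, image_height,
                                   vertical_fov_deg=40.0, band_deg=5.0):
    """Admit a light source only if it lies within +/- band_deg of
    the camera axis, disregarding overhead street lights (too high)
    and roadway reflections (too low)."""
    angle = elevation_angle_deg(pixel_row, image_height, vertical_fov_deg)
    return abs(angle) <= band_deg
```

With a 1000-row image and a 40-degree vertical field of view, a source near the center row falls within the ±5 degree band, while sources at the top or bottom rows (±20 degrees) are disregarded.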
In still further examples, the field of view, the observed elevation angle, and the area used for automatic authentication may be modified based on the distance, clarity, and observation characteristics of respective light sources. For example, if a light source is obscured or not fully visible because the observer is too far away, too close, or past an observation angle for light emitting objects, the field of view may be modified to include or exclude additional areas of observation.
Although the preceding examples of
As shown, the sequence diagram includes the transmission of a data message in modulated light (operation 411), from the light display 402 to the camera 404. The camera 404 operates to receive, detect, and store the modulated light data (operation 412), such as through the buffering of image data. The camera 404 further operates to provide the image data of the captured scene (operation 413) to the processing system 406, and to provide an indication of the modulated light (operation 414) to the processing system 406.
The processing system 406 operates to generate an output of the image data to include an indication of the light display 402 as an overlay of the image data (e.g., an augmented reality display) (operation 415). From this overlaid image data, a user interface of the image data is generated for output with the user interface device 408 (operation 416). This user interface includes an indication that identifies the location of respective data sources of modulated light to a human user, such as may be highlighted or outlined directly on the user interface screen. The user interface device 408 then receives a user input selection in the user interface to authenticate a light display located at the user input location (operation 417), which causes the processing system 406 to process data corresponding to the user input location (operation 418) (e.g., the modulated light obtained from the light display 402).
In some examples, the data indicated from the user input location (e.g., the modulated light obtained from the light display 402) includes an indication of supplemental data at another source, such as the third party data source 410. In response, the processing system 406 may transmit a request to obtain supplemental data from the third party data source 410 (operation 419), and receive the supplemental data from the third party data source 410 in response to this request (operation 420).
Based on the processed modulated light data obtained from the light display 402, and any supplemental data obtained from the third party data source 410, the processing system operates to generate an updated user interface of the image data for output on the user interface device 408 (operation 421). As discussed above, this may include an augmented reality of the processed content as an overlay over image data; other types of data outputs including simulated content, graphical content, multimedia and interactive content, may also be output via the user interface device 408.
The operations of the flowchart 500 include the optional operation to activate the image sensor or other operational components of a camera (operation 510); in other examples, the image sensor is already activated or activated by another system component. The camera system is operated to capture image data of a scene with the camera (operation 520), with this image data including the capture of modulated light data. Modulated light data is detected from the image data (operation 530), and locations (e.g., sources) of the modulated light data are identified in the image data (operation 540).
Respective indications of the locations of the modulated light data are generated (operation 550), and a display of the image data and the indication of the locations of the modulated light data is output (operation 560). The user authentication may be received in the user interface, through a user selection of the location of the modulated light data (operation 570). In response to the user authentication, the modulated light data that is communicated from the selected location may be processed (operation 580) (e.g., parsed and interpreted), such as through re-processing of the image data, or re-capturing modulated light data from the selected location. The processing of the modulated light data may result in the obtaining of additional content, information, or other data provided from the modulated light data at the selected location, and the display of the image data and the indication of the locations of the modulated light data may be updated to reflect this additional content, information, or data (operation 590).
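The user-authentication flow above may be sketched as a minimal orchestration in which each stage is a caller-supplied callable, keeping the sketch independent of any particular camera, display, or input hardware; the operation numbers in the comments map to the flowchart operations described above:

```python
def user_authentication_flow(capture, detect_sources, render,
                             await_selection, process):
    """Skeleton of the user-authenticated processing flow; each
    stage is supplied by the caller as a callable."""
    image = capture()                    # capture scene image data (520)
    sources = detect_sources(image)      # detect and locate modulated light (530-540)
    render(image, sources)               # output indications of source locations (550-560)
    selected = await_selection(sources)  # user selects a source location (570)
    if selected is None:                 # no user authentication received
        return None
    return process(selected)             # process the selected source's data (580)
```

For example, supplying stub callables that return a captured frame, a detected sign, and a selection of that sign yields the decoded content for that single, user-authenticated source.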
The operations of the flowchart 600 include the use of a camera system to capture image data of a scene with the camera (operation 610), with this image data including the capture of modulated light data. In an optional example, a narrowed area of evaluation is determined, based on the elevation angle of the imaged area (operation 620). This narrowed area of evaluation may be used, for example, to disregard areas in the image data that are unlikely to include (or cannot include) relevant light emitting sources.
Within the area of evaluation, modulated light data is detected in the image data (operation 630), and locations of the modulated light data in the image data are detected (operation 640). The processing system then operates to perform an automatic authentication of one or more locations of modulated light data (operation 650), such as may be based on an image recognition of a particular object, type of object, or the detection of a data signal (e.g., signature, command) communicated from a particular object. The modulated light data from the one or more authenticated locations is then processed (operation 660), and information obtained from the modulated light data of the one or more authenticated locations is communicated to another control subsystem (operation 670). This may include the communication of relevant data to a vehicle control subsystem, or the generation of information for output on a display system.
The electronic processing system 710 is depicted as including: circuitry to implement a user interface 712 (e.g., to output a display with a user interface hardware device); a communication bus 713 to communicate data among the optical image capture system 720 and other components of the electronic processing system 710; data storage 714 to store image data, authentication data, and control instructions for operation of the electronic processing system; a wireless transceiver 715 to wirelessly communicate with an external network or devices; and processing circuitry 716 (e.g., a CPU) and a memory 717 (e.g., volatile or non-volatile memory) used to host and process the image data, authentication data, and control instructions for operation of the electronic processing system. In an example, the authentication data processing component 730 may be provided from specialized hardware operating independent from the processing circuitry 716 and the memory 717; in other examples, the authentication data processing component 730 may be software-configured hardware that is implemented with use of the processing circuitry 716 and the memory 717 (e.g., by instructions executed by the processing circuitry 716 and the memory 717).
In the electronic processing system 710, the user interface 712 may be used to output a command and control interface for selection and receipt of user input for authentication, such as to authenticate a particular data source. The input of user authentication from the user interface 712 may be used to control operations and initiate actions with the authentication data processing component 730. The authentication data processing component 730 is depicted as including image data processing 732 to perform detection and analysis of image data; automated authentication processing 734 to perform an automatic recognition of modulated light data sources and content operations; user authentication processing 736 to generate the user-controlled interfaces and inputs to perform a manual authentication of image sources identified in images; and image recognition processing 738 to perform automatic identification of particular objects, types of objects, light sources and light types, and the like. The authentication data processing component 730 and the electronic processing system may also include other components, not depicted, for implementation of other forms of authentication and user interaction operations, such as input control components (e.g., buttons, touchscreen input, external peripheral devices), and output components (e.g., a touchscreen display screen, video or audio output, etc.).
The optical image capture system 720 is depicted as including: an image sensor 722 to capture image data of a scene (including modulated light data emitted in respective objects in a scene); storage memory 724 to buffer and store the image data of the scene; processing circuitry 726 to perform image processing of image data for a scene and identify modulated light data in the scene; and communication circuitry 728 to communicate the image data to another location. In an example, the optical image capture system 720 is adapted to capture human-visible light; in some examples, the optical image capture system 720 is additionally adapted to capture aspects of infrared and near-infrared light.
The light source system 740 is depicted as including: a data storage 742 to store commands and content for communication via modulated light output; processing circuitry 744 to control the modulated light output; and a light emitter 746 (e.g., a LED or LED array) to generate the modulated light output.
The external data system 750 is depicted as including: data storage 752 to host supplemental content for access by the electronic processing system 710; a processor 754 and memory 756 to execute software instructions to host and serve the supplemental content in response to a request from the electronic processing system 710; and communication circuitry 758 to transmit the supplemental data in response to the request from the electronic processing system 710.
Example electronic processing system 800 includes at least one processor 802 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.), a main memory 804 and a static memory 806, which communicate with each other via an interconnect 808 (e.g., a link, a bus, etc.). The electronic processing system 800 may further include a video display unit 810, an input device 812 (e.g., an alphanumeric keyboard), and a user interface (UI) control device 814 (e.g., a mouse, button controls, etc.). In one embodiment, the video display unit 810, input device 812 and UI control device 814 are incorporated into a touch screen display. The electronic processing system 800 may additionally include a storage device 816 (e.g., a drive unit), a signal generation device 818 (e.g., a speaker), an output controller 832 (e.g., for control of actuators, motors, and the like), a network interface device 820 (which may include or operably communicate with one or more antennas 830, transceivers, or other wireless communications hardware), and one or more sensors 826 (e.g., cameras), such as a global positioning system (GPS) sensor, compass, accelerometer, location sensor, or other sensor.
The storage device 816 includes a machine-readable medium 822 on which is stored one or more sets of data structures and instructions 824 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 824 may also reside, completely or at least partially, within the main memory 804, static memory 806, and/or within the processor 802 during execution thereof by the electronic processing system 800, with the main memory 804, static memory 806, and the processor 802 also constituting machine-readable media.
While the machine-readable medium 822 is illustrated in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 824. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 824 may further be transmitted or received over a communications network 828 using a transmission medium via the network interface device 820 utilizing any one of a number of transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, plain old telephone (POTS) networks, and wireless data networks (e.g., Wi-Fi, 2G/3G, and 4G LTE/LTE-A or WiMAX networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
Embodiments used to facilitate and perform the techniques described herein may be implemented in one or a combination of hardware, firmware, and software. Embodiments may also be implemented as instructions stored on a machine-readable storage device, which may be read and executed by at least one processor to perform the operations described herein. A machine-readable storage device may include any non-transitory mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable storage device may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and other storage devices and media.
It should be understood that the functional units or capabilities described in this specification may have been referred to or labeled as components or modules, in order to more particularly emphasize their implementation independence. Such components may be embodied by any number of software or hardware forms. For example, a component or module may be implemented as a hardware circuit comprising custom very-large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A component or module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. Components or modules may also be implemented in software for execution by various types of processors. An identified component or module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified component or module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the component or module and achieve the stated purpose for the component or module.
Indeed, a component or module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices or processing systems. In particular, some aspects of the described process (such as code rewriting and code analysis) may take place on a different processing system (e.g., in a computer in a data center) than that in which the code is deployed (e.g., in a computer embedded in a sensor or robot). Similarly, operational data may be identified and illustrated herein within components or modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. The components or modules may be passive or active, including agents operable to perform desired functions.
Additional examples of the presently described method, system, and device embodiments include the following, non-limiting configurations. Each of the following non-limiting examples may stand on its own, or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.
Example 1 is a device for performing authentication of optical camera communications from a light emitting object, the device comprising: processing circuitry to: detect, from image data, modulated light data emitted from the light emitting object, wherein the image data depicts the light emitting object, and wherein the image data is captured with an image sensor; identify, from the image data, the light emitting object as a source of the modulated light data; receive an indication to select the light emitting object as an authenticated source of the modulated light data; and perform a command to process the modulated light data from the authenticated source, in response to the indication to select the light emitting object as the authenticated source of the modulated light data.
In Example 2, the subject matter of Example 1 optionally includes wherein the image data indicates multiple sources of available modulated light data, wherein the multiple sources include the authenticated source and another source, and wherein operations to identify the source of the modulated light data are performed with operations to detect the authenticated source as a first source of a first set of available modulated light data and detect the another source as a second source of a second set of available modulated light data.
In Example 3, the subject matter of Example 2 optionally includes wherein operations that perform the command to process the modulated light data include operations to decode the first set of available modulated light data, and to not decode the second set of available modulated light data.
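As a non-limiting illustration of the selective processing recited in Examples 1-3, the following sketch models each detected light emitting object as a source record and decodes only the source selected as authenticated. The on-off-keying decoder, brightness threshold, and source names are illustrative assumptions and are not drawn from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class LightSource:
    """A detected light emitting object in the image data (hypothetical model)."""
    source_id: str
    samples: list          # per-frame brightness samples of the source region
    authenticated: bool = False

def detect_sources(frames):
    # A real receiver would segment bright regions across captured frames;
    # here the frames are assumed to be pre-segmented per-source sample lists.
    return [LightSource(sid, s) for sid, s in frames.items()]

def decode_ook(samples, threshold=128):
    """Decode simple on-off keying: bright frame -> 1, dark frame -> 0."""
    return [1 if s >= threshold else 0 for s in samples]

def process_authenticated(sources):
    """Decode only the sources selected as authenticated; skip all others."""
    return {s.source_id: decode_ook(s.samples) for s in sources if s.authenticated}

# Two sources visible in the scene; only "sign_a" is selected as authenticated.
sources = detect_sources({"sign_a": [200, 40, 210, 35], "lamp_b": [180, 30, 25, 190]})
sources[0].authenticated = True
decoded = process_authenticated(sources)
# "lamp_b" is detected as a second source but is never decoded
```

This reflects the per-source gating of Example 3: both sets of modulated light data are detected, but a decode operation is performed only for the first (authenticated) set.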
In Example 4, the subject matter of Example 3 optionally includes the processing circuitry further to enable user authentication of the authenticated source of the modulated light data, with operations to: generate a graphical user interface display, the graphical user interface display including an overlay on output of the image data that provides an identification of the multiple sources of available modulated light data; and receive the indication to select the authenticated source of the modulated light data from user input received in the graphical user interface display, the user input received upon the overlay of the output of the image data in the graphical user interface display; wherein the operations to identify the light emitting object include a generation of the graphical user interface display to indicate the authenticated source and the another source, the indication of the authenticated source and the another source provided as an overlay of an output of the image data in the graphical user interface display.
In Example 5, the subject matter of Example 4 optionally includes the processing circuitry further to output data selected with the user authentication of the authenticated source of the modulated light data, with operations to: decode and interpret content from the modulated light data obtained from the authenticated source; and update the graphical user interface display to output the decoded and interpreted content from the modulated light data.
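The user authentication of Examples 4-5 may be illustrated, in a non-limiting manner, by a hit test that maps user input received upon the graphical overlay to the detected source whose bounding region contains it. The bounding-box representation and the coordinates below are hypothetical and used only for this sketch.

```python
def hit_test(tap_xy, overlays):
    """Map a user tap on the overlay display to the source whose box contains it.

    overlays: source_id -> (x0, y0, x1, y1) bounding box drawn over the image output.
    Returns the selected source_id, or None if the tap misses every overlay.
    """
    x, y = tap_xy
    for source_id, (x0, y0, x1, y1) in overlays.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return source_id
    return None

# Two identified sources indicated as overlays on the image data output.
overlays = {"sign_a": (10, 10, 60, 40), "lamp_b": (100, 20, 140, 60)}
selected = hit_test((30, 25), overlays)   # user taps inside sign_a's overlay
```

The returned identifier would serve as the indication to select the authenticated source, after which the display is updated with the decoded and interpreted content.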
In Example 6, the subject matter of any one or more of Examples 3-5 optionally include the processing circuitry further to enable automatic authentication of the authenticated source of the modulated light data, with operations to: perform image recognition of the image data; wherein the operations to identify the light emitting object include image recognition of the image data to indicate the authenticated source and the another source; and wherein the indication to select the light emitting object as the authenticated source is provided from an image recognition technique, the image recognition technique automatically performed on an object representing the source of the modulated light data in the image data.
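The automatic authentication of Example 6 may be sketched as an allowlist check over the output of an image recognition step. The trusted class names and the stand-in classifier are illustrative assumptions, not part of the disclosure; a deployed system would apply a trained image recognition model to the object representing each source.

```python
# Illustrative policy: object classes whose light emissions are trusted.
TRUSTED_CLASSES = {"traffic_signal", "transit_sign"}

def classify(region):
    # Stand-in for a trained image-recognition model (hypothetical);
    # returns the recognized class of the object depicted in the region.
    return region["label"]

def auto_authenticate(regions):
    """Select as authenticated the sources whose recognized object class is trusted."""
    return [r["source_id"] for r in regions if classify(r) in TRUSTED_CLASSES]

regions = [
    {"source_id": "sig_1", "label": "traffic_signal"},
    {"source_id": "ad_2", "label": "billboard"},
]
authenticated = auto_authenticate(regions)
```

Here the indication to select the light emitting object is provided by the recognition result itself, with no user input required.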
In Example 7, the subject matter of any one or more of Examples 1-6 optionally include the processing circuitry further to obtain supplemental data indicated in the modulated light data, with operations to: decode and parse information obtained from the modulated light data from the authenticated source, wherein the information obtained from the modulated light data indicates an identifier of the supplemental data from another data source; and obtain the supplemental data from the another data source, using the identifier of the supplemental data.
In Example 8, the subject matter of Example 7 optionally includes wherein the identifier is a uniform resource locator (URL), and wherein operations to obtain the supplemental data from the another data source include access of the URL using a wireless communication network.
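The supplemental data retrieval of Examples 7-8 may be sketched as decoding the demodulated bits into text and extracting a URL identifier; the receiver would then access the URL over a wireless communication network (e.g., with an HTTP client), which is omitted here. The bit packing (8-bit ASCII, most significant bit first) is an illustrative assumption about the payload format.

```python
def bits_to_text(bits):
    """Pack decoded bits (MSB first) into bytes and interpret them as ASCII text."""
    chars = []
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        chars.append(chr(byte))
    return "".join(chars)

def extract_url(payload_text):
    """Return the URL identifier of the supplemental data, if the payload carries one."""
    return payload_text if payload_text.startswith("http") else None

# Round trip: a URL payload encoded into bits, as a modulated light source might emit.
msg = "http://x"
bits = [int(b) for ch in msg for b in format(ord(ch), "08b")]
url = extract_url(bits_to_text(bits))
```

The extracted `url` would then drive the request to the another data source for the supplemental data.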
In Example 9, the subject matter of any one or more of Examples 1-8 optionally include wherein the image data is obtained from a camera positioned in a motor vehicle to capture an image of a scene in a direction away from the motor vehicle, and wherein the modulated light data is used to generate an augmented reality display of information obtained from the modulated light data that overlays the image of the scene.
In Example 10, the subject matter of Example 9 optionally includes the processing circuitry further to identify a limited area of evaluation from the image data for automatically authenticating the authenticated source, with operations to: identify the limited area of evaluation of the image data based on an elevation angle of the scene in the direction away from the motor vehicle, as captured from a position of the camera; wherein operations to detect the modulated light data are performed on the limited area of evaluation, and wherein operations to identify the modulated light data are performed on the limited area of evaluation.
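The limited area of evaluation of Example 10 may be approximated by restricting detection to the image rows whose viewing elevation exceeds a threshold, derived from the camera pitch and vertical field of view. The linear row-to-angle mapping below is a small-angle simplification used only for this sketch; the thresholds and camera parameters are illustrative assumptions.

```python
def evaluation_rows(image_height, vfov_deg, camera_pitch_deg, min_elevation_deg):
    """Rows (0 = top of frame) whose viewing elevation exceeds the minimum.

    The center row looks along the camera pitch; each row above center adds
    vfov_deg / image_height degrees of elevation (a linear approximation
    used only for this sketch).
    """
    deg_per_row = vfov_deg / image_height
    center = image_height / 2
    # Solve pitch + (center - row) * deg_per_row >= min_elevation for row.
    cutoff = center - (min_elevation_deg - camera_pitch_deg) / deg_per_row
    return range(0, max(0, min(image_height, int(cutoff))))

# A forward-facing vehicle camera (0 degree pitch, 60 degree vertical field of view):
rows = evaluation_rows(image_height=480, vfov_deg=60,
                       camera_pitch_deg=0, min_elevation_deg=10)
# Detection and identification of modulated light data are then run only on
# these upper rows, where elevated signage and signals would appear.
```

Limiting both detection and identification to this area reduces the image region that must be searched for modulated light data.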
Example 11 is at least one machine readable storage medium, comprising a plurality of instructions adapted for performing authentication of optical camera communications from a light emitting object, wherein the instructions, responsive to being executed with processor circuitry of a machine, cause the machine to perform operations that: detect, from image data, modulated light data emitted from the light emitting object, wherein the image data depicts the light emitting object, and wherein the image data is captured with an image sensor; identify, from the image data, the light emitting object as a source of the modulated light data; receive an indication to select the light emitting object as an authenticated source of the modulated light data; and perform a command to process the modulated light data from the authenticated source, in response to the indication to select the light emitting object as the authenticated source of the modulated light data.
In Example 12, the subject matter of Example 11 optionally includes wherein the image data indicates multiple sources of available modulated light data, wherein the multiple sources include the authenticated source and another source, and wherein operations to identify the source of the modulated light data are performed with operations to detect the authenticated source as a first source of a first set of available modulated light data and detect the another source as a second source of a second set of available modulated light data.
In Example 13, the subject matter of Example 12 optionally includes wherein operations that perform the command to process the modulated light data include operations to decode the first set of available modulated light data, and to not decode the second set of available modulated light data.
In Example 14, the subject matter of Example 13 optionally includes wherein the instructions further cause the machine to enable user authentication of the authenticated source of the modulated light data, with operations that: generate a graphical user interface display, the graphical user interface display including an overlay on output of the image data that provides an identification of the multiple sources of available modulated light data; and receive the indication to select the authenticated source of the modulated light data from user input received in the graphical user interface display, the user input received upon the overlay of the output of the image data in the graphical user interface display; wherein the operations to identify the light emitting object include a generation of the graphical user interface display to indicate the authenticated source and the another source, the indication of the authenticated source and the another source provided as an overlay of an output of the image data in the graphical user interface display.
In Example 15, the subject matter of Example 14 optionally includes wherein the instructions further cause the machine to output data selected with the user authentication of the authenticated source of the modulated light data, with operations that: decode and interpret content from the modulated light data obtained from the authenticated source; and update the graphical user interface display to output the decoded and interpreted content from the modulated light data.
In Example 16, the subject matter of any one or more of Examples 13-15 optionally include wherein the instructions further cause the machine to enable automatic authentication of the authenticated source of the modulated light data, with operations that: perform image recognition of the image data; wherein the operations to identify the light emitting object include image recognition of the image data to indicate the authenticated source and the another source; and wherein the indication to select the light emitting object as the authenticated source is provided from an image recognition technique, the image recognition technique automatically performed on an object representing the source of the modulated light data in the image data.
In Example 17, the subject matter of any one or more of Examples 11-16 optionally include wherein the instructions further cause the machine to obtain supplemental data indicated in the modulated light data, with operations that: decode and parse information obtained from the modulated light data from the authenticated source, wherein the information obtained from the modulated light data indicates an identifier of the supplemental data from another data source; and obtain the supplemental data from the another data source, using the identifier of the supplemental data.
In Example 18, the subject matter of Example 17 optionally includes wherein the identifier is a uniform resource locator (URL), and wherein operations to obtain the supplemental data from the another data source include access of the URL using a wireless communication network.
In Example 19, the subject matter of any one or more of Examples 11-18 optionally include wherein the image data is obtained from a camera positioned in a motor vehicle to capture an image of a scene in a direction away from the motor vehicle, and wherein the modulated light data is used to generate an augmented reality display of information obtained from the modulated light data that overlays the image of the scene.
In Example 20, the subject matter of Example 19 optionally includes wherein the instructions further cause the machine to identify a limited area of evaluation from the image data for automatically authenticating the authenticated source, with operations that: identify the limited area of evaluation of the image data based on an elevation angle of the scene in the direction away from the motor vehicle, as captured from a position of the camera; wherein operations to detect the modulated light data are performed on the limited area of evaluation, and wherein operations to identify the modulated light data are performed on the limited area of evaluation.
Example 21 is a method of performing authentication of optical camera communications from a light emitting object, the method comprising electronic operations including: detecting, from image data, modulated light data emitted from the light emitting object, wherein the image data depicts the light emitting object, and wherein the image data is captured with an image sensor; identifying, from the image data, the light emitting object as a source of the modulated light data; receiving an indication to select the light emitting object as an authenticated source of the modulated light data; and performing a command to process the modulated light data from the authenticated source, in response to the indication to select the light emitting object as the authenticated source of the modulated light data.
In Example 22, the subject matter of Example 21 optionally includes wherein the image data indicates multiple sources of available modulated light data, wherein the multiple sources include the authenticated source and another source, and wherein identifying the source of the modulated light data is performed by detecting the authenticated source as a first source of a first set of available modulated light data and detecting the another source as a second source of a second set of available modulated light data.
In Example 23, the subject matter of Example 22 optionally includes wherein performing the command to process the modulated light data includes decoding the first set of available modulated light data, and not decoding the second set of available modulated light data.
In Example 24, the subject matter of Example 23 optionally includes the electronic operations further including enabling user authentication of the authenticated source of the modulated light data, by: generating a graphical user interface display, the graphical user interface display including an overlay on output of the image data that provides an identification of the multiple sources of available modulated light data; and receiving the indication to select the authenticated source of the modulated light data from user input received in the graphical user interface display, the user input received upon the overlay of the output of the image data in the graphical user interface display; wherein identifying the light emitting object includes generating the graphical user interface display to indicate the authenticated source and the another source, the indication of the authenticated source and the another source provided as an overlay of an output of the image data in the graphical user interface display.
In Example 25, the subject matter of Example 24 optionally includes the electronic operations further including outputting data selected with the user authentication of the authenticated source of the modulated light data, by: decoding and interpreting content from the modulated light data obtained from the authenticated source; and updating the graphical user interface display to output the decoded and interpreted content from the modulated light data.
In Example 26, the subject matter of any one or more of Examples 23-25 optionally include the electronic operations further including enabling automatic authentication of the authenticated source of the modulated light data, by: performing image recognition of the image data; wherein identifying the light emitting object includes image recognition of the image data to indicate the authenticated source and the another source; and wherein the indication to select the light emitting object as the authenticated source is provided from an image recognition technique, the image recognition technique automatically performed on an object representing the source of the modulated light data in the image data.
In Example 27, the subject matter of any one or more of Examples 21-26 optionally include the electronic operations further including obtaining supplemental data indicated in the modulated light data, by: decoding and parsing information obtained from the modulated light data from the authenticated source, wherein the information obtained from the modulated light data indicates an identifier of the supplemental data from another data source; and obtaining the supplemental data from the another data source, using the identifier of the supplemental data.
In Example 28, the subject matter of Example 27 optionally includes wherein the identifier is a uniform resource locator (URL), and wherein obtaining the supplemental data from the another data source includes access of the URL using a wireless communication network.
In Example 29, the subject matter of any one or more of Examples 21-28 optionally include wherein the image data is obtained from a camera positioned in a motor vehicle to capture an image of a scene in a direction away from the motor vehicle, and wherein the modulated light data is used to generate an augmented reality display of information obtained from the modulated light data that overlays the image of the scene.
In Example 30, the subject matter of Example 29 optionally includes the electronic operations further including identifying a limited area of evaluation from the image data for automatically authenticating the authenticated source, by: identifying the limited area of evaluation of the image data based on an elevation angle of the scene in the direction away from the motor vehicle, as captured from a position of the camera; wherein detecting the modulated light data is performed on the limited area of evaluation, and wherein identifying the modulated light data is performed on the limited area of evaluation.
Example 31 is an apparatus comprising means for performing any of the methods of Examples 21-30.
Example 32 is at least one machine readable medium including instructions, which when executed by a computing system, cause the computing system to perform any of the methods of Examples 21-30.
Example 33 is a system for processing and authenticating modulated light data using optical camera communications, comprising: an optical image capture system; a processing system, comprising: processing circuitry; image data processing circuitry to evaluate image data, the image data including an indication of modulated light data from a light source, wherein the image data is captured with an image sensor; authentication data processing circuitry to: detect, from image data, modulated light data emitted from the light source; identify, from the image data, the light source as a source of the modulated light data; receive an indication to select the light source as an authenticated source of the modulated light data; and perform a command to process the modulated light data from the authenticated source, in response to the indication to select the light source as the authenticated source of the modulated light data.
In Example 34, the subject matter of Example 33 optionally includes a light source system, comprising: data storage to store data to be transmitted with a modulated light output; a light emitter to output the data with the modulated light output; and processing circuitry coupled to the data storage and the light emitter, the processing circuitry to control emission of the data with the modulated light output via the light emitter.
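The light source system of Example 34 may be sketched, in a non-limiting manner, as a transmitter loop in which processing circuitry drives a light emitter with a preamble followed by the stored payload, one bit per modulation slot. The `LightEmitter` stand-in and the preamble pattern are illustrative assumptions, not part of the disclosure.

```python
class LightEmitter:
    """Stand-in for LED driver circuitry; records the emitted light levels."""
    def __init__(self):
        self.levels = []

    def set_level(self, on):
        # A real emitter would switch the LED; here the slot value is recorded.
        self.levels.append(1 if on else 0)

def transmit(emitter, payload_bits, preamble=(1, 1, 1, 0)):
    """Emit a fixed preamble, then the stored payload, one bit per slot."""
    for bit in list(preamble) + list(payload_bits):
        emitter.set_level(bit)

# Data from the data storage is emitted with the modulated light output.
emitter = LightEmitter()
transmit(emitter, [1, 0, 1])
```

On the receiving side, the preamble would let the image sensor pipeline locate the start of each payload in the sampled brightness sequence.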
In Example 35, the subject matter of any one or more of Examples 33-34 optionally include an external data system, accessible via a network connection, the external data system comprising: data storage to store data; communication circuitry to receive a request for supplemental data; and a processor and memory to process the request to serve the supplemental data and transmit the supplemental data in response to the request; wherein the request for supplemental data is provided from the processing system, in response to reading the modulated light data from the light source, wherein the modulated light data indicates details of the request for supplemental data.
Example 36 is an apparatus, comprising: means for capturing image data; means for detecting, from the image data, modulated light data emitted from a light emitting object; means for identifying, from the image data, the light emitting object as a source of the modulated light data; means for receiving an indication to select the light emitting object as an authenticated source of the modulated light data; and means for performing a command to process the modulated light data from the authenticated source, in response to the indication to select the light emitting object as the authenticated source of the modulated light data.
In Example 37, the subject matter of Example 36 optionally includes wherein the image data indicates multiple sources of available modulated light data, wherein the multiple sources include the authenticated source and another source, the apparatus further comprising: means for detecting the authenticated source as a first source of a first set of available modulated light data and detecting the another source as a second source of a second set of available modulated light data.
In Example 38, the subject matter of Example 37 optionally includes means for performing the command to process the modulated light data by decoding the first set of available modulated light data, and not decoding the second set of available modulated light data.
In Example 39, the subject matter of Example 38 optionally includes means for enabling user authentication of the authenticated source of the modulated light data, including: means for generating a graphical user interface display, the graphical user interface display including an overlay on output of the image data that provides an identification of the multiple sources of available modulated light data; and means for receiving the indication to select the authenticated source of the modulated light data from user input received in the graphical user interface display, the user input received upon the overlay of the output of the image data in the graphical user interface display; means for identifying the light emitting object by generating a graphical user interface display to indicate the authenticated source and the another source, the indication of the authenticated source and the another source provided as an overlay of an output of the image data in the graphical user interface display.
In Example 40, the subject matter of Example 39 optionally includes means for outputting data selected with the user authentication of the authenticated source of the modulated light data, including: means for decoding and interpreting content from the modulated light data obtained from the authenticated source; and means for updating the graphical user interface display to output the decoded and interpreted content from the modulated light data.
In Example 41, the subject matter of any one or more of Examples 38-40 optionally include means for enabling automatic authentication of the authenticated source of the modulated light data, including: means for performing image recognition of the image data; means for identifying the light emitting object by image recognition of the image data to indicate the authenticated source and the another source; and means for obtaining the indication to select the light emitting object as the authenticated source from an image recognition technique, the image recognition technique automatically performed on an object representing the source of the modulated light data in the image data.
In Example 42, the subject matter of any one or more of Examples 36-41 optionally include means for obtaining supplemental data indicated in the modulated light data, including: means for decoding and parsing information obtained from the modulated light data from the authenticated source, wherein the information obtained from the modulated light data indicates an identifier of the supplemental data from another data source; and means for obtaining the supplemental data from the another data source, using the identifier of the supplemental data.
In Example 43, the subject matter of Example 42 optionally includes means for obtaining the supplemental data from the another data source by access of a uniform resource locator (URL) using a wireless communication network, wherein the identifier indicates the URL.
In Example 44, the subject matter of any one or more of Examples 36-43 optionally include means for obtaining the image data to capture an image of a scene in a direction away from the apparatus; and means for generating an augmented reality display of information obtained from the modulated light data that overlays the image of the scene, using the modulated light data.
In Example 45, the subject matter of Example 44 optionally includes means for identifying a limited area of evaluation from the image data for automatically authenticating the authenticated source, including: means for identifying the limited area of evaluation of the image data based on an elevation angle of the scene in the direction away from the apparatus, as captured from a position of the apparatus; means for detecting the modulated light data on the limited area of evaluation; and means for identifying the modulated light data on the limited area of evaluation.
In the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with a claim standing on its own as a separate embodiment.