The present disclosure relates to a display method, a display apparatus, and a recording medium, for instance.
In recent years, a home-electric-appliance cooperation function has been introduced for home networks. With this function, in addition to the cooperation of AV home electric appliances over internet protocol (IP) connections using Ethernet® or wireless local area network (LAN), various home electric appliances are connected to the network through a home energy management system (HEMS), which manages power usage to address environmental issues, turns power on and off from outside the house, and so on. However, some home electric appliances lack the computational performance needed for a communication function, and others omit a communication function for cost reasons.
In order to solve such a problem, Patent Literature (PTL) 1 discloses a technique for efficiently establishing communication among a limited number of optical spatial transmission devices, which transmit information into free space using light, by performing communication using plural single-color light sources of illumination light.
[Patent Literature 1] Japanese Unexamined Patent Application Publication No. 2002-290335
However, the conventional method is limited to cases in which the device to which it is applied has three color light sources, such as an illuminator. In addition, a receiver which receives the transmitted information cannot display an image useful to a user.
The non-limiting and exemplary embodiments of the present disclosure provide, for instance, a display method which addresses such problems and allows the display of an image useful to a user.
A display method according to an aspect of the present disclosure is a display method for a display apparatus to display an image, the display method including: (a) obtaining a captured display image and a decode target image by an image sensor capturing an image of a subject; (b) obtaining light identification information by decoding the decode target image; (c) transmitting the light identification information to a server; (d) obtaining, from the server, an augmented reality image and recognition information which are associated with the light identification information; (e) recognizing a region according to the recognition information as a target region from the captured display image; and (f) displaying the captured display image in which the augmented reality image is superimposed on the target region.
These general and specific aspects may be implemented using a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or any combination of systems, methods, integrated circuits, computer programs, or computer-readable recording media. Furthermore, a computer program for executing a method according to an embodiment may be stored in a recording medium of a server, and may be achieved in a manner that the server distributes the program to a terminal, in response to a request from the terminal.
The written description and the drawings clarify further benefits and advantages provided by the disclosed embodiments. Such benefits and advantages may be yielded individually by the various embodiments and features of the written description and the drawings, and not all of the embodiments and features need to be provided in order to obtain one or more benefits and advantages.
The present disclosure achieves a display method which enables display of an image useful to a user.
These and other objects, advantages and features of the disclosure will become apparent from the following description thereof taken in conjunction with the accompanying drawings that illustrate a specific embodiment of the present disclosure.
A display method according to an aspect of the present disclosure is a display method for a display apparatus to display an image, the display method including: (a) obtaining a captured display image and a decode target image by an image sensor capturing an image of a subject; (b) obtaining light identification information by decoding the decode target image; (c) transmitting the light identification information to a server; (d) obtaining, from the server, an augmented reality image and recognition information which are associated with the light identification information; (e) recognizing a region according to the recognition information as a target region from the captured display image; and (f) displaying the captured display image in which the augmented reality image is superimposed on the target region.
Accordingly, an augmented reality image is superimposed on a captured display image, and the combined image is displayed. Thus, an image useful to a user can be displayed. Furthermore, an augmented reality image (namely, an AR image) can be superimposed on an appropriate target region while keeping the processing load light.
Specifically, in typical augmented reality, a captured display image is compared with a huge number of prestored recognition target images to determine whether the captured display image includes any of the recognition target images. If the captured display image is determined to include a recognition target image, an augmented reality image associated with that recognition target image is superimposed on the captured display image. At this time, the augmented reality image is positioned based on the recognition target image. Accordingly, such typical augmented reality requires comparing a captured display image with a huge number of recognition target images and, furthermore, detecting the position of the recognition target image in the captured display image in order to position the augmented reality image. The amount of calculation is therefore large, and the processing load is heavy.
However, according to the display method according to an aspect of the present disclosure, light identification information (namely, a light ID) is obtained by decoding a decode target image obtained by capturing an image of a subject, as illustrated in
With the display method according to an aspect of the present disclosure, recognition information associated with the light identification information is obtained from the server. The recognition information is information for recognizing, in a captured display image, the target region, that is, the region on which an augmented reality image is superimposed. For example, the recognition information may indicate that a white quadrilateral is the target region. In this case, the target region can be recognized easily, and the processing load can be further reduced depending on the content of the recognition information. Moreover, since the server can arbitrarily set the content of the recognition information according to the light identification information, an appropriate balance between processing load and recognition precision can be maintained.
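As a non-limiting illustration of this case, the following Python sketch recognizes a near-white quadrilateral region as the target region by taking the bounding box of near-white pixels; the function name and the threshold value are assumptions made for illustration and are not specified in the disclosure.

    import numpy as np

    def recognize_white_quadrilateral(image: np.ndarray, threshold: int = 230):
        """Find the bounding box of near-white pixels in a captured display
        image (H x W x 3, uint8) and treat it as the target region.
        Returns (x, y, width, height), or None if no such region exists."""
        white = np.all(image >= threshold, axis=2)  # white in all of R, G, B
        ys, xs = np.nonzero(white)
        if xs.size == 0:
            return None
        return (int(xs.min()), int(ys.min()),
                int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))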
The recognition information may include reference information for locating a reference region of the captured display image, and target information indicating a relative position of the target region with respect to the reference region, and in (e), the reference region may be located from the captured display image, based on the reference information, and a region in the relative position indicated by the target information may be recognized as the target region from the captured display image, based on a position of the reference region.
Accordingly, as illustrated in
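The following non-limiting Python sketch shows one way the target information could encode the relative position: as offsets and scale factors with respect to the located reference region. The dictionary keys and the example values are illustrative assumptions, not a format defined by the disclosure.

    def locate_target_region(reference_region, target_info):
        """Compute the target region from the located reference region and the
        relative position carried by the target information."""
        x, y, w, h = reference_region
        return (x + int(target_info["dx"] * w),   # horizontal offset, in units of reference width
                y + int(target_info["dy"] * h),   # vertical offset, in units of reference height
                int(target_info["scale_w"] * w),  # target width relative to the reference
                int(target_info["scale_h"] * h))  # target height relative to the reference

    # Example: a target region immediately above a reference region of the same size.
    print(locate_target_region((100, 200, 80, 40),
                               {"dx": 0.0, "dy": -1.0, "scale_w": 1.0, "scale_h": 1.0}))
    # -> (100, 160, 80, 40)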
The reference information may indicate that the position of the reference region in the captured display image matches a position of a bright line pattern region in the decode target image, the bright line pattern region including a pattern formed by bright lines which appear due to exposure lines included in the image sensor being exposed.
Accordingly, as illustrated in
The reference information may indicate that the reference region in the captured display image is a region in which a display is shown in the captured display image. For example, the reference region may be an outer frame portion having a predetermined color in an image displayed on the display.
Accordingly, if a station sign is implemented as a display, the target region can be recognized based on the region in which the display is shown, as illustrated in
In (f), a first augmented reality image which is the augmented reality image may be displayed for a predetermined display period, while preventing display of a second augmented reality image different from the first augmented reality image.
Accordingly, as illustrated in
In (f), decoding of a newly obtained decode target image may be prohibited during the predetermined display period.
Accordingly, as illustrated in
Moreover, (f) may further include: measuring an acceleration of the display apparatus using an acceleration sensor during the display period; determining whether the measured acceleration is greater than or equal to a threshold; and displaying the second augmented reality image instead of the first augmented reality image by no longer preventing the display of the second augmented reality image, if the measured acceleration is determined to be greater than or equal to the threshold.
Accordingly, as illustrated in
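A minimal sketch of this acceleration check follows, assuming the sensor value has gravity removed (linear acceleration); the threshold value is an illustrative assumption, as the disclosure does not specify one.

    import math

    ACCEL_THRESHOLD = 3.0  # m/s^2 -- illustrative; no value is given in the disclosure

    def should_release_ar_image(ax: float, ay: float, az: float) -> bool:
        """Return True when the measured acceleration is greater than or equal
        to the threshold, i.e. the user has moved the display apparatus, so the
        display of the second augmented reality image is no longer prevented."""
        return math.sqrt(ax * ax + ay * ay + az * az) >= ACCEL_THRESHOLD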
Furthermore, (f) may further include: determining whether a face of a user is approaching the display apparatus, based on image capturing by a face camera included in the display apparatus; and displaying a first augmented reality image which is the augmented reality image while preventing display of a second augmented reality image different from the first augmented reality image, if the face is determined to be approaching. Alternatively, (f) may further include: determining whether a face of a user is approaching the display apparatus, based on an acceleration of the display apparatus measured by an acceleration sensor; and displaying a first augmented reality image which is the augmented reality image while preventing display of a second augmented reality image different from the first augmented reality image, if the face is determined to be approaching.
Accordingly, as illustrated in
In (a), the captured display image and the decode target image may be obtained by the image sensor capturing an image which includes a plurality of displays each showing an image and being the subject, in (e), a region in which, among the plurality of displays, a transmission display that is transmitting the light identification information is shown may be recognized as the target region from the captured display image, and in (f), first subtitles for an image displayed on the transmission display may be superimposed on the target region, as the augmented reality image, and second subtitles obtained by enlarging the first subtitles may further be superimposed on a region larger than the target region of the captured display image.
Accordingly, the first subtitles are superimposed on the image of a transmission display as illustrated in
Moreover, (f) may further include: determining whether information obtained from the server includes sound information; and preferentially outputting sound indicated by the sound information over the first subtitles and the second subtitles, if the sound information is determined to be included.
Accordingly, since sound is preferentially output, a burden on the user to read subtitles is reduced.
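The following non-limiting sketch combines the subtitle placement of (f) with the sound preference just described; the data structure, the enlargement factor of 2, and the placement of the second subtitles are illustrative assumptions.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    Region = Tuple[int, int, int, int]  # (x, y, width, height)

    @dataclass
    class Presentation:
        sound: Optional[bytes]                    # sound to output preferentially, if any
        first_subtitle_region: Optional[Region]   # target region (transmission display)
        second_subtitle_region: Optional[Region]  # larger region for the enlarged subtitles

    def plan_presentation(target: Region, frame_w: int, frame_h: int,
                          sound: Optional[bytes]) -> Presentation:
        """If the information obtained from the server includes sound, output it
        in preference to subtitles; otherwise plan the first subtitles on the
        target region and the enlarged second subtitles on a larger region."""
        if sound is not None:
            return Presentation(sound, None, None)
        x, y, w, h = target
        w2, h2 = min(2 * w, frame_w), min(2 * h, frame_h)  # factor 2 is illustrative
        x2 = max(0, min(x - (w2 - w) // 2, frame_w - w2))
        y2 = max(0, min(y + h, frame_h - h2))
        return Presentation(None, target, (x2, y2, w2, h2))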
The display method may further include obtaining gesture information associated with the light identification information from the server, determining whether the movement of the subject shown by captured display images periodically obtained matches the movement indicated by the gesture information obtained from the server, and if the movement is determined to match, displaying the captured display image on which the augmented reality image is superimposed.
Accordingly, as illustrated in, for example,
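As a non-limiting illustration of the gesture comparison, the sketch below matches the direction of the subject's movement, sampled from periodically obtained captured display images, against direction vectors assumed to be carried by the gesture information; the representation of the gesture information and the tolerance are assumptions.

    import numpy as np

    def movement_matches(positions, gesture_dirs, tolerance=0.2):
        """Compare the subject's movement across periodically obtained captured
        display images with the movement indicated by the gesture information.
        positions:    list of (x, y) subject positions, one per image.
        gesture_dirs: list of unit (dx, dy) direction vectors from the server."""
        deltas = np.diff(np.asarray(positions, dtype=float), axis=0)
        norms = np.linalg.norm(deltas, axis=1, keepdims=True)
        norms[norms == 0] = 1.0                 # ignore frames with no movement
        observed = deltas / norms
        expected = np.asarray(gesture_dirs, dtype=float)
        if len(observed) < len(expected):
            return False
        sims = np.sum(observed[:len(expected)] * expected, axis=1)  # cosine similarity
        return bool(np.all(sims >= 1.0 - tolerance))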
The subject may be a microwave which includes a lighting apparatus, and the lighting apparatus may illuminate the inside of the microwave and transmit the light identification information to the outside of the microwave by changing luminance. When the captured display image and the decode target image are to be obtained, they may be obtained by capturing an image of the microwave transmitting the light identification information. When the target region is to be recognized, a window portion of the microwave shown in the captured display image may be recognized as the target region, and when the captured display image is to be displayed, the captured display image on which the augmented reality image showing a change in the state of the inside of the microwave is superimposed may be displayed.
Accordingly, as illustrated in, for example,
The subject may be an object illuminated by a transmitter which transmits a signal by changing luminance, the augmented reality image may be a video which includes images, and in (f), the video may be displayed, starting with one of, among the images, an image which includes the object and a predetermined number of images which are to be displayed around a time at which the image which includes the object is to be displayed. For example, the predetermined number of images may be ten frames. The object may be a still image, and in (f), the video may be displayed, starting with an image same as the still image.
Accordingly, as illustrated in, for example,
A display method according to an aspect of the present disclosure includes: (a) obtaining a captured image by an image sensor capturing an image of, as a subject, an object illuminated by a transmitter which transmits a signal by changing luminance; (b) decoding the signal from the captured image; and (c) reading a video corresponding to the decoded signal from a memory, superimposing the video on a target region corresponding to the subject in the captured image, and displaying, on a display, the captured image in which the video is superimposed on the target region, wherein in (c), the video is displayed, starting with one of, among images included in the video, an image which includes the object and a predetermined number of images which are to be displayed around a time at which the image which includes the object is to be displayed.
For example, the object may be a still image, and in (c), the video may be displayed, starting with an image that is the same as the still image. The image sensor and the captured image are, for example, the image sensor and the entire captured image in Embodiment 23. The illuminated still image may be a still image displayed on the display panel of an image display apparatus, or may be a poster, a guideboard, or a signboard illuminated with light from the transmitter.
Accordingly, as illustrated in, for example,
The still image may include an outer frame having a predetermined color, and the display method may further include recognizing the target region from the captured image based on the predetermined color, wherein in (c), the video may be resized to the size of the recognized target region, the resized video may be superimposed on the target region in the captured image, and the captured image in which the resized video is superimposed on the target region may be displayed on the display. For example, the outer frame of the predetermined color is a white or black quadrilateral frame surrounding the still image, and is indicated by the recognition information in Embodiment 23. The AR image in Embodiment 23 is resized as a video and superimposed.
Accordingly, a video can be displayed more realistically as if the video were actually present as a subject.
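A minimal sketch of the resize-and-superimpose step follows, using nearest-neighbour resampling so that the example needs only NumPy; a real implementation would likely use a higher-quality resampler.

    import numpy as np

    def superimpose_resized(captured: np.ndarray, video_frame: np.ndarray, target):
        """Resize one video frame to the recognized target region
        (nearest-neighbour) and superimpose it on the captured image in place."""
        x, y, w, h = target
        vh, vw = video_frame.shape[:2]
        rows = np.arange(h) * vh // h   # map target rows to source rows
        cols = np.arange(w) * vw // w   # map target columns to source columns
        captured[y:y + h, x:x + w] = video_frame[rows][:, cols]
        return captured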
These general and specific aspects may be implemented using an apparatus, a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or any combination of apparatuses, systems, methods, integrated circuits, computer programs, or computer-readable recording media.
The following describes the embodiments with reference to the drawings.
Each of the embodiments described below shows a general or specific example. The numerical values, shapes, materials, structural elements, the arrangement and connection of the structural elements, steps, the processing order of the steps, for instance, shown in the following embodiments are mere examples, and therefore do not limit the scope of the present disclosure. Therefore, among the structural elements in the following embodiments, structural elements not recited in any one of the independent claims representing the broadest concepts are described as arbitrary structural elements.
The following describes Embodiment 1.
(Observation of Luminance of Light Emitting Unit)
The following proposes an imaging method in which, when capturing one image, not all imaging elements are exposed simultaneously; rather, the exposure start and end times differ between the imaging elements.
In the case of capturing a blinking light source shown on the entire imaging elements using this imaging method, bright lines (lines of brightness in pixel value) along exposure lines appear in the captured image as illustrated in
By this method, information transmission is performed at a speed higher than the imaging frame rate.
In the case where the number of exposure lines whose exposure times do not overlap each other is 20 in one captured image and the imaging frame rate is 30 fps, it is possible to recognize a luminance change in a period of 1.67 milliseconds. In the case where the number of exposure lines whose exposure times do not overlap each other is 1000, it is possible to recognize a luminance change in a period of 1/30000 second (about 33 microseconds). Note that the exposure time is set to less than 10 milliseconds, for example.
In this situation, when transmitting information based on whether or not each exposure line receives at least a predetermined amount of light, information transmission at a speed of fl bits per second at the maximum can be realized where f is the number of frames per second (frame rate) and l is the number of exposure lines constituting one image.
Note that faster communication is possible in the case of performing time-difference exposure not on a line basis but on a pixel basis.
In such a case, when transmitting information based on whether or not each pixel receives at least a predetermined amount of light, the transmission speed is flm bits per second at the maximum, where m is the number of pixels per exposure line.
If the exposure state of each exposure line caused by the light emission of the light emitting unit is recognizable in a plurality of levels as illustrated in
In the case where the exposure state is recognizable in Elv levels, information can be transmitted at a speed of flElv bits per second at the maximum.
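The arithmetic above can be checked directly. In the following, the values of l, m, and Elv are illustrative, and the formulas are those stated in the text.

    frame_rate = 30.0  # fps

    # 20 exposure lines with non-overlapping exposure times per captured image:
    print(1.0 / (frame_rate * 20))     # 0.00166... s, i.e. about 1.67 milliseconds

    # 1000 such exposure lines:
    print(1.0 / (frame_rate * 1000))   # 3.33e-05 s, i.e. 1/30000 second (about 33 microseconds)

    # Maximum speeds stated in the text, with illustrative values for l, m, and Elv:
    f, l, m, Elv = 30, 1000, 1920, 4
    print(f * l)        # f*l   bits per second (binary reception per exposure line)
    print(f * l * m)    # f*l*m bits per second (binary reception per pixel)
    print(f * l * Elv)  # f*l*Elv bits per second (exposure state recognizable in Elv levels)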
Moreover, a fundamental period of transmission can be recognized by causing the light emitting unit to emit light with a timing slightly different from the timing of exposure of each exposure line.
In this situation, the exposure time is calculated from the brightness of each exposure line, to recognize the light emission state of the light emitting unit.
Note that, in the case of determining the brightness of each exposure line in a binary fashion of whether or not the luminance is greater than or equal to a threshold, it is necessary for the light emitting unit to continue the state of emitting no light for at least the exposure time of each line, to enable the no light emission state to be recognized.
If the number of samples mentioned above is small, or in other words, the sample interval (the time difference tD illustrated in
As described with reference to
Here, the structure in which the exposure times of adjacent exposure lines partially overlap each other does not need to be applied to all exposure lines, and part of the exposure lines may not have the structure of partially overlapping in exposure time. Moreover, the structure in which the predetermined non-exposure blank time (predetermined wait time) is provided from when the exposure of one exposure line ends to when the exposure of the next exposure line starts does not need to be applied to all exposure lines, and part of the exposure lines may have the structure of partially overlapping in exposure time. This makes it possible to take advantage of each of the structures. Furthermore, the same reading method or circuit may be used to read a signal in the normal imaging mode in which imaging is performed at the normal frame rate (30 fps, 60 fps) and the visible light communication mode in which imaging is performed with the exposure time less than or equal to 1/480 second for visible light communication. The use of the same reading method or circuit to read a signal eliminates the need to employ separate circuits for the normal imaging mode and the visible light communication mode. The circuit size can be reduced in this way.
The information communication method in this embodiment is an information communication method of obtaining information from a subject, and includes Steps SK91 to SK93.
In detail, the information communication method includes: a first exposure time setting step SK91 of setting a first exposure time of an image sensor so that, in an image obtained by capturing the subject by the image sensor, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear according to a change in luminance of the subject; a first image obtainment step SK92 of obtaining a bright line image including the plurality of bright lines, by capturing the subject changing in luminance by the image sensor with the set first exposure time; and an information obtainment step SK93 of obtaining the information by demodulating data specified by a pattern of the plurality of bright lines included in the obtained bright line image, wherein in the first image obtainment step SK92, exposure starts sequentially for the plurality of exposure lines each at a different time, and exposure of each of the plurality of exposure lines starts after a predetermined blank time elapses from when exposure of an adjacent exposure line adjacent to the exposure line ends.
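As a non-limiting illustration of the information obtainment step SK93 alone (steps SK91 and SK92 depend on camera hardware), the sketch below reduces each exposure line of a bright line image to one bit; the adaptive threshold is an assumption, and a real decoder would additionally perform preamble detection and clock recovery.

    import numpy as np

    def demodulate_bright_lines(bright_line_image: np.ndarray) -> list:
        """Step SK93 sketch: reduce each exposure line (here, each image row)
        to its mean luminance and threshold it into one bit of data."""
        row_means = bright_line_image.mean(axis=1)  # one luminance value per exposure line
        threshold = row_means.mean()                # illustrative adaptive threshold
        return [1 if v >= threshold else 0 for v in row_means]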
An information communication device K90 in this embodiment is an information communication device that obtains information from a subject, and includes structural elements K91 to K93.
In detail, the information communication device K90 includes: an exposure time setting unit K91 that sets an exposure time of an image sensor so that, in an image obtained by capturing the subject by the image sensor, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear according to a change in luminance of the subject; an image obtainment unit K92 that includes the image sensor, and obtains a bright line image including the plurality of bright lines by capturing the subject changing in luminance with the set exposure time; and an information obtainment unit K93 that obtains the information by demodulating data specified by a pattern of the plurality of bright lines included in the obtained bright line image, wherein exposure starts sequentially for the plurality of exposure lines each at a different time, and exposure of each of the plurality of exposure lines starts after a predetermined blank time elapses from when exposure of an adjacent exposure line adjacent to the exposure line ends.
In the information communication method and the information communication device K90 illustrated in
It should be noted that in the above embodiment, each of the constituent elements may be constituted by dedicated hardware, or may be realized by executing a software program suitable for the constituent element. Each constituent element may be realized by a program execution unit, such as a CPU or a processor, reading and executing a software program stored in a recording medium such as a hard disk or semiconductor memory. For example, the program causes a computer to execute the information communication method illustrated in the flowchart of
This embodiment describes examples of application using a receiver, such as a smartphone, which is the information communication device K90 in Embodiment 1 described above, and a transmitter which transmits information as a blink pattern of a light source such as an LED or an organic EL device.
In the following description, the normal imaging mode or imaging in the normal imaging mode is referred to as “normal imaging”, and the visible light communication mode or imaging in the visible light communication mode is referred to as “visible light imaging” (visible light communication). Imaging in the intermediate mode may be used instead of normal imaging and visible light imaging, and the intermediate image may be used instead of the below-mentioned synthetic image.
The receiver 8000 switches the imaging mode in such a manner as normal imaging, visible light communication, normal imaging, . . . . The receiver 8000 synthesizes the normal captured image and the visible light communication image to generate a synthetic image in which the bright line pattern, the subject, and its surroundings are clearly shown, and displays the synthetic image on the display. The synthetic image is an image generated by superimposing the bright line pattern of the visible light communication image on the signal transmission part of the normal captured image. The bright line pattern, the subject, and its surroundings shown in the synthetic image are clear, and have the level of clarity sufficiently recognizable by the user. Displaying such a synthetic image enables the user to more distinctly find out from which position the signal is being transmitted.
The receiver 8000 includes a camera Ca1 and a camera Ca2. In the receiver 8000, the camera Ca1 performs normal imaging, and the camera Ca2 performs visible light imaging. Thus, the camera Ca1 obtains the above-mentioned normal captured image, and the camera Ca2 obtains the above-mentioned visible light communication image. The receiver 8000 synthesizes the normal captured image and the visible light communication image to generate the above-mentioned synthetic image, and displays the synthetic image on the display.
In the receiver 8000 including two cameras, the camera Ca1 switches the imaging mode in such a manner as normal imaging, visible light communication, normal imaging, . . . . Meanwhile, the camera Ca2 continuously performs normal imaging. When normal imaging is being performed by the cameras Ca1 and Ca2 simultaneously, the receiver 8000 estimates the distance (hereafter referred to as “subject distance”) from the receiver 8000 to the subject based on the normal captured images obtained by these cameras, through the use of stereoscopy (triangulation principle). By using such estimated subject distance, the receiver 8000 can superimpose the bright line pattern of the visible light communication image on the normal captured image at the appropriate position. The appropriate synthetic image can be generated in this way.
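A minimal sketch of the triangulation underlying this subject-distance estimate follows; the focal length, baseline, and disparity values in the example are illustrative assumptions.

    def subject_distance(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
        """Stereoscopy (triangulation): distance = f * B / d, where f is the
        focal length in pixels, B the baseline between the cameras Ca1 and Ca2
        in metres, and d the disparity of the subject between the two normal
        captured images in pixels."""
        if disparity_px <= 0:
            raise ValueError("no measurable disparity")
        return focal_length_px * baseline_m / disparity_px

    # Illustrative values: f = 1400 px, cameras 10 mm apart, 20 px disparity -> 0.7 m.
    print(subject_distance(1400.0, 0.010, 20.0))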
The receiver 8000 switches the imaging mode in such a manner as visible light communication, normal imaging, visible light communication, . . . , as mentioned above. Upon performing visible light communication first, the receiver 8000 starts an application program. The receiver 8000 then estimates its position based on the signal received by visible light communication. Next, when performing normal imaging, the receiver 8000 displays AR (Augmented Reality) information on the normal captured image obtained by normal imaging. The AR information is obtained based on, for example, the position estimated as mentioned above. The receiver 8000 also estimates the change in movement and direction of the receiver 8000 based on the detection result of the 9-axis sensor, the motion detection in the normal captured image, and the like, and moves the display position of the AR information according to the estimated change in movement and direction. This enables the AR information to follow the subject image in the normal captured image.
When switching the imaging mode from normal imaging to visible light communication, in visible light communication the receiver 8000 superimposes the AR information on the latest normal captured image obtained in immediately previous normal imaging. The receiver 8000 then displays the normal captured image on which the AR information is superimposed. The receiver 8000 also estimates the change in movement and direction of the receiver 8000 based on the detection result of the 9-axis sensor, and moves the AR information and the normal captured image according to the estimated change in movement and direction, in the same way as in normal imaging. This enables the AR information to follow the subject image in the normal captured image according to the movement of the receiver 8000 and the like in visible light communication, as in normal imaging. Moreover, the normal image can be enlarged or reduced according to the movement of the receiver 8000 and the like.
For example, the receiver 8000 may display the synthetic image in which the bright line pattern is shown, as illustrated in (a) in
As another alternative, the receiver 8000 may display, as the synthetic image, the normal captured image in which the signal transmission part is indicated by a dotted frame and an identifier (e.g. ID: 101, ID: 102, etc.), as illustrated in (c) in
For example, in the case of receiving the signal by visible light communication, the receiver 8000 may output a sound for notifying the user that the transmitter has been discovered, while displaying the normal captured image. In this case, the receiver 8000 may change the type of output sound, the number of outputs, or the output time depending on the number of discovered transmitters, the type of received signal, the type of information specified by the signal, or the like.
For example, when the user touches the bright line pattern shown in the synthetic image, the receiver 8000 generates an information notification image based on the signal transmitted from the subject corresponding to the touched bright line pattern, and displays the information notification image. The information notification image indicates, for example, a coupon or a location of a store. The bright line pattern may be the signal specification object, the signal identification object, or the dotted frame illustrated in
For example, when the user touches the bright line pattern shown in the synthetic image, the receiver 8000 generates an information notification image based on the signal transmitted from the subject corresponding to the touched bright line pattern, and displays the information notification image. The information notification image indicates, for example, the current position of the receiver 8000 by a map or the like.
For example, when the user swipes on the receiver 8000 on which the synthetic image is displayed, the receiver 8000 displays the normal captured image including the dotted frame and the identifier like the normal captured image illustrated in (c) in
When the user taps information included in the list, the receiver 8000 may display an information notification image (e.g. an image showing a coupon) indicating the information in more detail.
For example, when the user swipes on the receiver 8000 on which the synthetic image is displayed, the receiver 8000 superimposes an information notification image on the synthetic image so as to follow the swipe operation. The information notification image indicates the subject distance with an arrow so as to be easily recognizable by the user. The swipe may be, for example, an operation of moving the user's finger into the display from beyond its bottom edge, or from beyond its left, top, or right edge.
For example, the receiver 8000 captures, as a subject, a transmitter which is a signage showing a plurality of stores, and displays the normal captured image obtained as a result. When the user taps a signage image of one store included in the subject shown in the normal captured image, the receiver 8000 generates an information notification image based on the signal transmitted from the signage of the store, and displays an information notification image 8001. The information notification image 8001 is, for example, an image showing the availability of the store and the like.
A transmitter 8012 such as a television transmits a signal to a receiver 8011 by way of luminance change. The signal includes information prompting the user to buy content relating to a program being viewed. Having received the signal by visible light communication, the receiver 8011 displays an information notification image prompting the user to buy the content, based on the signal. When the user performs an operation for buying the content, the receiver 8011 transmits at least one of the following to a server 8013: information included in a SIM (Subscriber Identity Module) card inserted in the receiver 8011, a user ID, a terminal ID, credit card information, charging information, a password, and a transmitter ID. The server 8013 manages, for each user, a user ID and payment information in association with each other. The server 8013 specifies a user ID based on the information transmitted from the receiver 8011, and checks the payment information associated with the user ID. By this check, the server 8013 determines whether or not to permit the user to buy the content. In the case of determining to permit the purchase, the server 8013 transmits permission information to the receiver 8011. Having received the permission information, the receiver 8011 transmits it to the transmitter 8012. Having received the permission information, the transmitter 8012 obtains the content via, for example, a network, and reproduces the content.
The transmitter 8012 may transmit information including the ID of the transmitter 8012 to the receiver 8011, by way of luminance change. In this case, the receiver 8011 transmits the information to the server 8013. Having obtained the information, the server 8013 can determine that, for example, the television program is being viewed on the transmitter 8012, and conduct television program rating research.
The receiver 8011 may include information of an operation (e.g. voting) performed by the user in the above-mentioned information and transmit the information to the server 8013, to allow the server 8013 to reflect the information on the television program. An audience participation program can be realized in this way. Besides, in the case of receiving a post from the user, the receiver 8011 may include the post in the above-mentioned information and transmit the information to the server 8013, to allow the server 8013 to reflect the post on the television program, a network message board, or the like.
Furthermore, by the transmitter 8012 transmitting the above-mentioned information, the server 8013 can charge for television program viewing by paid broadcasting or on-demand TV. The server 8013 can also cause the receiver 8011 to display an advertisement, or cause the transmitter 8012 to display detailed information of the displayed television program or a URL of a site showing the detailed information. The server 8013 may also obtain the number of times the advertisement is displayed on the receiver 8011, the price of a product bought via the advertisement, or the like, and charge the advertiser according to the number of times or the price. Such price-based charging is possible even in the case where the user seeing the advertisement does not buy the product immediately. When the server 8013 obtains information indicating the manufacturer of the transmitter 8012 from the transmitter 8012 via the receiver 8011, the server 8013 may provide a service (e.g. payment for selling the product) to the manufacturer indicated by the information.
For example, a receiver 8030 is a head-mounted display including a camera. When a start button is pressed, the receiver 8030 starts imaging in the visible light communication mode, i.e. visible light communication. In the case of receiving a signal by visible light communication, the receiver 8030 notifies the user of information corresponding to the received signal. The notification is made, for example, by outputting a sound from a speaker included in the receiver 8030, or by displaying an image. Visible light communication may be started not only when the start button is pressed, but also when the receiver 8030 receives a sound instructing the start or when the receiver 8030 receives a signal instructing the start by wireless communication. Visible light communication may also be started when the change width of the value obtained by a 9-axis sensor included in the receiver 8030 exceeds a predetermined range or when a bright line pattern, even if only slightly, appears in the normal captured image.
The receiver 8030 displays the synthetic image 8034 in the same way as above. The user performs an operation of moving his or her fingertip so as to encircle the bright line pattern in the synthetic image 8034. The receiver 8030 receives the operation, specifies the bright line pattern subjected to the operation, and displays an information notification image 8032 based on a signal transmitted from the part corresponding to the bright line pattern.
The receiver 8030 displays the synthetic image 8034 in the same way as above. The user performs an operation of placing his or her fingertip at the bright line pattern in the synthetic image 8034 for a predetermined time or more. The receiver 8030 receives the operation, specifies the bright line pattern subjected to the operation, and displays an information notification image 8032 based on a signal transmitted from the part corresponding to the bright line pattern.
The transmitter alternately transmits signals 1 and 2, for example in a predetermined period. The transmission of the signal 1 and the transmission of the signal 2 are each carried out by way of luminance change such as blinking of visible light. A luminance change pattern for transmitting the signal 1 and a luminance change pattern for transmitting the signal 2 are different from each other.
When repeatedly transmitting the signal sequence including the blocks 1, 2, and 3 as described above, the transmitter may change, for each signal sequence, the order of the blocks included in the signal sequence. For example, the blocks 1, 2, and 3 are included in this order in the first signal sequence, and the blocks 3, 1, and 2 are included in this order in the next signal sequence. A receiver that requires a periodic blanking interval can therefore avoid obtaining only the same block.
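A non-limiting sketch of this per-sequence reordering, here implemented as a rotation of the block order:

    def signal_sequences(blocks):
        """Yield successive signal sequences whose block order is rotated each
        time (e.g. 1,2,3 -> 3,1,2 -> 2,3,1 -> ...), so that a receiver with a
        periodic blanking interval does not repeatedly obtain only the same block."""
        r = 0
        while True:
            yield blocks[-r:] + blocks[:-r] if r else list(blocks)
            r = (r + 1) % len(blocks)

    seqs = signal_sequences(["block1", "block2", "block3"])
    print(next(seqs))  # ['block1', 'block2', 'block3']
    print(next(seqs))  # ['block3', 'block1', 'block2']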
A receiver 7510a such as a smartphone captures a light source 7510b by a back camera (out camera) 7510c to receive a signal transmitted from the light source 7510b, and obtains the position and direction of the light source 7510b from the received signal. The receiver 7510a estimates the position and direction of the receiver 7510a, from the state of the light source 7510b in the captured image and the sensor value of the 9-axis sensor included in the receiver 7510a. The receiver 7510a captures a user 7510e by a front camera (face camera, in camera) 7510f, and estimates the position and direction of the head and the gaze direction (the position and direction of the eye) of the user 7510e by image processing. The receiver 7510a transmits the estimation result to the server. The receiver 7510a changes the behavior (display content or playback sound) according to the gaze direction of the user 7510e. The imaging by the back camera 7510c and the imaging by the front camera 7510f may be performed simultaneously or alternately.
A receiver displays a bright line pattern using the above-mentioned synthetic image, intermediate image, or the like. Here, the receiver may be incapable of receiving a signal from a transmitter corresponding to the bright line pattern. When the user performs an operation (e.g. a tap) on the bright line pattern to select the bright line pattern, the receiver displays the synthetic image or intermediate image in which the bright line pattern is enlarged by optical zoom. Through such optical zoom, the receiver can appropriately receive the signal from the transmitter corresponding to the bright line pattern. That is, even when the captured image is too small to obtain the signal, the signal can be appropriately received by performing optical zoom. In the case where the displayed image is large enough to obtain the signal, too, faster reception is possible by optical zoom.
An information communication method in this embodiment is an information communication method of obtaining information from a subject, the information communication method including: setting an exposure time of an image sensor so that, in an image obtained by capturing the subject by the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; obtaining a bright line image by capturing the subject that changes in luminance by the image sensor with the set exposure time, the bright line image being an image including the bright line; displaying, based on the bright line image, a display image in which the subject and surroundings of the subject are shown, in a form that enables identification of a spatial position of a part where the bright line appears; and obtaining transmission information by demodulating data specified by a pattern of the bright line included in the obtained bright line image.
In this way, a synthetic image or an intermediate image illustrated in, for instance,
For example, the information communication method may further include: setting a longer exposure time than the exposure time; obtaining a normal captured image by capturing the subject and the surroundings of the subject by the image sensor with the longer exposure time; and generating a synthetic image by specifying, based on the bright line image, the part where the bright line appears in the normal captured image, and superimposing a signal object on the normal captured image, the signal object being an image indicating the part, wherein in the displaying, the synthetic image is displayed as the display image.
In this way, the signal object is, for example, a bright line pattern, a signal specification object, a signal identification object, a dotted frame, or the like, and the synthetic image is displayed as the display image as illustrated in
For example, in the setting of an exposure time, the exposure time may be set to 1/3000 second, in the obtaining of a bright line image, the bright line image in which the surroundings of the subject are shown may be obtained, and in the displaying, the bright line image may be displayed as the display image.
In this way, the bright line image is obtained and displayed as an intermediate image. This eliminates the need for a process of obtaining a normal captured image and a visible light communication image and synthesizing them, thus contributing to a simpler process.
For example, the image sensor may include a first image sensor and a second image sensor, in the obtaining of the normal captured image, the normal captured image may be obtained by image capture by the first image sensor, and in the obtaining of a bright line image, the bright line image may be obtained by image capture by the second image sensor simultaneously with the first image sensor.
In this way, the normal captured image and the visible light communication image which is the bright line image are obtained by the respective cameras, for instance as illustrated in
For example, the information communication method may further include presenting, in the case where the part where the bright line appears is designated in the display image by an operation by a user, presentation information based on the transmission information obtained from the pattern of the bright line in the designated part. Examples of the operation by the user include: a tap; a swipe; an operation of continuously placing the user's fingertip on the part for a predetermined time or more; an operation of continuously directing the user's gaze to the part for a predetermined time or more; an operation of moving a part of the user's body according to an arrow displayed in association with the part; an operation of placing a pen tip that changes in luminance on the part; and an operation of pointing to the part with a pointer displayed in the display image by touching a touch sensor.
In this way, the presentation information is displayed as an information notification image, for instance as illustrated in
For example, the image sensor may be included in a head-mounted display, and in the displaying, the display image may be displayed by a projector included in the head-mounted display.
In this way, the information can be easily presented to the user, for instance as illustrated in
For example, an information communication method of obtaining information from a subject may include: setting an exposure time of an image sensor so that, in an image obtained by capturing the subject by the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; obtaining a bright line image by capturing the subject that changes in luminance by the image sensor with the set exposure time, the bright line image being an image including the bright line; and obtaining the information by demodulating data specified by a pattern of the bright line included in the obtained bright line image, wherein in the obtaining of a bright line image, the bright line image including a plurality of parts where the bright line appears is obtained by capturing a plurality of subjects in a period during which the image sensor is being moved, and in the obtaining of the information, a position of each of the plurality of subjects is obtained by demodulating, for each of the plurality of parts, the data specified by the pattern of the bright line in the part, and the information communication method may further include estimating a position of the image sensor, based on the obtained position of each of the plurality of subjects and a moving state of the image sensor.
In this way, the position of the receiver including the image sensor can be accurately estimated based on the changes in luminance of the plurality of subjects such as lightings.
For example, an information communication method of obtaining information from a subject may include: setting an exposure time of an image sensor so that, in an image obtained by capturing the subject by the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; obtaining a bright line image by capturing the subject that changes in luminance by the image sensor with the set exposure time, the bright line image being an image including the bright line; obtaining the information by demodulating data specified by a pattern of the bright line included in the obtained bright line image; and presenting the obtained information, wherein in the presenting, an image prompting to make a predetermined gesture is presented to a user of the image sensor as the information.
In this way, user authentication and the like can be conducted according to whether or not the user makes the gesture as prompted. This enhances convenience.
For example, an information communication method of obtaining information from a subject may include: setting an exposure time of an image sensor so that, in an image obtained by capturing the subject by the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; obtaining a bright line image by capturing the subject that changes in luminance by the image sensor with the set exposure time, the bright line image being an image including the bright line; and obtaining the information by demodulating data specified by a pattern of the bright line included in the obtained bright line image, wherein in the obtaining of a bright line image, the bright line image is obtained by capturing a plurality of subjects reflected on a reflection surface, and in the obtaining of the information, the information is obtained by separating a bright line corresponding to each of the plurality of subjects from bright lines included in the bright line image according to a strength of the bright line and demodulating, for each of the plurality of subjects, the data specified by the pattern of the bright line corresponding to the subject.
In this way, even in the case where the plurality of subjects such as lightings each change in luminance, appropriate information can be obtained from each subject.
For example, an information communication method of obtaining information from a subject may include: setting an exposure time of an image sensor so that, in an image obtained by capturing the subject by the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; obtaining a bright line image by capturing the subject that changes in luminance by the image sensor with the set exposure time, the bright line image being an image including the bright line; and obtaining the information by demodulating data specified by a pattern of the bright line included in the obtained bright line image, wherein in the obtaining of a bright line image, the bright line image is obtained by capturing the subject reflected on a reflection surface, and the information communication method may further include estimating a position of the subject based on a luminance distribution in the bright line image.
In this way, the appropriate position of the subject can be estimated based on the luminance distribution.
For example, an information communication method of transmitting a signal using a change in luminance may include: determining a first pattern of the change in luminance, by modulating a first signal to be transmitted; determining a second pattern of the change in luminance, by modulating a second signal to be transmitted; and transmitting the first signal and the second signal by a light emitter alternately changing in luminance according to the determined first pattern and changing in luminance according to the determined second pattern.
In this way, the first signal and the second signal can each be transmitted without a delay, for instance as illustrated in
For example, in the transmitting, a buffer time may be provided when switching the change in luminance between the change in luminance according to the first pattern and the change in luminance according to the second pattern.
In this way, interference between the first signal and the second signal can be suppressed.
For example, an information communication method of transmitting a signal using a change in luminance may include: determining a pattern of the change in luminance by modulating the signal to be transmitted; and transmitting the signal by a light emitter changing in luminance according to the determined pattern, wherein the signal is made up of a plurality of main blocks, each of the plurality of main blocks includes first data, a preamble for the first data, and a check signal for the first data, the first data is made up of a plurality of sub-blocks, and each of the plurality of sub-blocks includes second data, a preamble for the second data, and a check signal for the second data.
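As a non-limiting structural sketch of this signal layout (the field types are assumptions):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SubBlock:
        preamble: bytes   # preamble for the second data
        data: bytes       # second data
        check: bytes      # check signal (e.g. a CRC) for the second data

    @dataclass
    class MainBlock:
        preamble: bytes             # preamble for the first data
        sub_blocks: List[SubBlock]  # the first data, made up of sub-blocks
        check: bytes                # check signal for the first data

Under this layout, a receiver that needs a blanking interval can verify and use individual sub-blocks, while a receiver that can observe a whole main block can verify the first data as a unit.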
In this way, data can be appropriately obtained regardless of whether or not the receiver needs a blanking interval.
For example, an information communication method of transmitting a signal using a change in luminance may include: determining, by each of a plurality of transmitters, a pattern of the change in luminance by modulating the signal to be transmitted; and transmitting, by each of the plurality of transmitters, the signal by a light emitter in the transmitter changing in luminance according to the determined pattern, wherein in the transmitting, the signal of a different frequency or protocol is transmitted.
In this way, interference between signals from the plurality of transmitters can be suppressed.
For example, an information communication method of transmitting a signal using a change in luminance may include: determining, by each of a plurality of transmitters, a pattern of the change in luminance by modulating the signal to be transmitted; and transmitting, by each of the plurality of transmitters, the signal by a light emitter in the transmitter changing in luminance according to the determined pattern, wherein in the transmitting, one of the plurality of transmitters receives a signal transmitted from a remaining one of the plurality of transmitters, and transmits an other signal in a form that does not interfere with the received signal.
In this way, interference between signals from the plurality of transmitters can be suppressed.
This embodiment describes each example of application using a receiver such as a smartphone and a transmitter for transmitting information as a blink pattern of an LED, an organic EL device, or the like in Embodiment 1 or 2 described above.
A receiver 8142 such as a smartphone obtains position information indicating the position of the receiver 8142, and transmits the position information to a server 8141. For example, the receiver 8142 obtains the position information when using a GPS or the like or receiving another signal. The server 8141 transmits an ID list associated with the position indicated by the position information, to the receiver 8142. The ID list includes each ID such as “abcd” and information associated with the ID.
The receiver 8142 receives a signal from a transmitter 8143 such as a lighting device. Here, the receiver 8142 may be able to receive only a part (e.g. “b”) of an ID as the above-mentioned signal. In such a case, the receiver 8142 searches the ID list for the ID including the part. In the case where the unique ID is not found, the receiver 8142 further receives a signal including another part of the ID, from the transmitter 8143. The receiver 8142 thus obtains a larger part (e.g. “bc”) of the ID. The receiver 8142 again searches the ID list for the ID including the part (e.g. “bc”). Through such search, the receiver 8142 can specify the whole ID even in the case where the ID can be obtained only partially. Note that, when receiving the signal from the transmitter 8143, the receiver 8142 receives not only the part of the ID but also a check portion such as a CRC (Cyclic Redundancy Check).
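A minimal sketch of this progressive narrowing follows, assuming each received fragment has already passed its CRC check; the ID list and fragments are illustrative.

    def narrow_down(id_list, received_fragments):
        """Return the IDs in the list that contain every fragment received so
        far; the receiver keeps receiving further parts of the ID until exactly
        one candidate remains."""
        return [i for i in id_list
                if all(fragment in i for fragment in received_fragments)]

    ids = ["abcd", "bbbb", "wxyz"]   # illustrative ID list from the server
    print(narrow_down(ids, ["b"]))   # ['abcd', 'bbbb'] -- still ambiguous
    print(narrow_down(ids, ["bc"]))  # ['abcd']         -- whole ID specified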
A transmitter 8165 such as a television obtains an image and an ID (ID 1000) associated with the image, from a control unit 8166. The transmitter 8165 displays the image, and also transmits the ID (ID 1000) to a receiver 8167 by changing in luminance. The receiver 8167 captures the transmitter 8165 to receive the ID (ID 1000), and displays information associated with the ID (ID 1000). The control unit 8166 then changes the image output to the transmitter 8165, to another image. The control unit 8166 also changes the ID output to the transmitter 8165. That is, the control unit 8166 outputs the other image and the other ID (ID 1001) associated with the other image, to the transmitter 8165. The transmitter 8165 displays the other image, and transmits the other ID (ID 1001) to the receiver 8167 by changing in luminance. The receiver 8167 captures the transmitter 8165 to receive the other ID (ID 1001), and displays information associated with the other ID (ID 1001).
A transmitter 8185 such as a smartphone transmits information indicating, as an example, “Coupon 100 yen off”, by causing the part of a display 8185a other than a barcode part 8185b to change in luminance, i.e. by visible light communication. The transmitter 8185 also causes the barcode part 8185b to display a barcode without changing in luminance. The barcode indicates the same information as the above-mentioned information transmitted by visible light communication. The transmitter 8185 further causes the part of the display 8185a other than the barcode part 8185b to display characters or pictures, e.g. the characters “Coupon 100 yen off”, indicating the information transmitted by visible light communication. Displaying such characters or pictures allows the user of the transmitter 8185 to easily recognize what kind of information is being transmitted.
A receiver 8186 performs image capture to obtain the information transmitted by visible light communication and the information indicated by the barcode, and transmits both items of information to a server 8187. The server 8187 determines whether or not the two items of information match or relate to each other. In the case of determining that they match or relate to each other, the server 8187 executes a process according to the information. Alternatively, the server 8187 transmits the determination result to the receiver 8186 so that the receiver 8186 executes the process according to the information.
The transmitter 8185 may transmit a part of the information indicated by the barcode, by visible light communication. Moreover, the URL of the server 8187 may be indicated in the barcode. Furthermore, the transmitter 8185 may operate as a receiver to obtain an ID, and transmit the ID to the server 8187 to thereby obtain information associated with the ID. The information associated with the ID is the same as the information transmitted by visible light communication or the information indicated by the barcode. The server 8187 may transmit an ID associated with the information (visible light communication information or barcode information) transmitted from the transmitter 8185 via the receiver 8186, to the transmitter 8185.
For example, the receiver 8183 captures a subject including a plurality of persons 8197 and a street lighting 8195. The street lighting 8195 includes a transmitter 8195a that transmits information by changing in luminance. By capturing the subject, the receiver 8183 obtains an image in which the image of the transmitter 8195a appears as the above-mentioned bright line pattern. The receiver 8183 obtains an AR object 8196a associated with an ID indicated by the bright line pattern, from a server or the like. The receiver 8183 superimposes the AR object 8196a on a normal captured image 8196 obtained by normal imaging, and displays the normal captured image 8196 on which the AR object 8196a is superimposed.
An information communication method in this embodiment is an information communication method of transmitting a signal using a change in luminance, the information communication method including: determining a pattern of the change in luminance by modulating the signal to be transmitted; and transmitting the signal by a light emitter changing in luminance according to the determined pattern, wherein the pattern of the change in luminance is a pattern in which one of two different luminance values occurs in each arbitrary position in a predetermined duration, and in the determining, the pattern of the change in luminance is determined so that, for each of different signals to be transmitted, a luminance change position in the duration is different and an integral of luminance of the light emitter in the duration is a same value corresponding to preset brightness, the luminance change position being a position at which the luminance rises or a position at which the luminance falls.
In this way, the luminance change pattern is determined so that, for each of the different signals “00”, “01”, “10”, and “11” to be transmitted, the position at which the luminance rises (luminance change position) is different and also the integral of luminance of the light emitter in the predetermined duration (unit duration) is the same value corresponding to the preset brightness (e.g. 99% or 1%). Thus, the brightness of the light emitter can be maintained constant for each signal to be transmitted, with it being possible to suppress flicker. In addition, a receiver that captures the light emitter can appropriately demodulate the luminance change pattern based on the luminance change position. Furthermore, since the luminance change pattern is a pattern in which one of two different luminance values (luminance H (High) or luminance L (Low)) occurs in each arbitrary position in the unit duration, the brightness of the light emitter can be changed continuously.
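A minimal sketch of such a modulation table, assuming four sub-slots per symbol with exactly one Low sub-slot; the 3/4 duty is an illustrative assumption, while the disclosure also covers other preset brightness values such as 99% or 1%.

    # every symbol has the same luminance integral (3 of 4 sub-slots High),
    # while the position at which the luminance rises back differs per symbol
    SYMBOLS = {
        "00": [0, 1, 1, 1],
        "01": [1, 0, 1, 1],
        "10": [1, 1, 0, 1],
        "11": [1, 1, 1, 0],
    }

    def modulate(bits):
        # map a bit string, two bits at a time, to a luminance pattern
        pattern = []
        for i in range(0, len(bits), 2):
            pattern.extend(SYMBOLS[bits[i:i + 2]])
        return pattern

    modulate("0110")   # -> [1, 0, 1, 1, 1, 1, 0, 1]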
For example, the information communication method may include sequentially displaying a plurality of images by switching between the plurality of images, wherein in the determining, each time an image is displayed in the sequentially displaying, the pattern of the change in luminance for identification information corresponding to the displayed image is determined by modulating the identification information as the signal, and in the transmitting, each time the image is displayed in the sequentially displaying, the identification information corresponding to the displayed image is transmitted by the light emitter changing in luminance according to the pattern of the change in luminance determined for the identification information.
In this way, each time an image is displayed, the identification information corresponding to the displayed image is transmitted, for instance as illustrated in
For example, in the transmitting, each time the image is displayed in the sequentially displaying, identification information corresponding to a previously displayed image may be further transmitted by the light emitter changing in luminance according to the pattern of the change in luminance determined for the identification information.
In this way, even in the case where, as a result of switching the displayed image, the receiver cannot receive the identification signal transmitted before the switching, the receiver can appropriately receive the identification information transmitted before the switching because the identification information corresponding to the previously displayed image is transmitted together with the identification information corresponding to the currently displayed image.
For example, in the determining, each time the image is displayed in the sequentially displaying, the pattern of the change in luminance for the identification information corresponding to the displayed image and a time at which the image is displayed may be determined by modulating the identification information and the time as the signal, and in the transmitting, each time the image is displayed in the sequentially displaying, the identification information and the time corresponding to the displayed image may be transmitted by the light emitter changing in luminance according to the pattern of the change in luminance determined for the identification information and the time, and the identification information and a time corresponding to the previously displayed image may be further transmitted by the light emitter changing in luminance according to the pattern of the change in luminance determined for the identification information and the time.
In this way, each time an image is displayed, a plurality of sets of ID time information (information made up of identification information and a time) are transmitted. The receiver can easily select, from the received plurality of sets of ID time information, a previously transmitted identification signal which the receiver could not receive, based on the time included in each set of ID time information.
For example, the light emitter may have a plurality of areas each of which emits light, and in the transmitting, in the case where light from adjacent areas of the plurality of areas interferes with each other and only one of the plurality of areas changes in luminance according to the determined pattern of the change in luminance, only an area located at an edge from among the plurality of areas may change in luminance according to the determined pattern of the change in luminance.
In this way, only the area (light emitting unit) located at the edge changes in luminance. The influence of light from another area on the luminance change can therefore be suppressed as compared with the case where only an area not located at the edge changes in luminance. As a result, the receiver can capture the luminance change pattern appropriately.
For example, in the transmitting, in the case where only two of the plurality of areas change in luminance according to the determined pattern of the change in luminance, the area located at the edge and an area adjacent to the area located at the edge from among the plurality of areas may change in luminance according to the determined pattern of the change in luminance.
In this way, the area (light emitting unit) located at the edge and the area (light emitting unit) adjacent to the area located at the edge change in luminance. The spatially continuous luminance change range has a wide area, as compared with the case where areas apart from each other change in luminance. As a result, the receiver can capture the luminance change pattern appropriately.
An information communication method in this embodiment is an information communication method of obtaining information from a subject, the information communication method including: transmitting position information indicating a position of an image sensor used to capture the subject; receiving an ID list that is associated with the position indicated by the position information and includes a plurality of sets of identification information; setting an exposure time of the image sensor so that, in an image obtained by capturing the subject by the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; obtaining a bright line image including the bright line, by capturing the subject that changes in luminance by the image sensor with the set exposure time; obtaining the information by demodulating data specified by a pattern of the bright line included in the obtained bright line image; and searching the ID list for identification information that includes the obtained information.
In this way, since the ID list is received beforehand, even when the obtained information “bc” is only a part of identification information, the appropriate identification information “abcd” can be specified based on the ID list, for instance as illustrated in
For example, in the case where the identification information that includes the obtained information is not uniquely specified in the searching, the obtaining of a bright line image and the obtaining of the information may be repeated to obtain new information, and the information communication method may further include searching the ID list for the identification information that includes the obtained information and the new information.
In this way, even in the case where the obtained information “b” is only a part of identification information and the identification information cannot be uniquely specified with this information alone, the new information “c” is obtained and so the appropriate identification information “abcd” can be specified based on the new information and the ID list, for instance as illustrated in
An information communication method in this embodiment is an information communication method of obtaining information from a subject, the information communication method including: setting an exposure time of an image sensor so that, in an image obtained by capturing the subject by the image sensor, a bright line corresponding to an exposure line included in the image sensor appears according to a change in luminance of the subject; obtaining a bright line image including the bright line, by capturing the subject that changes in luminance by the image sensor with the set exposure time; obtaining identification information by demodulating data specified by a pattern of the bright line included in the obtained bright line image; transmitting the obtained identification information and position information indicating a position of the image sensor; and receiving error notification information for notifying an error, in the case where the obtained identification information is not included in an ID list that is associated with the position indicated by the position information and includes a plurality of sets of identification information.
In this way, the error notification information is received in the case where the obtained identification information is not included in the ID list. Upon receiving the error notification information, the user of the receiver can easily recognize that information associated with the obtained identification information cannot be obtained.
This embodiment describes each example of application using a receiver such as a smartphone and a transmitter for transmitting information as a blink pattern of an LED, an organic EL device, or the like in Embodiments 1 to 4 described above.
The transmitter includes an ID storage unit 8361, a random number generation unit 8362, an addition unit 8363, an encryption unit 8364, and a transmission unit 8365. The ID storage unit 8361 stores the ID of the transmitter. The random number generation unit 8362 generates a different random number at regular time intervals. The addition unit 8363 combines the ID stored in the ID storage unit 8361 with the latest random number generated by the random number generation unit 8362, and outputs the result as an edited ID. The encryption unit 8364 encrypts the edited ID to generate an encrypted edited ID. The transmission unit 8365 transmits the encrypted edited ID to the receiver by changing in luminance.
The receiver includes a reception unit 8366, a decryption unit 8367, and an ID obtainment unit 8368. The reception unit 8366 receives the encrypted edited ID from the transmitter, by capturing the transmitter (visible light imaging). The decryption unit 8367 decrypts the received encrypted edited ID to restore the edited ID. The ID obtainment unit 8368 extracts the ID from the edited ID, thus obtaining the ID.
For instance, the ID storage unit 8361 stores the ID “100”, and the random number generation unit 8362 generates a new random number “817” (example 1). In this case, the addition unit 8363 combines the ID “100” with the random number “817” to generate the edited ID “100817”, and outputs it. The encryption unit 8364 encrypts the edited ID “100817” to generate the encrypted edited ID “abced”. The decryption unit 8367 in the receiver decrypts the encrypted edited ID “abced” to restore the edited ID “100817”. The ID obtainment unit 8368 extracts the ID “100” from the restored edited ID “100817”. In other words, the ID obtainment unit 8368 obtains the ID “100” by deleting the last three digits of the edited ID.
Next, the random number generation unit 8362 generates a new random number “619” (example 2). In this case, the addition unit 8363 combines the ID “100” with the random number “619” to generate the edited ID “100619”, and outputs it. The encryption unit 8364 encrypts the edited ID “100619” to generate the encrypted edited ID “difia”. The decryption unit 8367 in the receiver decrypts the encrypted edited ID “difia” to restore the edited ID “100619”. The ID obtainment unit 8368 extracts the ID “100” from the restored edited ID “100619”. In other words, the ID obtainment unit 8368 obtains the ID “100” by deleting the last three digits of the edited ID.
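A minimal sketch of this edited-ID scheme; the disclosure does not specify the cipher, so a toy XOR cipher with a shared key stands in for the encryption unit 8364 and the decryption unit 8367.

    import random

    SHARED_KEY = 0x5A   # toy key; a real system would use a proper cipher

    def toy_encrypt(text):
        return bytes(b ^ SHARED_KEY for b in text.encode())

    def toy_decrypt(data):
        return bytes(b ^ SHARED_KEY for b in data).decode()

    def make_edited_id(device_id):
        rnd = f"{random.randrange(1000):03d}"   # regenerated at regular intervals
        return device_id + rnd                  # e.g. "100" + "817" -> "100817"

    # transmitter side
    encrypted_edited_id = toy_encrypt(make_edited_id("100"))
    # receiver side
    edited_id = toy_decrypt(encrypted_edited_id)
    device_id = edited_id[:-3]                  # drop the random digits -> "100"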
Thus, the transmitter does not simply encrypt the ID but encrypts its combination with the random number changed at regular time intervals, with it being possible to prevent the ID from being easily cracked from the signal transmitted from the transmission unit 8365. That is, in the case where the simply encrypted ID is transmitted from the transmitter to the receiver a plurality of times, even though the ID is encrypted, the signal transmitted from the transmitter to the receiver is the same if the ID is the same, so that there is a possibility of the ID being cracked. In the example illustrated in
Note that the receiver illustrated in
(Station Guide)
(Coupon Popup)
(Start of Operation Application)
(Database)
The database includes an ID-data table holding data provided in response to an inquiry using an ID as a key, and an access log table holding a record of each inquiry using an ID as a key. The ID-data table includes an ID transmitted from a transmitter, data provided in response to an inquiry using the ID as a key, a data provision condition, the number of times access is made using the ID as a key, and the number of times the data is provided as a result of the condition being cleared. Examples of the data provision condition include the date and time, the number of accesses, the number of successful accesses, terminal information of the inquirer (terminal model, application making the inquiry, current position of the terminal, etc.), and user information of the inquirer (age, sex, occupation, nationality, language, religion, etc.). By using the number of successful accesses as the condition, it is possible to provide a service such as “1 yen per access, with no data returned beyond an upper limit of 100 yen”. When access is made using an ID as a key, the access log table records the ID, the user ID of the requester, the time, other ancillary information, whether or not data is provided as a result of the condition being cleared, and the provided data.
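As an illustration, the per-access provision condition and the logging might be checked as follows; the table contents and field names are hypothetical.

    # hypothetical ID-data table implementing "1 yen per access,
    # no data returned beyond a 100 yen upper limit"
    id_data_table = {
        "abcd": {"data": "coupon", "fee_per_access": 1, "fee_upper_limit": 100,
                 "access_count": 0, "provided_count": 0},
    }
    access_log = []

    def query(id_key, user_id, timestamp):
        entry = id_data_table.get(id_key)
        ok = (entry is not None and
              entry["provided_count"] * entry["fee_per_access"] < entry["fee_upper_limit"])
        if entry is not None:
            entry["access_count"] += 1
            if ok:
                entry["provided_count"] += 1
        access_log.append({"id": id_key, "user": user_id,
                           "time": timestamp, "provided": ok})
        return entry["data"] if ok else None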
(Communication Protocol Different According to Zone)
A receiver 8420a receives zone information from a base station 8420h, recognizes the zone in which the receiver 8420a is located, and selects a reception protocol. The base station 8420h is, for example, a mobile phone communication base station, a Wi-Fi access point, an IMES transmitter, a speaker, or a wireless transmitter (Bluetooth®, ZigBee, specified low power radio station, etc.). The receiver 8420a may specify the zone based on position information obtained from GPS or the like. As an example, it is assumed that communication is performed at a signal frequency of 9.6 kHz in zone A, and communication is performed at a signal frequency of 15 kHz by a ceiling light and at a signal frequency of 4.8 kHz by a signage in zone B. At a position 8420j, the receiver 8420a recognizes that the current position is zone A from information from the base station 8420h, and performs reception at the signal frequency of 9.6 kHz, thus receiving signals transmitted from transmitters 8420b and 8420c. At a position 8420l, the receiver 8420a recognizes that the current position is zone B from information from a base station 8420i, and also estimates that a signal from a ceiling light is to be received from the movement of directing the in camera upward. The receiver 8420a performs reception at the signal frequency of 15 kHz, thus receiving signals transmitted from transmitters 8420e and 8420f. At a position 8420m, the receiver 8420a recognizes that the current position is zone B from information from the base station 8420i, and also estimates that a signal transmitted from a signage is to be received from the movement of sticking out the out camera. The receiver 8420a performs reception at the signal frequency of 4.8 kHz, thus receiving a signal transmitted from a transmitter 8420g. At a position 8420k, the receiver 8420a receives signals from both of the base stations 8420h and 8420i and cannot determine whether the current position is zone A or zone B. The receiver 8420a accordingly performs reception at both 9.6 kHz and 15 kHz. The part of the protocol that differs according to zone is not limited to the frequency, and may be the transmission signal modulation scheme, the signal format, or the server inquired using an ID. The base station 8420h or 8420i may transmit the protocol in the zone to the receiver, or transmit only the ID indicating the zone to the receiver so that the receiver obtains protocol information from a server using the zone ID as a key.
Transmitters 8420b to 8420f each receive the zone ID or protocol information from the base station 8420h or 8420i, and determine the signal transmission protocol. The transmitter 8420d that can receive the signals from both the base stations 8420h and 8420i uses the protocol of the zone of the base station with a higher signal strength, or alternately uses both protocols.
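A minimal sketch of the zone-dependent frequency selection; the mapping table and the device categories are assumptions rather than values from the disclosure.

    # assumed zone-to-frequency mapping (Hz); a real receiver would obtain
    # this protocol information from a server using the zone ID as a key
    ZONE_PROTOCOLS = {
        "A": {"ceiling_light": 9600, "signage": 9600},
        "B": {"ceiling_light": 15000, "signage": 4800},
    }

    def reception_frequencies(candidate_zones, target):
        # when the zone is ambiguous, listen on every candidate frequency
        return sorted({ZONE_PROTOCOLS[zone][target] for zone in candidate_zones})

    reception_frequencies(["A"], "ceiling_light")        # -> [9600]
    reception_frequencies(["B"], "signage")              # -> [4800]
    reception_frequencies(["A", "B"], "ceiling_light")   # -> [9600, 15000]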
(Recognition of Zone and Service for Each Zone)
A receiver 8421a recognizes a zone to which the position of the receiver 8421a belongs, from a received signal. The receiver 8421a provides a service (coupon distribution, point assignment, route guidance, etc.) determined for each zone. As an example, the receiver 8421a receives a signal transmitted from the left of a transmitter 8421b, and recognizes that the receiver 8421a is located in zone A. Here, the transmitter 8421b may transmit a different signal depending on the transmission direction. Moreover, the transmitter 8421b may, through the use of a signal of the light emission pattern such as 2217a, transmit a signal so that a different signal is received depending on the distance to the receiver. The receiver 8421a may recognize the positional relation with the transmitter 8421b from the direction and size in which the transmitter 8421b is captured, and thereby recognize the zone in which the receiver 8421a is located.
Signals indicating the same zone may have a common part. For example, the first half of an ID indicating zone A, which is transmitted from each of the transmitters 8421b and 8421c, is common. This enables the receiver 8421a to recognize the zone where the receiver 8421a is located, merely by receiving the first half of the signal.
An information communication method in this embodiment is an information communication method of transmitting a signal using a change in luminance, the information communication method including: determining a plurality of patterns of the change in luminance, by modulating each of a plurality of signals to be transmitted; and transmitting, by each of a plurality of light emitters changing in luminance according to any one of the plurality of determined patterns of the change in luminance, a signal corresponding to the pattern, wherein in the transmitting, each of two or more light emitters of the plurality of light emitters changes in luminance at a different frequency so that light of one of two types of light different in luminance is output per a time unit determined for the light emitter beforehand and that the time unit determined for each of the two or more light emitters is different.
In this way, two or more light emitters (e.g. transmitters as lighting devices) each change in luminance at a different frequency. Therefore, a receiver that receives signals (e.g. light emitter IDs) from these light emitters can easily obtain the signals separately from each other.
For example, in the transmitting, each of the plurality of light emitters may change in luminance at any one of at least four types of frequencies, and two or more light emitters of the plurality of light emitters may change in luminance at the same frequency. For example, in the transmitting, the plurality of light emitters each change in luminance so that a luminance change frequency is different between all light emitters which, in the case where the plurality of light emitters are projected on a light receiving surface of an image sensor for receiving the plurality of signals, are adjacent to each other on the light receiving surface.
In this way, as long as there are at least four types of frequencies used for luminance changes, even in the case where two or more light emitters change in luminance at the same frequency, i.e. in the case where the number of types of frequencies is smaller than the number of light emitters, it can be ensured that the luminance change frequency is different between all light emitters adjacent to each other on the light receiving surface of the image sensor based on the four color problem or the four color theorem. As a result, the receiver can easily obtain the signals transmitted from the plurality of light emitters, separately from each other.
For example, in the transmitting, each of the plurality of light emitters may transmit the signal, by changing in luminance at a frequency specified by a hash value of the signal.
In this way, each of the plurality of light emitters changes in luminance at the frequency specified by the hash value of the signal (e.g. light emitter ID). Accordingly, upon receiving the signal, the receiver can determine whether or not the frequency specified from the actual change in luminance and the frequency specified by the hash value match. That is, the receiver can determine whether or not the received signal (e.g. light emitter ID) has an error.
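As an illustration, assuming SHA-256 as the hash and a table of four frequencies (both assumptions), the transmitter-side mapping and the receiver-side check might look like the following.

    import hashlib

    FREQUENCIES = [9600, 12000, 15000, 18000]   # assumed set of four frequencies

    def frequency_for(signal):
        # pick a luminance change frequency from the hash value of the signal
        digest = hashlib.sha256(signal.encode()).digest()
        return FREQUENCIES[digest[0] % len(FREQUENCIES)]

    def id_plausible(received_id, observed_frequency):
        # receiver-side check: the frequency specified from the actual change
        # in luminance must match the frequency specified by the hash value
        return frequency_for(received_id) == observed_frequency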
For example, the information communication method may further include: calculating, from a signal to be transmitted which is stored in a signal storage unit, a frequency corresponding to the signal according to a predetermined function, as a first frequency; determining whether or not a second frequency stored in a frequency storage unit and the calculated first frequency match; and in the case of determining that the first frequency and the second frequency do not match, reporting an error, wherein in the case of determining that the first frequency and the second frequency match, in the determining, a pattern of the change in luminance is determined by modulating the signal stored in the signal storage unit, and in the transmitting, the signal stored in the signal storage unit is transmitted by any one of the plurality of light emitters changing in luminance at the first frequency according to the determined pattern.
In this way, whether or not the frequency stored in the frequency storage unit and the frequency calculated from the signal stored in the signal storage unit (ID storage unit) match is determined and, in the case of determining that the frequencies do not match, an error is reported. This eases abnormality detection on the signal transmission function of the light emitter.
For example, the information communication method may further include: calculating a first check value from a signal to be transmitted which is stored in a signal storage unit, according to a predetermined function; determining whether or not a second check value stored in a check value storage unit and the calculated first check value match; and in the case of determining that the first check value and the second check value do not match, reporting an error, wherein in the case of determining that the first check value and the second check value match, in the determining, a pattern of the change in luminance is determined by modulating the signal stored in the signal storage unit, and in the transmitting, the signal stored in the signal storage unit is transmitted by any one of the plurality of light emitters changing in luminance at the first frequency according to the determined pattern.
In this way, whether or not the check value stored in the check value storage unit and the check value calculated from the signal stored in the signal storage unit (ID storage unit) match is determined and, in the case of determining that the check values do not match, an error is reported. This eases abnormality detection on the signal transmission function of the light emitter.
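A combined sketch of the two self-checks, assuming the same hash-based frequency mapping as above and CRC-32 as the predetermined check function (both assumptions).

    import hashlib
    import zlib

    FREQUENCIES = [9600, 12000, 15000, 18000]   # assumed, as in the sketch above

    def frequency_for(signal):
        return FREQUENCIES[hashlib.sha256(signal.encode()).digest()[0] % 4]

    def self_check(stored_signal, stored_frequency, stored_check_value):
        # the first frequency and first check value are computed from the signal
        # storage unit; the second ones come from the respective storage units
        if frequency_for(stored_signal) != stored_frequency:
            raise RuntimeError("frequency mismatch: report an error")
        if zlib.crc32(stored_signal.encode()) != stored_check_value:
            raise RuntimeError("check value mismatch: report an error")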
An information communication method in this embodiment is an information communication method of obtaining information from a subject, the information communication method including: setting an exposure time of an image sensor so that, in an image obtained by capturing the subject by the image sensor, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear according to a change in luminance of the subject; obtaining a bright line image including the plurality of bright lines, by capturing the subject that changes in luminance by the image sensor with the set exposure time; obtaining the information by demodulating data specified by a pattern of the plurality of bright lines included in the obtained image; and specifying a luminance change frequency of the subject, based on the pattern of the plurality of bright lines included in the obtained bright line image. For example, in the specifying, a plurality of header patterns that are included in the pattern of the plurality of bright lines and are a plurality of patterns each determined beforehand to indicate a header are specified, and a frequency corresponding to the number of pixels between the plurality of header patterns is specified as the luminance change frequency of the subject.
In this way, the luminance change frequency of the subject is specified. In the case where a plurality of subjects that differ in luminance change frequency are captured, information from these subjects can be easily obtained separately from each other.
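One way the pixel distance between header patterns could map to a frequency; the line rate and the number of modulation cycles between headers are assumptions, not values from the disclosure.

    # assumed: the sensor exposes LINE_RATE exposure lines per second, and a
    # fixed number of luminance change cycles lies between two header patterns
    LINE_RATE = 30 * 1080            # frames/s x exposure lines per frame
    CYCLES_BETWEEN_HEADERS = 8

    def frequency_from_headers(first_header_px, second_header_px):
        seconds = (second_header_px - first_header_px) / LINE_RATE
        return CYCLES_BETWEEN_HEADERS / seconds   # luminance change frequency (Hz)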
For example, in the obtaining of a bright line image, the bright line image including a plurality of patterns represented respectively by the plurality of bright lines may be obtained by capturing a plurality of subjects each of which changes in luminance, and in the obtaining of the information, in the case where the plurality of patterns included in the obtained bright line image overlap each other in a part, the information may be obtained from each of the plurality of patterns by demodulating the data specified by a part of each of the plurality of patterns other than the part.
In this way, data is not demodulated from the overlapping part of the plurality of patterns (the plurality of bright line patterns). Obtainment of wrong information can thus be prevented.
For example, in the obtaining of a bright line image, a plurality of bright line images may be obtained by capturing the plurality of subjects a plurality of times at different timings from each other, in the specifying, for each bright line image, a frequency corresponding to each of the plurality of patterns included in the bright line image may be specified, and in the obtaining of the information, the plurality of bright line images may be searched for a plurality of patterns for which the same frequency is specified, the plurality of patterns searched for may be combined, and the information may be obtained by demodulating the data specified by the combined plurality of patterns.
In this way, the plurality of bright line images are searched for the plurality of patterns (the plurality of bright line patterns) for which the same frequency is specified, the plurality of patterns searched for are combined, and the information is obtained from the combined plurality of patterns. Hence, even in the case where the plurality of subjects are moving, information from the plurality of subjects can be easily obtained separately from each other.
For example, the information communication method may further include: transmitting identification information of the subject included in the obtained information and specified frequency information indicating the specified frequency, to a server in which a frequency is registered for each set of identification information; and obtaining related information associated with the identification information and the frequency indicated by the specified frequency information, from the server.
In this way, the related information associated with the identification information (ID) obtained based on the luminance change of the subject (transmitter) and the frequency of the luminance change is obtained. By changing the luminance change frequency of the subject and updating the frequency registered in the server with the changed frequency, a receiver that has obtained the identification information before the change of the frequency is prevented from obtaining the related information from the server. That is, by changing the frequency registered in the server according to the change of the luminance change frequency of the subject, it is possible to prevent a situation where a receiver that has previously obtained the identification information of the subject can obtain the related information from the server for an indefinite period of time.
For example, the information communication method may further include: obtaining identification information of the subject, by extracting a part from the obtained information; and specifying a number indicated by the obtained information other than the part, as a luminance change frequency set for the subject.
In this way, the identification information of the subject and the luminance change frequency set for the subject can be included independently of each other in the information obtained from the pattern of the plurality of bright lines. This contributes to a higher degree of freedom of the identification information and the set frequency.
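A minimal sketch of this split; the field widths are assumptions.

    # the demodulated bit string carries the subject ID and the set frequency
    # independently of each other (a 16-bit ID field is an assumption)
    def split_info(info_bits):
        id_part = info_bits[:16]
        frequency_hz = int(info_bits[16:], 2)   # the remaining bits are a number
        return id_part, frequency_hz

    split_info("0000000000101010" + "10011011101010")
    # -> ("0000000000101010", 9962)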
This embodiment describes each example of application using a receiver such as a smartphone and a transmitter for transmitting information as a blink pattern of an LED or an organic EL device in each of the embodiments described above.
(Notification of Visible Light Communication to Humans)
A light emitting unit in a transmitter 8921a repeatedly performs blinking visually recognizable by humans and visible light communication, as illustrated in (a) in
Thus, the transmitter in this embodiment repeatedly alternates between a step of a light emitter transmitting a signal by changing in luminance and a step of the light emitter blinking so as to be visible to the human eye.
The transmitter may include a visible light communication unit and a blinking unit (communication state display unit) separately, as illustrated in (b) in
The transmitter may operate as illustrated in (c) in
(Example of Application to Route Guidance)
A receiver 8955a receives a transmission ID of a transmitter 8955b such as a guide sign, obtains data of a map displayed on the guide sign from a server, and displays the map data. Here, the server may transmit an advertisement suitable for the user of the receiver 8955a, so that the receiver 8955a displays the advertisement information, too. The receiver 8955a displays the route from the current position to the location designated by the user.
(Example of Application to Use Log Storage and Analysis)
A receiver 8957a receives an ID transmitted from a transmitter 8957b such as a sign, obtains coupon information from a server, and displays the coupon information. The receiver 8957a stores the subsequent behavior of the user such as saving the coupon, moving to a store displayed in the coupon, shopping in the store, or leaving without saving the coupon, in the server 8957c. In this way, the subsequent behavior of the user who has obtained information from the sign 8957b can be analyzed to estimate the advertisement value of the sign 8957b.
(Example of Application to Screen Sharing)
A transmitter 8960b such as a projector or a display transmits information (an SSID, a password for wireless connection, an IP address, a password for operating the transmitter) for wirelessly connecting to the transmitter 8960b, or transmits an ID which serves as a key for accessing such information. A receiver 8960a such as a smartphone, a tablet, a notebook computer, or a camera receives the signal transmitted from the transmitter 8960b to obtain the information, and establishes wireless connection with the transmitter 8960b. The wireless connection may be made via a router, or directly made by Wi-Fi Direct, Bluetooth®, Wireless Home Digital Interface, or the like. The receiver 8960a transmits a screen to be displayed by the transmitter 8960b. Thus, an image on the receiver can be easily displayed on the transmitter.
When connected with the receiver 8960a, the transmitter 8960b may notify the receiver 8960a that not only the information transmitted from the transmitter but also a password is needed for screen display, and refrain from displaying the transmitted screen if a correct password is not obtained. In this case, the receiver 8960a displays a password input screen 8960d or the like, and prompts the user to input the password.
As described above, according to this embodiment, the position estimation accuracy can be enhanced by employing both the position estimation by visible light communication and the position estimation by wireless communication.
Though the information communication method according to one or more aspects has been described by way of the embodiments above, the present disclosure is not limited to these embodiments. Modifications obtained by applying various changes conceivable by those skilled in the art to the embodiments and any combinations of structural elements in different embodiments are also included in the scope of one or more aspects without departing from the scope of the present disclosure.
An information communication method according to an aspect of the present disclosure may also be applied as illustrated in
A camera serving as a receiver in the visible light communication captures an image in a normal imaging mode (Step 1). Through this imaging, the camera obtains an image file in a format such as the exchangeable image file format (EXIF). Next, the camera captures an image in a visible light communication imaging mode (Step 2). The camera obtains, based on a pattern of bright lines in an image obtained by this imaging, a signal (visible light communication information) transmitted from a subject serving as a transmitter by visible light communication (Step 3). Furthermore, the camera accesses a server by using the signal (reception information) as a key and obtains, from the server, information corresponding to the key (Step 4). The camera stores each of the following as metadata of the above image file: the signal transmitted from the subject by visible light communication (visible light reception data); the information obtained from the server; data indicating the position of the subject serving as the transmitter in the image represented by the image file; data indicating the time at which the signal transmitted by visible light communication is received (the time in the moving image); and others. Note that in the case where a plurality of transmitters are shown as subjects in a captured image (an image file), the camera stores, into the image file, a set of such metadata for each of the transmitters.
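The stored result might look like the following structure; the field names are illustrative and not part of the EXIF standard.

    # one metadata entry per transmitter shown in the captured image
    image_file = {
        "pixels": "<image data>",
        "metadata": [
            {
                "visible_light_reception_data": "ID1000",
                "server_information": {"subject": "signage", "coupon": "100 yen off"},
                "subject_position_in_image": (420, 112),   # pixel coordinates
                "reception_time_in_video": 12.3,           # seconds
            },
        ],
    }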
When displaying an image represented by the above-described image file, a display or projector serving as a transmitter in the visible light communication transmits, by visible light communication, a signal corresponding to the metadata included in the image file. For example, in the visible light communication, the display or the projector may transmit the metadata itself or transmit, as a key, the signal associated with the transmitter shown in the image.
A mobile terminal (smartphone) serving as the receiver in the visible light communication captures an image of the display or the projector, thereby receiving a signal transmitted from the display or the projector by visible light communication. When the received signal is the above-described key, the mobile terminal uses the key to obtain, from the display, the projector, or the server, metadata of the transmitter associated with the key. When the received signal is a signal transmitted from an actual transmitter by visible light communication (visible light reception data or visible light communication information), the mobile terminal obtains information corresponding to the visible light reception data or the visible light communication information from the display, the projector, or the server.
An information communication method in this embodiment is an information communication method of obtaining information from a subject, the information communication method including: setting a first exposure time of an image sensor so that, in an image obtained by capturing a first subject by the image sensor, a plurality of bright lines corresponding to exposure lines included in the image sensor appear according to a change in luminance of the first subject, the first subject being the subject; obtaining a first bright line image which is an image including the plurality of bright lines, by capturing the first subject changing in luminance by the image sensor with the set first exposure time; obtaining first transmission information by demodulating data specified by a pattern of the plurality of bright lines included in the obtained first bright line image; and causing an opening and closing drive device of a door to open the door, by transmitting a control signal after the first transmission information is obtained.
In this way, the receiver including the image sensor can be used as a door key, thus eliminating the need for a special electronic lock. This enables communication between various devices including a device with low computational performance.
For example, the information communication method may further include: obtaining a second bright line image which is an image including a plurality of bright lines, by capturing a second subject changing in luminance by the image sensor with the set first exposure time; obtaining second transmission information by demodulating data specified by a pattern of the plurality of bright lines included in the obtained second bright line image; and determining whether or not a reception device including the image sensor is approaching the door, based on the obtained first transmission information and second transmission information, wherein in the causing of an opening and closing drive device, the control signal is transmitted in the case of determining that the reception device is approaching the door.
In this way, the door can be opened at appropriate timing, i.e. only when the reception device (receiver) is approaching the door.
For example, the information communication method may further include: setting a second exposure time longer than the first exposure time; and obtaining a normal image in which a third subject is shown, by capturing the third subject by the image sensor with the set second exposure time, wherein in the obtaining of a normal image, electric charge reading is performed on each of a plurality of exposure lines in an area including optical black in the image sensor, after a predetermined time elapses from when electric charge reading is performed on an exposure line adjacent to the exposure line, and in the obtaining of a first bright line image, electric charge reading is performed on each of a plurality of exposure lines in an area other than the optical black in the image sensor, after a time longer than the predetermined time elapses from when electric charge reading is performed on an exposure line adjacent to the exposure line, the optical black not being used in electric charge reading.
In this way, electric charge reading (exposure) is not performed on the optical black when obtaining the first bright line image, so that the time for electric charge reading (exposure) on an effective pixel area, which is an area in the image sensor other than the optical black, can be increased. As a result, the time for signal reception in the effective pixel area can be increased, with it being possible to obtain more signals.
For example, the information communication method may further include: determining whether or not a length of the pattern of the plurality of bright lines included in the first bright line image is less than a predetermined length, the length being perpendicular to each of the plurality of bright lines; changing a frame rate of the image sensor to a second frame rate lower than a first frame rate used when obtaining the first bright line image, in the case of determining that the length of the pattern is less than the predetermined length; obtaining a third bright line image which is an image including a plurality of bright lines, by capturing the first subject changing in luminance by the image sensor with the set first exposure time at the second frame rate; and obtaining the first transmission information by demodulating data specified by a pattern of the plurality of bright lines included in the obtained third bright line image.
In this way, in the case where the signal length indicated by the bright line pattern (bright line area) included in the first bright line image is less than, for example, one block of the transmission signal, the frame rate is decreased and the bright line image is obtained again as the third bright line image. Since the length of the bright line pattern included in the third bright line image is longer, one block of the transmission signal is successfully obtained.
For example, the information communication method may further include setting an aspect ratio of an image obtained by the image sensor, wherein the obtaining of a first bright line image includes: determining whether or not an edge of the image perpendicular to the exposure lines is clipped in the set aspect ratio; changing the set aspect ratio to a non-clipping aspect ratio in which the edge is not clipped, in the case of determining that the edge is clipped; and obtaining the first bright line image in the non-clipping aspect ratio, by capturing the first subject changing in luminance by the image sensor.
In this way, in the case where the aspect ratio of the effective pixel area in the image sensor is 4:3 but the aspect ratio of the image is set to 16:9 and horizontal bright lines appear, i.e. the exposure lines extend along the horizontal direction, it is determined that the top and bottom edges of the image are clipped, i.e. that edges of the first bright line image are lost. In such a case, the aspect ratio of the image is changed to an aspect ratio that involves no clipping, for example, 4:3. This prevents edges of the first bright line image from being lost, as a result of which a lot of information can be obtained from the first bright line image.
For example, the information communication method may further include: compressing the first bright line image in a direction parallel to each of the plurality of bright lines included in the first bright line image, to generate a compressed image; and transmitting the compressed image.
In this way, the first bright line image can be appropriately compressed without losing information indicated by the plurality of bright lines.
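A minimal sketch of this compression, assuming the bright lines run along image rows.

    import numpy as np

    def compress_bright_line_image(img):
        # averaging in the direction parallel to the bright lines keeps the
        # line pattern while discarding the redundant direction
        return img.mean(axis=1, keepdims=True)   # shape (H, W) -> (H, 1)

    compressed = compress_bright_line_image(np.random.rand(1080, 1920))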
For example, the information communication method may further include: determining whether or not a reception device including the image sensor is moved in a predetermined manner; and activating the image sensor, in the case of determining that the reception device is moved in the predetermined manner.
In this way, the image sensor can be easily activated only when needed. This contributes to improved power consumption efficiency.
This embodiment describes each example of application using a receiver such as a smartphone and a transmitter for transmitting information as a blink pattern of an LED or an organic EL device in each of the embodiments described above.
A robot 8970 has a function as, for example, a self-propelled vacuum cleaner and a function as a receiver in each of the above embodiments. Lighting devices 8971a and 8971b each have a function as a transmitter in each of the above embodiments.
For instance, the robot 8970 cleans a room and also captures the lighting device 8971a illuminating the interior of the room, while moving in the room. The lighting device 8971a transmits the ID of the lighting device 8971a by changing in luminance. The robot 8970 accordingly receives the ID from the lighting device 8971a, and estimates the position (self-position) of the robot 8970 based on the ID, as in each of the above embodiments. That is, the robot 8970 estimates the position of the robot 8970 while moving, based on the result of detection by a 9-axis sensor, the relative position of the lighting device 8971a shown in the captured image, and the absolute position of the lighting device 8971a specified by the ID.
When the robot 8970 moves away from the lighting device 8971a, the robot 8970 transmits a signal (turn off instruction) instructing to turn off, to the lighting device 8971a. For example, when the robot 8970 moves away from the lighting device 8971a by a predetermined distance, the robot 8970 transmits the turn off instruction. Alternatively, when the lighting device 8971a is no longer shown in the captured image or when another lighting device is shown in the image, the robot 8970 transmits the turn off instruction to the lighting device 8971a. Upon receiving the turn off instruction from the robot 8970, the lighting device 8971a turns off according to the turn off instruction.
The robot 8970 then detects that the robot 8970 approaches the lighting device 8971b based on the estimated position of the robot 8970, while moving and cleaning the room. In detail, the robot 8970 holds information indicating the position of the lighting device 8971b and, when the distance between the position of the robot 8970 and the position of the lighting device 8971b is less than or equal to a predetermined distance, detects that the robot 8970 approaches the lighting device 8971b. The robot 8970 transmits a signal (turn on instruction) instructing to turn on, to the lighting device 8971b. Upon receiving the turn on instruction, the lighting device 8971b turns on according to the turn on instruction.
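A minimal sketch of the proximity test, with an assumed distance threshold.

    import math

    TURN_ON_DISTANCE = 2.0   # metres; an assumed threshold

    def approaching(robot_position, lighting_position):
        return math.dist(robot_position, lighting_position) <= TURN_ON_DISTANCE

    if approaching((3.0, 1.0), (3.5, 1.2)):
        pass   # transmit the turn on instruction to the lighting device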
In this way, the robot 8970 can easily perform cleaning while moving, by making only its surroundings illuminated.
A lighting device 8974 has a function as a transmitter in each of the above embodiments. The lighting device 8974 illuminates, for example, a line guide sign 8975 in a train station, while changing in luminance. A receiver 8973 pointed at the line guide sign 8975 by the user captures the line guide sign 8975. The receiver 8973 thus obtains the ID of the line guide sign 8975, and obtains information associated with the ID, i.e. detailed information of each line shown in the line guide sign 8975. The receiver 8973 displays a guide image 8973a indicating the detailed information. For example, the guide image 8973a indicates the distance to the line shown in the line guide sign 8975, the direction to the line, and the time of arrival of the next train on the line.
When the user touches the guide image 8973a, the receiver 8973 displays a supplementary guide image 8973b. For instance, the supplementary guide image 8973b is an image for displaying any of a train timetable, information about lines other than the line shown by the guide image 8973a, and detailed information of the station, according to selection by the user.
This embodiment describes each example of application using a receiver such as a smartphone and a transmitter for transmitting information as a blink pattern of an LED or an organic EL device in each of the embodiments described above.
(Signal Reception from a Plurality of Directions by a Plurality of Light Receiving Units)
A receiver 9020a such as a wristwatch includes a plurality of light receiving units. For example, the receiver 9020a includes, as illustrated in
When these light receiving units 9020b and 9020c have directivity, the signal can be received without interference even in the case where a plurality of transmitters are located nearby.
(Route Guidance by Wristwatch-Type Display)
A receiver 9023b such as a wristwatch is connected to a smartphone 9022a via wireless communication such as Bluetooth®. The receiver 9023b has a watch face composed of a display such as a liquid crystal display, and is capable of displaying information other than the time. The smartphone 9022a recognizes the current position from a signal received by the receiver 9023b, and displays the route and distance to the destination on the display surface of the receiver 9023b.
The signal transmission and reception system includes a smartphone which is a multifunctional mobile phone, an LED light emitter which is a lighting device, a home appliance such as a refrigerator, and a server. The LED light emitter performs communication using BTLE (Bluetooth® Low Energy) and also performs visible light communication using a light emitting diode (LED). For example, the LED light emitter controls a refrigerator or communicates with an air conditioner by BTLE. In addition, the LED light emitter controls a power supply of a microwave, an air cleaner, or a television (TV) by visible light communication.
For example, the television includes a solar power device and uses this solar power device as a photosensor. Specifically, when the LED light emitter transmits a signal using a change in luminance, the television detects the change in luminance of the LED light emitter by referring to a change in power generated by the solar power device. The television then demodulates the signal represented by the detected change in luminance, thereby obtaining the signal transmitted from the LED light emitter. When the signal is an instruction to power ON, the television switches a main power thereof to ON, and when the signal is an instruction to power OFF, the television switches the main power thereof to OFF.
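As an illustration, the demodulation from the solar power samples might be sketched as follows; the threshold and the codewords are assumptions.

    ON_PATTERN = [1, 0, 1, 1]    # assumed codewords for the two instructions
    OFF_PATTERN = [1, 1, 0, 1]

    def demodulate(power_samples, threshold):
        # slice the power generated by the solar power device into bits
        return [1 if p > threshold else 0 for p in power_samples]

    def control_main_power(power_samples, threshold=0.5):
        bits = demodulate(power_samples, threshold)
        if bits == ON_PATTERN:
            return "ON"
        if bits == OFF_PATTERN:
            return "OFF"
        return None   # not a recognized instruction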
The server is capable of communicating with an air conditioner via a router and a specified low-power radio station (specified low-power). Furthermore, the server is capable of communicating with the LED light emitter because the air conditioner is capable of communicating with the LED light emitter via BTLE. Therefore, the server is capable of switching the power supply of the TV between ON and OFF via the LED light emitter. The smartphone is capable of controlling the power supply of the TV via the server by communicating with the server via wireless fidelity (Wi-Fi), for example.
As illustrated in
(Reception in which Interference is Eliminated)
In Step 9001a, the process starts. In Step 9001b, the receiver determines whether or not there is a periodic change in the intensity of received light. In the case of Yes, the process proceeds to Step 9001c. In the case of No, the process proceeds to Step 9001d, and the receiver receives light in a wide range by setting the lens of the light receiving unit at wide angle. The process then returns to Step 9001b. In Step 9001c, the receiver determines whether or not signal reception is possible. In the case of Yes, the process proceeds to Step 9001e, and the receiver receives a signal. In Step 9001g, the process ends. In the case of No, the process proceeds to Step 9001f, and the receiver receives light in a narrow range by setting the lens of the light receiving unit at telephoto. The process then returns to Step 9001c.
With this method, a signal from a transmitter in a wide direction can be received while eliminating signal interference from a plurality of transmitters.
(Transmitter Direction Estimation)
In Step 9002a, the process starts. In Step 9002b, the receiver sets the lens of the light receiving unit at maximum telephoto. In Step 9002c, the receiver determines whether or not there is a periodic change in the intensity of received light. In the case of Yes, the process proceeds to Step 9002d. In the case of No, the process proceeds to Step 9002e, and the receiver receives light in a wide range by setting the lens of the light receiving unit at wide angle. The process then returns to Step 9002c. In Step 9002d, the receiver receives a signal. In Step 9002f, the receiver sets the lens of the light receiving unit at maximum telephoto, changes the light reception direction along the boundary of the light reception range, detects the direction in which the light reception intensity is maximum, and estimates that the transmitter is in the detected direction. In Step 9002g, the process ends.
With this method, the direction in which the transmitter is present can be estimated. Here, the lens may be initially set at maximum wide angle, and gradually changed to telephoto.
(Reception Start)
In Step 9003a, the process starts. In Step 9003b, the receiver determines whether or not a signal is received from a base station of Wi-Fi, Bluetooth®, IMES, or the like. In the case of Yes, the process proceeds to Step 9003c. In the case of No, the process returns to Step 9003b. In Step 9003c, the receiver determines whether or not the base station is registered in the receiver or the server as a reception start trigger. In the case of Yes, the process proceeds to Step 9003d, and the receiver starts signal reception. In Step 9003e, the process ends. In the case of No, the process returns to Step 9003b.
With this method, reception can be started without the user performing a reception start operation. Moreover, power can be saved as compared with the case of constantly performing reception.
(Generation of ID Additionally Using Information of Another Medium)
In Step 9004a, the process starts. In Step 9004b, the receiver transmits, to a high order bit ID index server, an ID of the connected carrier communication network, Wi-Fi, Bluetooth®, or the like, position information obtained from that ID, or position information obtained from GPS or the like. In Step 9004c, the receiver receives the high order bits of a visible light ID from the high order bit ID index server. In Step 9004d, the receiver receives a signal from a transmitter, as the low order bits of the visible light ID. In Step 9004e, the receiver transmits the combination of the high order bits and the low order bits of the visible light ID, to an ID solution server. In Step 9004f, the process ends.
With this method, the high order bits commonly used in the neighborhood of the receiver can be obtained. This contributes to a smaller amount of data transmitted from the transmitter, and faster reception by the receiver.
Here, the transmitter may transmit both the high order bits and the low order bits. In such a case, a receiver employing this method can synthesize the ID upon receiving the low order bits, whereas a receiver not employing this method obtains the ID by receiving the whole ID from the transmitter.
(Reception Scheme Selection by Frequency Separation)
In Step 9005a, the process starts. In Step 9005b, the receiver applies a frequency filter circuit to a received light signal, or performs frequency resolution on the received light signal by discrete Fourier series expansion. In Step 9005c, the receiver determines whether or not a low frequency component is present. In the case of Yes, the process proceeds to Step 9005d, and the receiver decodes the signal expressed in a low frequency domain of frequency modulation or the like. The process then proceeds to Step 9005e. In the case of No, the process proceeds to Step 9005e. In Step 9005e, the receiver determines whether or not a high frequency component is present. In the case of Yes, the process proceeds to Step 9005f, and the receiver decodes the signal expressed in a high frequency domain of pulse position modulation or the like. The process then proceeds to Step 9005g. In the case of No, the process proceeds to Step 9005g. In Step 9005g, the receiver starts signal reception. In Step 9005h, the process ends.
With this method, signals modulated by a plurality of modulation schemes can be received.
(Signal Reception in the Case of Long Exposure Time)
In Step 9030a, the process starts. In Step 9030b, in the case where the sensitivity is settable, the receiver sets the highest sensitivity. In Step 9030c, in the case where the exposure time is settable, the receiver sets an exposure time shorter than in the normal imaging mode. In Step 9030d, the receiver captures two images, and calculates the difference in luminance. In the case where the position or direction of the imaging unit changes while the two images are captured, the receiver cancels the change, generates an image as if it had been captured from the same position and direction, and calculates the difference. In Step 9030e, the receiver calculates the average of luminance values in the direction parallel to the exposure lines in the captured image or the difference image. In Step 9030f, the receiver arranges the calculated average values in the direction perpendicular to the exposure lines, and performs a discrete Fourier transform. In Step 9030g, the receiver recognizes whether or not there is a peak near a predetermined frequency. In Step 9030h, the process ends.
With this method, signal reception is possible even in the case where the exposure time is long, such as when the exposure time cannot be set or when a normal image is captured simultaneously.
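The processing in Steps 9030d to 9030g can be pictured with a short sketch. The following assumes the two frames are available as 2-D numpy arrays with the exposure lines running along rows, and that the expected frequency bin of the modulation is known in advance; both are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def detect_blink_peak(frame_a, frame_b, expected_bin, tolerance=2):
    """Sketch of Steps 9030d-9030g: difference image, per-line averages,
    DFT across lines, and a peak check near an expected frequency bin."""
    # Step 9030d: the difference image suppresses the static scene.
    diff = frame_a.astype(np.float64) - frame_b.astype(np.float64)
    # Step 9030e: average luminance along each exposure line (row).
    line_means = diff.mean(axis=1)
    # Step 9030f: discrete Fourier transform perpendicular to the lines.
    spectrum = np.abs(np.fft.rfft(line_means - line_means.mean()))
    # Step 9030g: recognize whether the peak lies near the expected bin.
    return abs(int(np.argmax(spectrum)) - expected_bin) <= tolerance
```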
In the case where the exposure time is automatically set, when the camera is pointed at a transmitter used as a lighting, the exposure time is set to about 1/60 second to 1/480 second by an automatic exposure compensation function. If the exposure time cannot be set, signal reception is performed under this condition. In an experiment, when a lighting blinks periodically, stripes are visible in the direction perpendicular to the exposure lines if the period of one blink cycle is greater than or equal to about 1/16 of the exposure time, so that the blink period can be recognized by image processing. Since the part in which the lighting itself appears is too high in luminance and the stripes there are hard to recognize, the signal period may instead be calculated from a part where the light is reflected.
In the case of using a scheme that periodically turns the light emitting unit on and off, such as frequency shift keying or frequency multiplex modulation, flicker is less visible to humans and less likely to appear in video captured by a video camera than in the case of using pulse position modulation at the same modulation frequency. Hence, a lower frequency can be used as the modulation frequency. Since the temporal resolution of human vision is about 60 Hz, a frequency at or above 60 Hz can be used as the modulation frequency.
When the modulation frequency is an integer multiple of the imaging frame rate of the receiver, imaging always captures the light pattern of the transmitter in the same phase, so bright lines do not appear in the difference image between pixels at the same position in two images, and reception is difficult. Since the imaging frame rate of the receiver is typically 30 fps, setting the modulation frequency to other than an integer multiple of 30 Hz eases reception. Moreover, given that receivers have various imaging frame rates, two relatively prime modulation frequencies may be assigned to the same signal so that the transmitter transmits the signal alternately using the two modulation frequencies. By receiving the signal at either of the two frequencies, the receiver can easily reconstruct it.
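This frame-rate constraint can be checked mechanically. The sketch below, in which the list of frame rates and the candidate range are illustrative assumptions, rejects integer multiples of common frame rates and then picks a relatively prime pair to assign to one signal.

```python
from math import gcd

COMMON_FRAME_RATES = [24, 25, 30, 60]  # assumed typical receiver rates

def is_safe_modulation_freq(freq_hz):
    # A multiple of a frame rate samples the transmitter in the same
    # phase every frame, so no bright lines appear in the difference.
    return all(freq_hz % rate != 0 for rate in COMMON_FRAME_RATES)

def pick_frequency_pair(candidates):
    # Two relatively prime safe frequencies assigned to the same signal
    # let a receiver at any frame rate reconstruct it from either one.
    safe = [f for f in candidates if is_safe_modulation_freq(f)]
    for i, a in enumerate(safe):
        for b in safe[i + 1:]:
            if gcd(a, b) == 1:
                return a, b
    return None

print(pick_frequency_pair(range(100, 200)))  # e.g. (101, 102)
```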
The ratio between a high luminance section and a low luminance section is adjusted to change the average luminance. Thus, brightness adjustment is possible. Here, when the period T1 in which the luminance changes between HIGH and LOW is maintained constant, the frequency peak can be maintained constant. For example, in each of (a), (b), and (c) in
It may be that the average luminance is changed by changing the luminance in the high luminance section, the luminance in the low luminance section, or the luminance values in both sections.
Since there is a limit to component precision, the brightness of one transmitter will be slightly different from that of another even with the same light adjustment setting. In the case where transmitters are arranged side by side, a difference in brightness between adjacent transmitters produces an unnatural impression. Hence, a user adjusts the brightness of the transmitters by operating a light adjustment correction/operation unit. A light adjustment correction unit holds a correction value. A light adjustment control unit controls the brightness of the light emitting unit according to the correction value. When the light adjustment level is changed by a user operating a light adjustment operation unit, the light adjustment control unit controls the brightness of the light emitting unit based on the light adjustment setting value after the change and the correction value held in the light adjustment correction unit. The light adjustment control unit transfers the light adjustment setting value to another transmitter through a cooperative light adjustment unit. When a light adjustment setting value is transferred from another transmitter through the cooperative light adjustment unit, the light adjustment control unit controls the brightness of the light emitting unit based on that light adjustment setting value and the correction value held in the light adjustment correction unit.
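The cooperative light adjustment described above might be sketched as follows; the class and attribute names are hypothetical stand-ins for the correction, control, and cooperative light adjustment units, not names used by the embodiment.

```python
class Transmitter:
    """Hypothetical sketch: the setting value is shared, the correction is local."""

    def __init__(self, correction, peers=None):
        self.correction = correction  # held by the light adjustment correction unit
        self.peers = peers or []      # reachable via the cooperative light adjustment unit
        self.brightness = 0.0

    def set_dimming(self, setting, propagate=True):
        # Light adjustment control unit: setting value plus held correction.
        self.brightness = setting + self.correction
        if propagate:
            # Transfer the setting value (not the corrected brightness) to peers.
            for peer in self.peers:
                peer.set_dimming(setting, propagate=False)

a = Transmitter(correction=+0.05)
b = Transmitter(correction=-0.05, peers=[a])
b.set_dimming(0.75)                # one user operation dims both transmitters
print(a.brightness, b.brightness)  # each lamp applies its own correction
```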
The control method of controlling an information communication device that transmits a signal by causing a light emitter to change in luminance according to an embodiment of the present disclosure may cause a computer of the information communication device to execute: determining, by modulating a signal to be transmitted that includes a plurality of different signals, a luminance change pattern corresponding to a different frequency for each of the different signals; and transmitting the signal to be transmitted, by causing the light emitter to change in luminance to include, in a time corresponding to a single frequency, only a luminance change pattern determined by modulating a single signal.
For example, when luminance change patterns determined by modulating more than one signal are included in the time corresponding to a single frequency, the waveform of changes in luminance with time will be complicated, making it difficult to appropriately receive signals. However, when only a luminance change pattern determined by modulating a single signal is included in the time corresponding to a single frequency, it is possible to more appropriately receive signals upon reception.
According to one embodiment of the present disclosure, the number of transmissions may be determined in the determining so as to make a total number of times one of the plurality of different signals is transmitted different from a total number of times a remaining one of the plurality of different signals is transmitted within a predetermined time.
When the number of times one signal is transmitted is different from the number of times another signal is transmitted, it is possible to prevent flicker at the time of transmission.
According to one embodiment of the present disclosure, in the determining, a total number of times a signal corresponding to a high frequency is transmitted may be set greater than a total number of times another signal is transmitted within a predetermined time.
At the time of frequency conversion at a receiver, a signal corresponding to a high frequency results in low luminance, but an increase in the number of transmissions makes it possible to increase a luminance value at the time of frequency conversion.
According to one embodiment of the present disclosure, changes in luminance with time in the luminance change pattern have a waveform of any of a square wave, a triangular wave, and a sawtooth wave.
With a square wave or the like, it is possible to more appropriately receive signals.
According to one embodiment of the present disclosure, when an average luminance of the light emitter is set to have a large value, a length of time for which luminance of the light emitter is greater than a predetermined value during the time corresponding to the single frequency may be set to be longer than when the average luminance of the light emitter is set to have a small value.
By adjusting the length of time for which the luminance of the light emitter is greater than the predetermined value during the time corresponding to a single frequency, it is possible to adjust the average luminance of the light emitter while transmitting signals. For example, when the light emitter is used as a lighting, signals can be transmitted while the overall brightness is decreased or increased.
Using an application programming interface (API) (an interface for using OS functions) on which the exposure time can be set, the receiver can set the exposure time to a predetermined value and stably receive the visible light signal. Furthermore, using an API on which the sensitivity can be set, the receiver can set the sensitivity to a predetermined value, and can stably receive the visible light signal even when the brightness of a transmission signal is low or high.
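As a sketch only: the calls below (set_auto_exposure, set_exposure_time_ns, set_iso) are hypothetical stand-ins for whatever exposure and sensitivity APIs a given platform actually exposes, and the stub class exists only to make the example self-contained.

```python
class StubCamera:
    """Stand-in for a platform camera API; a real app would use the OS SDK."""
    def set_auto_exposure(self, enabled): print("auto-exposure:", enabled)
    def set_exposure_time_ns(self, ns): print("exposure (ns):", ns)
    def set_iso(self, iso): print("sensitivity (ISO):", iso)

def configure_for_visible_light(camera, exposure_ns=100_000, iso=800):
    # Pin a short exposure so bright lines stay decodable, and pin the
    # sensitivity so a dim or overly bright transmitter is still readable.
    camera.set_auto_exposure(False)
    camera.set_exposure_time_ns(exposure_ns)
    camera.set_iso(iso)

configure_for_visible_light(StubCamera())
```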
This embodiment describes each example of application using a receiver such as a smartphone and a transmitter for transmitting information as a blink pattern of an LED or an organic EL device in each of the embodiments described above.
EX zoom is described below.
Zoom, that is, a way of obtaining a magnified image, includes: optical zoom, which adjusts the focal length of a lens to change the size of the image formed on an imaging element; digital zoom, which interpolates the image formed on an imaging element through digital processing to obtain a magnified image; and EX zoom, which changes which imaging elements are used for imaging to obtain a magnified image. The EX zoom is applicable when the number of imaging elements included in an image sensor is large relative to the resolution of the captured image.
For example, an image sensor 10080a illustrated in
When capturing an image of a wide range to search for a transmitter or to receive information from many transmitters, a receiver including the above image sensor 10080a captures an image using only a part of the imaging elements evenly dispersed as a whole in the image sensor 10080a.
When using the EX zoom, the receiver captures an image by only a part of the imaging elements that is locally dense in the image sensor 10080a (e.g. the 16 by 12 imaging elements indicated by black squares in the image sensor 10080a in (b) in
In the digital zoom, it is not possible to increase the number of exposure lines that receive visible light signals, and the length of time for which the visible light signals are received does not increase; therefore, it is preferable to use the other kinds of zoom as much as possible. The optical zoom requires time for physical movement of a lens, an image sensor, or the like; in this regard, the EX zoom requires only a digital setting change and is therefore advantageous in that it takes a short time to zoom. From this perspective, the order of priority of the zooms is as follows: (1) the EX zoom; (2) the optical zoom; and (3) the digital zoom. The receiver may use one or more of these zooms selected according to the above order of priority and the required zoom magnification. Note that the imaging elements that are not used in the imaging methods represented in (a) and (b) in
This embodiment describes each example of application using a receiver such as a smartphone and a transmitter for transmitting information as a blink pattern of an LED or an organic EL device in each of the embodiments described above.
In this embodiment, the exposure time is set for each exposure line or each imaging element.
As illustrated in
As illustrated in
This image sensor 10011a is capable of using all the exposure lines for visible light imaging unlike the image sensor 10010a. Consequently, the visible light captured image 10011c obtained by the image sensor 10011a includes a larger number of bright lines than in the visible light captured image 10010c, and therefore allows the visible light signal to be received with increased accuracy.
As illustrated in
The normal captured image 10012b obtained by the image sensor 10012a has data of the plurality of the imaging elements arranged in a grid or evenly arranged, and therefore interpolation and resizing thereof can be more accurate than those of the normal captured image 10010b and the normal captured image 10011b. The visible light captured image 10012c is generated by imaging that uses all the exposure lines of the image sensor 10012a. Thus, this image sensor 10012a is capable of using all the exposure lines for visible light imaging unlike the image sensor 10010a. Consequently, as with the visible light captured image 10011c, the visible light captured image 10012c obtained by the image sensor 10012a includes a larger number of bright lines than in the visible light captured image 10010c, and therefore allows the visible light signal to be received with increased accuracy.
Interlaced display of the preview image is described below.
The receiver including the above-described image sensor 10010a illustrated in
At time t1, the receiver obtains Image 1 which includes captured images obtained from the plurality of the odd lines (hereinafter referred to as odd-line images) and captured images obtained from the plurality of the even lines (hereinafter referred to as even-line images). At this time, the exposure time for each of the even lines is short, resulting in the subject failing to appear clear in each of the even-line images. Therefore, the receiver generates interpolated line images by interpolating pixel values for the even lines. The receiver then displays a preview image including the interpolated line images instead of the even-line images. Thus, the odd-line images and the interpolated line images are alternately arranged in the preview image.
At time t2, the receiver obtains Image 2 which includes captured odd-line images and even-line images. At this time, the exposure time for each of the odd lines is short, resulting in the subject failing to appear clear in each of the odd-line images. Therefore, the receiver displays a preview image including the odd-line images of the Image 1 instead of the odd-line images of the Image 2. Thus, the odd-line images of the Image 1 and the even-line images of the Image 2 are alternately arranged in the preview image.
At time t3, the receiver obtains Image 3 which includes captured odd-line images and even-line images. At this time, the exposure time for each of the even lines is short, resulting in the subject failing to appear clear in each of the even-line images, as in the case of time t1. Therefore, the receiver displays a preview image including the even-line images of the Image 2 instead of the even-line images of the Image 3. Thus, the even-line images of the Image 2 and the odd-line images of the Image 3 are alternately arranged in the preview image. At time t4, the receiver obtains Image 4 which includes captured odd-line images and even-line images. At this time, the exposure time for each of the odd lines is short, resulting in the subject failing to appear clear in each of the odd-line images, as in the case of time t2. Therefore, the receiver displays a preview image including the odd-line images of the Image 3 instead of the odd-line images of the Image 4. Thus, the odd-line images of the Image 3 and the even-line images of the Image 4 are alternately arranged in the preview image.
In this way, the receiver displays the image including the even-line images and the odd-line images obtained at different times, that is, displays what is called an interlaced image.
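The interlacing rule at times t1 to t4 can be sketched as follows, assuming each captured frame is a 2-D numpy array and that the long-exposure role alternates between even and odd rows from frame to frame.

```python
import numpy as np

def interlaced_preview(prev_frame, cur_frame, long_rows_even):
    """Build a preview from the long-exposure rows of two consecutive
    frames; the short-exposure rows are replaced by the previous frame's."""
    preview = cur_frame.copy()
    if long_rows_even:
        preview[1::2] = prev_frame[1::2]  # odd rows were short-exposure
    else:
        preview[0::2] = prev_frame[0::2]  # even rows were short-exposure
    return preview
```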
The receiver is capable of displaying a high-definition preview image while performing visible light imaging. Note that the imaging elements for which the same exposure time is set may be imaging elements arranged along a direction horizontal to the exposure line as in the image sensor 10010a, or imaging elements arranged along a direction perpendicular to the exposure line as in the image sensor 10011a, or imaging elements arranged in a checkered pattern as in the image sensor 10012a. The receiver may store the preview image as captured image data.
Next, a spatial ratio between normal imaging and visible light imaging is described.
In an image sensor 10014b included in the receiver, a long exposure time or a short exposure time is set for each exposure line as in the above-described image sensor 10010a. In this image sensor 10014b, the ratio between the number of imaging elements for which the long exposure time is set and the number of imaging elements for which the short exposure time is set is one to one. This ratio is a ratio between normal imaging and visible light imaging and hereinafter referred to as a spatial ratio.
In this embodiment, however, this spatial ratio does not need to be one to one. For example, the receiver may include an image sensor 10014a. In this image sensor 10014a, the number of imaging elements for which a short exposure time is set is greater than the number of imaging elements for which a long exposure time is set, that is, the spatial ratio is one to N (N>1). Alternatively, the receiver may include an image sensor 10014c. In this image sensor 10014c, the number of imaging elements for which a short exposure time is set is less than the number of imaging elements for which a long exposure time is set, that is, the spatial ratio is N (N>1) to one. It may also be that the exposure time is set for each vertical line described above, and thus the receiver includes, instead of the image sensors 10014a to 10014c, any one of image sensors 10015a to 10015c having spatial ratios one to N, one to one, and N to one, respectively.
The image sensors 10014a and 10015a are capable of receiving the visible light signal with increased accuracy or speed because they include more imaging elements for which the short exposure time is set. The image sensors 10014c and 10015c are capable of displaying a high-definition preview image because they include more imaging elements for which the long exposure time is set.
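The spatial ratio can be pictured as a boolean mask over the imaging elements marking which ones use the short exposure. The sketch below is illustrative; the function name and parameters are not from the embodiment.

```python
import numpy as np

def short_exposure_mask(rows, cols, short_every=2, pattern="line"):
    """True marks elements given the short (visible light) exposure.
    short_every=2 gives a one-to-one spatial ratio; larger values favor
    the long exposure, as in the image sensor 10014c."""
    if pattern == "line":      # per exposure line, as in 10014a to 10014c
        mask = np.zeros((rows, cols), dtype=bool)
        mask[::short_every, :] = True
        return mask
    if pattern == "checker":   # checkered layout, as in 10012a
        y, x = np.indices((rows, cols))
        return (y + x) % 2 == 0
    raise ValueError(pattern)
```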
Furthermore, using the image sensors 10014a, 10014c, 10015a, and 10015c, the receiver may display an interlaced image as illustrated in
Next, a temporal ratio between normal imaging and visible light imaging is described.
The receiver may switch the imaging mode between a normal imaging mode and a visible light imaging mode for each frame as illustrated in (a) in
Note that in the case of determining a long exposure time by the automatic exposure, the receiver may ignore an image captured with a short exposure time so as to perform the automatic exposure based on only brightness of an image captured with a long exposure time. By doing so, it is possible to determine an appropriate long exposure time.
Alternatively, the receiver may switch the imaging mode between the normal imaging mode and the visible light imaging mode for each set of frames as illustrated in (b) in
The ratio between the number of frames continuously generated by imaging in the normal imaging mode using a long exposure time and the number of frames continuously generated by imaging in the visible light imaging mode using a short exposure time (hereinafter referred to as a temporal ratio) does not need to be one to one. That is, although the temporal ratio is one to one in the case illustrated in (a) and (b) of
For example, the receiver can make the number of frames in the visible light imaging mode greater than the number of frames in the normal imaging mode as illustrated in (c) in
Alternatively, the receiver can make the number of frames in the normal imaging mode greater than the number of frames in the visible light imaging mode as illustrated in (d) in
It may also be possible that, as illustrated in (e) in
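The temporal ratio amounts to a repeating per-frame pattern of imaging modes, as the following sketch with illustrative mode names shows.

```python
from itertools import cycle, islice

def mode_schedule(normal_frames, vlc_frames):
    # Temporal ratio normal_frames : vlc_frames between the two modes.
    pattern = ["normal"] * normal_frames + ["visible_light"] * vlc_frames
    return cycle(pattern)

# A one-to-three ratio spends more frames receiving the visible light signal.
print(list(islice(mode_schedule(1, 3), 8)))
```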
The receiver starts visible light reception which is processing of receiving a visible light signal (Step S10017a) and sets a preset long/short exposure time ratio to a value specified by a user (Step S10017b). The preset long/short exposure time ratio is at least one of the above spatial ratio and temporal ratio. A user may specify only the spatial ratio, only the temporal ratio, or values of both the spatial ratio and the temporal ratio. Alternatively, the receiver may automatically set the preset long/short exposure time ratio without depending on a ratio specified by a user.
Next, the receiver determines whether or not the reception performance is no more than a predetermined value (Step S10017c). When determining that the reception performance is no more than the predetermined value (Y in Step S10017c), the receiver sets the ratio of the short exposure time high (Step S10017d). By doing so, it is possible to increase the reception performance. Note that the ratio of the short exposure time is, when the spatial ratio is used, a ratio of the number of imaging elements for which the short exposure time is set to the number of imaging elements for which the long exposure time is set, and is, when the temporal ratio is used, a ratio of the number of frames continuously generated in the visible light imaging mode to the number of frames continuously generated in the normal imaging mode.
Next, the receiver receives at least part of the visible light signal and determines whether or not at least part of the visible light signal received (hereinafter referred to as a received signal) has a priority assigned (Step S10017e). The received signal that has a priority assigned contains an identifier indicating a priority. When determining that the received signal has a priority assigned (Step S10017e: Y), the receiver sets the preset long/short exposure time ratio according to the priority (Step S10017f). Specifically, the receiver sets the ratio of the short exposure time high when the priority is high. For example, an emergency light as a transmitter transmits an identifier indicating a high priority by changing in luminance. In this case, the receiver can increase the ratio of the short exposure time to increase the reception speed and thereby promptly display an escape route and the like.
Next, the receiver determines whether or not the reception of all the visible light signals has been completed (Step S10017g). When determining that the reception has not been completed (Step S10017g: N), the receiver repeats the processes following Step S10017c. In contrast, when determining that the reception has been completed (Step S10017g: Y), the receiver sets the ratio of the long exposure time high and effects a transition to a power saving mode (Step S10017h). Note that the ratio of the long exposure time is, when the spatial ratio is used, a ratio of the number of imaging elements for which the long exposure time is set to the number of imaging elements for which the short exposure time is set, and is, when the temporal ratio is used, a ratio of the number of frames continuously generated in the normal imaging mode to the number of frames continuously generated in the visible light imaging mode. This makes it possible to display a smooth preview image without performing unnecessary visible light reception.
Next, the receiver determines whether or not another visible light signal has been found (Step S10017i). When another visible light signal has been found (Step S10017i: Y), the receiver repeats the processes following Step S10017b.
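Steps S10017c to S10017f amount to a small control loop over the long/short ratio. The sketch below uses an assumed reception-quality metric and an assumed numeric priority encoding; both are illustrative.

```python
def adapt_short_exposure_ratio(ratio, reception_quality, signal_priority,
                               min_quality=0.5):
    """Raise the short-exposure ratio when reception is poor (Steps
    S10017c/d) or when the received signal carries a priority (S10017e/f).
    ratio: current short-to-long ratio, spatial or temporal.
    reception_quality: assumed 0..1 estimate of decoding success.
    signal_priority: None, or a numeric priority carried in the signal."""
    if reception_quality <= min_quality:
        ratio *= 2                               # Step S10017d
    if signal_priority is not None:
        ratio = max(ratio, 1 + signal_priority)  # Step S10017f
    return ratio
```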
Next, simultaneous operation of visible light imaging and normal imaging is described.
The receiver may set two or more exposure times in the image sensor. Specifically, as illustrated in (a) in
For example, when two exposure times are set, the receiver reads out visible light imaging data generated by exposure for a short exposure time that includes a visible light signal, and subsequently reads out normal imaging data generated by exposure for a long exposure time as illustrated in (a) in
By doing so, visible light imaging, which is imaging for receiving a visible light signal, and normal imaging can be performed at the same time; that is, it is possible to perform the normal imaging while receiving the visible light signal. Furthermore, the use of data across exposure times allows a signal at or above the frequency limit given by the sampling theorem to be recognized, making it possible to receive a high frequency signal, a high-density modulated signal, or the like.
When outputting captured image data, the receiver outputs a data sequence that contains the captured image data as an imaging data body as illustrated in (b) in
This reception program is a program for causing a computer included in a receiver to execute the processing illustrated in
In other words, this reception program is a reception program for receiving information from a light emitter changing in luminance. In detail, this reception program causes a computer to execute Step SA31, Step SA32, and Step SA33. In Step SA31, a first exposure time is set for a plurality of imaging elements which are a part of K imaging elements (where K is an integer of 4 or more) included in an image sensor, and a second exposure time shorter than the first exposure time is set for a plurality of imaging elements which are a remainder of the K imaging elements. In Step SA32, the image sensor captures a subject, i.e., a light emitter changing in luminance, with the set first exposure time and the set second exposure time, to obtain a normal image according to output from the plurality of the imaging elements for which the first exposure time is set, and obtain a bright line image according to output from the plurality of the imaging elements for which the second exposure time is set. The bright line image includes a plurality of bright lines each of which corresponds to a different one of a plurality of exposure lines included in the image sensor. In Step SA33, a pattern of the plurality of the bright lines included in the obtained bright line image is decoded to obtain information.
With this, imaging is performed by the plurality of the imaging elements for which the first exposure time is set and the plurality of the imaging elements for which the second exposure time is set, with the result that a normal image and a bright line image can be obtained in a single imaging operation by the image sensor. That is, it is possible to capture a normal image and obtain information by visible light communication at the same time.
Furthermore, in the exposure time setting step SA31, a first exposure time is set for a plurality of imaging element lines which are a part of L imaging element lines (where L is an integer of 4 or more) included in the image sensor, and the second exposure time is set for a plurality of imaging element lines which are a remainder of the L imaging element lines. Each of the L imaging element lines includes a plurality of imaging elements included in the image sensor and arranged in a line.
With this, it is possible to set an exposure time for each imaging element line, which is a large unit, without individually setting an exposure time for each imaging element, which is a small unit, so that the processing load can be reduced.
For example, each of the L imaging element lines is an exposure line included in the image sensor as illustrated in
It may be that in the exposure time setting step SA31, one of the first exposure time and the second exposure time is set for each of odd-numbered imaging element lines of the L imaging element lines included in the image sensor, to set the same exposure time for each of the odd-numbered imaging element lines, and a remaining one of the first exposure time and the second exposure time is set for each of even-numbered imaging element lines of the L imaging element lines, to set the same exposure time for each of the even-numbered imaging element lines, as illustrated in
With this, at every operation to obtain a normal image, the plurality of the imaging element lines that are to be used in the obtainment can be switched between the odd-numbered imaging element lines and the even-numbered imaging element lines. As a result, each of the sequentially obtained normal images can be displayed in an interlaced format. Furthermore, by interpolating two continuously obtained normal images with each other, it is possible to generate a new normal image that includes an image obtained by the odd-numbered imaging element lines and an image obtained by the even-numbered imaging element lines.
It may be that in the exposure time setting step SA31, a preset mode is switched between a normal imaging priority mode and a visible light imaging priority mode, and when the preset mode is switched to the normal imaging priority mode, the total number of the imaging elements for which the first exposure time is set is greater than the total number of the imaging elements for which the second exposure time is set, and when the preset mode is switched to the visible light imaging priority mode, the total number of the imaging elements for which the first exposure time is set is less than the total number of the imaging elements for which the second exposure time is set, as illustrated in
With this, when the preset mode is switched to the normal imaging priority mode, the quality of the normal image can be improved, and when the preset mode is switched to the visible light imaging priority mode, the reception efficiency for information from the light emitter can be improved.
It may be that in the exposure time setting step SA31, an exposure time is set for each imaging element included in the image sensor, to distribute, in a checkered pattern, the plurality of the imaging elements for which the first exposure time is set and the plurality of the imaging elements for which the second exposure time is set, as illustrated in
This results in uniform distribution of the plurality of the imaging elements for which the first exposure time is set and the plurality of the imaging elements for which the second exposure time is set, so that it is possible to obtain the normal image and the bright line image, the quality of which is not unbalanced between the horizontal direction and the vertical direction.
This reception device A30 is the above-described receiver that performs the processing illustrated in
In detail, this reception device A30 is a reception device that receives information from a light emitter changing in luminance, and includes a plural exposure time setting unit A31, an imaging unit A32, and a decoding unit A33. The plural exposure time setting unit A31 sets a first exposure time for a plurality of imaging elements which are a part of K imaging elements (where K is an integer of 4 or more) included in an image sensor, and sets a second exposure time shorter than the first exposure time for a plurality of imaging elements which are a remainder of the K imaging elements. The imaging unit A32 causes the image sensor to capture a subject, i.e., a light emitter changing in luminance, with the set first exposure time and the set second exposure time, to obtain a normal image according to output from the plurality of the imaging elements for which the first exposure time is set, and obtain a bright line image according to output from the plurality of the imaging elements for which the second exposure time is set. The bright line image includes a plurality of bright lines each of which corresponds to a different one of a plurality of exposure lines included in the image sensor. The decoding unit A33 obtains information by decoding a pattern of the plurality of the bright lines included in the obtained bright line image. This reception device A30 can produce the same advantageous effects as the above-described reception program.
Next, displaying of content related to a received visible light signal is described.
The receiver captures an image of a transmitter 10020d and then displays an image 10020a including the image of the transmitter 10020d as illustrated in (a) in
Upon displaying this obtained data image 10020f, the receiver displays the obtained data image 10020f in a speech balloon extending from the transmitter 10020d as illustrated in (a) in
Alternatively, the receiver may display the obtained data image 10020f in such a way that the obtained data image 10020f can be displayed gradually closer to the transmitter 10020d as illustrated in (b) of
Next, Augmented Reality (AR) is described.
When the image of the transmitter moves on the display, the receiver moves the obtained data image 10020f according to the movement of the image of the transmitter. This allows a user to recognize that the obtained data image 10020f is associated with the transmitter. The receiver may alternatively display the obtained data image 10020f in association with something different from the image of the transmitter. With this, data can be displayed in AR.
Next, storing and discarding the obtained data is described.
For example, when a user swipes the obtained data image 10020f down as illustrated in (a) in
When a user swipes the obtained data image 10020f to the right as illustrated in (b) in
Next, browsing of obtained data is described.
In the receiver, obtained data images of a plurality of pieces of obtained data stored are displayed on top of each other, appearing small, in a bottom area of the display as illustrated in (a) in
When a user taps the obtained data image that is desired to be displayed in a state illustrated in (b) in
Next, turning off of an image stabilization function upon self-position estimation is described.
By disabling (turning off) the image stabilization function or converting a captured image according to an image stabilization direction and an image stabilization amount, the receiver is capable of obtaining an accurate imaging direction and accurately performing self-position estimation. The captured image is an image captured by an imaging unit of the receiver. Self-position estimation means that the receiver estimates its position. Specifically, in the self-position estimation, the receiver identifies a position of a transmitter based on a received visible light signal and identifies a relative positional relationship between the receiver and the transmitter based on the size, position, shape, or the like of the transmitter appearing in a captured image. The receiver then estimates a position of the receiver based on the position of the transmitter and the relative positional relationship between the receiver and the transmitter.
Even a slight shake of the receiver causes the transmitter to move out of the frame at the time of the partial read-out illustrated in, for example,
Next, self-position estimation using an asymmetrically shaped light emitting unit is described.
The above-described transmitter includes a light emitting unit and causes the light emitting unit to change in luminance to transmit a visible light signal. In the above-described self-position estimation, the receiver determines, as a relative positional relationship between the receiver and the transmitter, a relative angle between the receiver and the transmitter based on the shape of the transmitter (specifically, the light emitting unit) in a captured image. Here, in the case where the transmitter includes a light emitting unit 10090a having a rotationally symmetrical shape as illustrated in, for example,
The transmitter may include a light emitting unit 10090b, the shape of which is not perfectly rotationally symmetrical, as illustrated in
The transmitter may include a light emitting unit 10090c illustrated in
The transmitter may include a light emitting unit 10090d illustrated in
The transmitter may include a light emitting unit 10090e and an object 10090f illustrated in
Next, time-series processing of the self-position estimation is described.
Every time the receiver captures an image, the receiver can perform the self-position estimation based on the position and the shape of the transmitter in the captured image. As a result, the receiver can estimate the direction and the distance in which it moved while capturing images. Furthermore, the receiver can perform triangulation using multiple frames or images to perform more accurate self-position estimation. By combining results of estimation from different images, or from different combinations of images, the receiver is capable of performing the self-position estimation with higher accuracy. At this time, combining the results of estimation based on the most recently captured images with a high priority makes it possible to perform the self-position estimation with higher accuracy.
Next, skipping read-out of optical black is described.
The receiver reads out a signal of horizontal optical black as illustrated in (a) in
The horizontal optical black is optical black that extends in the horizontal direction with respect to the exposure line. Vertical optical black is part of the optical black that is other than the horizontal optical black.
The receiver adjusts the black level based on a signal read out from the optical black and therefore, at the start of visible light imaging, can adjust the black level using the optical black as it does at the time of normal imaging. If the vertical optical black is usable, designing the receiver to adjust the black level using only the vertical optical black enables continuous signal reception and black level adjustment at the same time. The receiver may adjust the black level using the horizontal optical black at predetermined time intervals during continuous visible light imaging. In the case of alternately performing the normal imaging and the visible light imaging, the receiver skips reading out a signal of horizontal optical black when continuously performing the visible light imaging, and reads out a signal of horizontal optical black at other times. The receiver then adjusts the black level based on the read-out signals and thus can adjust the black level while continuously receiving visible light signals. The receiver may also adjust the black level assuming that the darkest part of a visible light captured image is black.
Thus, it is possible to continuously receive visible light signals when the optical black from which signals are read out is the vertical optical black only. Furthermore, with a mode for skipping reading out a signal of the horizontal optical black, it is possible to adjust the black level at the time of normal imaging and perform continuous communication according to the need at the time of visible light imaging. Moreover, by skipping reading out a signal of the horizontal optical black, the difference in timing of starting exposure between the exposure lines increases, with the result that a visible light signal can be received even from a transmitter that appears small in the captured image.
Next, an identifier indicating a type of the transmitter is described.
The transmitter may transmit a visible light signal after adding to the visible light signal a transmitter identifier indicating the type of the transmitter. In this case, the receiver is capable of performing a reception operation according to the type of the transmitter at the point in time when the receiver receives the transmitter identifier. For example, when the transmitter identifier indicates a digital signage, the transmitter transmits, as a visible light signal, a content ID indicating which content is currently displayed, in addition to a transmitter ID for individual identification of the transmitter. Based on the transmitter identifier, the receiver can handle these IDs separately to display information associated with the content currently displayed by the transmitter. Furthermore, for example, when the transmitter identifier indicates a digital signage, an emergency light, or the like, the receiver captures an image with increased sensitivity so that reception errors can be reduced.
This embodiment describes each example of application using a receiver such as a smartphone and a transmitter for transmitting information as a blink pattern of an LED or an organic EL device in each of the embodiments described above.
A reception method in which data parts having the same addresses are compared is described below.
The receiver receives a packet (Step S10101) and performs error correction (Step S10102). The receiver then determines whether or not a packet having the same address as the address of the received packet has already been received (Step S10103). When determining that a packet having the same address has been received (Step S10103: Y), the receiver compares data in these packets. The receiver determines whether or not the data parts are identical (Step S10104). When determining that the data parts are not identical (Step S10104: N), the receiver further determines whether or not the number of differences between the data parts is a predetermined number or more, specifically, whether or not the number of different bits or the number of slots indicating different luminance states is a predetermined number or more (Step S10105). When determining that the number of differences is the predetermined number or more (Step S10105: Y), the receiver discards the already received packet (Step S10106). By doing so, when a packet from another transmitter starts being received, interference with the packet received from a previous transmitter can be avoided. In contrast, when determining that the number of differences is not the predetermined number or more (Step S10105: N), the receiver regards, as the data of the address, the data part shared by the largest number of packets having an identical data part (Step S10107). Alternatively, the receiver regards, as the value of each bit of the address, the bit value shared by the largest number of packets. Still alternatively, the receiver demodulates the data of the address by regarding, as the luminance state of each slot of the address, the luminance state shared by the largest number of packets.
Thus, in this embodiment, the receiver first obtains a first packet including the data part and the address part from a pattern of a plurality of bright lines. Next, the receiver determines whether or not at least one packet already obtained before the first packet includes at least one second packet which is a packet including the same address part as the address part of the first packet. Next, when the receiver determines that at least one such second packet is included, the receiver determines whether or not all the data parts in at least one such second packet and the first packet are the same. When the receiver determines that all the data parts are not the same, the receiver determines, for each of at least one such second packet, whether or not the number of parts, among parts included in the data part of the second packet, which are different from parts included in the data part of the first packet, is a predetermined number or more. Here, when at least one such second packet includes the second packet in which the number of different parts is determined as the predetermined number or more, the receiver discards at least one such second packet. When at least one such second packet does not include the second packet in which the number of different parts is determined as the predetermined number or more, the receiver identifies, among the first packet and at least one such second packet, a plurality of packets in which the number of packets having the same data parts is highest. The receiver then obtains at least a part of the visible light identifier (ID) by decoding the data part included in each of the plurality of packets as the data part corresponding to the address part included in the first packet.
With this, even when a plurality of packets having the same address part are received and the data parts in the packets are different, an appropriate data part can be decoded, and thus at least a part of the visible light identifier can be properly obtained. This means that a plurality of packets transmitted from the same transmitter and having the same address part basically have the same data part. However, there are cases where the receiver receives a plurality of packets which have mutually different data parts even with the same address part, when the receiver switches the transmitter serving as a transmission source of packets from one to another. In such a case, in this embodiment, the already received packet (the second packet) is discarded as in step S10106 in
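The comparison and majority vote can be sketched as follows, assuming data parts are equal-length byte strings and using an illustrative differing-bit threshold.

```python
from collections import Counter

def merge_same_address(packets, max_diff=4):
    """Sketch of Steps S10103-S10107 for packets sharing one address.
    Older packets differing from the newest by max_diff bits or more are
    assumed to come from another transmitter and are discarded; the most
    frequent identical data part among the rest is adopted."""
    def bit_diff(a, b):
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

    newest = packets[-1]
    kept = [p for p in packets[:-1] if bit_diff(p, newest) < max_diff]
    kept.append(newest)                        # Step S10106 done above
    return Counter(kept).most_common(1)[0][0]  # Step S10107

print(merge_same_address([b"\x0f", b"\x0f", b"\x0e", b"\x0f"]))
```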
A reception method of demodulating data of the data part based on a plurality of packets is described.
First, the receiver receives a packet (Step S10111) and performs error correction on the address part (Step S10112). Here, the receiver does not demodulate the data part and retains pixel values in the captured image as they are. The receiver then determines whether or not no less than a predetermined number of packets out of the already received packets have the same address (Step S10113). When determining that no less than the predetermined number of packets have the same address (Step S10113: Y), the receiver performs a demodulation process on a combination of pixel values corresponding to the data parts in the packets having the same address (Step S10114).
Thus, in the reception method in this embodiment, a first packet including the data part and the address part is obtained from a pattern of a plurality of bright lines. It is then determined whether or not at least one packet already obtained before the first packet includes no less than a predetermined number of second packets which are each a packet including the same address part as the address part of the first packet. When it is determined that no less than the predetermined number of second packets is included, pixel values of a partial region of a bright line image corresponding to the data parts in no less than the predetermined number of second packets and pixel values of a partial region of a bright line image corresponding to the data part of the first packet are combined. That is, the pixel values are added. A combined pixel value is calculated through this addition, and at least a part of a visible light identifier (ID) is obtained by decoding the data part including the combined pixel value.
Since the packets have been received at different points in time, each of the pixel values for the data parts reflects luminance of the transmitter that is at a slightly different point in time. Therefore, the part subject to the above-described demodulation process will contain a larger amount of data (a larger number of samples) than the data part of a single packet. This makes it possible to demodulate the data part with higher accuracy. Furthermore, the increase in the number of samples makes it possible to demodulate a signal modulated with a higher modulation frequency.
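Assuming the retained pixel values of each data part are available as 1-D numpy arrays of equal length, the combining step might be sketched as below; the threshold demodulation at the end is a placeholder, not the embodiment's actual demodulation scheme.

```python
import numpy as np

def combine_and_demodulate(data_part_samples):
    """Sketch of Steps S10113-S10114: add the retained pixel values of
    the data parts of packets sharing an address, then demodulate once
    on the combined (and therefore better-sampled) values."""
    combined = np.sum(np.stack(data_part_samples), axis=0)
    # Placeholder demodulation: threshold at the mean combined value.
    return (combined > combined.mean()).astype(int)
```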
The data part and the error correction code part for the data part are modulated with a higher frequency than the header unit, the address part, and the error correction code part for the address part. In the above-described demodulation method, data following the data part can be demodulated even when the data has been modulated with a high modulation frequency. With this configuration, it is possible to shorten the time for the whole packet to be transmitted, and it is possible to receive a visible light signal with higher speed from far away and from a smaller light source.
Next, a reception method of receiving data of a variable length address is described.
The receiver receives packets (Step S10121), and determines whether or not a packet including the data part in which all the bits are zero (hereinafter referred to as a 0-end packet) has been received (Step S10122). When determining that the packet has been received, that is, when determining that a 0-end packet is present (Step S10122: Y), the receiver determines whether or not all the packets having addresses following the address of the 0-end packet are present, that is, have been received (Step S10123). Note that the address of a packet to be transmitted later among packets generated by dividing data to be transmitted is assigned a larger value. When determining that all the packets have been received (Step S10123: Y), the receiver determines that the address of the 0-end packet is the last address of the packets to be transmitted from the transmitter. The receiver then reconstructs data by combining data of all the packets having the addresses up to the 0-end packet (Step S10124). In addition, the receiver checks the reconstructed data for an error (Step S10125). By doing so, even when it is not known how many parts the data to be transmitted has been divided into, that is, when the address has a variable length rather than a fixed length, data having a variable-length address can be transmitted and received, meaning that it is possible to efficiently transmit and receive more IDs than with data having a fixed-length address.
Thus, in this embodiment, the receiver obtains a plurality of packets each including the data part and the address part from a pattern of a plurality of bright lines. The receiver then determines whether or not the obtained packets include a 0-end packet which is a packet including the data part in which all the bits are 0. When determining that the 0-end packet is included, the receiver determines whether or not the packets include all N associated packets (where N is an integer of 1 or more) which are each a packet including the address part associated with the address part of the 0-end packet. Next, when determining that all the N associated packets are included, the receiver obtains a visible light identifier (ID) by arranging and decoding the data parts in the N associated packets. Here, the address part associated with the address part of the 0-end packet is an address part representing an address greater than or equal to 0 and smaller than the address represented by the address part of the 0-end packet.
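This reassembly can be sketched as follows, assuming received packets are held as an address-to-data mapping; whether the all-zero terminator's own data part belongs to the payload is treated here as an assumption and excluded.

```python
def reassemble_variable_length(packets):
    """Sketch of Steps S10121-S10125. packets: dict address -> data part
    (bytes). A data part whose bits are all zero marks the last address.
    Returns the reconstructed payload, or None while packets are missing."""
    zero_ends = [a for a, d in packets.items() if all(b == 0 for b in d)]
    if not zero_ends:                  # Step S10122: no 0-end packet yet
        return None
    last = min(zero_ends)
    if any(a not in packets for a in range(last)):  # Step S10123
        return None                    # an earlier address is still missing
    # Step S10124: combine the data parts of addresses 0 .. last-1
    # (the terminator's all-zero data part is excluded by assumption).
    return b"".join(packets[a] for a in range(last))

print(reassemble_variable_length({0: b"ab", 1: b"cd", 2: b"\x00"}))
```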
Next, a reception method using an exposure time longer than a period of a modulation frequency is described.
For example, as illustrated in (a) in
In contrast, the visible light signal can be properly received when the exposure time is set to a time longer than the modulation period as illustrated in (b) in
However, when the exposure time is too long, the visible light signal cannot be properly received.
For example, as illustrated in (a) in
Next, the number of packets after division is described.
When the transmitter transmits data by changing in luminance, the data size of one packet will be large if all pieces of data to be transmitted (transmission data) are included in the packet. However, when the transmission data is divided into data parts and each of these data parts is included in a packet, the data size of the packet is small. The receiver receives this packet by imaging. As the data size of the packet increases, the receiver has more difficulty in receiving the packet in a single imaging operation, and needs to repeat the imaging operation.
Therefore, it is desirable that as the data size of the transmission data increases, the transmitter increase the number of divisions in the transmission data as illustrated in (a) and (b) in
Therefore, as illustrated in (a) in
The transmitter sequentially changes in luminance based on packets containing respective ones of the data parts. For example, according to the sequence of the addresses of packets, the transmitter changes in luminance based on the packets. Furthermore, the transmitter may change in luminance again based on data parts of the packets according to a sequence different from the sequence of the addresses. This allows the receiver to reliably receive each of the data parts.
Next, a method of setting a notification operation by the receiver is described.
First, the receiver obtains, from a server near the receiver, a notification operation identifier for identifying a notification operation and a priority of the notification operation identifier (specifically, an identifier indicating the priority) (Step S10131). The notification operation is an operation of the receiver to notify a user using the receiver that packets containing data parts have been received, when the packets have been transmitted by way of luminance change and then received by the receiver. For example, this operation is making sound, vibration, indication on a display, or the like.
Next, the receiver receives packetized visible light signals, that is, packets containing respective data parts (Step S10132). The receiver obtains a notification operation identifier and a priority of the notification operation identifier (specifically, an identifier indicating the priority) which are included in the visible light signals (Step S10133).
Furthermore, the receiver reads out setting details of a current notification operation of the receiver, that is, a notification operation identifier preset in the receiver and a priority of the notification operation identifier (specifically, an identifier indicating the priority) (Step S10134). Note that the notification operation identifier preset in the receiver is one set by an operation by a user, for example.
The receiver then selects an identifier having the highest priority from among the preset notification operation identifier and the notification operation identifiers respectively obtained in Step S10131 and Step S10133 (Step S10135). Next, the receiver sets the selected notification operation identifier in the receiver itself to operate as indicated by the selected notification operation identifier, notifying a user of the reception of the visible light signals (Step S10136).
Note that the receiver may skip one of Step S10131 and Step S10133 and select a notification operation identifier with a higher priority from among two notification operation identifiers.
Note that a high priority may be assigned to a notification operation identifier transmitted from a server installed in a theater, a museum, or the like, or a notification operation identifier included in the visible light signal transmitted inside these facilities. With this, it can be made possible that sound for receipt notification is not played inside the facilities regardless of settings set by a user. In other facilities, a low priority is assigned to the notification operation identifier so that the receiver can operate according to settings set by a user to notify a user of signal reception.
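The selection in Steps S10131 to S10136 reduces to taking the maximum over (priority, identifier) pairs, as in the sketch below with illustrative values.

```python
def select_notification(candidates):
    # candidates: (priority, operation identifier) pairs from the preset
    # value, a nearby server, and the visible light signal; higher wins.
    priority, operation_id = max(candidates, key=lambda c: c[0])
    return operation_id

# A museum server forbids sound with a high priority; the user preset loses.
print(select_notification([(1, "sound"), (9, "vibrate"), (3, "display")]))
```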
First, the receiver obtains, from a server near the receiver, a notification operation identifier for identifying a notification operation and a priority of the notification operation identifier (specifically, an identifier indicating the priority) (Step S10141). Next, the receiver receives packetized visible light signals, that is, packets containing respective data parts (Step S10142). The receiver obtains a notification operation identifier and a priority of the notification operation identifier (specifically, an identifier indicating the priority) which are included in the visible light signals (Step S10143).
Furthermore, the receiver reads out setting details of a current notification operation of the receiver, that is, a notification operation identifier preset in the receiver and a priority of the notification operation identifier (specifically, an identifier indicating the priority) (Step S10144).
The receiver then determines whether or not a notification operation identifier indicating an operation that prohibits notification sound reproduction is included among the preset notification operation identifier and the notification operation identifiers respectively obtained in Step S10141 and Step S10143 (Step S10145). When determining that no such identifier is included (Step S10145: N), the receiver outputs a notification sound for notifying a user of completion of the reception (Step S10146). In contrast, when determining that such an identifier is included (Step S10145: Y), the receiver notifies a user of completion of the reception by vibration, for example (Step S10147).
Note that the receiver may skip one of Step S10141 and Step S10143 and determine whether or not a notification operation identifier indicating an operation that prohibits notification sound reproduction is included in the two notification operation identifiers.
Furthermore, the receiver may perform self-position estimation based on a captured image and notify a user of the reception by an operation associated with the estimated position or facilities located at the estimated position.
This information processing program is a program for causing the light emitter of the above-described transmitter to change in luminance according to the number of divisions illustrated in
In other words, this information processing program is an information processing program that causes a computer to process information to be transmitted, in order for the information to be transmitted by way of luminance change. In detail, this information processing program causes a computer to execute: an encoding step SA41 of encoding the information to generate an encoded signal; a dividing step SA42 of dividing the encoded signal into four signal parts when a total number of bits in the encoded signal is in a range of 24 bits to 64 bits; and an output step SA43 of sequentially outputting the four signal parts. Note that each of these signal parts is output in the form of a packet. Furthermore, this information processing program may cause a computer to identify the number of bits in the encoded signal and determine the number of signal parts based on the identified number of bits. In this case, the information processing program causes the computer to divide the encoded signal into the determined number of signal parts.
Thus, when the number of bits in the encoded signal is in the range of 24 bits to 64 bits, the encoded signal is divided into four signal parts, and the four signal parts are output. As a result, the light emitter changes in luminance according to the output four signal parts, and these four signal parts are transmitted in the form of visible light signals and received by the receiver. As the number of bits in an output signal increases, the level of difficulty for the receiver to properly receive the signal by imaging increases, meaning that the reception efficiency is reduced. Therefore, it is desirable that the signal have a small number of bits, that is, a signal be divided into small signals. However, when a signal is too finely divided into many small signals, the receiver cannot receive the original signal unless it receives all the small signals individually, meaning that the reception efficiency is reduced. Therefore, when the number of bits in the encoded signal is in the range of 24 bits to 64 bits, the encoded signal is divided into four signal parts and the four signal parts are sequentially output as described above. By doing so, the encoded signal representing the information to be transmitted can be transmitted in the form of a visible light signal with the best reception efficiency. As a result, it is possible to enable communication between various devices.
In the output step SA43, it may be that the four signal parts are output in a first sequence and then, the four signal parts are output in a second sequence different from the first sequence.
By doing so, since these four signal parts are repeatedly output in different sequences, they can be received with still higher efficiency when each of the output signals is transmitted to the receiver in the form of a visible light signal. In other words, if the four signal parts are repeatedly output in the same sequence, there are cases where the receiver fails to receive the same signal part each time, but changing the output sequence reduces such cases.
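A short sketch may make Steps SA41 to SA43 concrete. Here the encoding step SA41 is assumed to have already produced a bit string; the packet layout (an address followed by a data part) and the use of reversal as the second output sequence are assumptions for illustration, since the disclosure only requires a second sequence different from the first.

```python
# A minimal sketch of the dividing step SA42 and the output step SA43.
def divide(encoded: str, parts: int = 4) -> list[tuple[int, str]]:
    """Divide a bit string into `parts` nearly equal data parts,
    each tagged with its address (step SA42)."""
    size = -(-len(encoded) // parts)  # ceiling division
    return [(addr, encoded[i:i + size])
            for addr, i in enumerate(range(0, len(encoded), size))]

def emit(addr: int, data: str):
    # Stands in for the luminance change that transmits one packet.
    print(f"packet addr={addr} data={data}")

def output_sequences(encoded: str):
    assert 24 <= len(encoded) <= 64, "four-part division applies to 24-64 bits"
    packets = divide(encoded)
    for addr, data in packets:            # first sequence: address order
        emit(addr, data)
    for addr, data in reversed(packets):  # second, different sequence
        emit(addr, data)

output_sequences("110100101011010010110101")  # a 24-bit encoded signal
```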
Furthermore, the four signal parts may be each assigned with a notification operation identifier and output in the output step SA43 as indicated in
With this, in the case where the notification operation identifier is transmitted in the form of a visible light signal and received by the receiver, the receiver can notify a user of the reception of the four signal parts according to an operation identified by the notification operation identifier. This means that a transmitter that transmits information to be transmitted can set a notification operation to be performed by a receiver.
Furthermore, the four signal parts may be each assigned with a priority identifier for identifying a priority of the notification operation identifier and output in the output step SA43 as indicated in
With this, in the case where the priority identifier and the notification operation identifier are transmitted in the form of visible light signals and received by the receiver, the receiver can handle the notification operation identifier according to the priority identified by the priority identifier. This means that when the receiver has obtained another notification operation identifier, the receiver can select, based on the priorities, either the notification operation identified by the notification operation identifier transmitted in the form of the visible light signal or the notification operation identified by the other notification operation identifier.
An information processing program according to an aspect of the present disclosure is an information processing program that causes a computer to process information to be transmitted, in order for the information to be transmitted by way of luminance change, and causes the computer to execute: an encoding step of encoding the information to generate an encoded signal; a dividing step of dividing the encoded signal into four signal parts when a total number of bits in the encoded signal is in a range of 24 bits to 64 bits; and an output step of sequentially outputting the four signal parts.
Thus, as illustrated in
Furthermore, in the output step, the four signal parts may be output in a first sequence and then, the four signal parts may be output in a second sequence different from the first sequence.
By doing so, since these four signal parts are repeatedly output in different sequences, they can be received with still higher efficiency when each of the output signals is transmitted to the receiver in the form of a visible light signal. In other words, if the four signal parts are repeatedly output in the same sequence, there are cases where the receiver fails to receive the same signal part each time, but changing the output sequence reduces such cases.
Furthermore, in the output step, the four signal parts may further be each assigned with a notification operation identifier and output, and the notification operation identifier may be an identifier for identifying an operation of the receiver by which a user using the receiver is notified that the four signal parts have been received when the four signal parts have been transmitted by way of luminance change and received by the receiver.
With this, in the case where the notification operation identifier is transmitted in the form of a visible light signal and received by the receiver, the receiver can notify a user of the reception of the four signal parts according to an operation identified by the notification operation identifier. This means that a transmitter that transmits information to be transmitted can set a notification operation to be performed by a receiver.
Furthermore, in the output step, the four signal parts may further be each assigned with a priority identifier for identifying a priority of the notification operation identifier and output.
With this, in the case where the priority identifier and the notification operation identifier are transmitted in the form of visible light signals and received by the receiver, the receiver can handle the notification operation identifier according to the priority identified by the priority identifier. This means that when the receiver has obtained another notification operation identifier, the receiver can select, based on the priorities, either the notification operation identified by the notification operation identifier transmitted in the form of the visible light signal or the notification operation identified by the other notification operation identifier.
Next, registration of a network connection of an electronic device is described.
This transmission and reception system includes: a transmitter 10131b configured as an electronic device such as a washing machine, for example; a receiver 10131a configured as a smartphone, for example; and a communication device 10131c configured as an access point or a router.
When a start button is pressed (Step S10165), the transmitter 10131b transmits, via Wi-Fi, Bluetooth®, Ethernet®, or the like, information for connecting to the transmitter itself, such as SSID, password, IP address, MAC address, or decryption key (Step S10166), and then waits for connection. The transmitter 10131b may directly transmit these pieces of information, or may indirectly transmit these pieces of information. In the case of indirectly transmitting these pieces of information, the transmitter 10131b transmits an ID associated with these pieces of information. When the receiver 10131a receives the ID, the receiver 10131a then downloads, from a server or the like, the information associated with the ID, for example.
The receiver 10131a receives the information (Step S10151), connects to the transmitter 10131b, and transmits to the transmitter 10131b information for connecting to the communication device 10131c configured as an access point or a router (such as SSID, password, IP address, MAC address, or decryption key) (Step S10152). The receiver 10131a registers, with the communication device 10131c, information for the transmitter 10131b to connect to the communication device 10131c (such as MAC address, IP address, or decryption key), to have the communication device 10131c wait for connection. Furthermore, the receiver 10131a notifies the transmitter 10131b that preparation for connection from the transmitter 10131b to the communication device 10131c has been completed (Step S10153).
The transmitter 10131b disconnects from the receiver 10131a (Step S10168) and connects to the communication device 10131c (Step S10169). When the connection is successful (Step S10170: Y), the transmitter 10131b notifies the receiver 10131a that the connection is successful, via the communication device 10131c, and notifies a user that the connection is successful, by an indication on the display, an LED state, sound, or the like (Step S10171). When the connection fails (Step S10170: N), the transmitter 10131b notifies the receiver 10131a that the connection fails, via the visible light communication, and notifies a user that the connection fails, using the same means as in the case where the connection is successful (Step S10172). Note that the visible light communication may be used to notify that the connection is successful.
The receiver 10131a connects to the communication device 10131c (Step S10154), and when neither a notification that the connection is successful nor a notification that the connection has failed has arrived (Step S10155: N and Step S10156: N), the receiver 10131a checks whether or not the transmitter 10131b is accessible via the communication device 10131c (Step S10157). When the transmitter 10131b is not accessible (Step S10157: N), the receiver 10131a determines whether or not at least a predetermined number of attempts to connect to the transmitter 10131b using the information received from the transmitter 10131b have been made (Step S10158). When determining that the number of attempts is less than the predetermined number (Step S10158: N), the receiver 10131a repeats the processes following Step S10152. In contrast, when the number of attempts is no less than the predetermined number (Step S10158: Y), the receiver 10131a notifies a user that the processing has failed (Step S10159). When determining in Step S10156 that the notification to the effect that the connection is successful is present (Step S10156: Y), the receiver 10131a notifies a user that the processing is successful (Step S10160). Specifically, using an indication on the display, sound, or the like, the receiver 10131a notifies a user whether or not the connection from the transmitter 10131b to the communication device 10131c has been successful. By doing so, it is possible to connect the transmitter 10131b to the communication device 10131c without requiring cumbersome input from a user.
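The receiver-side flow above amounts to a bounded retry loop. The sketch below models it in Python with all network operations stubbed out; the function names, the stub behaviors, and the retry limit are assumptions for illustration.

```python
# A minimal sketch of the receiver-side retry logic in Steps S10152-S10160.
MAX_ATTEMPTS = 3  # the "predetermined number" of connection attempts (assumed)

def send_ap_credentials(info):            # Step S10152 (stub)
    print("sending access-point credentials to transmitter:", info)

def notify_preparation_complete(info):    # Step S10153 (stub)
    print("notifying transmitter that the access point is ready")

def wait_for_connection_report():         # Steps S10155/S10156 (stub)
    return None  # no explicit success/failure notification arrived

def transmitter_reachable_via_ap(info):   # Step S10157 (stub)
    return True

def notify_user(message):                 # Steps S10159/S10160
    print("user notification:", message)

def register_transmitter(info) -> bool:
    for _ in range(MAX_ATTEMPTS):         # Step S10158 bounds the retries
        send_ap_credentials(info)
        notify_preparation_complete(info)
        report = wait_for_connection_report()
        if report == "success" or (report is None and
                                   transmitter_reachable_via_ap(info)):
            notify_user("connection successful")
            return True
        # failure or unreachable: retry from Step S10152
    notify_user("connection failed")
    return False

register_transmitter({"ssid": "AP-SSID", "password": "secret"})
```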
Next, registration of a network connection of an electronic device (in the case of connection via another electronic device) is described.
This transmission and reception system includes: an air conditioner 10133b; a transmitter 10133c configured as an electronic device such as a wireless adaptor or the like connected to the air conditioner 10133b; a receiver 10133a configured as a smartphone, for example; a communication device 10133d configured as an access point or a router; and another electronic device 10133e configured as a wireless adaptor, a wireless access point, a router, or the like, for example.
First, when a start button is pressed (Step S10188), the electronic device A transmits information for connecting to the electronic device A itself (such as individual ID, password, IP address, MAC address, or decryption key) (Step S10189), and then waits for connection (Step S10190). The electronic device A may directly transmit these pieces of information, or may indirectly transmit these pieces of information, in the same manner as described above.
The receiver 10133a receives the information from the electronic device A (Step S10181) and transmits the information to the electronic device B (Step S10182). When the electronic device B receives the information (Step S10196), the electronic device B connects to the electronic device A according to the received information (Step S10197). The electronic device B determines whether or not the connection to the electronic device A has been established (Step S10198), and notifies the receiver 10133a of the result (Step S10199 or Step S10200).
When the connection to the electronic device B is established within a predetermined time (Step S10191: Y), the electronic device A notifies the receiver 10133a that the connection is successful, via the electronic device B (Step S10192), and when the connection fails (Step S10191: N), the electronic device A notifies the receiver 10133a that the connection has failed, via the visible light communication (Step S10193). Furthermore, using an indication on the display, a light emitting state, sound, or the like, the electronic device A notifies a user whether or not the connection is successful. By doing so, it is possible to connect the electronic device A (the transmitter 10133c) to the electronic device B (the electronic device 10133e) without requiring cumbersome input from a user. Note that the air conditioner 10133b and the transmitter 10133c illustrated in
Next, transmission of proper imaging information is described.
This transmission and reception system includes: a receiver 10135a configured as a digital still camera or a digital video camera, for example; and a transmitter 10135b configured as a lighting, for example.
First, the receiver 10135a transmits an imaging information transmission instruction to the transmitter 10135b (Step S10211). Next, when the transmitter 10135b receives the imaging information transmission instruction, when an imaging information transmission button is pressed, when an imaging information transmission switch is ON, or when a power source is turned ON (Step S10221: Y), the transmitter 10135b transmits imaging information (Step S10222). The imaging information transmission instruction is an instruction to transmit imaging information. The imaging information indicates a color temperature, a spectrum distribution, illuminance, or luminous intensity distribution of a lighting, for example. The transmitter 10135b may directly transmit the imaging information, or may indirectly transmit the imaging information. In the case of indirectly transmitting the imaging information, the transmitter 10135b transmits ID associated with the imaging information. When the receiver 10135a receives the ID, the receiver 10135a then downloads, from a server or the like, the imaging information associated with the ID, for example. At this time, the transmitter 10135b may transmit a method for transmitting a transmission stop instruction to the transmitter 10135b itself (e.g. a frequency of radio waves, infrared rays, or sound waves for transmitting a transmission stop instruction, or SSID, password, or IP address for connecting to the transmitter 10135b itself).
When the receiver 10135a receives the imaging information (Step S10212), the receiver 10135a transmits the transmission stop instruction to the transmitter 10135b (Step S10213). When the transmitter 10135b receives the transmission stop instruction from the receiver 10135a (Step S10223), the transmitter 10135b stops transmitting the imaging information and uniformly emits light (Step S10224).
Furthermore, the receiver 10135a sets an imaging parameter according to the imaging information received in Step S10212 (Step S10214) or notifies a user of the imaging information. The imaging parameter is, for example, white balance, an exposure time, a focal length, sensitivity, or a scene mode. With this, it is possible to capture an image with optimum settings according to a lighting. Next, after the transmitter 10135b stops transmitting the imaging information (Step S10215: Y), the receiver 10135a captures an image (Step S10216). Thus, it is possible to capture an image while a subject does not change in brightness for signal transmission. Note that after Step S10216, the receiver 10135a may transmit to the transmitter 10135b a transmission start instruction to request to start transmission of the imaging information (Step S10217).
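As a concrete illustration of Step S10214, the sketch below maps received imaging information onto imaging parameters. The mapping rules (color temperature to white balance, illuminance to exposure time) are simple assumed placeholders; the disclosure only requires that the parameters be set according to the imaging information.

```python
# A minimal sketch of setting imaging parameters (Step S10214) from the
# imaging information received in Step S10212.
def set_imaging_parameters(imaging_info: dict) -> dict:
    params = {}
    if "color_temperature_K" in imaging_info:
        # Assumed rule: match white balance to the lighting's color temperature.
        params["white_balance_K"] = imaging_info["color_temperature_K"]
    if "illuminance_lx" in imaging_info:
        # Assumed simple rule: brighter scene -> shorter exposure.
        params["exposure_time_s"] = min(1 / 30, 100 / imaging_info["illuminance_lx"])
    return params

imaging_info = {"color_temperature_K": 5000, "illuminance_lx": 800}  # Step S10212
print(set_imaging_parameters(imaging_info))                          # Step S10214
# After the transmitter stops modulating its light (Step S10215: Y),
# the receiver captures the image (Step S10216).
```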
Next, an indication of a state of charge is described.
For example, a transmitter 10137b configured as a charger includes a light emitting unit, and transmits from the light emitting unit a visible light signal indicating a state of charge of a battery. With this, a costly display device is not needed to notify a user of the state of charge of the battery. When a small LED is used as the light emitting unit, the visible light signal cannot be received unless an image of the LED is captured from a nearby position. In the case of a transmitter 10137c which has a protrusion near the LED, the protrusion becomes an obstacle to capturing a close-up image of the LED. Therefore, it is easier to receive a visible light signal from the transmitter 10137b, which has no protrusion near the LED, than from the transmitter 10137c.
This embodiment describes each example of application using a receiver such as a smartphone and a transmitter for transmitting information as a blink pattern of an LED or an organic EL in each of the embodiments described above.
First, transmission in a demo mode and upon malfunction is described.
When an error occurs, the transmitter transmits a signal indicating that an error has occurred or a signal corresponding to an error code so that the receiver can be notified that an error has occurred or of details of an error. The receiver takes an appropriate measure according to details of an error so that the error can be corrected or the details of the error can be properly reported to a service center.
In the demo mode, the transmitter transmits a demo code. With this, during a demonstration of a transmitter as a product in a store, for example, a customer can receive a demo code and obtain a product description associated with the demo code. Whether or not the transmitter is in the demo mode can be determined based on the fact that the transmitter is set to the demo mode, that a CAS card for the store is inserted, that no CAS card is inserted, or that no recording medium is inserted.
Next, signal transmission from a remote controller is described.
For example, when a transmitter configured as a remote controller of an air conditioner receives main-unit information, the transmitter transmits the main-unit information so that the receiver can receive from the nearby transmitter the information on the distant main unit. The receiver can receive information from a main unit located at a site where the visible light communication is unavailable, for example, across a network.
Next, a process of transmitting information only when the transmitter is in a bright place is described.
The transmitter transmits information when the brightness in its surrounding area is no less than a predetermined level, and stops transmitting information when the brightness falls below the predetermined level. By doing so, for example, a transmitter configured as an advertisement on a train can automatically stop its operation when the train car enters a depot. Thus, it is possible to reduce battery power consumption.
Next, content distribution according to an indication on the transmitter (changes in association and scheduling) is described.
The transmitter associates, with a transmission ID, content to be obtained by the receiver in line with the timing at which the content is displayed. Every time the content to be displayed is changed, a change in the association is registered with the server.
When the schedule on which the displayed content changes is known, the transmitter sets the server so that other content is transmitted to the receiver in accordance with the timing of each change in the displayed content. When the server receives from the receiver a request for content associated with the transmission ID, the server transmits the corresponding content to the receiver according to the set schedule.
By doing so, for example, when content displayed by a transmitter configured as a digital signage changes one after another, the receiver can obtain content that corresponds to the content displayed by the transmitter.
Next, content distribution corresponding to what is displayed by the transmitter (synchronization using a time point) is described.
The server holds previously registered settings to transfer different content at each time point in response to a request for content associated with a predetermined ID.
The transmitter synchronizes the server with a time point, and adjusts timing to display content so that a predetermined part is displayed at a predetermined time point.
By doing so, for example, when content displayed by a transmitter configured as a digital signage changes one after another, the receiver can obtain content that corresponds to the content displayed by the transmitter.
Next, content distribution corresponding to what is displayed by the transmitter (transmission of a display time point) is described.
The transmitter transmits, in addition to the ID of the transmitter, a display time point of content being displayed. The display time point of content is information with which the content currently being displayed can be identified, and can be represented by an elapsed time from a start time point of the content, for example.
The receiver obtains from the server content associated with the received ID and displays the content according to the received display time point. By doing so, for example, when content displayed by a transmitter configured as a digital signage changes one after another, the receiver can obtain content that corresponds to the content displayed by the transmitter.
Furthermore, the receiver displays the content while changing it with time. By doing so, even when the content being displayed by the transmitter changes, there is no need to receive the signal again in order to display content corresponding to the displayed content.
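The elapsed-time representation makes the synchronization easy to sketch: once the display time point is received, the receiver advances the playback position with its local clock. The function and parameter names below, and the 30 fps update rate, are assumptions for illustration.

```python
# A minimal sketch of playing content in sync with a received display time
# point, represented as an elapsed time from the start of the content.
import time

def render_frame_at(position_s: float):   # stub for the receiver's display
    print(f"displaying content at {position_s:.2f} s")

def play_in_sync(content_duration_s: float, received_elapsed_s: float,
                 frames: int = 5):
    """Start playback at the reported position, then advance with local time,
    so the displayed content stays matched without renewed signal reception."""
    t0 = time.monotonic()
    for _ in range(frames):
        position = (received_elapsed_s + time.monotonic() - t0) % content_duration_s
        render_frame_at(position)
        time.sleep(1 / 30)  # assumed 30 fps update

play_in_sync(content_duration_s=120.0, received_elapsed_s=42.5)
```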
Next, data upload according to a grant status of a user is described.
In the case where a user has a registered account, the receiver transmits to the server the received ID and information to which the user granted access upon registering the account or other occasions (such as position, telephone number, ID, installed applications, etc. of the receiver, or age, sex, occupation, preferences, etc. of the user).
In the case where a user has no registered account, the above information is likewise transmitted to the server when the user has granted uploading of the information. When the user has not granted uploading, only the received ID is transmitted to the server.
With this, a user can receive content suited to the reception situation or the user's own personality, and as a result of obtaining information on the user, the server can make use of the information in data analysis.
Next, running of an application for reproducing content is described.
The receiver obtains from the server content associated with the received ID. When an application currently running supports the obtained content (that is, the application can display or reproduce the obtained content), the obtained content is displayed or reproduced using the application currently running. When the application does not support the obtained content, whether or not any of the applications installed on the receiver supports the obtained content is checked, and when an application supporting the obtained content has been installed, the application is started to display and reproduce the obtained content. When none of the installed applications supports the obtained content, an application supporting the obtained content is automatically installed, or an indication or a download page is displayed to prompt a user to install an application supporting the obtained content, for example, and after the application is installed, the obtained content is displayed and reproduced.
By doing so, the obtained content can be appropriately supported (displayed, reproduced, etc.).
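The application-selection logic above can be sketched as a simple fallback chain. The content-type registry below is an assumed stand-in for the receiver's actual application management.

```python
# A minimal sketch of selecting an application to reproduce obtained content:
# running app first, then any installed app, then an install prompt.
def reproduce(content_type: str, running_app: str,
              installed: dict[str, set[str]]):
    """`installed` maps application name -> set of content types it supports."""
    if content_type in installed.get(running_app, set()):
        print(f"{running_app} reproduces the content")         # running app
        return
    for app, types in installed.items():
        if content_type in types:
            print(f"starting {app} to reproduce the content")  # installed app
            return
    # No installed application supports the content: install (or prompt the
    # user to install) a supporting application, then reproduce.
    print("prompting user to install a supporting application")

installed = {"browser": {"text/html"}, "player": {"video/mp4"}}
reproduce("video/mp4", running_app="browser", installed=installed)
```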
Next, running of a designated application is described.
The receiver obtains, from the server, content associated with the received ID and information designating an application to be started (an application ID). When the application currently running is the designated application, the obtained content is displayed and reproduced. When the designated application has been installed on the receiver but is not running, the designated application is started to display and reproduce the obtained content. When the designated application has not been installed, the designated application is automatically installed, or an indication or a download page is displayed to prompt a user to install the designated application, for example, and after the designated application is installed, the obtained content is displayed and reproduced.
The receiver may be designed to obtain only the application ID from the server and start the designated application.
The receiver may be configured with designated settings. The receiver may be designed to start the designated application when a designated parameter is set.
Next, selecting between streaming reception and normal reception is described.
When a predetermined address of the received data has a predetermined value or when the received data contains a predetermined identifier, the receiver determines that signal transmission is streaming distribution, and receives signals by a streaming data reception method. Otherwise, a normal reception method is used to receive the signals.
By doing so, signals can be received regardless of which method, streaming distribution or normal distribution, is used to transmit the signals.
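A minimal sketch of this selection follows; the address checked and the marker value that signals streaming distribution are assumed values for illustration.

```python
# A minimal sketch of choosing between streaming and normal reception based
# on a predetermined address of the received data.
STREAMING_MARKER = 0xA5  # assumed identifier value at an assumed address

def choose_reception_method(data: bytes) -> str:
    if len(data) > 0 and data[0] == STREAMING_MARKER:  # "predetermined address"
        return "streaming reception"
    return "normal reception"

print(choose_reception_method(bytes([0xA5, 0x01, 0x02])))  # streaming reception
print(choose_reception_method(bytes([0x10, 0x01, 0x02])))  # normal reception
```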
Next, private data is described.
When the value of the received ID is within a predetermined range or when the received ID contains a predetermined identifier, the receiver refers to a table in an application, and when the table contains the received ID, the receiver obtains the content indicated in the table. Otherwise, the receiver obtains the content identified by the received ID from the server.
By doing so, it is possible to receive content without registration with the server. Furthermore, response can be quick because no communication is performed with the server.
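The lookup order above can be sketched as follows; the reserved ID range and the bundled table contents are assumed values for illustration.

```python
# A minimal sketch of the private-data lookup: in-application table first for
# IDs in a reserved range, server otherwise.
PRIVATE_RANGE = range(0xF000, 0xFFFF)            # assumed reserved ID range
LOCAL_TABLE = {0xF123: "coupon screen"}          # table bundled with the app

def fetch_from_server(received_id: int) -> str:  # stub for the server query
    return f"server content for ID {received_id:#x}"

def resolve(received_id: int) -> str:
    if received_id in PRIVATE_RANGE and received_id in LOCAL_TABLE:
        return LOCAL_TABLE[received_id]          # no server round trip: fast
    return fetch_from_server(received_id)        # normal path

print(resolve(0xF123))  # content from the in-application table
print(resolve(0x1234))  # content obtained from the server
```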
Next, setting of an exposure time according to a frequency is described.
The receiver detects a signal and recognizes a modulation frequency of the signal. The receiver sets an exposure time according to the period of the modulation frequency (the modulation period). For example, the exposure time is set to a value substantially equal to the modulation period so that signals can be more easily received. When the exposure time is set to an integer multiple of the modulation period or an approximate value thereof (within roughly plus/minus 30%), for example, convolutional decoding can facilitate reception of signals.
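As a numeric illustration, the sketch below derives an exposure time from a detected modulation frequency; the example frequency and the chosen integer multiple are assumptions.

```python
# A minimal sketch of setting the exposure time to an integer multiple of the
# modulation period, per the description above.
def exposure_time_for(modulation_hz: float, multiple: int = 1) -> float:
    period = 1.0 / modulation_hz
    return multiple * period  # any value within ~30% of this also works

f = 9600.0  # detected modulation frequency in Hz (assumed example value)
t = exposure_time_for(f, multiple=2)
print(f"modulation period {1/f*1e6:.1f} us -> exposure time {t*1e6:.1f} us")
```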
Next, setting of an optimum parameter in the transmitter is described.
The receiver transmits, to the server, the data received from the transmitter, current position information, information related to a user (address, sex, age, preferences, etc.), and the like. The server transmits to the receiver a parameter for the optimum operation of the transmitter according to the received information. The receiver sets the received parameter in the transmitter when possible. When not possible, the receiver displays the parameter to prompt a user to set it in the transmitter.
With this, it is possible to operate a washing machine in a manner optimized according to the nature of water in a district where the transmitter is used, or to operate a rice cooker in such a way that rice is cooked in an optimal way for the kind of rice used by a user, for example.
Next, an identifier indicating a data structure is described.
Information to be transmitted contains an identifier whose value shows the receiver the structure of the part following the identifier. For example, the receiver can identify the length of the data, the kind and length of an error correction code, a dividing point of the data, and the like.
This allows the transmitter to change the kind and length of data body, the error correction code, and the like according to characteristics of the transmitter, a communication path, and the like. Furthermore, the transmitter can transmit a content ID in addition to an ID of the transmitter, to allow the receiver to obtain an ID corresponding to the content ID.
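A minimal sketch of such identifier-driven parsing follows; both layouts in the table are assumptions, since the disclosure only states that the identifier tells the receiver the structure of the part that follows.

```python
# A minimal sketch of parsing a transmission whose leading identifier selects
# the layout (data length and error-correction-code length) of what follows.
STRUCTURES = {
    # identifier: (data length in bytes, error-correction-code length in bytes)
    0x01: (4, 2),
    0x02: (8, 4),
}

def parse(packet: bytes):
    ident = packet[0]
    data_len, ecc_len = STRUCTURES[ident]
    data = packet[1:1 + data_len]
    ecc = packet[1 + data_len:1 + data_len + ecc_len]
    return ident, data, ecc

packet = bytes([0x01, 10, 20, 30, 40, 0xAA, 0xBB])
print(parse(packet))  # identifier 0x01: 4 data bytes, 2 ECC bytes
```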
This embodiment describes each example of application using a receiver such as a smartphone and a transmitter for transmitting information as a blink pattern of an LED or an organic EL device in each of the embodiments described above.
A receiver 1210a in this embodiment switches the shutter speed between high and low speeds, for example, on the frame basis, upon continuous imaging with the image sensor. Furthermore, on the basis of a frame obtained by such imaging, the receiver 1210a switches processing on the frame between a barcode recognition process and a visible light recognition process. Here, the barcode recognition process is a process of decoding a barcode appearing in a frame obtained at a low shutter speed. The visible light recognition process is a process of decoding the above-described pattern of bright lines appearing on a frame obtained at a high shutter speed.
This receiver 1210a includes an image input unit 1211, a barcode and visible light identifying unit 1212, a barcode recognition unit 1212a, a visible light recognition unit 1212b, and an output unit 1213.
The image input unit 1211 includes an image sensor and switches a shutter speed for imaging with the image sensor. This means that the image input unit 1211 sets the shutter speed to a low speed and a high speed alternately, for example, on the frame basis. More specifically, the image input unit 1211 switches the shutter speed to a high speed for an odd-numbered frame, and switches the shutter speed to a low speed for an even-numbered frame. Imaging at a low shutter speed is imaging in the above-described normal imaging mode, and imaging at a high shutter speed is imaging in the above-described visible light communication mode. Specifically, when the shutter speed is a low speed, the exposure time of each exposure line included in the image sensor is long, and a normal captured image in which a subject is shown is obtained as a frame. When the shutter speed is a high speed, the exposure time of each exposure line included in the image sensor is short, and a visible light communication image in which the above-described bright lines are shown is obtained as a frame.
The barcode and visible light identifying unit 1212 determines whether or not a barcode appears, or a bright line appears, in an image obtained by the image input unit 1211, and switches processing on the image accordingly. For example, when a barcode appears in a frame obtained by imaging at a low shutter speed, the barcode and visible light identifying unit 1212 causes the barcode recognition unit 1212a to perform the processing on the image. When a bright line appears in a frame obtained by imaging at a high shutter speed, the barcode and visible light identifying unit 1212 causes the visible light recognition unit 1212b to perform the processing on the image.
The barcode recognition unit 1212a decodes a barcode appearing in a frame obtained by imaging at a low shutter speed. The barcode recognition unit 1212a obtains data of the barcode (for example, a barcode identifier) as a result of such decoding, and outputs the barcode identifier to the output unit 1213. Note that the barcode may be a one-dimensional code or may be a two-dimensional code (for example, QR Code®).
The visible light recognition unit 1212b decodes a pattern of bright lines appearing in a frame obtained by imaging at a high shutter speed. The visible light recognition unit 1212b obtains data of visible light (for example, a visible light identifier) as a result of such decoding, and outputs the visible light identifier to the output unit 1213. Note that the data of visible light is the above-described visible light signal.
The output unit 1213 displays only frames obtained by imaging at a low shutter speed. Therefore, when the subject imaged with the image input unit 1211 is a barcode, the output unit 1213 displays the barcode. When the subject imaged with the image input unit 1211 is a digital signage or the like which transmits a visible light signal, the output unit 1213 displays an image of the digital signage without displaying a pattern of bright lines. Subsequently, when the output unit 1213 obtains a barcode identifier, the output unit 1213 obtains, from a server, for example, information associated with the barcode identifier, and displays the information. When the output unit 1213 obtains a visible light identifier, the output unit 1213 obtains, from a server, for example, information associated with the visible light identifier, and displays the information.
Stated differently, the receiver 1210a which is a terminal device includes an image sensor, and performs continuous imaging with the image sensor while a shutter speed of the image sensor is alternately switched between a first speed and a second speed higher than the first speed. (a) When a subject imaged with the image sensor is a barcode, the receiver 1210a obtains an image in which the barcode appears, as a result of imaging performed when the shutter speed is the first speed, and obtains a barcode identifier by decoding the barcode appearing in the image. (b) When a subject imaged with the image sensor is a light source (for example, a digital signage), the receiver 1210a obtains a bright line image which is an image including bright lines corresponding to a plurality of exposure lines included in the image sensor, as a result of imaging performed when the shutter speed is the second speed. The receiver 1210a then obtains, as a visible light identifier, a visible light signal by decoding the pattern of bright lines included in the obtained bright line image. Furthermore, this receiver 1210a displays an image obtained through imaging performed when the shutter speed is the first speed.
The receiver 1210a in this embodiment is capable of both decoding a barcode and receiving a visible light signal by switching between and performing the barcode recognition process and the visible light recognition process. Furthermore, such switching allows for a reduction in power consumption.
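The alternating capture-and-dispatch loop of the receiver 1210a can be sketched as follows; the capture and decode routines are stubs, and the odd/even frame assignment follows the description above.

```python
# A minimal sketch of the receiver 1210a's frame loop: odd-numbered frames are
# captured at a high shutter speed and decoded as bright-line patterns; even-
# numbered frames are captured at a low shutter speed, displayed, and checked
# for barcodes.
def capture(shutter: str) -> str:               # stub for the image sensor
    return f"{shutter}-speed frame"

def decode_barcode(frame) -> str | None:        # stub for barcode recognition
    return None

def decode_bright_lines(frame) -> str | None:   # stub for visible light decoding
    return "visible-light-id-42"

def frame_loop(num_frames: int):
    for n in range(1, num_frames + 1):
        if n % 2 == 1:  # odd frame: visible light communication mode
            frame = capture("high")
            if (vid := decode_bright_lines(frame)) is not None:
                print("visible light identifier:", vid)
        else:           # even frame: normal imaging mode
            frame = capture("low")
            print("displaying", frame)          # only these frames are shown
            if (bid := decode_barcode(frame)) is not None:
                print("barcode identifier:", bid)

frame_loop(4)
```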
The receiver in this embodiment may simultaneously perform an image recognition process, instead of the barcode recognition process, and the visible light recognition process.
A receiver 1210b in this embodiment switches the shutter speed between high and low speeds, for example, on the frame basis, upon continuous imaging with the image sensor. Furthermore, the receiver 1210b performs an image recognition process and the above-described visible light recognition process simultaneously on an image (frame) obtained by such imaging. The image recognition process is a process of recognizing a subject appearing in a frame obtained at a low shutter speed.
The receiver 1210b includes an image input unit 1211, an image recognition unit 1212c, a visible light recognition unit 1212b, and an output unit 1215.
The image input unit 1211 includes an image sensor and switches a shutter speed for imaging with the image sensor. This means that the image input unit 1211 sets the shutter speed to a low speed and a high speed alternately, for example, on the frame basis. More specifically, the image input unit 1211 switches the shutter speed to a high speed for an odd-numbered frame, and switches the shutter speed to a low speed for an even-numbered frame. Imaging at a low shutter speed is imaging in the above-described normal imaging mode, and imaging at a high shutter speed is imaging in the above-described visible light communication mode. Specifically, when the shutter speed is a low speed, the exposure time of each exposure line included in the image sensor is long, and a normal captured image in which a subject is shown is obtained as a frame. When the shutter speed is a high speed, the exposure time of each exposure line included in the image sensor is short, and a visible light communication image in which the above-described bright lines are shown is obtained as a frame.
The image recognition unit 1212c recognizes a subject appearing in a frame obtained by imaging at a low shutter speed, and identifies a position of the subject in the frame. As a result of the recognition, the image recognition unit 1212c determines whether or not the subject is a target of augmented reality (AR) (hereinafter referred to as an AR target). When determining that the subject is an AR target, the image recognition unit 1212c generates image recognition data, which is data for displaying information related to the subject (for example, a position of the subject, an AR marker thereof, etc.), and outputs the image recognition data to the output unit 1215.
The output unit 1215 displays only frames obtained by imaging at a low shutter speed, as with the above-described output unit 1213. Therefore, when the subject imaged by the image input unit 1211 is a digital signage or the like which transmits a visible light signal, the output unit 1215 displays an image of the digital signage without displaying a pattern of bright lines. Furthermore, when the output unit 1215 obtains the image recognition data from the image recognition unit 1212c, the output unit 1215 refers to the position of the subject in the frame represented by the image recognition data, and superimposes on the frame an indicator in the form of a white frame enclosing the subject, based on the position referred to.
The output unit 1215 superimposes, on the frame, an indicator 1215b in the form of a white frame enclosing a subject image 1215a formed as a digital signage, for example. In other words, the output unit 1215 displays the indicator 1215b indicating the subject recognized in the image recognition process. Furthermore, when the output unit 1215 obtains the visible light identifier from the visible light recognition unit 1212b, the output unit 1215 changes the color of the indicator 1215b from white to red, for example.
The output unit 1215 further obtains, as related information, information related to the subject and associated with the visible light identifier, for example, from a server or the like. The output unit 1215 adds the related information to an AR marker 1215c represented by the image recognition data, and displays the AR marker 1215c with the related information added thereto, in association with the subject image 1215a in the frame.
The receiver 1210b in this embodiment is capable of realizing AR which uses visible light communication, by performing the image recognition process and the visible light recognition process simultaneously. Note that the receiver 1210a illustrated in
A transmitter 1220a in this embodiment transmits a visible light signal in synchronization with a transmitter 1230. Specifically, at the timing of transmission of a visible light signal by the transmitter 1230, the transmitter 1220a transmits the same visible light signal. Note that the transmitter 1230 includes a light emitting unit 1231 and transmits a visible light signal by the light emitting unit 1231 changing in luminance.
This transmitter 1220a includes a light receiving unit 1221, a signal analysis unit 1222, a transmission clock adjustment unit 1223a, and a light emitting unit 1224. The light emitting unit 1224 transmits, by changing in luminance, the same visible light signal as the visible light signal which the transmitter 1230 transmits. The light receiving unit 1221 receives a visible light signal from the transmitter 1230 by receiving visible light from the transmitter 1230. The signal analysis unit 1222 analyzes the visible light signal received by the light receiving unit 1221, and transmits the analysis result to the transmission clock adjustment unit 1223a. On the basis of the analysis result, the transmission clock adjustment unit 1223a adjusts the timing of transmission of a visible light signal from the light emitting unit 1224. Specifically, the transmission clock adjustment unit 1223a adjusts timing of luminance change of the light emitting unit 1224 so that the timing of transmission of a visible light signal from the light emitting unit 1231 of the transmitter 1230 and the timing of transmission of a visible light signal from the light emitting unit 1224 match each other.
With this, the waveform of a visible light signal transmitted by the transmitter 1220a and the waveform of a visible light signal transmitted by the transmitter 1230 can be the same in terms of timing.
As with the transmitter 1220a, a transmitter 1220b in this embodiment transmits a visible light signal in synchronization with the transmitter 1230. Specifically, at the timing of transmission of a visible light signal by the transmitter 1230, the transmitter 1220b transmits the same visible light signal.
This transmitter 1220b includes a first light receiving unit 1221a, a second light receiving unit 1221b, a comparison unit 1225, a transmission clock adjustment unit 1223b, and the light emitting unit 1224.
As with the light receiving unit 1221, the first light receiving unit 1221a receives a visible light signal from the transmitter 1230 by receiving visible light from the transmitter 1230. The second light receiving unit 1221b receives visible light from the light emitting unit 1224. The comparison unit 1225 compares a first timing at which the visible light is received by the first light receiving unit 1221a with a second timing at which the visible light is received by the second light receiving unit 1221b. The comparison unit 1225 then outputs a difference between the first timing and the second timing (that is, a delay time) to the transmission clock adjustment unit 1223b. The transmission clock adjustment unit 1223b adjusts the timing of transmission of a visible light signal from the light emitting unit 1224 so that the delay time is reduced.
With this, the waveform of a visible light signal transmitted by the transmitter 1220b and the waveform of a visible light signal transmitted by the transmitter 1230 can be more exactly the same in terms of timing.
Note that two transmitters transmit the same visible light signals in the examples illustrated in
A plurality of transmitters 1220 in this embodiment are, for example, arranged in a row as illustrated in
This allows many transmitters to transmit visible light signals in synchronization.
Among the plurality of transmitters 1220 in this embodiment, one transmitter 1220 serves as a basis for synchronization of visible light signals, and the other transmitters 1220 transmit visible light signals in line with this basis.
This allows many transmitters to transmit visible light signals in more accurate synchronization.
Each of the transmitters 1240 in this embodiment receives a synchronization signal and transmits a visible light signal according to the synchronization signal. Thus, visible light signals are transmitted from the transmitters 1240 in synchronization.
Specifically, each of the transmitters 1240 includes a control unit 1241, a synchronization control unit 1242, a photocoupler 1243, an LED drive circuit 1244, an LED 1245, and a photodiode 1246.
The control unit 1241 receives a synchronization signal and outputs the synchronization signal to the synchronization control unit 1242.
The LED 1245 is a light source which outputs visible light and blinks (that is, changes in luminance) under the control of the LED drive circuit 1244. Thus, a visible light signal is transmitted from the LED 1245 to the outside of the transmitter 1240.
The photocoupler 1243 transfers signals between the synchronization control unit 1242 and the LED drive circuit 1244 while providing electrical insulation therebetween. Specifically, the photocoupler 1243 transfers to the LED drive circuit 1244 the later-described transmission start signal transmitted from the synchronization control unit 1242.
When the LED drive circuit 1244 receives a transmission start signal from the synchronization control unit 1242 via the photocoupler 1243, the LED drive circuit 1244 causes the LED 1245 to transmit a visible light signal at the timing of reception of the transmission start signal.
The photodiode 1246 detects visible light output from the LED 1245, and outputs to the synchronization control unit 1242 a detection signal indicating that visible light has been detected.
When the synchronization control unit 1242 receives a synchronization signal from the control unit 1241, the synchronization control unit 1242 transmits a transmission start signal to the LED drive circuit 1244 via the photocoupler 1243. Transmission of this transmission start signal triggers the start of transmission of the visible light signal. When the synchronization control unit 1242 receives the detection signal transmitted from the photodiode 1246 as a result of the transmission of the visible light signal, the synchronization control unit 1242 calculates a delay time which is a difference between the timing of reception of the detection signal and the timing of reception of the synchronization signal from the control unit 1241. When the synchronization control unit 1242 receives the next synchronization signal from the control unit 1241, the synchronization control unit 1242 adjusts the timing of transmitting the next transmission start signal based on the calculated delay time. Specifically, the synchronization control unit 1242 adjusts the timing of transmitting the next transmission start signal so that the delay time for the next synchronization signal becomes a predetermined preset delay time. Thus, the synchronization control unit 1242 transmits the next transmission start signal at the adjusted timing.
When the synchronization control unit 1242 receives a synchronization signal, the synchronization control unit 1242 generates a delay time setting signal which has a delay time setting pulse at a predetermined timing. Note that receiving a synchronization signal specifically means receiving a synchronization pulse. More specifically, the synchronization control unit 1242 generates the delay time setting signal so that a rising edge of the delay time setting pulse is observed at the point in time when the above-described preset delay time has elapsed since a falling edge of the synchronization pulse.
The synchronization control unit 1242 then transmits the transmission start signal to the LED drive circuit 1244 via the photocoupler 1243 at the timing delayed by a previously obtained correction value N from the falling edge of the synchronization pulse. As a result, the LED drive circuit 1244 transmits the visible light signal from the LED 1245. In this case, the synchronization control unit 1242 receives the detection signal from the photodiode 1246 at the timing delayed by a sum of unique delay time and the correction value N from the falling edge of the synchronization pulse. This means that transmission of the visible light signal starts at this timing. This timing is hereinafter referred to as a transmission start timing. Note that the above-described unique delay time is delay time attributed to the photocoupler 1243 or the like circuit, and this delay time is inevitable even when the synchronization control unit 1242 transmits the transmission start signal immediately after receiving the synchronization signal.
The synchronization control unit 1242 identifies, as a modified correction value N, a difference in time between the transmission start timing and a rising edge in the delay time setting pulse. The synchronization control unit 1242 calculates a correction value (N+1) according to correction value (N+1)=correction value N+modified correction value N, and holds the calculation result. With this, when the synchronization control unit 1242 receives the next synchronization signal (synchronization pulse), the synchronization control unit 1242 transmits the transmission start signal to the LED drive circuit 1244 at the timing delayed by the correction value (N+1) from a falling edge of the synchronization pulse. Note that the modified correction value N can be not only a positive value but also a negative value.
Thus, since each of the transmitters 1240 receives the synchronization signal (the synchronization pulse) and then transmits the visible light signal after the preset delay time elapses, the visible light signals can be transmitted in accurate synchronization. Specifically, even when there is a variation in the unique delay time for the transmitters 1240 which is attributed to the photocoupler 1243 and the like circuit, transmission of visible light signals from the transmitters 1240 can be accurately synchronized without being affected by the variation.
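The correction loop of the synchronization control unit 1242 can be illustrated numerically. In the sketch below, the unique (photocoupler) delay is simulated with an assumed constant that the controller never reads directly; it only observes when light emission actually starts, exactly as the photodiode feedback allows, and updates the correction as correction (N+1) = correction N + modified correction value N.

```python
# A minimal sketch of the per-pulse correction update described above.
# The controller aims for PRESET_DELAY but cannot see UNIQUE_DELAY directly.
UNIQUE_DELAY = 0.8     # ms; circuit-dependent delay, unknown to the controller
PRESET_DELAY = 2.0     # ms; target delay from the synchronization pulse

def run_sync_pulses(pulses: int):
    correction = 0.0
    for n in range(pulses):
        # Light emission actually starts after correction + the unique delay;
        # the photodiode reports this as the transmission start timing.
        transmission_start = correction + UNIQUE_DELAY
        # Modified correction value N: gap between the observed start and the
        # rising edge of the delay time setting pulse (at PRESET_DELAY).
        modification = PRESET_DELAY - transmission_start
        print(f"pulse {n}: start at {transmission_start:.2f} ms "
              f"(target {PRESET_DELAY:.2f} ms)")
        correction += modification  # correction (N+1)

run_sync_pulses(3)  # converges to the preset delay after the first pulse
```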
Note that the LED drive circuit consumes high power and is therefore electrically insulated, using the photocoupler or the like, from the control circuit which handles the synchronization signals. When such a photocoupler is used, the above-mentioned variation in the unique delay time makes it difficult to synchronize transmission of visible light signals from transmitters. However, in the transmitters 1240 in this embodiment, the photodiode 1246 detects the timing of light emission of the LED 1245, and the synchronization control unit 1242 detects the delay time relative to the synchronization signal and makes adjustments so that the delay time becomes the above-described preset delay time. With this, even when there is individual variation among the photocouplers provided in the transmitters configured as LED lightings, for example, it is possible to transmit visible light signals (for example, visible light IDs) from the LED lightings in highly accurate synchronization.
Note that the LED lighting may be ON or may be OFF in periods other than a visible light signal transmission period. In the case where the LED lighting is ON in periods other than the visible light signal transmission period, the first falling edge of the visible light signal is detected. In the case where the LED lighting is OFF in periods other than the visible light signal transmission period, the first rising edge of the visible light signal is detected.
Note that in the above-described example, the transmitter 1240 transmits the visible light signal every time the transmitter 1240 receives the synchronization signal, but the transmitter 1240 may transmit the visible light signal even when it does not receive the synchronization signal. This means that after the transmitter 1240 transmits the visible light signal following a single reception of the synchronization signal, the transmitter 1240 may sequentially transmit visible light signals even without having received further synchronization signals. Specifically, the transmitter 1240 may sequentially transmit a visible light signal two to a few thousand times following a single reception of the synchronization signal. The transmitter 1240 may also transmit a visible light signal according to the synchronization signal once every 100 milliseconds or once every few seconds.
When the transmission of a visible light signal according to a synchronization signal is repeated, there is a possibility that the continuity of light emission by the LED 1245 is lost due to the above-described preset delay time. In other words, there may be a slightly long blanking interval. As a result, there is a possibility that blinking of the LED 1245 is visually recognized by humans, that is, what is called flicker may occur. Therefore, the transmitter 1240 may transmit the visible light signal according to the synchronization signal at a frequency of 60 Hz or more. With this, the blinking is fast and less easily visually recognized by humans. As a result, it is possible to reduce the occurrence of flicker. Alternatively, the transmitter 1240 may transmit a visible light signal according to a synchronization signal in a sufficiently long cycle, for example, once every few minutes. Although this allows humans to visually recognize blinking, it prevents blinking from being repeatedly perceived in rapid succession, reducing the discomfort that flicker brings to humans.
(Preprocessing for Reception Method)
First, the receiver calculates an average value of respective pixel values of the plurality of pixels aligned parallel to the exposure lines (Step S1211). By the central limit theorem, averaging the pixel values of N pixels reduces the expected amount of noise by a factor of the square root of N, which improves the SN ratio.
Next, the receiver keeps only the portions where the pixel values change in the same direction, perpendicular to the exposure lines, for all the colors, and removes the changes in the pixel values where the changes differ between colors (Step S1212). In the case where a transmission signal (visible light signal) is represented by the luminance of the light emitting unit included in the transmitter, it is the luminance of the lighting, or of the backlight of the display, acting as the transmitter that changes. In this case, the pixel values change in the same direction for all the colors as in (b) of
Next, the receiver obtains a luminance value (Step S1213). Since the luminance is less susceptible to color changes, it is possible to remove the influence of a picture on the display or in a signage and improve the SN ratio.
Next, the receiver runs the luminance value through a low-pass filter (Step S1214). In the reception method in this embodiment, the exposure acts as a moving average filter whose length equals the exposure time, so the high-frequency domain contains almost no signal components and is dominated by noise. Therefore, the SN ratio can be improved with a low-pass filter which cuts off high-frequency components. Since most signal components lie at frequencies at or below the reciprocal of the exposure time, cutting off frequencies above the reciprocal increases the improvement in the SN ratio. If the frequency components contained in a signal are limited, the SN ratio can be improved further by cutting off components with frequencies above that limit. A filter whose passband response fluctuates little, such as a Butterworth filter, is suitable as the low-pass filter.
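Steps S1211 through S1214 can be sketched with NumPy as follows; the image shape, the luminance weights (ITU-R BT.601 is assumed), and the five-tap moving average standing in for the low-pass filter are assumptions for illustration.

```python
# A minimal sketch of preprocessing Steps S1211-S1214.
import numpy as np

rng = np.random.default_rng(0)
# Assumed bright-line image: rows are exposure lines, columns are pixels, RGB.
image = rng.random((480, 640, 3))

# Step S1211: average the pixels along each exposure line; noise shrinks by
# about the square root of the number of pixels averaged.
line_means = image.mean(axis=1)                      # shape (480, 3)

# Step S1212: keep only changes common in direction to all colors; suppress
# changes whose sign disagrees between color channels.
diffs = np.diff(line_means, axis=0)
same_sign = (np.sign(diffs) == np.sign(diffs[:, :1])).all(axis=1)
diffs[~same_sign] = 0.0
filtered = np.vstack([line_means[:1], line_means[:1] + np.cumsum(diffs, axis=0)])

# Step S1213: obtain a luminance value (BT.601 weights assumed).
luminance = filtered @ np.array([0.299, 0.587, 0.114])

# Step S1214: low-pass filter; a short moving average cuts off components
# above roughly the reciprocal of the exposure time.
kernel = np.ones(5) / 5
smoothed = np.convolve(luminance, kernel, mode="same")
print(smoothed.shape)  # one filtered luminance value per exposure line
```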
(Reception Method Using Convolutional Maximum Likelihood Decoding)
Signals can be received most accurately when the exposure time is an integer multiple of the transmission period. Even when the exposure time is not an integer multiple of the transmission period, signals can be received as long as the exposure time is in the range of about (N±0.33) times (N is an integer) the transmission period.
First, the receiver sets the transmission and reception offset to 0 (Step S1221). The transmission and reception offset is a value used to correct the difference between the transmission timing and the reception timing. This difference is unknown, and therefore the receiver changes a candidate value for the transmission and reception offset little by little and adopts, as the transmission and reception offset, the candidate value that matches the received data best.
Next, the receiver determines whether or not the transmission and reception offset is shorter than the transmission period (Step S1222). Here, since the reception period and the transmission period are not synchronized, the obtained reception values are not always in line with the transmission period. Therefore, when the receiver determines in Step S1222 that the transmission and reception offset is shorter than the transmission period (Step S1222: Y), the receiver calculates, for each transmission period, a reception value (for example, a pixel value) that is in line with the transmission period, by interpolation using nearby reception values (Step S1223). Linear interpolation, nearest-value interpolation, spline interpolation, or the like can be used as the interpolation method. Next, the receiver calculates the differences between the reception values calculated for the respective transmission periods (Step S1224), and from these differences calculates the likelihood of a reception signal for the current offset (Step S1225).
The receiver then adds a predetermined value to the transmission and reception offset (Step S1226) and repeats the processing from Step S1222. When the receiver determines in Step S1222 that the transmission and reception offset is not shorter than the transmission period (Step S1222: N), the receiver identifies the highest likelihood among the likelihoods of the reception signals calculated for the respective transmission and reception offsets. The receiver then determines whether or not the highest likelihood is greater than or equal to a predetermined value (Step S1227). When the receiver determines that the highest likelihood is greater than or equal to the predetermined value (Step S1227: Y), the receiver uses, as the final estimation result, the reception signal having the highest likelihood. Alternatively, the receiver uses, as reception signal candidates, reception signals whose likelihoods fall short of the highest likelihood by no more than a predetermined value (Step S1228). When the receiver determines in Step S1227 that the highest likelihood is less than the predetermined value (Step S1227: N), the receiver discards the estimation result (Step S1229).
When there is too much noise, the reception signal often cannot be estimated properly, and the likelihood is correspondingly low. Therefore, the reliability of the reception signals can be enhanced by discarding the estimation result when the likelihood is low. Maximum likelihood decoding has the problem that even when an input image contains no effective signal, some signal is output as an estimation result; in this case, too, the likelihood is low, so the problem can be avoided by discarding the estimation result when the likelihood is low.
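The offset search in Steps S1221 to S1229 might look as follows. This is a sketch under assumptions: the likelihood is taken as the negative mean squared distance of the inter-period differences from the nearest allowed level transition (one possible reading of Step S1225), and the offset step, symbol levels, and threshold are placeholder values.

    import numpy as np

    def estimate_signal(samples, sample_period, tx_period,
                        levels=(0.0, 1.0), min_likelihood=-0.05):
        t_rx = np.arange(len(samples)) * sample_period
        transitions = np.subtract.outer(levels, levels).ravel()
        best_symbols, best_likelihood = None, -np.inf

        offset = 0.0                                        # Step S1221
        while offset < tx_period:                           # Step S1222
            # Step S1223: reception values in line with the transmission
            # period, by linear interpolation of nearby reception values.
            t_tx = np.arange(offset, t_rx[-1], tx_period)
            rx = np.interp(t_tx, t_rx, samples)
            diff = np.diff(rx)                              # Step S1224
            # Likelihood of this offset (Step S1225, as read here).
            err = np.min(np.abs(diff[:, None] - transitions[None, :]), axis=1)
            likelihood = -np.mean(err ** 2)
            if likelihood > best_likelihood:
                symbols = [min(levels, key=lambda v: abs(v - x)) for x in rx]
                best_symbols, best_likelihood = symbols, likelihood
            offset += tx_period / 16.0                      # Step S1226
        # Steps S1227-S1229: keep the estimate only if it is likely enough.
        return best_symbols if best_likelihood >= min_likelihood else None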
This embodiment describes a transmission protocol for the visible light communication.
(Multi-Level Amplitude Pulse Signal)
Giving the pulse amplitude a meaning makes it possible to represent a larger amount of information per unit time. For example, when the amplitude is classified into three levels, three values can be represented in a two-slot transmission time with the average luminance maintained at 50%, as in the corresponding figure.
In view of this, four symbols, (a) to (d), shown in the corresponding figure are used. The patterns in (a) and (b) of that figure and the packet configuration in (c) are given in the figure, which is not reproduced here. The transmitter repeatedly transmits a packet configured as just described; packets 1 to 4 in (c) of the figure illustrate this repeated transmission.
This embodiment describes examples of application that use a receiver such as a smartphone and a transmitter which transmits information as a blink pattern of an LED or an organic EL device, as in the embodiments described above.
A transmitter in this embodiment is configured as a backlight of a liquid crystal display, for example, and includes a blue LED 2303 and a phosphor 2310 including a green phosphorus element 2304 and a red phosphorus element 2305.
The blue LED 2303 emits blue (B) light. When the phosphor 2310 receives, as excitation light, the blue light emitted by the blue LED 2303, the phosphor 2310 produces yellow (Y) luminescence; that is, the phosphor 2310 emits yellow light. In detail, since the phosphor 2310 includes the green phosphorus element 2304 and the red phosphorus element 2305, the phosphor 2310 emits yellow light through the luminescence of these phosphorus elements. When the green phosphorus element 2304 receives, as excitation light, the blue light emitted by the blue LED 2303, the green phosphorus element 2304 produces green luminescence; that is, it emits green (G) light. When the red phosphorus element 2305 receives, as excitation light, the blue light emitted by the blue LED 2303, the red phosphorus element 2305 produces red luminescence; that is, it emits red (R) light. Thus, light of each of R, G, and B (equivalently, Y (= R + G) and B) is emitted, with the result that the transmitter outputs white light as a backlight.
This transmitter transmits a visible light signal of white light by changing luminance of the blue LED 2303 as in each of the above embodiments. At this time, the luminance of the white light is changed to output a visible light signal having a predetermined carrier frequency.
A barcode reader emits red laser light to a barcode and reads the barcode based on changes in the luminance of the red laser light reflected off the barcode. The frequency of this red laser light used to read the barcode can be equal or approximate to the carrier frequency of a visible light signal output from a typical transmitter in practical use today. In this case, an attempt by the barcode reader to read a barcode irradiated with white light, i.e., with a visible light signal transmitted from such a typical transmitter, may fail due to the change in the luminance of the red light included in the white light. In short, an error occurs in reading the barcode due to interference between the carrier frequency of the visible light signal (in particular, of its red light) and the frequency used to read the barcode.
In order to prevent this, in this embodiment, the red phosphorus element 2305 includes a phosphorus material having higher persistence than the green phosphorus element 2304. This means that in this embodiment, the red phosphorus element 2305 changes in luminance at a sufficiently lower frequency than a luminance change frequency of the blue LED 2303 and the green phosphorus element 2304. In other words, the red phosphorus element 2305 reduces the luminance change frequency of a red component included in the visible light signal.
Blue light output from the blue LED 2303 is included in the visible light signal, as illustrated in (a) of the corresponding figure.
The red phosphorus element 2305 receives the blue light from the blue LED 2303 and produces red luminescence, as illustrated in (c) of the figure.
When the blue LED 2303 is ON without changing in luminance, for example, the green phosphorus element 2304 emits green light having intensity I=I0 without changing in luminance (i.e., light having a luminance change frequency f=0). Furthermore, even when the blue LED 2303 changes in luminance at a low frequency, the green phosphorus element 2304 emits green light that has intensity I=I0 and changes in luminance at a frequency f that is substantially the same as that low frequency. In contrast, when the blue LED 2303 changes in luminance at a high frequency, the intensity I of the green light emitted from the green phosphorus element 2304, which changes in luminance at a frequency f substantially the same as that high frequency, is lower than I0 due to the afterglow of the green phosphorus element 2304. As a result, the intensity I of the green light emitted from the green phosphorus element 2304 remains equal to I0 (I=I0) while the frequency f of the luminance change of the light is less than a threshold fb, and gradually decreases as the frequency f increases beyond the threshold fb, as indicated by the dotted line in the figure.
Furthermore, in this embodiment, the persistence of the red phosphorus element 2305 is higher than the persistence of the green phosphorus element 2304. Therefore, the intensity I of the red light emitted from the red phosphorus element 2305 remains equal to I0 (I=I0) while the frequency f of the luminance change of the light is less than a threshold fa, which is lower than the above threshold fb, and gradually decreases as the frequency f increases beyond the threshold fa, as indicated by the solid line in the figure.
More specifically, the red phosphorus element 2305 in this embodiment includes a phosphorus material with which the red light emitted at a frequency f equal to the carrier frequency f1 of the visible light signal has intensity I=I1. The carrier frequency f1 is the carrier frequency of the luminance change of the blue LED 2303 included in the transmitter. The intensity I1 is one third of the intensity I0, or an intensity 10 dB below I0 (I0−10 dB). For example, the carrier frequency f1 is 10 kHz, or in the range of 5 kHz to 100 kHz.
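To make these figures concrete, suppose the afterglow of the red phosphorus element is modeled as a first-order low-pass response, I(f) = I0 / sqrt(1 + (2*pi*f*tau)^2); this model is an assumption for illustration, not stated in the embodiment. The persistence time constant tau at which the intensity at the carrier frequency f1 falls to I0/3 then follows directly:

    import math

    def tau_for_one_third(f1_hz):
        # Assumed first-order afterglow model:
        #   I(f1) = I0 / sqrt(1 + (2*pi*f1*tau)**2) = I0 / 3
        # => sqrt(1 + (2*pi*f1*tau)**2) = 3  =>  tau = sqrt(8) / (2*pi*f1)
        return math.sqrt(8.0) / (2.0 * math.pi * f1_hz)

    print(tau_for_one_third(10e3))   # ~4.5e-05 s, i.e. about 45 microseconds

Under this assumed model, a red phosphorus material with an afterglow time constant of roughly 45 microseconds would satisfy the I0/3 condition at f1 = 10 kHz, while the green phosphorus element would need a much shorter time constant to pass the carrier unattenuated.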
In detail, the transmitter in this embodiment is a transmitter that transmits a visible light signal, and includes: a blue LED that emits, as light included in the visible light signal, blue light changing in luminance; a green phosphorus element that receives the blue light and emits green light as light included in the visible light signal; and a red phosphorus element that receives the blue light and emits red light as light included in the visible light signal. Persistence of the red phosphorus element is higher than persistence of the green phosphorus element. Each of the green phosphorus element and the red phosphorus element may be included in a single phosphor that receives the blue light and emits yellow light as light included in the visible light signal. Alternatively, it may be that the green phosphorus element is included in a green phosphor and the red phosphorus element is included in a red phosphor that is separate from the green phosphor.
This allows the red light to change in luminance at a lower frequency than a frequency of luminance change of the blue light and the green light because the red phosphorus element has higher persistence. Therefore, even when the frequency of luminance change of the blue light and the green light included in the visible light signal of the white light is equal or approximate to the frequency of red laser light used to read a barcode, the frequency of the red light included in the visible light signal of the white light can be significantly different from the frequency used to read a barcode. As a result, it is possible to reduce the occurrences of errors in reading a barcode.
The red phosphorus element may emit red light that changes in luminance at a lower frequency than a luminance change frequency of the light emitted from the blue LED.
Furthermore, the red phosphorus element may include: a red phosphorus material that receives blue light and emits red light; and a low-pass filter that transmits only light within a predetermined frequency band. For example, the low-pass filter transmits, out of the blue light emitted from the blue LED, only light within a low-frequency band so that the red phosphorus material is irradiated with the light. Note that the red phosphorus material may have the same persistence properties as the green phosphorus element. Alternatively, the low-pass filter transmits only light within a low-frequency band out of the red light emitted from the red phosphorus material as a result of the red phosphorus material being irradiated with the blue light emitted from the blue LED. Even when the low-pass filter is used, it is possible to reduce the occurrences of errors in reading a barcode as in the above-mentioned case.
Furthermore, the red phosphorus element may be made of a phosphorus material having a predetermined persistence property. For example, the predetermined persistence property is such that, where (a) I0 is the intensity of the red light emitted from the red phosphorus element when the frequency f of the luminance change of the red light is 0, and (b) f1 is the carrier frequency of the luminance change of the light emitted from the blue LED, the intensity of the red light is not greater than one third of I0, or not greater than I0−10 dB, when the frequency f of the red light is equal to f1 (f=f1).
With this, the frequency of the red light included in the visible light signal can be reliably significantly different from the frequency used to read a barcode. As a result, it is possible to reliably reduce the occurrences of errors in reading a barcode.
Furthermore, the carrier frequency f1 may be approximately 10 kHz.
With this, since the carrier frequency actually used to transmit the visible light signal today is 9.6 kHz, it is possible to effectively reduce the occurrences of errors in reading a barcode during such actual transmission of the visible light signal.
Furthermore, the carrier frequency f1 may be approximately 5 kHz to 100 kHz.
With the advancement of an image sensor (an imaging element) of the receiver that receives the visible light signal, a carrier frequency of 20 kHz, 40 kHz, 80 kHz, 100 kHz, or the like is expected to be used in future visible light communication. Therefore, as a result of setting the above carrier frequency f1 to approximately 5 kHz to 100 kHz, it is possible to effectively reduce the occurrences of errors in reading a barcode even in future visible light communication.
Note that in this embodiment, the above advantageous effects can be produced regardless of whether the green phosphorus element and the red phosphorus element are included in a single phosphor or these two phosphor elements are respectively included in separate phosphors. This means that even when a single phosphor is used, respective persistence properties, that is, frequency characteristics, of red light and green light emitted from the phosphor are different from each other. Therefore, the above advantageous effects can be produced even with the use of a single phosphor in which the persistence property or frequency characteristic of red light is lower than the persistence property or frequency characteristic of green light. Note that lower persistence property or frequency characteristic means higher persistence or lower light intensity in a high-frequency band, and higher persistence property or frequency characteristic means lower persistence or higher light intensity in a high-frequency band.
Although the occurrences of errors in reading a barcode are reduced in the example described above by lowering the luminance change frequency of the red component included in the visible light signal, such errors can also be reduced by making the carrier frequency of the visible light signal itself higher, as described below.
As illustrated in the corresponding figure, the frequency used to read a barcode (approximately 10 kHz to 20 kHz) is equal or approximate to the carrier frequency of the visible light signal in practical use today (about 10 kHz).
Therefore, the carrier frequency fc of the visible light signal is increased from about 10 kHz to, for example, 40 kHz so that the occurrences of errors in reading a barcode can be reduced.
However, when the carrier frequency fc of the visible light signal is about 40 kHz, a sampling frequency fs for the receiver to sample the visible light signal by capturing an image needs to be 80 kHz or more.
In other words, since the sampling frequency fs required by the receiver is high, an increase in the processing load on the receiver occurs as a new problem. Therefore, in order to solve this new problem, the receiver in this embodiment performs downsampling.
A transmitter 2301 in this embodiment is configured as a liquid crystal display, a digital signage, or a lighting device, for example. The transmitter 2301 outputs a visible light signal, the frequency of which has been modulated. At this time, the transmitter 2301 switches the carrier frequency fc of the visible light signal between 40 kHz and 45 kHz, for example.
A receiver 2302 in this embodiment captures images of the transmitter 2301 at a frame rate of 30 fps, for example. At this time, the receiver 2302 captures the images with a short exposure time so that a bright line appears in each of the captured images (specifically, frames), as with the receiver in each of the above embodiments. An image sensor used in the imaging by the receiver 2302 includes 1,000 exposure lines, for example. Therefore, upon capturing one frame, each of the 1,000 exposure lines starts exposure at different timings to sample a visible light signal. As a result, the sampling is performed 30,000 times (30 fps×1,000 lines) per second (30 ks/sec). In other words, the sampling frequency fs of the visible light signal is 30 kHz.
According to the sampling theorem, only visible light signals having a carrier frequency of 15 kHz or less (the Nyquist frequency) can be demodulated at the sampling frequency fs of 30 kHz.
However, the receiver 2302 in this embodiment performs downsampling of the visible light signals having a carrier frequency fc of 40 kHz or 45 kHz at the sampling frequency fs of 30 kHz. This downsampling causes aliasing on the frames. The receiver 2302 in this embodiment observes and analyzes the aliasing to estimate the carrier frequency fc of the visible light signal.
First, the receiver 2302 captures an image of a subject and performs downsampling of the visible light signal of a carrier frequency fc of 40 kHz or 45 kHz at a sampling frequency fs of 30 kHz (Step S2310).
Next, the receiver 2302 observes and analyzes aliasing on a resultant frame caused by the downsampling (Step S2311). By doing so, the receiver 2302 identifies a frequency of the aliasing as, for example, 5.1 kHz or 5.5 kHz.
The receiver 2302 then estimates the carrier frequency fc of the visible light signal based on the identified frequency of the aliasing (Step S2312). That is, the receiver 2302 restores the original frequency from the aliasing. Here, the receiver 2302 estimates the carrier frequency fc of the visible light signal as, for example, 40 kHz or 45 kHz.
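The relation between the observed alias and the original carrier used in these steps can be sketched as follows, assuming the idealized folding relation of uniform sampling and a known candidate set of carriers (40 kHz and 45 kHz here); the tolerance value is a placeholder. Note that the 5.1 kHz and 5.5 kHz alias values mentioned above suggest a sensor-specific effective sampling relation, so the simple folding below covers only the idealized case.

    def alias_frequency(fc_hz, fs_hz):
        # Frequency observed after sampling fc at fs, folded into [0, fs/2].
        f = fc_hz % fs_hz
        return min(f, fs_hz - f)

    def estimate_carrier(observed_alias_hz, candidates_hz=(40e3, 45e3),
                         fs_hz=30e3, tol_hz=200.0):
        # Step S2312: choose the candidate carrier whose alias at the
        # sampling frequency matches the alias observed on the frame.
        for fc in candidates_hz:
            if abs(alias_frequency(fc, fs_hz) - observed_alias_hz) <= tol_hz:
                return fc
        return None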
Thus, the receiver 2302 in this embodiment can appropriately receive the visible light signal having a high carrier frequency by performing downsampling and restoring the frequency based on aliasing. For example, the receiver 2302 can receive the visible light signal of a carrier frequency of 30 kHz to 60 kHz even when the sampling frequency fs is 30 kHz. Therefore, it is possible to increase the carrier frequency of the visible light signal from a frequency actually used today (about 10 kHz) to between 30 kHz and 60 kHz. As a result, the carrier frequency of the visible light signal and the frequency used to read a barcode (10 kHz to 20 kHz) can be significantly different from each other so that interference between these frequencies can be reduced. As a result, it is possible to reduce the occurrences of errors in reading a barcode.
A reception method in this embodiment is a reception method of obtaining information from a subject, the reception method including: setting an exposure time of an image sensor so that, in a frame obtained by the image sensor capturing the subject, a plurality of bright lines corresponding to a plurality of exposure lines included in the image sensor appear according to a change in luminance of the subject; capturing the subject changing in luminance, by the image sensor at a predetermined frame rate and with the set exposure time, by repeatedly starting exposure sequentially for the plurality of exposure lines in the image sensor, each at a different time; and obtaining the information by demodulating, for each frame obtained by the capturing, data specified by a pattern of the plurality of bright lines included in the frame. In the capturing, the sequential starts of exposure for the plurality of exposure lines, each at a different time, are repeated to perform, on the visible light signal transmitted from the subject changing in luminance, downsampling at a sampling frequency lower than the carrier frequency of the visible light signal. In the obtaining, for each frame obtained by the capturing, a frequency of aliasing specified by the pattern of the plurality of bright lines included in the frame is identified, a frequency of the visible light signal is estimated based on the identified frequency of the aliasing, and the visible light signal having the estimated frequency is demodulated to obtain the information.
With this reception method, it is possible to appropriately receive the visible light signal having a high carrier frequency by performing downsampling and restoring the frequency based on aliasing.
The downsampling may be performed on the visible light signal having a carrier frequency higher than 30 kHz. This makes it possible to avoid interference between the carrier frequency of the visible light signal and the frequency used to read a barcode (10 kHz to 20 kHz) so that the occurrences of errors in reading a barcode can be effectively reduced.
A reception device 1610 receives visible light emitted by a transmission device including a plurality of light sources (four light sources in the illustrated example).
First, when shifted to a mode for visible light communication, the reception device 1610 starts an imaging unit in the normal imaging mode (S1601). Note that when shifted to the mode for visible light communication, the reception device 1610 displays, on a screen, a box 1611 for capturing images of the light sources.
After a predetermined time, the reception device 1610 switches the imaging mode of the imaging unit to the macro imaging mode (S1602). Note that the timing of switching from Step S1601 to Step S1602 may be, instead of when a predetermined time has elapsed after Step S1601, when the reception device 1610 determines that images of the light sources have been captured in such a way that they are included within the box 1611. Such switching to the macro imaging mode allows a user to frame the light sources within the box 1611 while the image is still clear in the normal imaging mode, before the shift to the macro imaging mode in which the image is blurred; this makes it easy to keep the light sources within the box 1611.
Next, the reception device 1610 determines whether or not a signal from the light sources has been received (S1603). When it is determined that a signal from the light sources has been received (S1603: Yes), the processing returns to Step S1601 in the normal imaging mode, and when it is determined that a signal from the light sources has not been received (S1603: No), the macro imaging mode in Step S1602 continues. Note that in the case of Yes in Step S1603, a process based on the received signal (e.g., a process of displaying an image represented by the received signal) may be performed.
With this reception device 1610, a user can switch from the normal imaging mode to the macro imaging mode by touching, with a finger, the display unit of the smartphone where the light sources appear, to capture an image of the light sources in a blurred state. An image captured in the macro imaging mode thus includes a larger number of bright regions than an image captured in the normal imaging mode. In particular, in the normal imaging mode, light beams from two adjacent light sources among the plurality of light sources cannot be received as continuous signals, because the striped images are separate from each other, as illustrated in the left view of (a) of the corresponding figure.
A reception device 1620 receives visible light emitted by a transmission device including a plurality of light sources (four light sources in the illustrated example).
First, when shifted to a mode for visible light communication, the reception device 1620 starts an imaging unit in the normal imaging mode and captures an image 1623 of a wider range than an image 1622 displayed on the screen of the reception device 1620. Image data and orientation information are held in a memory (S1611). The image data represents the captured image 1623. The orientation information indicates the orientation of the reception device 1620, detected by a gyroscope, a geomagnetic sensor, and an accelerometer included in the reception device 1620, at the time the image 1623 is captured. The captured image 1623 covers a range that is larger, by a predetermined width in the vertical direction or the horizontal direction, than the image 1622 displayed on the screen of the reception device 1620. When shifted to the mode for visible light communication, the reception device 1620 displays, on the screen, a box 1621 for capturing images of the light sources.
After a predetermined time, the reception device 1620 switches the imaging mode of the imaging unit to the macro imaging mode (S1612). Note that the timing of switching from Step S1611 to Step S1612 may be, instead of when a predetermined time has elapsed after Step S1611, when the image 1623 is captured and it is determined that the image data representing the captured image 1623 has been held in the memory. At this time, the reception device 1620 displays, out of the image 1623, an image 1624 having a size corresponding to the size of the screen of the reception device 1620, based on the image data held in the memory.
Note that the image 1624 displayed on the reception device 1620 at this time is the part of the image 1623 that corresponds to a region predicted to be currently captured by the reception device 1620, based on the difference between the orientation of the reception device 1620 represented by the orientation information obtained in Step S1611 (a position indicated by a white broken line) and the current orientation of the reception device 1620. In short, the image 1624 is a part of the image 1623 and covers the region corresponding to the imaging target of an image 1625 actually captured in the macro imaging mode. Specifically, in Step S1612, the orientation (the imaging direction) changed from that in Step S1611 is obtained, the imaging target predicted to be currently captured is identified based on the obtained current orientation (imaging direction), the image 1624 that corresponds to the current orientation (imaging direction) is identified within the image 1623 captured in advance, and a process of displaying the image 1624 is performed. Therefore, when the reception device 1620 moves in the direction of the hollow arrow from the position indicated by the white broken line in the image 1623, the region clipped out of the image 1623 as the image 1624 moves accordingly.
By doing so, even while capturing an image in the macro imaging mode, the reception device 1620 can display, instead of the blurred image 1625 captured in the macro imaging mode, the image 1624 clipped out of a clearer image, i.e., the image 1623 captured in the normal imaging mode, according to the current orientation of the reception device 1620. In a method in the present disclosure in which continuous pieces of visible light information are obtained from a plurality of light sources distant from each other using a blurred image while a stored normal image is displayed on the display unit, the following problem is expected to occur: when a user captures an image using a smartphone, a hand shake may cause the actually captured image and the still image displayed from the memory to differ in direction, making it impossible for the user to aim the camera at the target light sources; in that case, data from the light sources cannot be received, so a countermeasure is necessary. With an improved technique in the present disclosure, even when a hand shake occurs, an oscillation detection unit such as an image oscillation detection unit or an oscillation gyroscope detects the hand shake, and the target image in the still image is shifted in a predetermined direction so that the user can see the difference from the direction of the camera. This display allows the user to direct the camera toward the target light sources, making it possible to capture an optically connected image of the separated light sources while displaying a normal image; thus, signals from the separated light sources can be received continuously while a normal image is displayed. In this case, it is easy to adjust the orientation of the reception device 1620 in such a way that the images of the plurality of light sources are included in the box 1621. Note that defocusing disperses the light of each source and reduces its luminance by a corresponding degree; therefore, increasing the sensitivity of the camera, such as the ISO setting, produces an advantageous effect in that the visible light data can be received more reliably.
Next, the reception device 1620 determines whether or not a signal from the light sources has been received (S1613). When it is determined that a signal from the light sources has been received (S1613: Yes), the processing returns to Step S1611 in the normal imaging mode, and when it is determined that a signal from the light sources has not been received (S1613: No), the macro imaging mode in Step S1612 continues. Note that in the case of Yes in Step S1613, a process based on the received signal (e.g., a process of displaying an image represented by the received signal) may be performed.
As in the case of the reception device 1610, this reception device 1620 can also capture an image including a brighter region in the macro imaging mode. Thus, in the macro imaging mode, it is possible to increase the number of exposure lines that can generate bright lines for the subject.
A transmission device 1630 is, for example, a display device such as a television and transmits different transmission IDs at predetermined time intervals A1630 by visible light communication. Specifically, transmission IDs, i.e., ID1631, ID1632, ID1633, and ID1634, associated with data corresponding to respective images 1631, 1632, 1633, and 1634 to be displayed at time points t1631, t1632, t1633, and t1634 are transmitted. In short, the transmission device 1630 transmits the ID1631 to ID1634 one after another at the predetermined time intervals A1630.
Based on the transmission IDs received by the visible light communication, a reception device 1640 requests a server 1650 for data associated with each of the transmission IDs, receives the data from the server, and displays images corresponding to the data. Specifically, images 1641, 1642, 1643, and 1644 corresponding to the ID1631, ID1632, ID1633, and ID1634, respectively, are displayed at the time points t1631, t1632, t1633, and t1634.
When the reception device 1640 obtains the ID1631 received at the time point t1631, the reception device 1640 may obtain, from the server 1650, ID information indicating the transmission IDs scheduled to be transmitted from the transmission device 1630 at the following time points t1632 to t1634. In this case, the obtained ID information saves the reception device 1640 from having to receive a transmission ID from the transmission device 1630 each time; the reception device 1640 can instead request the server 1650 for the data associated with the ID1632 to ID1634 for the time points t1632 to t1634, and display the received data at the time points t1632 to t1634.
Furthermore, even if the reception device 1640 does not obtain from the server 1650 the information indicating the transmission IDs scheduled to be transmitted from the transmission device 1630 at the following time points t1632 to t1634, it may be that, when the reception device 1640 requests the data corresponding to the ID1631 at the time point t1631, the reception device 1640 receives from the server 1650 the data associated with the transmission IDs corresponding to the following time points t1632 to t1634 and displays the received data at the time points t1632 to t1634. To put it differently, in the case where the server 1650 receives from the reception device 1640 a request for the data associated with the ID1631 transmitted at the time point t1631, the server 1650 transmits the data to the reception device 1640 at the time points t1632 to t1634 even without requests from the reception device 1640 for the data associated with the transmission IDs corresponding to those time points. This means that in this case, the server 1650 holds association information indicating the association between the time points t1631 to t1634 and the data associated with the transmission IDs corresponding to those time points, and transmits, at each predetermined time point, the data associated with that time point, based on the association information.
Thus, once the reception device 1640 successfully obtains the transmission ID1631 at the time point t1631 by visible light communication, the reception device 1640 can receive, at the following time points t1632 to t1634, the data corresponding to those time points from the server 1650 even without performing visible light communication. Therefore, a user no longer needs to keep directing the reception device 1640 at the transmission device 1630 to obtain a transmission ID by visible light communication, and the data obtained from the server 1650 can be easily displayed on the reception device 1640. Note that when the reception device 1640 obtains the data corresponding to an ID from the server each time, the response time becomes long due to the delay from the server. Therefore, in order to accelerate the response, the data corresponding to an ID is obtained from the server or the like and stored into a storage unit of the receiver in advance, so that the data corresponding to the ID can be displayed from the storage unit; this shortens the response time. In this way, when a transmission signal from a visible light transmitter contains time information on the output of the next ID, the receiver does not have to receive visible light signals continuously, because the transmission time of the next ID is known in advance; this produces an advantageous effect in that there is no need to keep directing the reception device at the light source. A further advantageous effect is that, when visible light is received, it is only necessary to synchronize the time information (clock) in the transmitter with the time information (clock) in the receiver; after the synchronization, images synchronized with the transmitter can be displayed continuously even when no data is received from the transmitter.
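The prefetching behavior described above can be sketched as follows. The server interface (get_id_schedule, fetch_data) and the epoch-second time points are assumptions for illustration, not part of the embodiment.

    import time

    def display(data):
        print("displaying:", data)          # stand-in for actual screen output

    def run_schedule(first_id, server):
        # Obtain the IDs scheduled for the following time points, using the
        # first ID received by visible light communication (e.g. ID1631).
        schedule = server.get_id_schedule(first_id)   # [(time_point, tx_id), ...]
        # Prefetch the data for every scheduled ID into local storage so that
        # display is not delayed by a server round trip at each time point.
        cache = {tx_id: server.fetch_data(tx_id) for _, tx_id in schedule}
        for time_point, tx_id in schedule:
            time.sleep(max(0.0, time_point - time.time()))
            display(cache[tx_id])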
Furthermore, in the above-described example, the reception device 1640 displays the images 1641, 1642, 1643, and 1644 corresponding to the respective transmission IDs, i.e., the ID1631, ID1632, ID1633, and ID1634, at the respective time points t1631, t1632, t1633, and t1634. Here, the reception device 1640 may present information other than images at the respective time points, as illustrated in the corresponding figure.
Next, in the case of a smartphone including two cameras, left and right, for stereoscopic imaging as illustrated in (b) of the figure, one camera may capture images in the normal imaging mode for display while the other camera captures images in the macro imaging mode for visible light reception.
This has an advantageous effect in that an image of normal quality is displayed on the display unit while the right-eye camera can receive light communication data from a plurality of separate light sources that are distant from each other.
Here, an example of application of audio synchronous reproduction is described below.
A receiver 1800a such as a smartphone receives a signal (a visible light signal) transmitted from a transmitter 1800b such as a street digital signage. This means that the receiver 1800a receives a timing of image reproduction performed by the transmitter 1800b. The receiver 1800a reproduces audio at the same timing as the image reproduction. In other words, in order that an image and audio reproduced by the transmitter 1800b are synchronized, the receiver 1800a performs synchronous reproduction of the audio. Note that the receiver 1800a may reproduce, together with the audio, the same image as the image reproduced by the transmitter 1800b (the reproduced image), or a related image that is related to the reproduced image. Furthermore, the receiver 1800a may cause a device connected to the receiver 1800a to reproduce audio, etc. Furthermore, after receiving a visible light signal, the receiver 1800a may download, from the server, content such as the audio or related image associated with the visible light signal. The receiver 1800a performs synchronous reproduction after the downloading.
This allows a user to hear audio that matches what is displayed by the transmitter 1800b, even when the audio from the transmitter 1800b is inaudible or when audio is not reproduced by the transmitter 1800b because audio reproduction on the street is prohibited. Furthermore, audio matching what is displayed can be heard even at such a distance that sound would take noticeable time to arrive.
Here, multilingualization of audio synchronous reproduction is described below.
Each of the receiver 1800a and a receiver 1800c obtains, from the server, audio that is in the language preset in the receiver itself and corresponds, for example, to images, such as a movie, displayed on the transmitter 1800d, and reproduces the audio. Specifically, the transmitter 1800d transmits, to the receiver, a visible light signal indicating an ID for identifying an image that is being displayed. The receiver receives the visible light signal and then transmits, to the server, a request signal including the ID indicated by the visible light signal and a language preset in the receiver itself. The receiver obtains audio corresponding to the request signal from the server, and reproduces the audio. This allows a user to enjoy a piece of work displayed on the transmitter 1800d, in the language preset by the user themselves.
Here, an audio synchronization method is described below.
Mutually different data items (for example, data 1 to data 6 in the corresponding figure) are transmitted at mutually different time points.
It is desirable that the packets including the IDs be different from one another; therefore, it is desirable that the IDs not be continuous. Alternatively, in packetizing the IDs, it is desirable to adopt a packetizing method in which non-continuous parts are included in one packet. Since an error correction signal tends to have a different pattern even for continuous IDs, the error correction signals may be dispersed over plural packets instead of being included together in one packet.
The transmitter 1800d transmits an ID at a point of time at which an image that is being displayed is reproduced, for example. The receiver is capable of recognizing a reproduction time point (a synchronization time point) of an image displayed on the transmitter 1800d, by detecting a timing at which the ID is changed.
In the case of (a) of the figure, the point of time at which the ID changes from ID:1 to ID:2 is received, with the result that the synchronization time point can be recognized accurately.
When the duration N for which one ID is transmitted is long, such an occasion is rare, and there are cases where the IDs are received as in (b) of the figure. Even in such cases, the synchronization time point can be recognized by the following methods.
(b1) The midpoint of the reception section in which the ID changes is assumed to be an ID change point. Furthermore, a time point an integer multiple of the duration N after an ID change point estimated in the past is also estimated to be an ID change point, and the midpoint of the plural estimated ID change points is taken as a more accurate ID change point. With such an estimation algorithm, an accurate ID change point can be estimated gradually.
(b2) In addition to the above condition, it is assumed that no ID change point is included in a reception section in which the ID does not change, nor at time points an integer multiple of the duration N after such a section. This gradually narrows the sections that can include an ID change point, so that an accurate ID change point can be estimated.
When N is set to 0.5 seconds or less, the synchronization can be accurate.
When N is set to 2 seconds or less, the synchronization can be performed without a user feeling a delay.
When N is set to 10 seconds or less, the synchronization can be performed while ID waste is reduced.
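Method (b1) can be sketched as below; taking the arithmetic mean of the folded midpoints as the "midpoint of plural ID change points" is one reading of the text, and the sketch assumes the estimates do not straddle the wrap-around at a multiple of N.

    def refine_change_point(observed_sections, duration_n):
        # observed_sections: (start, end) reception sections in which the ID
        # was observed to change. Returns the estimated change point phase
        # within one duration N.
        phases = []
        for start, end in observed_sections:
            midpoint = (start + end) / 2.0          # (b1): midpoint of the section
            phases.append(midpoint % duration_n)    # fold by integer multiples of N
        # The mean of the folded estimates converges gradually to the true
        # ID change point as more sections are observed.
        return sum(phases) / len(phases)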
In this embodiment, the visible light signal indicates the time point at which the visible light signal is transmitted from the transmitter 1800d, by including second information (time packet 2) indicating the hour and the minute of the time point and first information (time packet 1) indicating the second of the time point. The receiver 1800a receives the second information, and receives the first information a greater number of times than the total number of times the second information is received.
Here, synchronization time point adjustment is described below.
After a signal is transmitted, a certain amount of time is needed before audio or video is reproduced as a result of processing on the signal in the receiver 1800a. Therefore, this processing time is taken into consideration in performing a process of reproducing audio or video so that synchronous reproduction can be accurately performed.
First, a processing delay time is selected in the receiver 1800a (Step S1801). This value may be held in the processing program or may be selected by a user. When a user corrects the value, more accurate synchronization can be realized for each receiver. The processing delay time may also be changed for each model of receiver, or according to the temperature or the CPU usage rate of the receiver, so that the synchronization is performed more accurately.
The receiver 1800a determines whether or not a time packet has been received, or whether or not an ID associated with audio synchronization has been received (Step S1802). When the receiver 1800a determines that either of these has been received (Step S1802: Y), the receiver 1800a further determines whether or not there is any backlogged image (Step S1804). When the receiver 1800a determines that there is a backlogged image (Step S1804: Y), the receiver 1800a discards the backlogged image, or postpones processing of the backlogged image and starts the reception process from the most recently obtained image (Step S1805). With this, unexpected delay due to a backlog can be avoided.
The receiver 1800a performs measurement to find out a position of the visible light signal (specifically, a bright line) in an image (Step S1806). More specifically, in relation to the first exposure line in the image sensor, a position where the signal appears in a direction perpendicular to the exposure lines is found by measurement, to calculate a difference in time between a point of time at which image obtainment starts and a point of time at which the signal is received (intra-image delay time).
The receiver 1800a is capable of accurately performing synchronous reproduction by reproducing audio or video belonging to a time point determined by adding processing delay time and intra-image delay time to the recognized synchronization time point (Step S1807).
When the receiver 1800a determines in Step S1802 that the time packet or audio synchronous ID has not been received, the receiver 1800a receives a signal from a captured image (Step S1803).
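Steps S1801 to S1807 amount to the following computation of the reproduction position; the argument names are illustrative, and deriving the intra-image delay from the signal's line position relative to the frame readout is one way to realize the measurement in Step S1806.

    def playback_position(sync_time_point, processing_delay,
                          signal_line_index, line_count, frame_readout_time):
        # Step S1806: intra-image delay, i.e. the time from the start of
        # image obtainment until the signal is received, estimated from the
        # signal's position perpendicular to the exposure lines.
        intra_image_delay = (signal_line_index / line_count) * frame_readout_time
        # Step S1807: reproduce the audio/video belonging to the time point
        # obtained by adding both delays to the recognized synchronization point.
        return sync_time_point + processing_delay + intra_image_delay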
Next, reproduction by earphone limitation is described below.
The reproduction by earphone limitation in this process flow makes it possible to reproduce audio without disturbing others nearby.
The receiver 1800a checks whether or not the setting for earphone limitation is ON (Step S1811). The earphone limitation may be set in the receiver 1800a itself, for example. Alternatively, the received signal (visible light signal) may include the setting for earphone limitation, or information indicating that the earphone limitation is ON may be recorded in the server or the receiver 1800a in association with the received signal.
When the receiver 1800a confirms that the earphone limitation is ON (Step S1811: Y), the receiver 1800a determines whether or not an earphone is connected to the receiver 1800a (Step S1813).
When the receiver 1800a confirms that the earphone limitation is OFF (Step S1811: N) or determines that an earphone is connected (Step S1813: Y), the receiver 1800a reproduces audio (Step S1812). Upon reproducing audio, the receiver 1800a adjusts a volume of the audio so that the volume is within a preset range. This preset range is set in the same manner as with the setting for earphone limitation.
When the receiver 1800a determines that no earphone is connected (Step S1813: N), the receiver 1800a issues notification prompting a user to connect an earphone (Step S1814). This notification is issued in the form of, for example, an indication on the display, audio output, or vibration.
Furthermore, when a setting which prohibits forced audio playback has not been made, the receiver 1800a prepares an interface for forced playback, and determines whether or not a user has made an input for forced playback (Step S1815). Here, when the receiver 1800a determines that a user has made an input for forced playback (Step S1815: Y), the receiver 1800a reproduces audio even when no earphone is connected (Step S1812).
When the receiver 1800a determines that a user has not made an input for forced playback (Step S1815: N), the receiver 1800a holds previously received audio data and an analyzed synchronization time point, so as to perform synchronous audio reproduction immediately after an earphone is connected thereto.
The receiver 1800a first receives an ID from the transmitter 1800d (Step S1821). Specifically, the receiver 1800a receives a visible light signal indicating an ID of the transmitter 1800d or an ID of content that is being displayed on the transmitter 1800d.
Next, the receiver 1800a downloads, from the server, information (content) associated with the received ID (Step S1822). Alternatively, the receiver 1800a reads the information from a data holding unit included in the receiver 1800a. Hereinafter, this information is referred to as related information.
Next, the receiver 1800a determines whether or not a synchronous reproduction flag included in the related information represents ON (Step S1823). When the receiver 1800a determines that the synchronous reproduction flag does not represent ON (Step S1823: N), the receiver 1800a outputs content indicated in the related information (Step S1824). Specifically, when the content is an image, the receiver 1800a displays the image, and when the content is audio, the receiver 1800a outputs the audio.
When the receiver 1800a determines that the synchronous reproduction flag represents ON (Step S1823: Y), the receiver 1800a further determines whether a clock setting mode included in the related information has been set to a transmitter-based mode or an absolute-time mode (Step S1825). When the receiver 1800a determines that the clock setting mode has been set to the absolute-time mode, the receiver 1800a determines whether or not the last clock setting has been performed within a predetermined time before the current time point (Step S1826). This clock setting is a process of obtaining clock information by a predetermined method and setting time of a clock included in the receiver 1800a to the absolute time of a reference clock using the clock information. The predetermined method is, for example, a method using global positioning system (GPS) radio waves or network time protocol (NTP) radio waves. Note that the above-mentioned current time point may be a point of time at which a terminal device, that is, the receiver 1800a, received a visible light signal.
When the receiver 1800a determines that the last clock setting has been performed within the predetermined time (Step S1826: Y), the receiver 1800a outputs the related information based on the time of its own clock, thereby synchronizing the related information with the content displayed on the transmitter 1800d (Step S1827). When the content indicated in the related information is, for example, moving images, the receiver 1800a displays the moving images in synchronization with the content displayed on the transmitter 1800d. When the content indicated in the related information is, for example, audio, the receiver 1800a outputs the audio in synchronization with the content displayed on the transmitter 1800d. For example, when the related information indicates audio, the related information includes the frames constituting the audio, and each of these frames is assigned a time stamp. The receiver 1800a outputs the audio in synchronization with the content from the transmitter 1800d by reproducing each frame at the time of its own clock corresponding to the frame's time stamp.
When the receiver 1800a determines that the last clock setting has not been performed within the predetermined time (Step S1826: N), the receiver 1800a attempts to obtain clock information by a predetermined method, and determines whether or not the clock information has been successfully obtained (Step S1828). When the receiver 1800a determines that the clock information has been successfully obtained (Step S1828: Y), the receiver 1800a updates time of the clock of the receiver 1800a using the clock information (Step S1829). The receiver 1800a then performs the above-described process in Step S1827.
Furthermore, when the receiver 1800a determines in Step S1825 that the clock setting mode is the transmitter-based mode, or when the receiver 1800a determines in Step S1828 that the clock information has not been successfully obtained (Step S1828: N), the receiver 1800a obtains clock information from the transmitter 1800d (Step S1830). Specifically, the receiver 1800a obtains a synchronization signal, that is, clock information, from the transmitter 1800d by visible light communication. The synchronization signal is, for example, the time packet 1 and the time packet 2 described above.
In this embodiment, as in Step S1829 and Step S1830, when a point of time at which the process for synchronizing the clock of the terminal device, i.e., the receiver 1800a, with the reference clock (the clock setting) is performed using GPS radio waves or NTP radio waves is at least a predetermined time before a point of time at which the terminal device receives a visible light signal, the clock of the terminal device is synchronized with the clock of the transmitter using a time point indicated in the visible light signal transmitted from the transmitter 1800d. With this, the terminal device is capable of reproducing content (video or audio) at a timing of synchronization with transmitter-side content that is reproduced on the transmitter 1800d.
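The clock-source selection in Steps S1825 to S1830 can be sketched as follows; the callables for obtaining GPS/NTP time and transmitter time are placeholders for the mechanisms described above.

    import time

    def resolve_reference_time(clock_setting_mode, last_clock_set_at, max_age_s,
                               get_gps_or_ntp_time, get_transmitter_time):
        # clock_setting_mode: 'absolute' or 'transmitter' (Step S1825).
        if clock_setting_mode == 'absolute':
            # Step S1826: was the last clock setting recent enough?
            if time.time() - last_clock_set_at <= max_age_s:
                return time.time()                 # trust own clock (Step S1827)
            clock = get_gps_or_ntp_time()          # Step S1828
            if clock is not None:
                return clock                       # Step S1829: update and use
        # Transmitter-based mode, or absolute time unavailable (Step S1828: N):
        # obtain the clock from the transmitter by visible light (Step S1830),
        # e.g. from the time packets 1 and 2.
        return get_transmitter_time()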
(Method a)
In the method a, the transmitter 1800d outputs a visible light signal indicating a content ID and an ongoing content reproduction time point, by changing the luminance of the display as in the above embodiments. The ongoing content reproduction time point is the reproduction time point of the data that is part of the content and is being reproduced by the transmitter 1800d when the content ID is transmitted. When the content is video, the data is a picture, a sequence, or the like included in the video. When the content is audio, the data is a frame or the like included in the audio. The reproduction time point indicates, for example, the time elapsed from the beginning of the content. When the content is video, the reproduction time point is included in the content as a presentation time stamp (PTS). This means that the content includes, for each data item it contains, the reproduction time point (the display time point) of that data item.
The receiver 1800a receives the visible light signal by capturing an image of the transmitter 1800d as in the case of the above embodiments. The receiver 1800a then transmits to a server 1800f a request signal including the content ID indicated in the visible light signal. The server 1800f receives the request signal and transmits, to the receiver 1800a, content that is associated with the content ID included in the request signal.
The receiver 1800a receives the content and reproduces the content from a point of time of (the ongoing content reproduction time point+elapsed time since ID reception). The elapsed time since ID reception is time elapsed since the content ID is received by the receiver 1800a.
(Method b)
In the method b, the transmitter 1800d outputs a visible light signal indicating a content ID and an ongoing content reproduction time point, by changing luminance of the display as in the case of the above embodiments. The receiver 1800a receives the visible light signal by capturing an image of the transmitter 1800d as in the case of the above embodiments. The receiver 1800a then transmits to the server 1800f a request signal including the content ID and the ongoing content reproduction time point indicated in the visible light signal. The server 1800f receives the request signal and transmits, to the receiver 1800a, only partial content belonging to a time point on and after the ongoing content reproduction time point, among content that is associated with the content ID included in the request signal.
The receiver 1800a receives the partial content and reproduces the partial content from a point of time of (elapsed time since ID reception).
(Method c)
In the method c, the transmitter 1800d outputs a visible light signal indicating a transmitter ID and an ongoing content reproduction time point, by changing luminance of the display as in the case of the above embodiments. The transmitter ID is information for identifying a transmitter.
The receiver 1800a receives the visible light signal by capturing an image of the transmitter 1800d as in the case of the above embodiments. The receiver 1800a then transmits to the server 1800f a request signal including the transmitter ID indicated in the visible light signal.
The server 1800f holds, for each transmitter ID, a reproduction schedule which is a time table of content to be reproduced by a transmitter having the transmitter ID. Furthermore, the server 1800f includes a clock. The server 1800f receives the request signal and refers to the reproduction schedule to identify, as content that is being reproduced, content that is associated with the transmitter ID included in the request signal and time of the clock of the server 1800f (a server time point). The server 1800f then transmits the content to the receiver 1800a.
The receiver 1800a receives the content and reproduces the content from a point of time of (the ongoing content reproduction time point+elapsed time since ID reception).
(Method d)
In the method d, the transmitter 1800d outputs a visible light signal indicating a transmitter ID and a transmitter time point, by changing luminance of the display as in the case of the above embodiments. The transmitter time point is time indicated by the clock included in the transmitter 1800d.
The receiver 1800a receives the visible light signal by capturing an image of the transmitter 1800d as in the case of the above embodiments. The receiver 1800a then transmits to the server 1800f a request signal including the transmitter ID and the transmitter time point indicated in the visible light signal.
The server 1800f holds the above-described reproduction schedule. The server 1800f receives the request signal and refers to the reproduction schedule to identify, as the content that is being reproduced, the content associated with the transmitter ID and the transmitter time point included in the request signal. Furthermore, the server 1800f identifies the ongoing content reproduction time point based on the transmitter time point. Specifically, the server 1800f finds the reproduction start time point of the identified content from the reproduction schedule, and identifies, as the ongoing content reproduction time point, the time elapsed from the reproduction start time point to the transmitter time point. The server 1800f then transmits the content and the ongoing content reproduction time point to the receiver 1800a.
The receiver 1800a receives the content and the ongoing content reproduction time point, and reproduces the content from a point of time of (the ongoing content reproduction time point+elapsed time since ID reception).
Thus, in this embodiment, the visible light signal indicates a time point at which the visible light signal is transmitted from the transmitter 1800d. Therefore, the terminal device, i.e., the receiver 1800a, is capable of receiving content associated with a time point at which the visible light signal is transmitted from the transmitter 1800d (the transmitter time point). For example, when the transmitter time point is 5:43, content that is reproduced at 5:43 can be received.
Furthermore, in this embodiment, the server 1800f has a plurality of content items associated with respective time points. However, there is a case where the content associated with the time point indicated in the visible light signal is not present. In this case, the terminal device, i.e., the receiver 1800a, may receive, among the plurality of content items, content associated with a time point that is closest to the time point indicated in the visible light signal and after the time point indicated in the visible light signal. This makes it possible to receive appropriate content among the plurality of content items in the server 1800f even when content associated with a time point indicated in the visible light signal is not present.
Furthermore, a reproduction method in this embodiment includes: receiving a visible light signal by a sensor of a receiver 1800a (the terminal device) from the transmitter 1800d which transmits the visible light signal by a light source changing in luminance; transmitting a request signal for requesting content associated with the visible light signal, from the receiver 1800a to the server 1800f; receiving, by the receiver 1800a, the content from the server 1800f; and reproducing the content. The visible light signal indicates a transmitter ID and a transmitter time point. The transmitter ID is ID information. The transmitter time point is time indicated by the clock of the transmitter 1800d and is a point of time at which the visible light signal is transmitted from the transmitter 1800d. In the receiving of content, the receiver 1800a receives content associated with the transmitter ID and the transmitter time point indicated in the visible light signal. This allows the receiver 1800a to reproduce appropriate content for the transmitter ID and the transmitter time point.
(Method e)
In the method e, the transmitter 1800d outputs a visible light signal indicating a transmitter ID, by changing luminance of the display as in the case of the above embodiments.
The receiver 1800a receives the visible light signal by capturing an image of the transmitter 1800d as in the case of the above embodiments. The receiver 1800a then transmits to the server 1800f a request signal including the transmitter ID indicated in the visible light signal.
The server 1800f holds the above-described reproduction schedule, and further includes a clock. The server 1800f receives the request signal and refers to the reproduction schedule to identify, as content that is being reproduced, content that is associated with the transmitter ID included in the request signal and a server time point. Note that the server time point is time indicated by the clock of the server 1800f. Furthermore, the server 1800f finds a reproduction start time point of the identified content from the reproduction schedule as well. The server 1800f then transmits the content and the content reproduction start time point to the receiver 1800a.
The receiver 1800a receives the content and the content reproduction start time point, and reproduces the content from a point of time of (a receiver time point−the content reproduction start time point). Note that the receiver time point is time indicated by a clock included in the receiver 1800a.
Thus, a reproduction method in this embodiment includes: receiving a visible light signal by a sensor of the receiver 1800a (the terminal device) from the transmitter 1800d which transmits the visible light signal by a light source changing in luminance; transmitting a request signal for requesting content associated with the visible light signal, from the receiver 1800a to the server 1800f; receiving, by the receiver 1800a, content including time points and data to be reproduced at the time points, from the server 1800f; and reproducing data included in the content and corresponding to time of a clock included in the receiver 1800a. Therefore, the receiver 1800a avoids reproducing data included in the content, at an incorrect point of time, and is capable of appropriately reproducing the data at a correct point of time indicated in the content. Furthermore, when content related to the above content (the transmitter-side content) is also reproduced on the transmitter 1800d, the receiver 1800a is capable of appropriately reproducing the content in synchronization with the transmitter-side content.
Note that even in the above methods c to e, the server 1800f may transmit, among the content, only partial content belonging to a time point on and after the ongoing content reproduction time point to the receiver 1800a as in method b.
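As a concrete illustration, the playback positions used in the above methods reduce to simple arithmetic. The following is a minimal Python sketch; the function and variable names are assumptions introduced here, not part of the embodiments.

    # Method b: the partial content already begins at the ongoing reproduction
    # time point, so the receiver resumes at (elapsed time since ID reception).
    def start_position_methods_c_d(ongoing_time_s, elapsed_since_id_s):
        # Methods c and d: reproduce from
        # (ongoing content reproduction time point + elapsed time since ID reception).
        return ongoing_time_s + elapsed_since_id_s

    def start_position_method_e(receiver_time_s, reproduction_start_s):
        # Method e: reproduce from
        # (receiver time point - content reproduction start time point).
        return receiver_time_s - reproduction_start_s

    print(start_position_methods_c_d(120.0, 2.5))   # resume 122.5 s into the content
    print(start_position_method_e(5400.0, 5280.0))  # resume 120.0 s into the content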
Furthermore, in the above methods a to e, the receiver 1800a transmits the request signal to the server 1800f and receives necessary data from the server 1800f, but may skip such transmission and reception by holding in advance the data from the server 1800f.
A reproduction apparatus B10 is the receiver 1800a or the terminal device which performs synchronous reproduction in the above-described method e, and includes a sensor B11, a request signal transmitting unit B12, a content receiving unit B13, a clock B14, and a reproduction unit B15.
The sensor B11 is, for example, an image sensor, and receives a visible light signal from the transmitter 1800d which transmits the visible light signal by the light source changing in luminance. The request signal transmitting unit B12 transmits to the server 1800f a request signal for requesting content associated with the visible light signal. The content receiving unit B13 receives from the server 1800f content including time points and data to be reproduced at the time points. The reproduction unit B15 reproduces data included in the content and corresponding to time of the clock B14.
The reproduction apparatus B10 is the receiver 1800a or the terminal device which performs synchronous reproduction in the above-described method e, and performs processes in Step SB11 to Step SB15.
In Step SB11, a visible light signal is received from the transmitter 1800d which transmits the visible light signal by the light source changing in luminance. In Step SB12, a request signal for requesting content associated with the visible light signal is transmitted to the server 1800f. In Step SB13, content including time points and data to be reproduced at the time points is received from the server 1800f. In Step SB15, data included in the content and corresponding to time of the clock B14 is reproduced.
Thus, in the reproduction apparatus B10 and the reproduction method in this embodiment, data in the content is not reproduced at an incorrect time point and is able to be appropriately reproduced at a correct time point indicated in the content.
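As an illustration of Step SB15, the content received in Step SB13 can be modeled as (time point, data) pairs, from which the reproduction unit B15 picks the data corresponding to the time of the clock B14. The following Python sketch assumes this representation; the payloads are hypothetical placeholders.

    import bisect

    # Content as received in Step SB13: (time point, data) pairs sorted by time point.
    content = [
        (10.0, "chunk A"),
        (20.0, "chunk B"),
        (30.0, "chunk C"),
    ]

    def data_for_clock(now_s):
        """Step SB15: reproduce the data whose time point has most recently been
        reached according to the clock B14."""
        times = [t for t, _ in content]
        i = bisect.bisect_right(times, now_s) - 1
        return content[i][1] if i >= 0 else None

    print(data_for_clock(24.5))  # -> 'chunk B'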
Note that in this embodiment, each of the constituent elements may be constituted by dedicated hardware, or may be realized by executing a software program suitable for the constituent element. Each constituent element may be realized by a program execution unit such as a CPU or a processor reading and executing a software program stored in a recording medium such as a hard disk or a semiconductor memory. Software which implements the reproduction apparatus B10, etc., in this embodiment is a program which causes a computer to execute the steps included in the flowchart illustrated in
The receiver 1800a performs, in order for synchronous reproduction, clock setting for setting a clock included in the receiver 1800a to time of the reference clock. The receiver 1800a performs the following processes (1) to (5) for this clock setting.
(1) The receiver 1800a receives a signal. This signal may be a visible light signal transmitted by the display of the transmitter 1800d changing in luminance or may be a radio signal from a wireless device via Wi-Fi or Bluetooth®. Alternatively, instead of receiving such a signal, the receiver 1800a obtains position information indicating a position of the receiver 1800a, for example, by GPS or the like. Using the position information, the receiver 1800a then recognizes that the receiver 1800a entered a predetermined place or building.
(2) When the receiver 1800a receives the above signal or recognizes that the receiver 1800a entered the predetermined place, the receiver 1800a transmits to the server (visible light ID solution server) 1800f a request signal for requesting data related to the received signal, place or the like (related information).
(3) The server 1800f transmits to the receiver 1800a the above-described data and a clock setting request for causing the receiver 1800a to perform the clock setting.
(4) The receiver 1800a receives the data and the clock setting request and transmits the clock setting request to a GPS time server, an NTP server, or a base station of a telecommunication corporation (carrier).
(5) The above server or base station receives the clock setting request and transmits to the receiver 1800a clock data (clock information) indicating a current time point (time of the reference clock or absolute time). The receiver 1800a performs the clock setting by setting time of a clock included in the receiver 1800a itself to the current time point indicated in the clock data.
Thus, in this embodiment, the clock included in the receiver 1800a (the terminal device) is synchronized with the reference clock by global positioning system (GPS) radio waves or network time protocol (NTP) radio waves. Therefore, the receiver 1800a is capable of reproducing, at an appropriate time point according to the reference clock, data corresponding to the time point.
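As an illustration of the clock setting in the above processes (1) to (5), the following minimal Python sketch queries an NTP server over the network, a network-based counterpart of the clock information obtained in process (5); the server name and timeout are assumptions.

    import socket
    import struct
    import time

    NTP_UNIX_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

    def ntp_time(server="pool.ntp.org"):
        """Minimal SNTP client: returns the server's transmit time as a Unix timestamp."""
        request = b"\x1b" + 47 * b"\x00"  # LI=0, version 3, mode 3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(5.0)
            sock.sendto(request, (server, 123))
            response, _ = sock.recvfrom(48)
        seconds = struct.unpack("!I", response[40:44])[0]  # transmit timestamp, integer part
        return seconds - NTP_UNIX_DELTA

    # Clock setting: the correction to apply to the local clock.
    offset = ntp_time() - time.time()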
The receiver 1800a is configured as a smartphone as described above, and is used, for example, by being held by a holder 1810 formed of a translucent material such as resin or glass. This holder 1810 includes a back board 1810a and an engagement portion 1810b standing on the back board 1810a. The receiver 1800a is inserted into a gap between the back board 1810a and the engagement portion 1810b in such a way as to be placed along the back board 1810a.
The receiver 1800a is inserted as described above and held by the holder 1810. At this time, the engagement portion 1810b engages with a lower portion of the receiver 1800a, and the lower portion is sandwiched between the engagement portion 1810b and the back board 1810a. The back surface of the receiver 1800a faces the back board 1810a, and a display 1801 of the receiver 1800a is exposed.
The back board 1810a has a through-hole 1811, and a variable filter 1812 is attached to the back board 1810a, at a position close to the through-hole 1811. A camera 1802 of the receiver 1800a which is being held by the holder 1810 is exposed on the back board 1810a through the through-hole 1811. A flash light 1803 of the receiver 1800a faces the variable filter 1812.
The variable filter 1812 is, for example, in the shape of a disc, and includes three color filters (a red filter, a yellow filter, and a green filter) each having the shape of a circular sector of the same size. The variable filter 1812 is attached to the back board 1810a in such a way as to be rotatable about the center of the variable filter 1812. The red filter is a translucent filter of a red color, the yellow filter is a translucent filter of a yellow color, and the green filter is a translucent filter of a green color.
Therefore, the variable filter 1812 is rotated, for example, until the red filter is at a position facing the flash light 1803. In this case, light radiated from the flash light 1803 passes through the red filter, thereby being spread as red light inside the holder 1810. As a result, roughly the entire holder 1810 glows red.
Likewise, the variable filter 1812 is rotated, for example, until the yellow filter is at a position facing the flash light 1803. In this case, light radiated from the flash light 1803 passes through the yellow filter, thereby being spread as yellow light inside the holder 1810. As a result, roughly the entire holder 1810 glows yellow.
Likewise, the variable filter 1812 is rotated, for example, until the green filter is at a position facing the flash light 1803. In this case, light radiated from the flash light 1803 passes through the green filter, thereby being spread as green light inside the holder 1810. As a result, roughly the entire holder 1810 glows green.
This means that the holder 1810 lights up in red, yellow, or green just like a penlight.
For example, the receiver 1800a held by the holder 1810, namely, a holder-attached receiver, can be used in amusement parks and so on. Specifically, a plurality of holder-attached receivers directed to a float moving in an amusement park blink to music from the float in synchronization. This means that the float is configured as the transmitter in the above embodiments and transmits a visible light signal by the light source attached to the float changing in luminance. For example, the float transmits a visible light signal indicating the ID of the float. The holder-attached receiver then receives the visible light signal, that is, the ID, by capturing an image by the camera 1802 of the receiver 1800a as in the case of the above embodiments. The receiver 1800a which received the ID obtains, for example, from the server, a program associated with the ID. This program includes an instruction to turn ON the flash light 1803 of the receiver 1800a at predetermined time points. These predetermined time points are set according to music from the float (so as to be in synchronization therewith). The receiver 1800a then causes the flash light 1803 to blink according to the program.
With this, the holder 1810 for each receiver 1800a which received the ID repeatedly lights up at the same timing according to music from the float having the ID.
Each receiver 1800a causes the flash light 1803 to blink according to a preset color filter (hereinafter referred to as a preset filter). The preset filter is a color filter that faces the flash light 1803 of the receiver 1800a. Furthermore, each receiver 1800a recognizes the current preset filter based on an input by a user. Alternatively, each receiver 1800a recognizes the current preset filter based on, for example, the color of an image captured by the camera 1802.
Specifically, at a predetermined time point, only the holders 1810 for the receivers 1800a which have recognized that the preset filter is a red filter among the receivers 1800a which received the ID light up at the same time. At the next time point, only the holders 1810 for the receivers 1800a which have recognized that the preset filter is a green filter light up at the same time. Further, at the next time point, only the holders 1810 for the receivers 1800a which have recognized that the preset filter is a yellow filter light up at the same time.
Thus, the receiver 1800a held by the holder 1810 causes the flash light 1803, that is, the holder 1810, to blink in synchronization with music from the float and the receiver 1800a held by another holder 1810, as in the above-described case of synchronous reproduction illustrated in
The receiver 1800a receives an ID of a float indicated by a visible light signal from the float (Step S1831). Next, the receiver 1800a obtains a program associated with the ID from the server (Step S1832). Next, the receiver 1800a causes the flash light 1803 to be turned ON at predetermined time points according to the preset filter by executing the program (Step S1833).
At this time, the receiver 1800a may display, on the display 1801, an image according to the received ID or the obtained program.
The receiver 1800a receives an ID, for example, from a Santa Claus float, and displays an image of Santa Claus as illustrated in (a) of
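As an illustration of Steps S1831 to S1833, the program obtained from the server can be modeled as a list of (time offset, color) events timed to the music from the float, with each receiver executing only the events matching its preset filter. The following Python sketch makes these assumptions; the event list and function names are hypothetical.

    import time

    # Hypothetical program obtained in Step S1832: (seconds from start, color)
    # events timed to the music from the float.
    PROGRAM = [(0.0, "red"), (0.5, "green"), (1.0, "yellow"), (1.5, "red")]

    def run_program(preset_filter, flash_on, flash_off, pulse_s=0.2):
        """Step S1833: turn the flash light ON at the predetermined time points
        that match the preset filter."""
        start = time.monotonic()
        for offset, color in PROGRAM:
            if color != preset_filter:
                continue  # holders with another preset filter light up at this time point
            time.sleep(max(0.0, start + offset - time.monotonic()))
            flash_on()
            time.sleep(pulse_s)
            flash_off()

    run_program("red", lambda: print("flash ON"), lambda: print("flash OFF"))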
A holder 1820 is configured in the same manner as the above-described holder 1810 except for the absence of the through-hole 1811 and the variable filter 1812. The holder 1820 holds the receiver 1800a with a back board 1820a facing the display 1801 of the receiver 1800a. In this case, the receiver 1800a causes the display 1801 to emit light instead of the flash light 1803. With this, light from the display 1801 spreads across roughly the entire holder 1820. Therefore, when the receiver 1800a causes the display 1801 to emit red light according to the above-described program, the holder 1820 glows red. Likewise, when the receiver 1800a causes the display 1801 to emit yellow light according to the above-described program, the holder 1820 glows yellow. When the receiver 1800a causes the display 1801 to emit green light according to the above-described program, the holder 1820 glows green. With the use of the holder 1820 such as that just described, it is possible to omit the settings for the variable filter 1812.
(Visible Light Signal)
The transmitter generates a 4 PPM visible light signal and changes in luminance according to this visible light signal, for example, as illustrated in
Furthermore, the transmitter may generate a visible light signal in which the number of slots allocated to one signal unit is variable as illustrated in
The transmitter may allocate an arbitrary period (signal unit period) to one signal unit without allocating a plurality of slots to one signal unit as illustrated in
The transmitter may generate, as a visible light signal, a signal indicating L and H alternately as illustrated in
The visible light signal includes, for example, a signal 1, a brightness adjustment signal corresponding to the signal 1, a signal 2, and a brightness adjustment signal corresponding to the signal 2. The transmitter generates the signal 1 and the signal 2 by modulating the signal which has not yet been modulated, and generates the brightness adjustment signals corresponding to these signals, thereby generating the above-described visible light signal.
The brightness adjustment signal corresponding to the signal 1 is a signal which compensates for brightness increased or decreased due to a change in luminance according to the signal 1. The brightness adjustment signal corresponding to the signal 2 is a signal which compensates for brightness increased or decreased due to a change in luminance according to the signal 2. A change in luminance according to the signal 1 and the brightness adjustment signal corresponding to the signal 1 represents brightness B1, and a change in luminance according to the signal 2 and the brightness adjustment signal corresponding to the signal 2 represents brightness B2. The transmitter in this embodiment generates the brightness adjustment signal corresponding to each of the signal 1 and the signal 2 as a part of the visible light signal in such a way that the brightness B1 and the brightness B2 are equal. With this, brightness is kept at a constant level so that flicker can be reduced.
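As an illustration of this compensation, treating brightness as an average duty ratio, the duty ratio of the brightness adjustment signal can be chosen so that the signal plus its adjustment period averages out to a common target brightness. The following Python sketch assumes this duty-ratio model, which is an interpretation introduced here.

    def adjustment_duty(signal_duty, target_duty, signal_len, adjust_len):
        """Duty ratio of the brightness adjustment signal so that the average
        brightness over (signal + adjustment period) equals target_duty."""
        total_on = target_duty * (signal_len + adjust_len)
        duty = (total_on - signal_duty * signal_len) / adjust_len
        if not 0.0 <= duty <= 1.0:
            raise ValueError("target brightness not reachable with this adjustment length")
        return duty

    # Signal 1 is brighter and signal 2 darker than the common brightness 0.25;
    # each gets a compensating adjustment so that B1 = B2 = 0.25.
    print(adjustment_duty(0.30, 0.25, signal_len=8, adjust_len=4))  # 0.15
    print(adjustment_duty(0.20, 0.25, signal_len=8, adjust_len=4))  # 0.35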
When generating the above-described signal 1, the transmitter generates a signal 1 including data 1, a preamble (header) subsequent to the data 1, and data 1 subsequent to the preamble. The preamble is a signal corresponding to the data 1 located before and after the preamble. For example, this preamble is a signal serving as an identifier for reading the data 1. Thus, since the signal 1 includes two instances of the data 1 with the preamble located between them, the receiver is capable of properly demodulating the data 1 (that is, the signal 1) even when the receiver starts reading the visible light signal partway through the first instance of the data 1.
(Bright Line Image)
As described above, the receiver captures an image of a transmitter changing in luminance, to obtain a bright line image including, as a bright line pattern, a visible light signal transmitted from the transmitter. The visible light signal is received by the receiver through such imaging.
For example, the receiver captures an image at time t1 using N exposure lines included in the image sensor, obtaining a bright line image including a region a and a region b in each of which a bright line pattern appears as illustrated in
The receiver demodulates the visible light signal based on the bright line patterns in the region a and in the region b. However, when the receiver determines that the demodulated visible light signal alone is not sufficient, the receiver captures an image at time t2 using only M (M<N) continuous exposure lines corresponding to the region a among the N exposure lines. By doing so, the receiver obtains a bright line image including only the region a among the region a and the region b. The receiver repeatedly performs such imaging also at time t3 to time t5. As a result, it is possible to receive the visible light signal having a sufficient data amount from the subject corresponding to the region a at high speed. Furthermore, the receiver captures an image at time t6 using only L (L<N) continuous exposure lines corresponding to the region b among the N exposure lines. By doing so, the receiver obtains a bright line image including only the region b among the region a and the region b. The receiver repeatedly performs such imaging also at time t7 to time t9. As a result, it is possible to receive the visible light signal having a sufficient data amount from the subject corresponding to the region b at high speed.
Furthermore, the receiver may obtain a bright line image including only the region a by performing, at time t10 and time t11, the same or like imaging operation as that performed at time t2 to time t5. Furthermore, the receiver may obtain a bright line image including only the region b by performing, at time t12 and time t13, the same or like imaging operation as that performed at time t6 to time t9.
In the above-described example, when the receiver determines that the visible light signal is not sufficient, the receiver continuously captures the bright line image including only the region a at time t2 to time t5, but this continuous imaging may be performed when a bright line appears in an image captured at time t1. Likewise, when the receiver determines that the visible light signal is not sufficient, the receiver continuously captures the bright line image including only the region b at time t6 to time t9, but this continuous imaging may be performed when a bright line appears in an image captured at time t1. The receiver may alternately obtain a bright line image including only the region a and obtain a bright line image including only the region b.
Note that the M continuous exposure lines corresponding to the above region a are exposure lines which contribute to generation of the region a, and the L continuous exposure lines corresponding to the above region b are exposure lines which contribute to generation of the region b.
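As an illustration of selecting the continuous exposure lines that contribute to a region, the rows of the bright line image spanned by the region can be computed from a detection mask. A minimal sketch in Python using NumPy; the mask representation is an assumption.

    import numpy as np

    def exposure_line_range(region_mask):
        """First and last exposure line (image row) contributing to a region in
        which a bright line pattern appears; region_mask is a boolean image."""
        rows = np.flatnonzero(region_mask.any(axis=1))
        return int(rows[0]), int(rows[-1])  # read out only these lines in the next frames

    mask = np.zeros((8, 6), dtype=bool)
    mask[2:5, 1:4] = True                   # region a spans rows 2..4
    print(exposure_line_range(mask))        # -> (2, 4), i.e., M = 3 continuous lines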
For example, the receiver captures an image at time t1 using N exposure lines included in the image sensor, obtaining a bright line image including a region a and a region b which partially overlap and in each of which a bright line pattern appears as illustrated in
When the receiver determines that the visible light signal demodulated from the bright line patterns in the region a and the region b is not sufficient, the receiver captures an image at time t2 using only P (P<N) continuous exposure lines corresponding to the overlap region among the N exposure lines. By doing so, the receiver obtains a bright line image including only the overlap region between the region a and the region b. The receiver repeatedly performs such imaging also at time t3 and time t4. As a result, it is possible to receive the visible light signals having sufficient data amounts from the subjects corresponding to the region a and the region b at approximately the same time and at high speed.
For example, the receiver captures an image at time t1 using N exposure lines included in the image sensor, obtaining a bright line image including a region made up of an area a where an unclear bright line pattern appears and an area b where a clear bright line pattern appears as illustrated in
In this case, when the receiver determines that the visible light signal demodulated from the bright line pattern in the above-described region is not sufficient, the receiver captures an image at time t2 using only Q (Q<N) continuous exposure lines corresponding to the area b among the N exposure lines. By doing so, the receiver obtains a bright line image including only the area b out of the above-described region. The receiver repeatedly performs such imaging also at time t3 and time t4. As a result, it is possible to receive the visible light signal having a sufficient data amount from the subject corresponding to the above-described region at high speed.
Furthermore, after continuously capturing the bright line image including only the area b, the receiver may further continuously capture a bright line image including only the area a.
When a bright line image includes a plurality of regions (or areas) where a bright line pattern appears as described above, the receiver assigns numbers to the regions in sequence and captures bright line images including only the regions according to the sequence. In this case, the sequence may be determined according to the magnitude of a signal (the size of the region or area) or may be determined according to the clarity level of a bright line. Alternatively, the sequence may be determined according to the color of light from the subjects corresponding to the regions. For example, the first continuous imaging may be performed for the region corresponding to red light, and the next continuous imaging may be performed for the region corresponding to white light. Alternatively, it may also be possible to perform only continuous imaging for the region corresponding to red light.
(HDR Compositing)
A camera system is mounted on a vehicle, for example, in order to prevent collision. This camera system performs high dynamic range (HDR) compositing using an image captured with a camera. This HDR compositing results in an image having a wide luminance dynamic range. The camera system recognizes surrounding vehicles, obstacles, humans or the like based on this image having a wide dynamic range.
For example, the setting mode of the camera system includes a normal setting mode and a communication setting mode. When the setting mode is the normal setting mode, the camera system captures four images at time t1 to time t4 at the same shutter speed of 1/100 seconds and with mutually different sensitivity levels, for example, as illustrated in
When the setting mode is the communication setting mode, the camera system captures three images at time t5 to time t7 at the same shutter speed of 1/100 seconds and with mutually different sensitivity levels, for example, as illustrated in
Furthermore, when the setting mode is the communication setting mode, the camera system is not required to perform the HDR compositing. For example, as illustrated in
Note that the images are captured at time t10 to time t12 with mutually different sensitivity levels in the example illustrated in
A camera system such as that described above is capable of performing the HDR compositing and also is capable of receiving the visible light signal.
(Security)
This visible light communication system includes, for example, a transmitter disposed at a cash register, a smartphone serving as a receiver, and a server. Note that communication between the smartphone and the server and communication between the transmitter and the server are each performed via a secure communication link. Communication between the transmitter and the smartphone is performed by visible light communication. The visible light communication system in this embodiment ensures security by determining whether or not the visible light signal from the transmitter has been properly received by the smartphone.
Specifically, the transmitter transmits a visible light signal indicating, for example, a value “100” to the smartphone by changing in luminance at time t1. At time t2, the smartphone receives the visible light signal and transmits a radio signal indicating the value “100” to the server. At time t3, the server receives the radio signal from the smartphone. At this time, the server performs a process for determining whether or not the value “100” indicated in the radio signal is a value of the visible light signal received by the smartphone from the transmitter. Specifically, the server transmits a radio signal indicating, for example, a value “200” to the transmitter. The transmitter receives the radio signal, and transmits a visible light signal indicating the value “200” to the smartphone by changing in luminance at time t4. At time t5, the smartphone receives the visible light signal and transmits a radio signal indicating the value “200” to the server. At time t6, the server receives the radio signal from the smartphone. The server determines whether or not the value indicated in this received radio signal is the same as the value indicated in the radio signal transmitted at time t3. When the values are the same, the server determines that the value “100” indicated in the radio signal received at time t3 is a value of the visible light signal transmitted from the transmitter and received by the smartphone. When the values are not the same, the server determines that it is doubtful that the value “100” indicated in the radio signal received at time t3 is a value of the visible light signal transmitted from the transmitter and received by the smartphone.
By doing so, the server is capable of determining whether or not the smartphone has certainly received the visible light signal from the transmitter. This means that, when the smartphone has not actually received the visible light signal from the transmitter, the smartphone can be prevented from notifying the server as if it had received the visible light signal.
Note that the communication between the smartphone, the server, and the transmitter is performed using the radio signal in the above-described example, but may be performed using an optical signal other than the visible light signal or using an electrical signal. The visible light signal transmitted from the transmitter to the smartphone indicates, for example, a value of a charged amount, a value of a coupon, a value of a monster, or a value of bingo.
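The exchange at times t1 to t6 amounts to a challenge-response check performed by the server. The following Python sketch models it with in-memory objects; the class names, the random challenge, and the stub transmitter are assumptions introduced for illustration.

    import secrets

    class TransmitterStub:
        """Stand-in for the transmitter at the cash register."""
        def send_visible_light(self, value):
            self.last_sent = value  # emitted by changing in luminance (time t4)

    class Server:
        """Checks that the smartphone really received the visible light signal."""
        def __init__(self, transmitter):
            self.transmitter = transmitter
            self.pending = {}

        def on_report(self, reported_value):
            # t3: a value arrives from the smartphone; challenge the transmitter.
            challenge = secrets.randbelow(1000)
            self.pending[reported_value] = challenge
            self.transmitter.send_visible_light(challenge)

        def on_challenge_reply(self, reported_value, replied_value):
            # t6: genuine only if the smartphone relayed the same challenge value.
            return self.pending.get(reported_value) == replied_value

    tx = TransmitterStub()
    server = Server(tx)
    server.on_report("100")                                # smartphone reported "100"
    print(server.on_challenge_reply("100", tx.last_sent))  # True: reception confirmed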
(Vehicle Relationship)
For example, the leading vehicle recognizes using a sensor (such as a camera) mounted thereon that an accident occurred in a direction of travel. When the leading vehicle recognizes an accident as just described, the leading vehicle transmits a visible light signal by changing luminance of a taillight. For example, the leading vehicle transmits to a rear vehicle a visible light signal that encourages the rear vehicle to slow down. The rear vehicle receives the visible light signal by capturing an image with a camera mounted thereon, and slows down according to the visible light signal and transmits a visible light signal that encourages another rear vehicle to slow down.
Thus, the visible light signal that encourages a vehicle to slow down is transmitted in sequence from the leading vehicle to a plurality of vehicles which travel in line, and a vehicle that received the visible light signal slows down. Transmission of the visible light signal to the vehicles is so fast that these vehicles can slow down almost at the same time. Therefore, congestion due to accidents can be eased.
For example, a front vehicle may change luminance of a taillight thereof to transmit a visible light signal indicating a message (for example, “thanks”) for the rear vehicle. This message is generated by user inputs to a smartphone, for example. The smartphone then transmits a signal indicating the message to the above front vehicle. As a result, the front vehicle is capable of transmitting the visible light signal indicating the message to the rear vehicle.
For example, a headlight of a vehicle includes a plurality of light emitting diodes (LEDs). The transmitter of this vehicle changes luminance of each of the LEDs of the headlight separately, thereby transmitting a visible light signal from each of the LEDs. The receiver of another vehicle receives these visible light signals from the plurality of LEDs by capturing an image of the vehicle having the headlight.
At this time, in order to recognize which LED transmitted the visible light signal that has been received, the receiver determines a position of each of the LEDs based on the captured image. Specifically, using an accelerometer installed on the same vehicle to which the receiver is fitted, the receiver determines a position of each of the LEDs on the basis of a gravity direction indicated by the accelerometer (a downward arrow in
Note that the LED is cited as an example of a light emitter which changes in luminance in the above-described example, but a light emitter other than an LED may be used.
For example, the receiver mounted on a travelling vehicle obtains the bright line image illustrated in
At this time, on the basis of each of the visible light signals transmitted from the two headlights and demodulated, the receiver obtains an ID of the vehicle having the headlights, a speed of the vehicle, and a type of the vehicle. When the IDs of the two visible light signals are the same, the receiver determines that these two visible light signals are signals transmitted from the same vehicle. The receiver then identifies the distance between the two headlights of the vehicle (a headlight-to-headlight distance) based on the type of the vehicle. Furthermore, the receiver measures a distance L1 between the two regions included in the bright line image where the bright line patterns appear. The receiver then calculates a distance between the vehicle on which the receiver is mounted and the rear vehicle (an inter-vehicle distance) by triangulation using the distance L1 and the headlight-to-headlight distance. The receiver determines a risk of collision based on the inter-vehicle distance and the speed of the vehicle obtained from the visible light signal, and provides a driver of the vehicle with a warning according to the result of the determination. With this, collision of vehicles can be avoided.
Note that the receiver identifies a headlight-to-headlight distance based on the vehicle type included in the visible light signal in the above-described example, but may identify a headlight-to-headlight distance based on information other than the vehicle type. Furthermore, when the receiver determines that there is a risk of collision, the receiver provides a warning in the above-described case, but may output to the vehicle a control signal for causing the vehicle to perform an operation of avoiding the risk. For example, the control signal is a signal for accelerating the vehicle or a signal for causing the vehicle to change lanes.
The camera captures an image of the rear vehicle in the above-described case, but may capture an image of an oncoming vehicle. When the receiver determines based on an image captured with the camera that it is foggy around the receiver (that is, the vehicle including the receiver), the receiver may be set to a mode of receiving a visible light signal such as that described above. With this, even when it is foggy around the receiver of the vehicle, the receiver is capable of identifying a position and a speed of an oncoming vehicle by receiving a visible light signal transmitted from a headlight of the oncoming vehicle.
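As an illustration of the triangulation described above, under a pinhole camera model the inter-vehicle distance follows from the headlight-to-headlight distance and the measured pixel distance L1, given the focal length of the camera in pixels; the focal length is an additional parameter assumed here, since the description does not state it.

    def inter_vehicle_distance(headlight_spacing_m, l1_px, focal_length_px):
        """Pinhole-model triangulation: a real spacing W imaged as l1 pixels at
        focal length f (in pixels) implies a distance D = f * W / l1."""
        return focal_length_px * headlight_spacing_m / l1_px

    # Headlights 1.5 m apart, imaged 150 px apart, f = 1400 px -> 14 m away.
    print(inter_vehicle_distance(1.5, 150.0, 1400.0))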
A transmitter (vehicle) 7006a having, for instance, two car taillights (light emitting units or lights) transmits identification information (ID) of the transmitter 7006a to a receiver such as a smartphone. Having received the ID, the receiver obtains information associated with the ID from a server. Examples of the information include the ID of the vehicle or the transmitter, the distance between the light emitting units, the size of the light emitting units, the size of the vehicle, the shape of the vehicle, the weight of the vehicle, the number of the vehicle, the traffic ahead, and information indicating the presence/absence of danger. The receiver may obtain this information directly from the transmitter 7006a.
The ID of the transmitter 7006a and the information to be provided to the receiver receiving the ID are stored in the server in association with each other (Step 7106a). The information to be provided to the receiver may include information such as the size of the light emitting unit as the transmitter 7006a, the distance between the light emitting units, the shape and weight of the object including the transmitter 7006a, the identification number such as a vehicle identification number, the state of an area not easily observable from the receiver, and the presence/absence of danger.
The transmitter 7006a transmits the ID (Step 7106b). The transmission information may include the URL of the server and the information to be stored in the server.
The receiver receives the transmitted information such as the ID (Step 7106c). The receiver obtains the information associated with the received ID from the server (Step 7106d). The receiver displays the received information and the information obtained from the server (Step 7106e).
The receiver calculates the distance between the receiver and the light emitting unit by triangulation, from the information of the size of the light emitting unit and the apparent size of the captured light emitting unit or from the information of the distance between the light emitting units and the distance between the captured light emitting units (Step 7106f). The receiver issues a warning of danger or the like, based on the information such as the state of an area not easily observable from the receiver and the presence/absence of danger (Step 7106g).
A transmitter (vehicle) 7007b having, for instance, two car taillights (light emitting units or lights) transmits information of the transmitter 7007b to a receiver 7007a such as a transmitter-receiver in a parking lot. The information of the transmitter 7007b indicates the identification information (ID) of the transmitter 7007b, the number of the vehicle, the size of the vehicle, the shape of the vehicle, or the weight of the vehicle. Having received the information, the receiver 7007a transmits information of whether or not parking is permitted, charging information, or a parking position. The receiver 7007a may receive the ID, and obtain information other than the ID from the server.
The ID of the transmitter 7007b and the information to be provided to the receiver 7007a receiving the ID are stored in the server (parking lot management server) in association with each other (Step 7107a). The information to be provided to the receiver 7007a may include information such as the shape and weight of the object including the transmitter 7007b, the identification number such as a vehicle identification number, the identification number of the user of the transmitter 7007b, and payment information.
The transmitter 7007b (in-vehicle transmitter) transmits the ID (Step 7107b). The transmission information may include the URL of the server and the information to be stored in the server. The receiver 7007a (transmitter-receiver) in the parking lot transmits the received information to the server for managing the parking lot (parking lot management server) (Step 7107c). The parking lot management server obtains the information associated with the ID of the transmitter 7007b, using the ID as a key (Step 7107d). The parking lot management server checks the availability of the parking lot (Step 7107e).
The receiver 7007a (transmitter-receiver) in the parking lot transmits information of whether or not parking is permitted, parking position information, or the address of the server holding this information (Step 7107f). Alternatively, the parking lot management server transmits this information to another server. The transmitter (in-vehicle receiver) 7007b receives the transmitted information (Step 7107g). Alternatively, the in-vehicle system obtains this information from another server.
The parking lot management server controls the parking lot to facilitate parking (Step 7107h). For example, the parking lot management server controls a multi-level parking lot. The transmitter-receiver in the parking lot transmits the ID (Step 7107i). The in-vehicle receiver (transmitter 7007b) inquires of the parking lot management server based on the user information of the in-vehicle receiver and the received ID (Step 7107j).
The parking lot management server charges for parking according to parking time and the like (Step 7107k). The parking lot management server controls the parking lot to facilitate access to the parked vehicle (Step 7107m). For example, the parking lot management server controls a multi-level parking lot. The in-vehicle receiver (transmitter 7007b) displays a map to the parking position and navigates from the current position (Step 7107n).
(Interior of Train)
The visible light communication system includes, for example, a plurality of lighting devices 1905 disposed inside a train, a smartphone 1906 held by a user, a server 1904, and a camera 1903 disposed inside the train.
Each of the lighting devices 1905 is configured as the above-described transmitter, and not only radiates light, but also transmits a visible light signal by changing in luminance. This visible light signal indicates an ID of the lighting device 1905 which transmits the visible light signal.
The smartphone 1906 is configured as the above-described receiver, and receives the visible light signal transmitted from the lighting device 1905, by capturing an image of the lighting device 1905. For example, when a user runs into trouble inside the train (such as molestation or a fight), the user operates the smartphone 1906 so that the smartphone 1906 receives the visible light signal. When the smartphone 1906 receives a visible light signal, the smartphone 1906 notifies the server 1904 of the ID indicated in the visible light signal.
Upon being notified of the ID, the server 1904 identifies the camera 1903 whose imaging range corresponds to the range illuminated by the lighting device 1905 identified by the ID. The server 1904 then causes the identified camera 1903 to capture an image of the range illuminated by the lighting device 1905.
The camera 1903 captures an image according to an instruction issued by the server 1904, and transmits the captured image to the server 1904.
By doing so, it is possible to obtain an image showing the situation in which trouble occurs in the train. This image can be used as evidence of the trouble.
Furthermore, an image captured with the camera 1903 may be transmitted from the server 1904 to the smartphone 1906 by a user operation on the smartphone 1906.
Moreover, the smartphone 1906 may display an imaging button on a screen and when a user touches the imaging button, transmit a signal prompting an imaging operation to the server 1904. This allows a user to determine a timing of an imaging operation.
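On the server 1904 side, identifying the camera 1903 from the notified ID can be as simple as a lookup table from lighting device IDs to cameras. A minimal Python sketch with hypothetical IDs:

    # Hypothetical server-side table: the camera whose imaging range corresponds
    # to the range illuminated by each lighting device.
    CAMERA_FOR_LIGHTING_ID = {
        "light-car3-02": "cam-car3-front",
        "light-car3-07": "cam-car3-rear",
    }

    def handle_trouble_report(lighting_id):
        camera = CAMERA_FOR_LIGHTING_ID.get(lighting_id)
        return None if camera is None else f"capture requested from {camera}"

    print(handle_trouble_report("light-car3-02"))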
The visible light communication system includes, for example, a plurality of cameras 1903 disposed in a facility and an accessory 1907 worn by a person.
The accessory 1907 is, for example, a headband with a ribbon to which a plurality of LEDs are attached. This accessory 1907 is configured as the above-described transmitter, and transmits a visible light signal by changing luminance of the LEDs.
Each of the cameras 1903 is configured as the above-described receiver, and has a visible light communication mode and a normal imaging mode. Furthermore, these cameras 1903 are disposed at mutually different positions in a path inside the facility.
Specifically, when an image of the accessory 1907 as a subject is captured with the camera 1903 in the visible light communication mode, the camera 1903 receives a visible light signal from the accessory 1907. When the camera 1903 receives the visible light signal, the camera 1903 switches from the visible light communication mode to the normal imaging mode. As a result, the camera 1903 captures an image of the person wearing the accessory 1907 as a subject.
Therefore, when a person wearing the accessory 1907 walks in the path inside the facility, the cameras 1903 close to the person capture images of the person one after another. Thus, it is possible to automatically obtain and store images which show the person enjoying time in the facility.
Note that instead of capturing an image in the normal imaging mode immediately after receiving the visible light signal, the camera 1903 may capture an image in the normal imaging mode, for example, when the camera 1903 is given an imaging start instruction from the smartphone. This allows a user to operate the camera 1903 so that an image of the user is captured with the camera 1903 at a timing when the user touches an imaging start button displayed on the screen of the smartphone.
A play tool 1901 is, for example, configured as the above-described transmitter including a plurality of LEDs. Specifically, the play tool 1901 transmits a visible light signal by changing luminance of the LEDs.
A smartphone 1902 receives the visible light signal from the play tool 1901 by capturing an image of the play tool 1901. As illustrated in (a) of
This means that when the smartphone 1902 receives the same visible light signal, the smartphone 1902 switches the video to be reproduced according to the number of times the smartphone 1902 has received the visible light signal. The number of times the smartphone 1902 has received the visible light signal may be counted by the smartphone 1902 or may be counted by the server. Even when the smartphone 1902 has received the same visible light signal more than one time, the smartphone 1902 does not continuously reproduce the same video. The smartphone 1902 may decrease the selection probability of video already reproduced and preferentially download and reproduce video with a high selection probability among a plurality of video items associated with the same visible light signal.
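One way to decrease the selection probability of video already reproduced is a weighted random choice whose weights decay with the play count. The following Python sketch assumes a halving decay per reproduction; the decay factor is an assumption.

    import random

    def pick_video(videos, play_counts, decay=0.5):
        """Prefer videos reproduced less often: each past reproduction of a
        video multiplies its selection weight by the decay factor."""
        weights = [decay ** play_counts.get(v, 0) for v in videos]
        return random.choices(videos, weights=weights, k=1)[0]

    counts = {"video-a": 2}  # 'video-a' has already been reproduced twice
    print(pick_video(["video-a", "video-b", "video-c"], counts))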
The smartphone 1902 may receive a visible light signal transmitted from a touch screen placed in an information office of a facility including a plurality of shops, and display an image according to the visible light signal. For example, when a default image representing an overview of the facility is displayed, the touch screen transmits a visible light signal indicating the overview of the facility by changing in luminance. Therefore, when the smartphone receives the visible light signal by capturing an image of the touch screen on which the default image is displayed, the smartphone can display on the display thereof an image showing the overview of the facility. In this case, when a user provides an input to the touch screen, the touch screen displays a shop image indicating information on a specified shop, for example. At this time, the touch screen transmits a visible light signal indicating the information on the specified shop. Therefore, the smartphone receives the visible light signal by capturing an image of the touch screen displaying the shop image, and thus can display the shop image indicating the information on the specified shop. Thus, the smartphone is capable of displaying an image in synchronization with the touch screen.
A reproduction method according to an aspect of the present disclosure includes: receiving a visible light signal by a sensor of a terminal device from a transmitter which transmits the visible light signal by a light source changing in luminance; transmitting a request signal for requesting content associated with the visible light signal, from the terminal device to a server; receiving, by the terminal device, content including time points and data to be reproduced at the time points, from the server; and reproducing data included in the content and corresponding to time of a clock included in the terminal device.
With this, as illustrated in
Furthermore, the clock included in the terminal device may be synchronized with a reference clock by global positioning system (GPS) radio waves or network time protocol (NTP) radio waves.
In this case, since the clock of the terminal device (the receiver) is synchronized with the reference clock, at an appropriate time point according to the reference clock, data corresponding to the time point can be reproduced as illustrated in
Furthermore, the visible light signal may indicate a time point at which the visible light signal is transmitted from the transmitter.
With this, the terminal device (the receiver) is capable of receiving content associated with a time point at which the visible light signal is transmitted from the transmitter (the transmitter time point) as indicated in the method d in
Furthermore, in the above reproduction method, when the process for synchronizing the clock of the terminal device with the reference clock using the GPS radio waves or the NTP radio waves was last performed at least a predetermined time before the point of time at which the terminal device receives the visible light signal, the clock of the terminal device may be synchronized with the clock of the transmitter using a time point indicated in the visible light signal transmitted from the transmitter.
For example, when the predetermined time has elapsed after the process for synchronizing the clock of the terminal device with the reference clock, there are cases where the synchronization is not appropriately maintained. In this case, there is a risk that the terminal device cannot reproduce content at a point of time which is in synchronization with the transmitter-side content reproduced by the transmitter. Thus, in the reproduction method according to an aspect of the present disclosure described above, when the predetermined time has elapsed, the clock of the terminal device (the receiver) and the clock of the transmitter are synchronized with each other as in Step S1829 and Step S1830 of
Furthermore, the server may hold a plurality of content items associated with time points, and in the receiving of content, when content associated with the time point indicated in the visible light signal is not present in the server, among the plurality of content items, content associated with a time point that is closest to the time point indicated in the visible light signal and after the time point indicated in the visible light signal may be received.
With this, as illustrated in the method d in
Furthermore, the reproduction method may include: receiving a visible light signal by a sensor of a terminal device from a transmitter which transmits the visible light signal by a light source changing in luminance; transmitting a request signal for requesting content associated with the visible light signal, from the terminal device to a server; receiving, by the terminal device, content from the server; and reproducing the content, and the visible light signal may indicate ID information and a time point at which the visible light signal is transmitted from the transmitter, and in the receiving of content, the content that is associated with the ID information and the time point indicated in the visible light signal may be received.
With this, as in the method d in
Furthermore, the visible light signal may indicate the time point at which the visible light signal is transmitted from the transmitter, by including second information indicating an hour and a minute of the time point and first information indicating a second of the time point, and the receiving of a visible light signal may include receiving the second information and receiving the first information a greater number of times than a total number of times the second information is received.
With this, for example, when the time point at which each packet included in the visible light signal is transmitted is sent to the terminal device on a per-second basis, it is possible to reduce the burden of transmitting, every time one second passes, a packet indicating the current time point represented using all of the hour, the minute, and the second. Specifically, as illustrated in
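One possible transmission schedule matching this scheme sends the first information (the second) every second and the second information (the hour and the minute) only every several seconds. The following Python sketch assumes a hypothetical period of 10 seconds for the hour/minute packet.

    import datetime

    def time_packets(now, hm_period_s=10):
        """Emit the first information (the second) every second and the second
        information (the hour and the minute) only every hm_period_s seconds."""
        packets = [("first", now.second)]
        if now.second % hm_period_s == 0:
            packets.append(("second", (now.hour, now.minute)))
        return packets

    print(time_packets(datetime.datetime(2015, 1, 1, 5, 43, 20)))
    # -> [('first', 20), ('second', (5, 43))]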
Furthermore, the sensor of the terminal device may be an image sensor, in the receiving of a visible light signal, continuous imaging with the image sensor may be performed while a shutter speed of the image sensor is alternately switched between a first speed and a second speed higher than the first speed, (a) when a subject imaged with the image sensor is a barcode, an image in which the barcode appears may be obtained through imaging performed when the shutter speed is the first speed, and a barcode identifier may be obtained by decoding the barcode appearing in the image, and (b) when a subject imaged with the image sensor is the light source, a bright line image which is an image including bright lines corresponding to a plurality of exposure lines included in the image sensor may be obtained through imaging performed when the shutter speed is the second speed, and the visible light signal may be obtained as a visible light identifier by decoding a plurality of patterns of the bright lines included in the obtained bright line image, and the reproduction method may further include displaying an image obtained through imaging performed when the shutter speed is the first speed.
Thus, as illustrated in
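The alternation between the two shutter speeds can be sketched as a capture loop in which slow frames are displayed (and checked for a barcode) and fast frames yield bright line images. In the following Python sketch the camera and both decoders are stand-in stubs; only the control flow reflects the described method.

    # Stand-ins for the camera and the two decoders; the shutter speeds are examples.
    def capture(shutter_s): return {"shutter": shutter_s}
    def display(image): pass
    def decode_barcode(image): return "barcode-identifier"
    def decode_bright_lines(image): return "visible-light-identifier"

    def receive_loop(first_speed=1 / 30, second_speed=1 / 8000, frames=4):
        for i in range(frames):
            if i % 2 == 0:
                image = capture(first_speed)   # first (lower) speed
                display(image)                 # this image is what the user sees
                yield ("barcode", decode_barcode(image))
            else:
                image = capture(second_speed)  # second (higher) speed: bright line image
                yield ("visible light", decode_bright_lines(image))

    print(list(receive_loop()))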
Furthermore, in the obtaining of the visible light identifier, a first packet including a data part and an address part may be obtained from the plurality of patterns of the bright lines, whether or not at least one packet already obtained before the first packet includes at least a predetermined number of second packets each including the same address part as the address part of the first packet may be determined, and when it is determined that at least the predetermined number of the second packets are included, a combined pixel value may be calculated by combining a pixel value of a partial region of the bright line image that corresponds to a data part of each of at least the predetermined number of the second packets and a pixel value of a partial region of the bright line image that corresponds to the data part of the first packet, and at least a part of the visible light identifier may be obtained by decoding the data part including the combined pixel value.
With this, as illustrated in
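As an illustration of the combining step, the pixel values of the data parts of packets sharing an address can be averaged before decoding, which raises the signal-to-noise ratio. The following Python sketch assumes NumPy arrays of pixel values; the threshold value and the decoding itself are outside its scope.

    import numpy as np

    PREDETERMINED_NUMBER = 3
    received = {}  # address part -> list of pixel-value arrays of the data parts

    def on_first_packet(address, data_part_pixels):
        """Combine the data-part pixel values of the second packets having the
        same address with those of the first packet, then decode the result."""
        earlier = received.setdefault(address, [])
        if len(earlier) >= PREDETERMINED_NUMBER:
            combined = np.mean(earlier + [data_part_pixels], axis=0)
            return combined  # decode this combined data part
        earlier.append(data_part_pixels)
        return None

    rng = np.random.default_rng(0)
    for _ in range(4):
        result = on_first_packet(2, rng.normal(100.0, 8.0, size=32))
    print(result is not None)  # True once enough packets with address 2 arrived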
Furthermore, the first packet may further include a first error correction code for the data part and a second error correction code for the address part, and in the receiving of a visible light signal, the address part and the second error correction code transmitted from the transmitter by changing in luminance according to a second frequency may be received, and the data part and the first error correction code transmitted from the transmitter by changing in luminance according to a first frequency higher than the second frequency may be received.
With this, erroneous reception of the address part can be reduced, and the data part having a large data amount can be promptly obtained.
Furthermore, in the obtaining of the visible light identifier, a first packet including a data part and an address part may be obtained from the plurality of patterns of the bright lines, whether or not at least one packet already obtained before the first packet includes at least one second packet which is a packet including the same address part as the address part of the first packet may be determined, when it is determined that the at least one second packet is included, whether or not all the data parts of the at least one second packet and the first packet are the same may be determined, when it is determined that not all the data parts are the same, it may be determined for each of the at least one second packet whether or not a total number of parts, among parts included in the data part of the second packet, which are different from parts included in the data part of the first packet, is a predetermined number or more, when the at least one second packet includes the second packet in which the total number of different parts is determined as the predetermined number or more, the at least one second packet may be discarded, and when the at least one second packet does not include the second packet in which the total number of different parts is determined as the predetermined number or more, a plurality of packets in which a total number of packets having the same data part is highest may be identified among the first packet and the at least one second packet, and at least a part of the visible light identifier may be obtained by decoding a data part included in each of the plurality of packets as a data part corresponding to the address part included in the first packet.
With this, as illustrated in
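The decision logic for packets sharing an address can be sketched as follows: if any already-received packet differs from the new one in at least a predetermined number of parts, the old packets are discarded (the transmitter is presumed to have switched to other data); otherwise the most frequent data part wins. In this Python sketch, a "part" is taken to be one byte, and the value returned after a discard is an assumption.

    from collections import Counter

    PREDETERMINED_NUMBER = 4  # differing parts that imply the data has changed

    def reconcile(first_data, second_datas):
        """Decide the data part for one address from the first packet and the
        previously received second packets with the same address part."""
        if any(sum(a != b for a, b in zip(first_data, old)) >= PREDETERMINED_NUMBER
               for old in second_datas):
            second_datas.clear()   # discard: the transmitter now sends other data
            return first_data      # (what to decode after a discard is an assumption)
        votes = Counter([bytes(first_data)] + [bytes(d) for d in second_datas])
        return votes.most_common(1)[0][0]  # the most frequent data part wins

    print(reconcile(b"\x01\x02\x03\x04",
                    [b"\x01\x02\x03\x04", b"\x01\x02\x03\x05"]))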
Furthermore, in the obtaining of the visible light identifier, a plurality of packets each including a data part and an address part may be obtained from the plurality of patterns of the bright lines, and whether or not the obtained packets include a 0-end packet which is a packet including the data part in which all bits are zero may be determined, and when it is determined that the 0-end packet is included, whether or not the plurality of packets include all N associated packets (where N is an integer of 1 or more) which are each a packet including an address part associated with an address part of the 0-end packet may be determined, and when it is determined that all the N associated packets are included, the visible light identifier may be obtained by arranging and decoding data parts of the N associated packets. For example, the address part associated with the address part of the 0-end packet is an address part representing an address greater than or equal to 0 and smaller than an address represented by the address part of the 0-end packet.
Specifically, the 0-end packet indicates that the addresses of the associated packets range from 0 up to the address immediately preceding its own address, so the receiver can determine whether all N associated packets have been obtained and can then arrange and decode their data parts.
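As a concrete illustration, the following sketch shows how a receiver might arrange data parts by address and use the 0-end packet as an end marker; the packet representation and field widths are hypothetical and are not taken from this disclosure.

```python
# Hypothetical sketch: reassembling a visible light identifier from divided
# packets, using a 0-end packet (an all-zero data part) as the end marker.

def reassemble(packets):
    """packets: list of (address, data_bits) tuples obtained from bright line images."""
    by_addr = {}
    end_addr = None
    for addr, data in packets:
        if all(bit == 0 for bit in data):      # 0-end packet: marks the last address
            end_addr = addr
        else:
            by_addr[addr] = data
    if end_addr is None:
        return None                            # end marker not yet received
    # Associated packets occupy every address from 0 up to (end_addr - 1).
    if not all(a in by_addr for a in range(end_addr)):
        return None                            # some associated packet is still missing
    identifier = []
    for a in range(end_addr):                  # arrange data parts in address order
        identifier.extend(by_addr[a])
    return identifier

# Example: three data packets plus the 0-end packet at address 3.
pkts = [(1, [1, 0, 1, 0]), (0, [0, 1, 1, 0]), (2, [1, 1, 0, 0]), (3, [0, 0, 0, 0])]
print(reassemble(pkts))   # -> [0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0]
```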
A protocol adapted for variable length and variable number of divisions is described.
A transmission packet is made up of a preamble, a TYPE, a payload, and a check part. Packets may be transmitted continuously or intermittently. By providing a period in which no packet is transmitted, the state of the liquid crystals can be changed while the backlight is off, which improves the apparent motion resolution of a liquid crystal display. When the packets are transmitted at random intervals, signal interference can be avoided.
For the preamble, a pattern that does not appear in the 4 PPM is used. The reception process can be facilitated with the use of a short basic pattern.
The kind of the preamble is used to represent the number of divisions in data so that the number of divisions in data can be made variable without unnecessarily using a transmission slot.
When the payload length varies according to the value of the TYPE, it is possible to make the transmission data variable. In the TYPE, the payload length may be represented, or the data length before division may be represented. When a value of the TYPE represents an address of a packet, the receiver can correctly arrange received packets. Furthermore, the payload length (the data length) that is represented by a value of the TYPE may vary according to the kind of the preamble, the number of divisions, or the like.
When the length of the check part varies according to the payload length, efficient error correction (detection) is possible. When the shortest length of the check part is set to two bits, efficient conversion to the 4 PPM is possible. Furthermore, when the kind of the error correction (detection) code varies according to the payload length, error correction (detection) can be efficiently performed. The length of the check part and the kind of the error correction (detection) code may vary according to the kind of the preamble or the value of the TYPE.
Some combinations of the payload length and the number of divisions lead to the same total data length. In such a case, each combination is given a different meaning even for the same data value, so that more values can be represented.
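The following is a minimal sketch of a 4 PPM symbol mapping of the kind assumed above, under the illustrative convention that each 2-bit symbol selects which one of four slots is dark; the preamble pattern shown is only a placeholder, chosen because three consecutive dark slots cannot occur inside 4 PPM data under this convention.

```python
# Minimal sketch of 4 PPM encoding: each 2-bit symbol selects which one of
# four slots is dark, so every data symbol has the same average luminance.

SLOTS_PER_SYMBOL = 4

def encode_4ppm(bits):
    """bits: sequence of 0/1 values; length must be even."""
    assert len(bits) % 2 == 0
    slots = []
    for i in range(0, len(bits), 2):
        symbol = bits[i] * 2 + bits[i + 1]          # value 0..3
        pattern = [1] * SLOTS_PER_SYMBOL
        pattern[symbol] = 0                          # dark slot position carries the data
        slots.extend(pattern)
    return slots

# A preamble must use a pattern that never occurs in 4 PPM data; with one dark
# slot per symbol, at most two dark slots can be adjacent, so three in a row
# is safe. The exact pattern here is only a placeholder.
PREAMBLE = [0, 0, 0, 1]

packet = PREAMBLE + encode_4ppm([0, 1, 1, 0])
print(packet)
```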
A high-speed transmission protocol and a luminance modulation protocol are described.
A transmission packet is made up of a preamble part, a body part, and a luminance adjustment part. The body includes an address part, a data part, and an error correction (detection) code part. When intermittent transmission is permitted, the same advantageous effects as described above can be obtained.
(Frame Configuration in Single Frame Transmission)
A transmission frame includes a preamble (PRE), a frame length (FLEN), an ID type (IDTYPE), content (ID/DATA), and a check code (CRC), and may also include a content type (CONTENTTYPE). The number of bits in each field is an example.
It is possible to transmit content of a variable length by selecting the length of ID/DATA in the FLEN.
The CRC is a check code for correcting or detecting an error in other parts than the PRE. The CRC length varies according to the length of a part to be checked so that the check ability can be kept at a certain level or higher. Furthermore, the use of a different check code according to each length of a part to be checked allows an improvement in the check ability per CRC length.
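As an illustration of a check code whose length and generator polynomial vary with the length of the part to be checked, consider the following sketch; the length thresholds and polynomials in the table are assumptions for illustration, not values defined in this disclosure.

```python
# Hedged sketch of a bitwise CRC whose length and generator polynomial depend
# on the length of the checked part. Each table entry is
# (maximum checked length, CRC length, generator polynomial incl. leading term).

CRC_TABLE = [
    (16, 2, 0b111),           # parts up to 16 bits: 2-bit CRC, x^2 + x + 1
    (64, 4, 0b10011),         # parts up to 64 bits: 4-bit CRC, x^4 + x + 1
    (10**9, 8, 0b100000111),  # longer parts: 8-bit CRC
]

def crc(bits):
    for max_len, crc_len, poly in CRC_TABLE:
        if len(bits) <= max_len:
            break
    reg = list(bits) + [0] * crc_len          # append crc_len zero bits
    for i in range(len(bits)):
        if reg[i]:                            # long division over GF(2)
            for j in range(crc_len + 1):
                reg[i + j] ^= (poly >> (crc_len - j)) & 1
    return reg[-crc_len:]                     # remainder = check bits

print(crc([1, 0, 1, 1, 0, 1]))               # 2-bit CRC for a short part
```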
(Frame Configuration in Multiple Frame Transmission)
A transmission frame includes a preamble (PRE), an address (ADDR), and a part of divided data (DATAPART), and may also include the number of divisions (PARTNUM) and an address flag (ADDRFRAG). The number of bits in each field is an example.
Content is divided into a plurality of parts before being transmitted, which enables long-distance communication.
When content is equally divided into parts of the same size, the maximum frame length is reduced, and communication is stabilized.
If content cannot be equally divided, the content is divided in such a way that one part is smaller in size than the other parts, allowing data of a moderate size to be transmitted.
When the content is divided into parts having different sizes and a combination of division sizes is given a meaning, a larger amount of information can be transmitted. One data item, for example, 32-bit data, can be treated as different data items between when 8-bit data is transmitted four times, when 16-bit data is transmitted twice, and when 15-bit data is transmitted once and 17-bit data is transmitted once; thus, a larger amount of information can be represented.
With PARTNUM representing the number of divisions, the receiver can be promptly informed of the number of divisions and can accurately display a progress of the reception.
With the settings that the address is not the last address when the ADDRFRAG is 0 and the address is the last address when the ADDRFRAG is 1, the area representing the number of divisions is no longer needed, and the information can be transmitted in a shorter period of time.
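A sketch of this division scheme follows, using illustrative field names; the last frame is marked with ADDRFRAG = 1 so that no area representing the number of divisions is required.

```python
# Illustrative sketch of dividing content into DATAPARTs; the last frame is
# marked with ADDRFRAG = 1 instead of carrying a number-of-divisions field.

def divide_into_frames(content_bits, part_len):
    parts = [content_bits[i:i + part_len]
             for i in range(0, len(content_bits), part_len)]
    frames = []
    for addr, part in enumerate(parts):
        frames.append({
            "ADDR": addr,
            "ADDRFRAG": 1 if addr == len(parts) - 1 else 0,  # 1 only for the last address
            "DATAPART": part,
        })
    return frames

# When the content cannot be divided equally, the last part is smaller.
for f in divide_into_frames([1, 0, 1, 1, 0, 0, 1, 0, 1], 4):
    print(f)
```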
The CRC is, as described above, a check code for correcting or detecting an error in other parts than the PRE. Through this check, interference can be detected when transmission frames from a plurality of transmission sources are received. When the CRC length is an integer multiple of the DATAPART length, interference can be detected most efficiently.
At the end of the divided frames (the frames illustrated in (a), (b), or (c) of the figure), a frame including the IDTYPE, illustrated in (d) of the figure, may be transmitted.
(Selection of ID/DATA Length)
In the cases of (a) to (d) of the figure, the length of the ID/DATA is selected from among predetermined lengths, so that variable-length content can be transmitted.
(CRC Length and Generator Polynomial)
The CRC length is set in this way to keep the checking ability regardless of the length of a subject to be checked.
The generator polynomial is an example, and other generator polynomials may be used. Furthermore, a check code other than the CRC may be used; with this, the checking ability can be improved.
(Selection of DATAPART Length and Selection of Last Address According to Type of Preamble)
When the DATAPART length is indicated by the type of the preamble, the area representing the DATAPART length is no longer needed, and the information can be transmitted in a shorter period of time. Furthermore, when whether or not the address is the last address is indicated, the area representing the number of divisions is no longer needed, and the information can be transmitted in a shorter period of time.
The address length may be different according to the type of the preamble. With this, the number of combinations of lengths of transmission information can be increased, and the information can be transmitted in a shorter period of time, for example.
(Selection of Address)
A value of the ADDR indicates the address of the frame, with the result that the receiver can properly reconstruct the transmitted information.
A value of PARTNUM indicates the number of divisions, with the result that the receiver can be informed of the number of divisions without fail at the time of receiving the first frame and can accurately display a progress of the reception.
(Prevention of Interference by Difference in Number of Divisions)
When the transmission information is equally divided and transmitted, signals from a transmitter A and a transmitter B that use the same number of divisions have frames of the same DATAPART length, and therefore the frames cannot be distinguished by length and may interfere with each other.
When the transmitters A and B include a number-of-divisions setting unit, a user can prevent interference by setting the number of divisions of transmitters placed close to each other to different values.
The receiver registers the number of divisions of the received signal with the server so that the server can be informed of the number of divisions set in the transmitter, and other receivers can obtain this information from the server to accurately display a progress of the reception.
The receiver obtains, from the server or the storage unit of the receiver, information on whether or not a signal from a nearby or corresponding transmitter is an equally-divided signal. When the obtained information indicates an equally-divided signal, only signals from frames having the same DATAPART length are reconstructed. When the obtained information does not indicate an equally-divided signal, or when a situation in which not all addresses in the frames having the same DATAPART length are present continues for a predetermined length of time or more, a signal obtained by combining frames having different DATAPART lengths is decoded.
(Prevention of Interference by Difference in Number of Divisions)
The server receives, from the receiver, the ID and the division formation (information on the combination of DATAPART lengths of the received signal) received by the receiver. When the ID is subject to extension according to the division formation, a value obtained by digitizing a pattern of the division formation is defined as an auxiliary ID, and associated information using, as a key, an extended ID obtained by combining the ID and the auxiliary ID is sent to the receiver.
When the ID is not subject to the extension according to the division formation, whether or not the storage unit holds division formation associated with the ID is checked, and whether or not the division formation held in the storage unit is the same as the received division formation is checked. When the division formation held in the storage unit is different from the received division formation, a re-check instruction is transmitted to the receiver. With this, erroneous information due to a reception error in the receiver can be prevented from being presented.
When the same division formation with the same ID is received within a predetermined length of time after the re-check instruction is transmitted, it is determined that the division formation has been changed, and the division formation associated with the ID is updated. Thus, it is possible to adapt to the case where the division formation has been changed.
When the division formation has not been stored, when the received division formation and the held division formation match, or when the division formation is updated, the associated information using the ID as a key is sent to the receiver, and the division formation is stored into the storage unit in association with the ID.
(Indication of Status of Reception Progress)
The receiver obtains, from the server or the storage area of the receiver, the variety and ratio of the number of divisions of a transmitter corresponding to the receiver or a transmitter around the receiver. Furthermore, when partial division data is already received, the variety and ratio of the number of divisions of the transmitter which has transmitted information matching the partial division data are obtained.
The receiver receives a divided frame.
When the last address has already been received, when the variety of the obtained number of divisions is only one, or when the variety of the number of divisions corresponding to a running reception app is only one, the number of divisions is already known, and therefore, the status of progress is displayed based on this number of divisions.
Otherwise, the receiver calculates and displays a status of progress in a simple mode when there are few available processing resources or an energy-saving mode is ON. In contrast, when there are many available processing resources or the energy-saving mode is OFF, the receiver calculates and displays a status of progress in a maximum likelihood estimation mode.
First, the receiver obtains a standard number of divisions Ns from the server. Alternatively, the receiver reads the standard number of divisions Ns from a data holding unit included therein. Note that the standard number of divisions is (a) a mode or an expected value, over transmitters, of the number of divisions used for transmitted data, (b) the number of divisions determined for each packet length, (c) the number of divisions determined for each application, or (d) the number of divisions determined for each identifiable range where the receiver is present.
Next, the receiver determines whether or not a packet indicating that the last address is included has already been received. When the receiver determines that the packet has been received, the address of the last packet is denoted as N. In contrast, when the receiver determines that the packet has not been received, a number obtained by adding 1 or a number of 2 or more to the received maximum address Amax is denoted as Ne. Here, the receiver determines whether or not Ne>Ns is satisfied. When the receiver determines that Ne>Ns is satisfied, the receiver assumes N=Ne. In contrast, when the receiver determines that Ne>Ns is not satisfied, the receiver assumes N=Ns.
Assuming that the number of divisions in the signal that is being received is N, the receiver then calculates a ratio of the number of the received packets to packets required to receive the entire signal.
In such a simple mode, the status of progress can be calculated by a simpler calculation than in the maximum likelihood estimation mode. Thus, the simple mode is advantageous in terms of processing time or energy consumption.
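A sketch of the simple mode follows, under the assumption that addresses start at 0 and that the number of required packets equals the last address plus one; the function and parameter names are hypothetical.

```python
# Sketch of the simple mode: estimate the number of divisions N from the
# standard number Ns and the largest address seen so far, then report progress.

def progress_simple(received_addresses, last_address, standard_n):
    """received_addresses: set of addresses received so far (0-based).
    last_address: the last address, if a packet indicating it was received, else None.
    standard_n: the standard number of divisions Ns from the server."""
    if not received_addresses:
        return 0.0
    if last_address is not None:
        n = last_address + 1              # total number of packets known exactly
    else:
        ne = max(received_addresses) + 1  # Ne: Amax plus 1 (a larger offset may be used)
        n = ne if ne > standard_n else standard_n   # N = Ne if Ne > Ns, else Ns
    return len(received_addresses) / n

print(progress_simple({0, 1, 3}, None, 8))   # 3 of an estimated 8 packets -> 0.375
```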
First, the receiver obtains a prior distribution of the number of divisions from the server. Alternatively, the receiver reads the prior distribution from the data holding unit included therein. Note that the prior distribution is (a) determined as a distribution of the number of transmitters that transmit data divided by each number of divisions, (b) determined for each packet length, (c) determined for each application, or (d) determined for each identifiable range where the receiver is present.
Next, the receiver receives a packet x and calculates the probability P(x|y) of receiving the packet x when the number of divisions is y. The receiver then determines the probability P(y|x) of the number of divisions of a transmission signal being y when the packet x is received, according to P(y|x) = P(x|y) × P(y) ÷ A (where A is a normalization constant). Furthermore, the receiver updates P(y) = P(y|x).
Here, the receiver determines whether the number-of-divisions estimation mode is a maximum likelihood mode or a likelihood average mode. When the mode is the maximum likelihood mode, the receiver calculates the ratio of the number of packets that have been received, assuming that the y maximizing P(y) is the number of divisions. When the mode is the likelihood average mode, the receiver calculates the ratio assuming that the sum of y × P(y) is the number of divisions.
In the maximum likelihood estimation mode such as that just described, a more accurate degree of progress can be calculated than in the simple mode.
Furthermore, when the number-of-divisions estimation mode is the maximum likelihood mode, a likelihood of the last address being at each position may be calculated using the addresses that have so far been received, and the position having the highest likelihood may be estimated as the number of divisions; a progress of reception is then displayed based on this estimate. With this display method, a status of progress closest to the actual status of progress can be displayed.
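The maximum likelihood estimation mode can be sketched as follows; the likelihood model P(x|y) used here (a packet address a is possible only when a < y, and all addresses are equally likely) is an illustrative assumption.

```python
# Sketch of the Bayesian update P(y|x) = P(x|y) * P(y) / A, where A normalizes
# the distribution, followed by the two estimation variants described above.

def update_posterior(prior, packet_address):
    posterior = {}
    for y, p in prior.items():
        likelihood = 1.0 / y if packet_address < y else 0.0   # assumed P(x|y)
        posterior[y] = likelihood * p
    a = sum(posterior.values())                               # normalization constant A
    return {y: p / a for y, p in posterior.items()}

def estimated_divisions(posterior, mode="maximum_likelihood"):
    if mode == "maximum_likelihood":
        return max(posterior, key=posterior.get)              # y maximizing P(y)
    return sum(y * p for y, p in posterior.items())           # sum of y * P(y)

prior = {4: 0.5, 8: 0.3, 16: 0.2}
prior = update_posterior(prior, packet_address=5)             # rules out y = 4
print(estimated_divisions(prior))                             # -> 8
print(estimated_divisions(prior, "likelihood_average"))       # -> 10.0
```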
First, the receiver calculates a ratio of the number of packets that have been received to packets required to receive the entire signal. The receiver then determines whether or not the calculated ratio is smaller than a ratio that is being displayed. When the receiver determines that the calculated ratio is smaller than the ratio that is being displayed, the receiver further determines whether or not the ratio that is being displayed is a calculation result obtained no less than a predetermined time before. When the receiver determines that the ratio that is being displayed is a calculation result obtained no less than the predetermined time before, the receiver displays the calculated ratio. When the receiver determines that the ratio that is being displayed is not a calculation result obtained no less than the predetermined time before, the receiver continues to display the ratio that is being displayed.
Furthermore, when the receiver determines that the calculated ratio is greater than or equal to the ratio that is being displayed, the receiver denotes, as Ne, a number obtained by adding 1 or a number of 2 or more to the received maximum address Amax, and then displays the calculated ratio.
For example, when the last packet is received, the newly calculated status of progress may be smaller than the previous one; displaying such a downward change in the degree of progress would look unnatural. The above-described display method prevents such an unnatural result from being displayed.
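A sketch of this display rule follows; the hold time after which a stale displayed value may be lowered is a hypothetical parameter.

```python
# Sketch of the display rule: a newly calculated ratio smaller than the
# displayed one replaces it only when the displayed value is stale (calculated
# at least hold_time seconds ago), so the progress bar does not jump backward.

import time

class ProgressDisplay:
    def __init__(self, hold_time=2.0):
        self.hold_time = hold_time
        self.shown_ratio = 0.0
        self.shown_at = time.monotonic()

    def update(self, calculated_ratio):
        now = time.monotonic()
        if calculated_ratio >= self.shown_ratio or \
           now - self.shown_at >= self.hold_time:   # accept increases, or stale decreases
            self.shown_ratio = calculated_ratio
            self.shown_at = now
        return self.shown_ratio                      # otherwise keep the displayed value

disp = ProgressDisplay()
print(disp.update(0.4))   # 0.4
print(disp.update(0.25))  # still 0.4: a downward jump is suppressed
```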
First, the receiver calculates, for each packet length, a ratio P of the number of packets that have been received. At this time, the receiver determines which of a maximum mode, an entirety display mode, and a latest mode the display mode is. When the display mode is the maximum mode, the receiver displays the highest ratio out of the ratios P for the plurality of packet lengths. When the display mode is the entirety display mode, the receiver displays all the ratios P. When the display mode is the latest mode, the receiver displays the ratio P for the packet length of the last received packet.
(Light Emission Control Using Common Switch and Pixel Switch)
In the transmitting method in this embodiment, a visible light signal (which is also referred to as a visible light communication signal) is transmitted by each LED included in an LED display for displaying an image, changing in luminance according to switching of a common switch and a pixel switch, for example.
The LED display is configured as a large display installed in open space, for example. Furthermore, the LED display includes a plurality of LEDs arranged in a matrix, and displays an image by causing these LEDs to blink according to an image signal. The LED display includes a plurality of common lines (COM lines) and a plurality of pixel lines (SEG lines). Each of the common lines includes a plurality of LEDs horizontally arranged in line, and each of the pixel lines includes a plurality of LEDs vertically arranged in line. Each of the common lines is connected to common switches corresponding to the common line. The common switches are transistors, for example. Each of the pixel lines is connected to pixel switches corresponding to the pixel line. The pixel switches corresponding to the plurality of pixel lines are included in an LED driver circuit (a constant current circuit), for example. Note that the LED driver circuit is configured as a pixel switch control unit that switches the plurality of pixel switches.
More specifically, one of an anode and a cathode of each LED included in the common line is connected to a terminal, such as a connector, of the transistor corresponding to that common line. The other of the anode and the cathode of each LED included in the pixel line is connected to a terminal (a pixel switch) of the above LED driver circuit which corresponds to that pixel line.
When the LED display displays an image, a common switch control unit which controls the plurality of common switches turns ON the common switches in a time-division manner. For example, the common switch control unit keeps only a first common switch ON among the plurality of common switches during a first period, and keeps only a second common switch ON among the plurality of common switches during a second period following the first period. The LED driver circuit turns each pixel switch ON according to an image signal during a period in which any of the common switches is ON. With this, only for the period in which the common switch is ON and the pixel switch is ON, an LED corresponding to that common switch and that pixel switch is ON. Luminance of pixels in an image is represented using this ON period. This means that the luminance of pixels in an image is under the PWM control.
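The time-division drive described above can be sketched as follows; the data structures and the number of PWM slots per common period are illustrative assumptions.

```python
# Hedged simulation of the time-division drive: during each common period
# exactly one common switch is ON, and each pixel switch stays ON for a
# fraction of that period proportional to the pixel's luminance (PWM).

def scan_frame(image, slots_per_common_period=255):
    """image: 2-D list of pixel luminances in 0..255 (rows = common lines)."""
    lit = []  # (common_line, pixel_line, slot) triples in which an LED is ON
    for com, row in enumerate(image):             # common switches ON one at a time
        for slot in range(slots_per_common_period):
            for seg, luminance in enumerate(row):
                if slot < luminance:              # pixel switch still ON (PWM duty)
                    lit.append((com, seg, slot))
    return lit

frame = [[255, 128], [0, 64]]
on_events = scan_frame(frame)
print(len(on_events))  # 255 + 128 + 0 + 64 = 447 LED-on slot events
```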
In the transmitting method in this embodiment, the visible light signal is transmitted using the LED display, the common switches, the pixel switches, the common switch control unit, and the pixel switch control unit such as those described above. A transmitting apparatus (referred to also as a transmitter) in this embodiment that transmits the visible light signal in the transmitting method includes the common switch control unit and the pixel switch control unit.
The transmitter transmits each symbol included in the visible light signal, according to a predetermined symbol period. For example, when the transmitter transmits a symbol “00” in the 4 PPM, the common switches are switched according to the symbol (a luminance change pattern of “00”) in the symbol period made up of four slots. The transmitter then switches the pixel switches according to average luminance indicated by an image signal or the like.
More specifically, when the average luminance in the symbol period is set to 75% ((a) in the figure), the pixel switch is kept ON for a period corresponding to three of the four slots in the symbol period, while the common switch is switched according to the luminance change pattern of the symbol.
When the average luminance in the symbol period is set to 25% ((e) in the figure), the pixel switch is kept ON for a period corresponding to only one of the four slots in the symbol period.
Note that the common switches are switched by the above-described common switch control unit, and the pixel switches are switched by the above-described pixel switch control unit.
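The following sketch illustrates how the LED state in each slot follows from the AND of the common switch and the pixel switch; the mapping from symbols to common-switch patterns is an assumed 4 PPM convention, not one defined in this disclosure.

```python
# Sketch: the LED is ON in a slot only when both the common switch (driven by
# the symbol's luminance change pattern) and the pixel switch (driven by the
# average luminance) are ON. SYMBOL_PATTERNS is an assumed 4 PPM convention.

SYMBOL_PATTERNS = {"00": [0, 1, 1, 1], "01": [1, 0, 1, 1],
                   "10": [1, 1, 0, 1], "11": [1, 1, 1, 0]}

def led_slots(symbol, average_luminance):
    common = SYMBOL_PATTERNS[symbol]
    pixel_on_slots = round(average_luminance * 4)      # e.g. 75% -> 3 slots
    pixel = [1 if s < pixel_on_slots else 0 for s in range(4)]
    return [c & p for c, p in zip(common, pixel)]      # ON only when both are ON

print(led_slots("01", 0.75))
# -> [1, 0, 1, 0]: the second slot is dark for the symbol, the fourth for the duty
```

Note that in this example the naive duty assignment lights only two of the four slots instead of three, which is exactly the shortfall corrected by the lighting-period compensation described below.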
Thus, the transmitting method in this embodiment is a transmitting method of transmitting a visible light signal by way of luminance change, and includes a determining step, a common switch control step, and a first pixel switch control step. In the determining step, a luminance change pattern is determined by modulating the visible light signal. In the common switch control step, a common switch for turning ON, in common, a plurality of light sources (LEDs) which are included in a light source group (the common line) of a display and are each used for representing a pixel in an image is switched according to the luminance change pattern. In the first pixel switch control step, a first pixel switch for turning ON a first light source among the plurality of light sources included in the light source group is turned ON, to cause the first light source to be ON only for a period in which the common switch is ON and the first pixel switch is ON, to transmit the visible light signal.
With this, a visible light signal can be properly transmitted from a display including a plurality of LEDs or the like as the light sources. Therefore, this enables communication between various devices including devices other than lightings. Furthermore, when the display is a display for displaying images under control of the common switch and the first pixel switch, the visible light signal can be transmitted using that common switch and that first pixel switch. Therefore, it is possible to easily transmit the visible light signal without a significant change in the structure for displaying images on the display.
Furthermore, the timing of controlling the pixel switch is adjusted to match the transmission symbol (one 4 PPM symbol).
Thus, in the above determining step of the transmitting method in this embodiment, the luminance change pattern is determined for each symbol period. Furthermore, in the above first pixel switch control step, the pixel switch is switched in synchronization with the symbol period. With this, even when the symbol period is 1/2400 seconds, for example, the visible light signal can be properly transmitted according to the symbol period.
When the signal (symbol) is “10” and the average luminance is around 50%, the luminance change pattern is similar to that of 0101, and there are two luminance rising edge positions. In this case, the latest one of the luminance rising edge positions is prioritized so that the receiver can properly receive the signal. This means that the latest one of the luminance rising edge positions is the timing at which a luminance rising edge unique to the symbol “10” is obtained.
As the average luminance increases, a signal more similar to the signal modulated in the 4 PPM can be output. Therefore, when the luminance of the entire screen or areas sharing a power line is low, the amount of current is reduced to lower the instantaneous value of the luminance so that the length of the HI section can be increased and errors can be reduced. In this case, although the maximum luminance of the screen is lowered, a switch for enabling this function is turned ON, for example, when high luminance is not necessary, such as for outdoor use, or when the visible light communication is given priority, with the result that a balance between the communication quality and the image quality can be set to the optimum.
Furthermore, in the above first pixel switch control step of the transmitting method in this embodiment, when the image is displayed on the display (the LED display), the first pixel switch is switched to increase a lighting period, which is for representing a pixel value of a pixel in the image and corresponds to the first light source, by a length of time equivalent to a period in which the first light source is OFF for transmission of the visible light signal. Specifically, in the transmitting method in this embodiment, the visible light signal is transmitted when an image is being displayed on the LED display. Accordingly, there are cases where in the period in which the LED is to be ON to represent a pixel value (specifically, a luminance value) indicated in the image signal, the LED is OFF for transmission of the visible light signal. In such a case, in the transmitting method in this embodiment, the first pixel switch is switched in such a way that the lighting period is increased by a length of time equivalent to a period in which the LED is OFF.
For example, when the image indicated in the image signal is displayed without the visible light signal being transmitted, the common switch is ON during one symbol period, and the pixel switch is ON only for the period depending on the average luminance, that is, the pixel value indicated in the image signal. When the average luminance is 75%, the common switch is ON in the first slot to the fourth slot of the symbol period. Furthermore, the pixel switch is ON in the first slot to the third slot of the symbol period. With this, the LED is ON in the first slot to the third slot during the symbol period, allowing the above-described pixel value to be represented. The LED is, however, OFF in the second slot in order to transmit the symbol “01.” Thus, in the transmitting method in this embodiment, the pixel switch is switched in such a way that the lighting period of the LED is increased by a length of time equivalent to the length of the second slot in which the LED is OFF, that is, in such a way that the LED is ON in the fourth slot.
Furthermore, in the transmitting method in this embodiment, the pixel value of the pixel in the image is changed to increase the lighting period. For example, in the above-described case, the pixel value having the average luminance of 75% is changed to a pixel value having the average luminance of 100%. In the case where the average luminance is 100%, the LED attempts to be ON in the first slot to the fourth slot, but is OFF in the second slot for transmission of the symbol “01.” Therefore, also when the visible light signal is transmitted, the LED can be ON with luminance equivalent to the original pixel value (the average luminance of 75%).
With this, the occurrence of breakup of the image due to transmission of the visible light signal can be reduced.
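A sketch of this compensation follows, reusing the assumed symbol-to-pattern convention from the previous sketch: the pixel-switch ON period is extended until the number of actually lit slots reaches the number required by the original pixel value.

```python
# Sketch of lighting-period compensation: extend the pixel-switch ON period
# so that slots lost to signal transmission are made up within the symbol
# period, keeping the displayed luminance at its original value.

SYMBOL_PATTERNS = {"00": [0, 1, 1, 1], "01": [1, 0, 1, 1],
                   "10": [1, 1, 0, 1], "11": [1, 1, 1, 0]}  # assumed 4 PPM convention

def compensated_led_slots(symbol, target_luminance):
    common = SYMBOL_PATTERNS[symbol]
    target_on = round(target_luminance * 4)            # slots needed for the pixel value
    pixel = [0, 0, 0, 0]
    lit = 0
    for s in range(4):                                 # extend the pixel-switch ON period
        pixel[s] = 1
        if common[s]:
            lit += 1
        if lit == target_on:
            break
    return [c & p for c, p in zip(common, pixel)]

print(compensated_led_slots("01", 0.75))  # -> [1, 0, 1, 1]: ON for 3 slots as intended
```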
(Light Emission Control Shifted for Each Pixel)
When the transmitter in this embodiment transmits the same symbol (for example, “10”) from a pixel A and a pixel around the pixel A (for example, a pixel B and a pixel C), the transmitter shifts the timing of light emission of these pixels.
As a result of the timing of light emission being shifted, the waveform indicating the pixel-to-pixel average luminance transition has gradual rising and falling edges, except for the rising edge at the unique-to-symbol timing.
Specifically, when the symbol “10” is transmitted from a predetermined pixel and the luminance of the predetermined pixel is a value intermediate between 25% and 75%, the transmitter increases or decreases an open interval of the pixel switch corresponding to the predetermined pixel. Furthermore, the transmitter adjusts, in an opposite way, an open interval of the pixel switch corresponding to the pixel around the predetermined pixel. Thus, errors can be reduced also by setting the open interval of each of the pixel switches in such a way that the luminance of the entirety including the predetermined pixel and the nearby pixel does not change. The open interval is an interval for which a pixel switch is ON.
Thus, the transmitting method in this embodiment further includes a second pixel switch control step. In this second pixel switch control step, a second pixel switch for turning ON a second light source included in the above-described light source group (the common line) and located around the first light source is turned ON, to cause the second light source to be ON only for a period in which the common switch is ON and the second pixel switch is ON, to transmit the visible light signal. The second light source is, for example, a light source located adjacent to the first light source.
In the first and second pixel switch control steps, when the first light source transmits a symbol included in the visible light signal and the second light source transmits a symbol included in the visible light signal simultaneously, and the symbol transmitted from the first light source and the symbol transmitted from the second light source are the same, among a plurality of timings at which the first pixel switch and the second pixel switch are turned ON and OFF for transmission of the symbol, a timing at which a luminance rising edge unique to the symbol is obtained is adjusted to be the same for the first pixel switch and for the second pixel switch, and a remaining timing is adjusted to be different between the first pixel switch and the second pixel switch, and the average luminance of the entirety of the first light source and the second light source in a period in which the symbol is transmitted is matched with predetermined luminance.
This allows the spatially averaged luminance to have a steep rising edge only at the timing at which the luminance rising edge unique to the symbol is obtained, as in the pixel-to-pixel average luminance transition described above.
When the symbol “10” is transmitted from a predetermined pixel and the luminance of the predetermined pixel is a value intermediate between 25% and 75%, the transmitter increases or decreases an open interval of the pixel switch corresponding to the predetermined pixel, in a first period. Furthermore, the transmitter adjusts, in an opposite way, an open interval of the pixel switch in a second period (for example, a frame) temporally before or after the first period. Thus, errors can be reduced also by setting the open interval of the pixel switch in such a way that temporal average luminance of the entirety of the predetermined pixel including the first period and the second period does not change.
In other words, in the above-described first pixel switch control step of the transmitting method in this embodiment, a symbol included in the visible light signal is transmitted in the first period, a symbol included in the visible light signal is transmitted in the second period subsequent to the first period, and the symbol transmitted in the first period and the symbol transmitted in the second period are the same, for example. At this time, among a plurality of timings at which the first pixel switch is turned ON and OFF for transmission of the symbol, a timing at which a luminance rising edge unique to the symbol is obtained is adjusted to be the same in the first period and in the second period, and a remaining timing is adjusted to be different between the first period and the second period. The average luminance of the first light source in the entirety of the first period and the second period is matched with predetermined luminance. The first period and the second period may be a period for displaying a frame and a period for displaying the next frame, respectively. Furthermore, each of the first period and the second period may be a symbol period. Specifically, the first period and the second period may be a period for one symbol to be transmitted and a period for the next symbol to be transmitted, respectively.
This allows the temporally averaged luminance to have a steep rising edge only at the timing at which the luminance rising edge unique to the symbol is obtained, similarly to the pixel-to-pixel average luminance transition described above.
(Light Emission Control when Pixel Switch can be Driven at Double Speed)
When the pixel switch can be turned ON and OFF in a cycle that is one half of the symbol period, that is, when the pixel switch can be driven at double speed, the light emission pattern may be the same as that in the V4 PPM.
In other words, when the symbol period (a period in which a symbol is transmitted) is made up of four slots, the pixel switch control unit such as an LED driver circuit which controls the pixel switch is capable of controlling the pixel switch on a 2-slot basis. Specifically, the pixel switch control unit can keep the pixel switch ON for an arbitrary length of time in the 2-slot period from the beginning of the symbol period. Furthermore, the pixel switch control unit can keep the pixel switch ON for an arbitrary length of time in the 2-slot period from the beginning of the third slot in the symbol period.
Thus, in the transmitting method in this embodiment, the pixel value may be changed in a cycle that is one half of the above-described symbol period.
In this case, there is a risk that the level of precision of each switching of the pixel switch is lowered (the accuracy is reduced). Therefore, this is performed only when a transmission priority switch is ON so that a balance between the image quality and the quality of transmission can be set to the optimum.
(Blocks for Light Emission Control Based on Pixel Value Adjustment)
The image and video input unit 1911 outputs, to the Nx speed-up unit 1912, an image signal representing an image or video at a frame rate of 60 Hz, for example.
The Nx speed-up unit 1912 multiplies the frame rate of the image signal received from the image and video input unit 1911 by N (N>1), and outputs the resultant image signal. For example, the Nx speed-up unit 1912 multiplies the frame rate by 10 (N=10), that is, increases the frame rate to a frame rate of 600 Hz.
The common switch control unit 1913 switches the common switch based on images provided at the frame rate of 600 Hz. Likewise, the pixel switch control unit 1914 switches the pixel switch based on images provided at the frame rate of 600 Hz. Thus, as a result of the frame rate being increased by the Nx speed-up unit 1912, it is possible to prevent flicker caused by switching of a switch such as the common switch or the pixel switch. Furthermore, also when an image of the LED display is captured with an imaging device using a high-speed shutter, an image without defective pixels or flicker can be captured.
The pixel value adjustment unit 1916 copies the image received from the image and video input unit 1911, based on the symbol rate of the visible light signal, and adjusts the pixel values according to the above-described method. With this, the common switch control unit 1913 and the pixel switch control unit 1914 downstream of the pixel value adjustment unit 1916 can output the visible light signal without the luminance of the image or video being changed.
Furthermore, in the transmitting method in this embodiment, the process of displaying an image and the process of transmitting a visible light signal do not need to be performed at the same time, that is, these processes may be performed in separate periods, i.e., a signal transmission period and an image display period.
Specifically, in the above-described first pixel switch control step in this embodiment, the first pixel switch is ON for the signal transmission period in which the common switch is switched according to the luminance change pattern. Moreover, the transmitting method in this embodiment may further include an image display step of displaying a pixel in an image to be displayed, by (i) keeping the common switch ON for an image display period different from the signal transmission period and (ii) turning ON the first pixel switch in the image display period according to the image, to cause the first light source to be ON only for a period in which the common switch is ON and the first pixel switch is ON.
With this, the process of displaying an image and the process of transmitting a visible light signal are performed in mutually different periods, and thus it is possible to easily display the image and transmit the visible light signal.
(Timing of Changing Power Supply)
Although a signal OFF interval occurs when the power line is changed, the power line is changed in accordance with the transmission period of 4 PPM symbols; since the absence of light emission in the last part of a 4 PPM symbol does not affect signal reception, it is possible to change the power line without affecting the quality of signal reception.
Furthermore, it is possible to change the power line without affecting the quality of signal reception, by changing the power line in an LO period in the 4 PPM as well. In this case, it is also possible to maintain the maximum luminance at a high level when the signal is transmitted.
(Timing of Drive Operation)
In this embodiment, the LED display may be driven at the timings illustrated in the figures.
(Summary)
The transmitting method according to an aspect of the present disclosure is a transmitting method of transmitting a visible light signal by way of luminance change, and includes Step SC11 to Step SC13.
In Step SC11, a luminance change pattern is determined by modulating the visible light signal as in the above-described embodiments.
In Step SC12, a common switch for turning ON, in common, a plurality of light sources which are included in a light source group of a display and are each used for representing a pixel in an image is switched according to the luminance change pattern.
In Step SC13, a first pixel switch (that is, the pixel switch) for turning ON a first light source among the plurality of light sources included in the light source group is turned ON, to cause the first light source to be ON only for a period in which the common switch is ON and the first pixel switch is ON, to transmit the visible light signal.
A transmitting apparatus C10 according to an aspect of the present disclosure is a transmitting apparatus (or a transmitter) that transmits a visible light signal by way of luminance change, and includes a determination unit C11, a common switch control unit C12, and a pixel switch control unit C13. The determination unit C11 determines a luminance change pattern by modulating the visible light signal as in the above-described embodiments. Note that this determination unit C11 is included in the signal input unit 1915.
The common switch control unit C12 switches the common switch according to the luminance change pattern. This common switch is a switch for turning ON, in common, a plurality of light sources which are included in a light source group of a display and are each used for representing a pixel in an image.
The pixel switch control unit C13 turns ON a pixel switch which is for turning ON a light source to be controlled among the plurality of light sources included in the light source group, to cause the light source to be ON only for a period in which the common switch is ON and the pixel switch is ON, to transmit the visible light signal. Note that the light source to be controlled is the above-described first light source.
With this, a visible light signal can be properly transmitted from a display including a plurality of LEDs and the like as the light sources. Therefore, this enables communication between various devices including devices other than lightings. Furthermore, when the display is a display for displaying images under control of the common switch and the pixel switch, the visible light signal can be transmitted using the common switch and the pixel switch. Therefore, it is possible to easily transmit the visible light signal without a significant change in the structure for displaying images on the display (that is, the display device).
(Frame Configuration in Single Frame Transmission)
A transmission frame includes, as illustrated in (a) of the figure, a preamble (PRE), an ID length (IDLEN), an ID type (IDTYPE), content (ID/DATA), and a check code (CRC).
When a preamble such as that illustrated in (b) of the figure is used, the start of the frame can be reliably detected, facilitating the reception process.
It is possible to transmit variable-length content by selecting a length of the ID/DATA in the IDLEN, as illustrated in (c) of the figure.
The CRC is a check code for correcting or detecting an error in other parts than the PRE. The CRC length varies according to the length of a part to be checked so that the check ability can be kept at a certain level or higher. Furthermore, the use of a different check code depending on the length of a part to be checked allows an improvement in the check ability per CRC length.
(Frame Configuration in Multiple Frame Transmission)
A partition type (PTYPE) and a check code (CRC) are added to transmission data (BODY), resulting in Joined data. The Joined data is divided into a certain number of DATAPARTs to each of which a preamble (PRE) and an address (ADDR) are added before transmission.
The PTYPE (or a partition mode (PMODE)) indicates how the BODY is divided or what the BODY means. When the PTYPE is set to 2 bits as illustrated in (a) of the figure, four partition modes can be represented.
The CRC is a check code for checking the PTYPE and the BODY. The code length of the CRC varies according to the length of a part to be checked.
The preamble is determined as in the corresponding table.
The address is determined as in the corresponding table.
(Configuration of BODY Field)
When the BODY has a field configuration such as that in the illustration, it is possible to transmit an ID that is the same as or similar to that in the single frame transmission.
It is assumed that the same ID with the same IDTYPE represents the same meaning regardless of whether the transmission scheme is the single frame transmission or the multiple frame transmission and regardless of the combination of packets which are transmitted. This enables flexible signal transmission, for example, when data is continuously transmitted or when the length of time for reception is short.
The IDLEN indicates a length of the ID, and the remaining part is used to transmit PADDING. This part may be all 0 or 1, or may be used to transmit data that extends the ID, or may be a check code. The PADDING may be left-aligned.
The BODY may also have the field configurations illustrated in (b), (c), and (d) of the figure.
(PTYPE)
When the PTYPE has a predetermined value, the PTYPE indicates that the BODY is in the single frame compatible mode. With this, it is possible to transmit the same ID as that in the case of the single frame transmission.
For example, when PTYPE=00, the ID or IDTYPE corresponding to the PTYPE can be treated in the same or similar way as the ID or IDTYPE transmitted in the case of the single frame transmission. Thus, the management of the ID or IDTYPE can be facilitated.
When the PTYPE has another predetermined value, the PTYPE indicates that the BODY is in a data stream mode. At this time, all the combinations of the number of transmission frames and the DATAPART length can be used, and data having a different combination can be treated as having a different meaning. A bit of the PTYPE may indicate whether a different combination has the same meaning or a different meaning. This enables flexible selection of a transmitting method.
For example, when PTYPE=01, it is possible to transmit an ID having a size not defined in the single frame transmission. Furthermore, even when the ID corresponding to the PTYPE is the same as the ID in the single frame transmission, the ID corresponding to the PTYPE can be treated as an ID different from the ID in the single frame transmission. As a result, the number of representable IDs is increased.
(Field Configuration in Single Frame Compatible Mode)
With the setting that a signal can be transmitted only when the combination is in the table, other combinations can be determined to be reception errors, and thus the reception error rate can be reduced.
A transmitting method according to an aspect of the present disclosure is a transmitting method of transmitting a visible light signal by way of luminance change, and includes: determining a luminance change pattern by modulating the visible light signal; switching a common switch according to the luminance change pattern, the common switch being for turning ON a plurality of light sources in common, the plurality of light sources being included in a light source group of a display and each being for representing a pixel in an image; and turning ON a first pixel switch for turning ON a first light source, to cause the first light source to be ON only for a period in which the common switch is ON and the first pixel switch is ON, to transmit the visible light signal, the first light source being one of the plurality of light sources included in the light source group.
With this, a visible light signal can be properly transmitted from a display including a plurality of LEDs and the like as the light sources.
Furthermore, in the determining, the luminance change pattern may be determined for each symbol period, and in the turning ON of a first pixel switch, the first pixel switch may be switched in synchronization with the symbol period.
With this, even when the symbol period is 1/2400 seconds, for example, the visible light signal can be properly transmitted according to the symbol period.
Furthermore, in the turning ON of a first pixel switch, when the image is displayed on the display, the first pixel switch may be switched to increase a lighting period that corresponds to the first light source, by a length of time equivalent to a period in which the first light source is OFF for transmission of the visible light signal, the lighting period being a period for representing a pixel value of a pixel in the image. For example, the pixel value of the pixel in the image may be changed to increase the lighting period.
With this, even when the first light source is OFF for transmission of the visible light signal, images can be properly displayed with their original appearance, i.e., without breakup, because a supplementary lighting period is provided.
Furthermore, the pixel value may be changed in a cycle that is one half of the symbol period.
With this, it is possible to properly display an image and transmit a visible light signal.
Furthermore, the transmitting method may further include turning ON a second pixel switch for turning ON a second light source, to cause the second light source to be ON only for a period in which the common switch is ON and the second pixel switch is ON, to transmit the visible light signal, the second light source being included in the light source group and located around the first light source, and in the turning ON of a first pixel switch and in the turning ON of a second pixel switch, when the first light source transmits a symbol included in the visible light signal and the second light source transmits a symbol included in the visible light signal simultaneously, and the symbol transmitted from the first light source and the symbol transmitted from the second light source are the same, among a plurality of timings at which the first pixel switch and the second pixel switch are turned ON and OFF for transmission of the symbol, a timing at which a luminance rising edge unique to the symbol is obtained may be adjusted to be the same for the first pixel switch and for the second pixel switch, and a remaining timing may be adjusted to be different between the first pixel switch and the second pixel switch, and an average luminance of an entirety of the first light source and the second light source in a period in which the symbol is transmitted may be matched with predetermined luminance.
With this, the spatially averaged luminance of the first light source and the second light source has a steep rising edge only at the timing at which the luminance rising edge unique to the symbol is obtained, and thus the receiver can properly detect the symbol.
Furthermore, in the turning ON of a first pixel switch, when a symbol included in the visible light signal is transmitted in a first period, a symbol included in the visible light signal is transmitted in a second period subsequent to the first period, and the symbol transmitted in the first period and the symbol transmitted in the second period are the same, among a plurality of timings at which the first pixel switch is turned ON and OFF for transmission of the symbol, a timing at which a luminance rising edge unique to the symbol is obtained may be adjusted to be the same in the first period and in the second period, and a remaining timing may be adjusted to be different between the first period and the second period, and an average luminance of the first light source in an entirety of the first period and the second period may be matched with predetermined luminance.
With this, the temporally averaged luminance of the first light source has a steep rising edge only at the timing at which the luminance rising edge unique to the symbol is obtained, and thus the receiver can properly detect the symbol.
Furthermore, in the turning ON of a first pixel switch, the first pixel switch may be ON for a signal transmission period in which the common switch is switched according to the luminance change pattern, and the transmitting method may further include displaying a pixel in an image to be displayed, by (i) keeping the common switch ON for an image display period different from the signal transmission period and (ii) turning ON the first pixel switch in the image display period according to the image, to cause the first light source to be ON only for a period in which the common switch is ON and the first pixel switch is ON.
With this, the process of displaying an image and the process of transmitting a visible light signal are performed in mutually different periods, and thus it is possible to easily display the image and transmit the visible light signal.
The present embodiment specifically describes details and variations of a visible light signal in the above embodiments. Note that the trends of cameras are an increase in resolution (4K) and an increase in frame rate (60 fps). The frame scanning time decreases as the frame rate increases; as a result, the reception distance decreases and the reception time increases. Accordingly, a transmitter which transmits a visible light signal needs to shorten the packet transmission time. A decrease in the line scanning time increases the time resolution for reception. The exposure time is 1/8000 seconds. With 4 pulse position modulation (4 PPM), signal expression and dimming are performed simultaneously, and thus the signal density is low, resulting in low efficiency. In the present embodiment, therefore, the portion which needs to be received is shortened by separating a signal portion and a dimming portion in a visible light signal.
A visible light signal includes a plurality of combinations of a signal portion and a dimming portion.
A visible light signal includes data L (Data L), a preamble (Preamble), data R (Data R), and a dimming portion (Dimming). The signal portion is constituted by data L, the preamble, and data R.
The preamble alternately indicates high and low luminance values along the time axis. In other words, the preamble indicates a high luminance value for the time length P1, a low luminance value for the next time length P2, a high luminance value for the next time length P3, and a low luminance value for the next time length P4. Note that the time lengths P1 to P4 are each 100 μs, for example.
Data R alternately indicates high and low luminance values along the time axis, and is disposed immediately after the preamble. Specifically, data R indicates a high luminance value for the time length DR1, a low luminance value for the next time length DR2, a high luminance value for the next time length DR3, and a low luminance value for the next time length DR4. Note that the time lengths DR1 to DR4 are determined in accordance with an expression according to a signal to be transmitted. This expression is DRi = 120 + 20 × xi (i ∈ 1–4, xi ∈ 0–15). Note that the numbers 120 and 20 indicate time (μs); these values are examples.
Data L alternately indicates high and low luminance values along the time axis, and is disposed immediately before the preamble. Specifically, data L indicates a high luminance value for the time length DL1, a low luminance value for the next time length DL2, a high luminance value for the next time length DL3, and a low luminance value for the next time length DL4. Note that the time lengths DL1 to DL4 are determined in accordance with an expression according to a signal to be transmitted. This expression is DLi = 120 + 20 × (15 − xi). Note that the numbers 120 and 20 indicate time (μs), similarly to the above; these numbers are examples.
Note that a signal to be transmitted is constituted by 4×4=16 bits, and xi is a 4-bit signal among the signal to be transmitted. In a visible light signal, time lengths DR1 to DR4 in data R or time lengths DL1 to DL4 in data L each indicate the numerical value of the xi (4-bit signal). Among the 16 bits of the signal to be transmitted, 4 bits indicate addresses, 8 bits indicate data, and 4 bits are used for error detection.
Here, data R and data L have a complementary relation with regard to brightness. In other words, if the brightness of data R is high, the brightness of data L is low, and in contrast, if the brightness of data R is low, the brightness of data L is high. In other words, a sum of the total time length of data R and the total time length of data L is constant irrespective of a signal to be transmitted.
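The complementary encoding of data R and data L can be sketched as follows; times are in microseconds and the helper name is hypothetical.

```python
# Sketch of the time-length encoding: each 4-bit value xi determines a width
# in data R via DRi = 120 + 20 * xi and the complementary width in data L via
# DLi = 120 + 20 * (15 - xi). Times are in microseconds.

def encode_signal_portion(x):
    """x: list of four 4-bit values (address, data, error detection bits)."""
    assert len(x) == 4 and all(0 <= xi <= 15 for xi in x)
    data_r = [120 + 20 * xi for xi in x]
    data_l = [120 + 20 * (15 - xi) for xi in x]
    return data_l, data_r        # data L precedes the preamble, data R follows it

dl, dr = encode_signal_portion([3, 10, 0, 15])
print(dl, dr)
# Each DLi + DRi pair sums to 540 us, so data L and data R are complementary in
# brightness and the total time length is constant regardless of the signal.
print([a + b for a, b in zip(dl, dr)])
```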
A dimming portion is a signal for adjusting the brightness (luminance) of a visible light signal, and indicates a high luminance value for the time length C1 and a low luminance value for the next time length C2. The time lengths C1 and C2 are adjusted arbitrarily. Note that the dimming portion may or may not be included in the visible light signal.
The visible light signal in the present embodiment (the method used in the embodiment (data on one side)) has the maximum luminance of 82% which is higher than the maximum luminance of a visible light signal according to the standard from IEC, and has the minimum luminance of 18% which is lower than the minimum luminance of a visible light signal according to the standard from IEC. Note that the maximum luminance of 82% and the minimum luminance of 18% are numerical values obtained by a visible light signal in the present embodiment which includes only one of data R and data L.
Even if the angle of view is decreased, or in other words, even if the distance from a transmitter which transmits a visible light signal to a receiver is increased, more packets are received with the visible light signal in the present embodiment (the method used in the embodiment (both)) than with the visible light signal according to the standard from IEC, thus achieving higher reliability. Note that the numerical values of the method used in the embodiment (both) are obtained by a visible light signal in the present embodiment which includes both data R and data L.
With the visible light signal (IEEE) in the present embodiment, the number of received packets is greater than that achieved with the visible light signal according to the IEC standard, regardless of the noise level (noise variance), thus achieving higher reliability.
With the visible light signal (IEEE) in the present embodiment, the number of received packets is greater than that achieved with the visible light signal according to the standard from IEC over a wide range of the receiver side clock error, thus achieving higher reliability. Note that the receiver side clock error is an error in timing at which exposure of an exposure line of an image sensor included in a receiver starts.
The signal to be transmitted includes four 4-bit signals (xi) (4×4=16 bits) as described above. For example, a signal to be transmitted includes signals x1 to x4. The signal x1 is constituted by bits x11 to x14, and the signal x2 is constituted by bits x21 to x24. The signal x3 is constituted by bits x31 to x34, and the signal x4 is constituted by bits x41 to x44. Here, bits x11, x21, x31, and x41 are prone to error, and the other bits are not prone to error. In view of this, bits x42 to x44 included in the signal x4 are used as parity for bit x11 of the signal x1, bit x21 of the signal x2, and bit x31 of the signal x3, respectively, and bit x41 included in the signal x4 is not used and indicates 0 at all times. The corresponding expression is illustrated in the drawings.
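A minimal sketch of this protection scheme follows; treating "parity for a single bit" as a copy of that bit (a repetition code) is an assumption, since the text does not define the parity operation.

```python
# Hypothetical sketch: bits x42..x44 of signal x4 protect the error-prone
# bits x11, x21, x31, and x41 is fixed to 0. Copying each protected bit
# is an assumption about the parity operation.

def build_x4(x1, x2, x3):
    """x1..x3 are 4-bit signals; bit 1 (taken as the MSB here) is the
    error-prone bit of each signal."""
    x11 = (x1 >> 3) & 1
    x21 = (x2 >> 3) & 1
    x31 = (x3 >> 3) & 1
    return (0 << 3) | (x11 << 2) | (x21 << 1) | x31  # x41 is always 0

print(f"{build_x4(0b1010, 0b0111, 0b1100):04b}")  # -> 0101
```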
The receiver sequentially obtains signal portions of the visible light signal described above. Each signal portion includes a 4-bit address (Addr) and 8-bit data (Data). The receiver combines the data of the signal portions to generate an ID constituted by a plurality of data portions, and a parity constituted by one or more data portions.
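The reassembly on the receiver side can be pictured with the sketch below, assuming the 4-bit addresses order the 8-bit data portions; the dictionary layout is illustrative.

```python
# Illustrative sketch of combining received signal portions
# (4-bit address, 8-bit data) into an ID; ordering by address follows
# the text, everything else is an assumption.

def assemble_id(portions):
    """portions: iterable of (addr, data) pairs, addr in 0..15, data in 0..255."""
    by_addr = {addr: data for addr, data in portions}
    value = 0
    for addr in sorted(by_addr):
        value = (value << 8) | by_addr[addr]  # concatenate data portions
    return value

print(hex(assemble_id([(1, 0xCD), (0, 0xAB)])))  # -> 0xabcd
```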
[Variation 1]
In the visible light signal in Variation 1, a signal to be transmitted may be represented only by the time length indicating the low luminance value, similarly to the example illustrated in part (a) of the drawing.
The visible light signal in this variation may include a preamble and data as illustrated in the drawings.
Here, the time length D2i−1+D2i is determined in accordance with the expression according to a signal to be transmitted. In other words, a sum of the time length indicating the high luminance value and the time length indicating the low luminance value following the high luminance value is determined in accordance with the expression. This expression is, for example, D2i−1+D2i=100+20×xi (i∈1−N, xi∈0-7, D2i>50 μs, D2i+1>50 μs).
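A hedged sketch of this pairwise encoding, where only the sum D2i−1 + D2i carries the value xi, might look as follows; the even split between the high and low parts is an arbitrary illustrative choice.

```python
# Sketch: the value xi (0..7) is carried by the sum of one high-luminance
# length and the following low-luminance length; the split between the two
# is free as long as each part stays around the 50 us floor noted above.

BASE_US, STEP_US = 100, 20

def encode_pairs(xs):
    """Yield (high_us, low_us) pairs for values xi in 0..7."""
    for x in xs:
        total = BASE_US + STEP_US * x  # D(2i-1) + D(2i)
        high = total // 2              # arbitrary split for illustration
        yield high, total - high

print(list(encode_pairs([0, 3, 7])))  # [(50, 50), (80, 80), (120, 120)]
```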
A signal generation apparatus generates a visible light signal using a method for generating a visible light signal in this variation. According to the method for generating a visible light signal in this variation, a packet is modulated (i.e., converted) into the above signal wi to be transmitted. Note that the signal generation apparatus may be or may not be included in the transmitters according to the above embodiments.
For example, the signal generation apparatus converts a packet into a signal to be transmitted which includes the numerical values indicated by the signs w1, w2, w3, and w4, as illustrated in the drawings.
Here, in each of the signs w1 to w4, the value of the first bit is b1, the value of the second bit is b2, and the value of the third bit is b3. Note that b1, b2, and b3 are 0 or 1. In this case, the numerical values W1 to W4 indicated by the signs w1 to w4 are each b1 × 2^0 + b2 × 2^1 + b3 × 2^2, for example.
A packet includes address data (A1 to A4) constituted by 0 to 4 bits, main data Da (Da1 to Da7) constituted by 4 to 7 bits, sub-data Db (Db1 to Db4) constituted by 3 to 4 bits, and the value (S) of a stop bit, as data. Note that Da1 to Da7, A1 to A4, Db1 to Db4, and S each indicate the value of the bit, that is, 0 or 1.
Specifically, when a packet is modulated into a signal to be transmitted, the signal generation apparatus assigns the data included in the packet to the bits of the signs w1, w2, w3, and w4. Accordingly, the packet is converted into a signal to be transmitted which includes the numerical values indicated by the signs w1, w2, w3, and w4.
When the signal generation apparatus assigns data included in a packet, specifically, the signal generation apparatus assigns at least a portion (Da1 to Da4) of main data Da included in the packet to a first bit string which includes first bits (bit 1) of the signs w1 to w4. Furthermore, the signal generation apparatus assigns the value (S) of the stop bit included in the packet to the second bit (bit 2) of the sign w1. Furthermore, the signal generation apparatus assigns a portion (Da5 to Da7) of main data Da included in a packet or at least a portion (A1 to A3) of address data included in the packet to a second bit string which includes the second bits (bit 2) of the signs w2 to w4. Furthermore, the signal generation apparatus assigns at least a portion (Db1 to Db3) of sub-data Db included in the packet, and a portion (Db4) of the sub-data Db or a portion (A4) of address data to a third bit string which includes third bits (bit 3) of the signs w1 to w4.
Note that if all the third bits (bit 3) of the signs w1 to w4 are 0, the numerical values indicated by the signs remain 3 or less, according to "b1 × 2^0 + b2 × 2^1 + b3 × 2^2" stated above. Accordingly, with the expression DRi = 120 + 30 × wi (i ∈ 1-4, wi ∈ 0-7) illustrated in the drawings, the time lengths defined by the signs are kept short.
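The relationship between a sign's bits, its value Wi, and the resulting time length can be sketched as follows, using the expression DRi = 120 + 30 × wi from this variation.

```python
# Sketch: Wi = b1*2**0 + b2*2**1 + b3*2**2, and the time length for a sign
# is 120 + 30*Wi microseconds. With the third bit forced to 0 (short mode),
# Wi is at most 3, so no time length exceeds 120 + 30*3 = 210 us.

def sign_value(b1, b2, b3):
    return b1 * 2**0 + b2 * 2**1 + b3 * 2**2

def time_length_us(w):
    return 120 + 30 * w

for bits in [(0, 0, 0), (1, 1, 0), (1, 1, 1)]:
    w = sign_value(*bits)
    print(bits, w, time_length_us(w))
```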
The signal generation apparatus according to this variation determines whether to divide source data, according to the bit length of the source data. The signal generation apparatus generates at least one packet from the source data, by performing processing according to the result of the determination. Specifically, the signal generation apparatus divides source data into a larger number of packets, as the bit length of the source data is longer. Conversely, the signal generation apparatus generates a packet without dividing source data, if the bit length of the source data is shorter than a predetermined bit length.
When the signal generation apparatus generates one or more packets from source data, the signal generation apparatus converts each of the one or more packets into a signal to be transmitted as described above, namely, signs w1 to w4.
Note that in the drawings, the notation on top in each block indicates a label for identifying, for instance, source data, main source data, sub-source data, start bit, and address data. The central numerical value in each block indicates a bit size (number of bits), and the numerical value on the bottom is the value of the bits.
For example, if the bit length of source data (Data) is 7 bits, the signal generation apparatus generates one packet, without dividing the source data. Specifically, source data includes 4-bit main source data Dataa (Da1 to Da4), and 3-bit sub-source data Datab (Db1 to Db3) as main data Da(1) and sub-data Db(1), respectively. In this case, the signal generation apparatus generates a packet by adding, to the source data, a start bit S (S=1) and address data (A1 to A4) constituted by 4 bits and indicating “0000”. Note that the start bit S=1 indicates that a packet which includes the start bit is a packet at the end.
The signal generation apparatus generates, by converting the packet, the sign w1=(Da1, S=1, Db1), the sign w2=(Da2, A1=0, Db2), the sign w3=(Da3, A2=0, Db3), and the sign w4=(Da4, A3=0, A4=0). Furthermore, the signal generation apparatus generates a signal to be transmitted which includes the numerical values W1, W2, W3, and W4 indicated by the signs w1, w2, w3, and w4, respectively.
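This single-packet conversion can be sketched directly from the layout above; the field names mirror the text and the bit tuples are (bit 1, bit 2, bit 3).

```python
# Sketch of converting an end packet into signs w1..w4, following the
# layout above: w1=(Da1,S,Db1), w2=(Da2,A1,Db2), w3=(Da3,A2,Db3),
# w4=(Da4,A3,A4).

def end_packet_to_signs(Da, A, Db, S=1):
    """Da: 4 main-data bits, A: 4 address bits, Db: 3 sub-data bits."""
    return [
        (Da[0], S,    Db[0]),
        (Da[1], A[0], Db[1]),
        (Da[2], A[1], Db[2]),
        (Da[3], A[2], A[3]),
    ]

signs = end_packet_to_signs(Da=[1, 0, 1, 1], A=[0, 0, 0, 0], Db=[1, 1, 0])
W = [b1 + 2 * b2 + 4 * b3 for b1, b2, b3 in signs]  # numerical values Wi
print(signs, W)
```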
Note that in this variation, wi is represented as a 3-bit sign and also as a decimal numerical value. Thus, in this variation, for ease of description, wi (w1 to w4) used as decimal numerical values are represented as the numerical values Wi (W1 to W4).
For example, if the bit length of source data (Data) is 16 bits, the signal generation apparatus generates two intermediate data by dividing the source data. Specifically, the source data includes 10-bit main source data Dataa and 6-bit sub-source data Datab. In this case, the signal generation apparatus generates first intermediate data which includes main source data Dataa and 1-bit parity for the main source data Dataa, and second intermediate data which includes sub-source data Datab and 1-bit parity for the sub-source data Datab.
Next, the signal generation apparatus divides the first intermediate data into 7-bit main data Da(1) and 4-bit main data Da(2). Furthermore, the signal generation apparatus divides the second intermediate data into 4-bit sub-data Db(1), and 3-bit sub-data Db(2). Note that the main data is a portion among a plurality of portions which constitute data which includes main source data and parity. Similarly, sub-data is a portion among a plurality of portions which constitute data which includes sub-source data and parity.
Next, the signal generation apparatus generates a 12-bit first packet which includes the start bit S (S=0), main data Da(1), and sub-data Db(1). The signal generation apparatus thus generates the first packet which does not include address data.
Furthermore, the signal generation apparatus generates a 12-bit second packet which includes the start bit S (S=1), 4-bit address data indicating “1000”, main data Da(2), and sub-data Db(2). Note that the start bit S=0 indicates that, among a plurality of packets generated, a packet which includes the start bit 0 is a packet that is not at the end. The start bit S=1 indicates that, among a plurality of packets generated, a packet which includes the start bit 1 is a packet at the end.
In this manner, the source data is divided into the first packet and the second packet.
The signal generation apparatus generates sign w1=(Da1, S=0, Db1), sign w2=(Da2, Da7, Db2), sign w3=(Da3, Da6, Db3), and sign w4=(Da4, Da5, Db4), by converting the first packet. Furthermore, the signal generation apparatus generates a signal to be transmitted which includes numerical values W1, W2, W3, and W4 indicated by the signs w1, w2, w3, and w4, respectively.
Furthermore, the signal generation apparatus generates sign w1=(Da1, S=1, Db1), sign w2=(Da2, A1=1, Db2), sign w3=(Da3, A2=0, Db3), and sign w4=(Da4, A3=0, A4=0) by converting the second packet. Furthermore, the signal generation apparatus generates a signal to be transmitted which includes the numerical values W1, W2, W3, and W4 indicated by the signs w1, w2, w3, and w4, respectively.
For example, if the bit length of source data (Data) is 17 bits, the signal generation apparatus generates two intermediate data by dividing the source data. Specifically, the source data includes 10-bit main source data Dataa and 7-bit sub-source data Datab. In this case, the signal generation apparatus generates first intermediate data which includes main source data Dataa and 6-bit parity for the main source data Dataa. Furthermore, the signal generation apparatus generates second intermediate data which includes sub-source data Datab and 4-bit parity for the sub-source data Datab. For example, the signal generation apparatus generates parity by cyclic redundancy check (CRC).
Next, the signal generation apparatus divides the first intermediate data into main data Da(1), which includes the 6-bit parity, 6-bit main data Da(2), and 4-bit main data Da(3). Furthermore, the signal generation apparatus divides the second intermediate data into sub-data Db(1), which includes the 4-bit parity, 4-bit sub-data Db(2), and 3-bit sub-data Db(3).
Next, the signal generation apparatus generates a 12-bit first packet which includes the start bit S (S=0), 1-bit address data indicating “0”, main data Da(1), and sub-data Db(1). Furthermore, the signal generation apparatus generates a 12-bit second packet which includes the start bit S (S=0), 1-bit address data indicating “1”, main data Da(2), and sub-data Db(2). Furthermore, the signal generation apparatus generates a 12-bit third packet which includes the start bit S (S=1), 4-bit address data indicating “0100”, main data Da(3), and sub-data Db(3).
Accordingly, the source data is divided into the first packet, the second packet, and the third packet.
The signal generation apparatus generates sign w1=(Da1, S=0, Db1), sign w2=(Da2, A1=0, Db2), sign w3=(Da3, Da6, Db3), and sign w4=(Da4, Da5, Db4) by converting the first packet. Furthermore, the signal generation apparatus generates a signal to be transmitted which includes the numerical values W1, W2, W3, and W4 indicated by the signs w1, w2, w3, and w4, respectively.
Similarly, the signal generation apparatus generates sign w1=(Da1, S=0, Db1), sign w2=(Da2, A1=1, Db2), sign w3=(Da3, Da6, Db3), and sign w4=(Da4, Da5, Db4) by converting the second packet. Furthermore, the signal generation apparatus generates a signal to be transmitted which includes the numerical values W1, W2, W3, and W4 indicated by the signs w1, w2, w3, and w4, respectively.
Similarly, the signal generation apparatus generates sign w1=(Da1, S=1, Db1), sign w2=(Da2, A1=0, Db2), sign w3=(Da3, A2=1, Db3), and sign w4=(Da4, A3=0, A4=0) by converting the third packet. Furthermore, the signal generation apparatus generates a signal to be transmitted which includes the numerical values W1, W2, W3, and W4 indicated by the signs w1, w2, w3, and w4, respectively.
Although 6-bit or 4-bit parity is generated by CRC in the example described above, 1-bit parity may be generated instead.
In this case, if the bit length of source data (Data) is 25 bits, the signal generation apparatus generates two intermediate data by dividing the source data. Specifically, the source data includes 15-bit main source data Dataa and 10-bit sub-source data Datab. In this case, the signal generation apparatus generates first intermediate data which includes main source data Dataa and 1-bit parity for the main source data Dataa, and second intermediate data which includes sub-source data Datab and 1-bit parity for the sub-source data Datab.
Next, the signal generation apparatus divides the first intermediate data into 6-bit main data Da(1) which includes parity, 6-bit main data Da(2), and 4-bit main data Da(3). Furthermore, the signal generation apparatus divides the second intermediate data into 4-bit sub-data Db(1) which includes parity, 4-bit sub-data Db(2), and 3-bit sub-data Db(3).
Next, the signal generation apparatus generates the first packet, the second packet, and the third packet from the first intermediate data and the second intermediate data, similarly to the example described above.
In another example, if the bit length of source data (Data) is 22 bits, the signal generation apparatus generates two intermediate data by dividing the source data.
Specifically, the source data includes 15-bit main source data Dataa and 7-bit sub-source data Datab. The signal generation apparatus generates first intermediate data which includes main source data Dataa, and 1-bit parity for the main source data Dataa. Furthermore, the signal generation apparatus generates 4-bit parity for the entirety of the main source data Dataa and the sub-source data Datab by performing CRC on the entirety of the main source data Dataa and the sub-source data Datab. The signal generation apparatus generates second intermediate data which includes the sub-source data Datab and the 4-bit parity.
Next, the signal generation apparatus divides the first intermediate data into 6-bit main data Da(1), which includes the parity, 6-bit main data Da(2), and 4-bit main data Da(3). Furthermore, the signal generation apparatus divides the second intermediate data into 4-bit sub-data Db(1), 4-bit sub-data Db(2), which includes a portion of the CRC parity, and 3-bit sub-data Db(3), which includes the remainder of the CRC parity.
Next, the signal generation apparatus generates the first packet, the second packet, and the third packet from the first intermediate data and the second intermediate data, similarly to the example described above.
Note that among the specific examples of the processing of dividing source data into three described above, the processing to be used may be selected as appropriate.
The signal generation apparatus divides source data into four or five in the same manner as the processing of dividing source data into three described above.
For example, if the bit length of source data (Data) is 31 bits, the signal generation apparatus generates two intermediate data by dividing the source data. Specifically, the source data includes 16-bit main source data Dataa and 15-bit sub-source data Datab. In this case, the signal generation apparatus generates first intermediate data which includes main source data Dataa and 8-bit parity for the main source data Dataa. Furthermore, the signal generation apparatus generates second intermediate data which includes sub-source data Datab and 8-bit parity for the sub-source data Datab. For example, the signal generation apparatus generates parity by Reed-Solomon coding.
Here, if 4 bits are handled as one symbol in Reed-Solomon coding, the bit lengths of main source data Dataa and sub-source data Datab need to be integral multiples of 4 bits. However, the sub-source data Datab is, as described above, 15-bit data, which is 1 bit less than 16 bits, an integral multiple of 4 bits.
Thus, when the signal generation apparatus is to generate the second intermediate data, the signal generation apparatus pads sub-source data Datab, and generates, by Reed-Solomon coding, 8-bit parity for the 16-bit sub-source data Datab which has been padded.
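The padding step can be pictured as below; zero-padding is an assumption, since the text only says the data is padded to a whole number of 4-bit symbols.

```python
# Hedged sketch of symbol alignment for Reed-Solomon parity with 4-bit
# symbols: data whose bit length is not a multiple of 4 is padded first.
# Zero-padding is an assumption, not stated in the text.

def pad_to_symbol(bits, symbol_size=4):
    """Append padding bits until len(bits) is a multiple of symbol_size."""
    remainder = len(bits) % symbol_size
    if remainder:
        bits = bits + [0] * (symbol_size - remainder)
    return bits

datab = [1] * 15              # 15-bit sub-source data
padded = pad_to_symbol(datab) # now 16 bits = four 4-bit symbols
print(len(padded))            # -> 16
```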
Next, the signal generation apparatus divides each of the first intermediate data and the second intermediate data into six portions (4 bits or 3 bits), using a technique similar to those described above. The signal generation apparatus generates a first packet which includes a start bit, 3-bit or 4-bit address data, first main data, and first sub-data. The signal generation apparatus generates second to sixth packets in the same manner.
For example, if the bit length of source data (Data) is 39 bits, the signal generation apparatus generates two intermediate data by dividing the source data. Specifically, the source data includes 20-bit main source data Dataa, and 19-bit sub-source data Datab. In this case, the signal generation apparatus generates first intermediate data which includes main source data Dataa, and 4-bit parity for the main source data Dataa, and generates second intermediate data which includes sub-source data Datab, and 4-bit parity for the sub-source data Datab. For example, the signal generation apparatus generates parity by CRC.
Next, the signal generation apparatus divides each of the first intermediate data and the second intermediate data into six portions (4 bits or 3 bits), using a technique similar to those described above. Then, the signal generation apparatus generates a first packet which includes the start bit, 3-bit or 4-bit address data, first main data, and first sub-data. The signal generation apparatus generates second to sixth packets in the same manner.
Note that among the specific examples of the processing of dividing source data into six, seven, or eight described above, the processing to be used may be selected as appropriate.
For example, if the bit length of source data (Data) is 55 bits, the signal generation apparatus generates nine packets, namely first to ninth packets, by dividing the source data. Note that the first intermediate data and the second intermediate data are omitted in the drawing.
Specifically, the bit length of the source data (Data) is 55 bits, and is 1 bit less than 56 bits that are integral multiples of 4 bits. Accordingly, the signal generation apparatus pads the source data, and generates, by Reed-Solomon coding, parity (16 bits) for the 56-bit source data which has been padded.
Next, the signal generation apparatus divides the entire data, which includes the 16-bit parity and the 55-bit source data, into nine data DaDb(1) to DaDb(9).
Each of data DaDb(k) includes the k-th 4-bit portion included in the main source data Dataa, and the k-th 4-bit portion included in the sub-source data Datab. Note that k is an integer from among 1 to 8. Data DaDb(9) includes the ninth 4-bit portion included in the main source data Dataa and the ninth 3-bit portion included in the sub-source data Datab.
Next, the signal generation apparatus generates the first to ninth packets by adding the start bit S and address data to each of nine data DaDb(1) to DaDb(9).
For example, if the bit length of source data (Data) is 7×(N−2) bits, the signal generation apparatus generates N packets, namely, the first to Nth packets, by dividing the source data. Note that N is an integer from among 10 to 16.
Specifically, the signal generation apparatus generates parity (14 bits) for the source data which includes 7×(N−2) bits, by Reed-Solomon coding. Note that 7 bits are handled as one symbol in Reed-Solomon coding.
Next, the signal generation apparatus divides the entire data, which includes the 14-bit parity and the source data constituted by 7×(N−2) bits, into N data, namely, DaDb(1) to DaDb(N).
Each of data DaDb(k) includes the k-th 4-bit portion included in the main source data Dataa and the k-th 3-bit portion included in the sub-source data Datab. Note that k is an integer from among 1 to N.
Next, the signal generation apparatus generates first to Nth packets by adding the start bit S and address data to each of N data, namely DaDb(1) to DaDb(N).
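This N-packet case can be sketched end to end as below; placing the main portions before the sub portions in the entire data and using a 4-bit address are illustrative assumptions.

```python
# Hedged sketch of the N-packet interleaving above: entire data
# (source data plus 14-bit parity) is split into N seven-bit chunks
# DaDb(k), each made of a 4-bit main portion and a 3-bit sub portion,
# and a start bit and address are prepended.

def make_packets(entire_bits, n):
    """entire_bits: list of 7*n bits (source data plus parity)."""
    assert len(entire_bits) == 7 * n
    main = entire_bits[: 4 * n]      # assumed layout: main portions first
    sub = entire_bits[4 * n :]
    packets = []
    for k in range(n):
        da = main[4 * k : 4 * k + 4]          # k-th 4-bit main portion
        db = sub[3 * k : 3 * k + 3]           # k-th 3-bit sub portion
        s = 1 if k == n - 1 else 0            # start bit: 1 only at the end
        addr = [int(b) for b in f"{k:04b}"]   # 4-bit address (assumption)
        packets.append([s] + addr + da + db)  # 12-bit packet
    return packets

pkts = make_packets([0, 1] * 35, n=10)  # 70 bits -> 10 packets
print(len(pkts), pkts[0])
```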
This variation includes a short mode and a full mode. In the case of the short mode, sub-data in a packet indicates 0, and all the bits of the third bit string described above are 0.
On the other hand, in the case of the full mode, at least one of the bits of the third bit string described above indicates 1.
In this variation, if the division count is small, a visible light signal in the short mode can be generated, as illustrated in the drawings.
The method for generating a visible light signal in the present embodiment is a method for generating a visible light signal transmitted by changing luminance of a light source included in a transmitter, and includes steps SD1 to SD3.
In step SD1, a preamble which is data in which first and second luminance values that are different values alternately appear along the time axis is generated.
In step SD2, in the data in which the first and second luminance values appear alternately along the time axis, first data is generated by determining time lengths in which the first and second luminance values are maintained, in accordance with a first method according to a signal to be transmitted.
Lastly, in step SD3, a visible light signal is generated by combining the preamble and the first data.
For example, the first data corresponds to data R described above.
The method for generating the visible light signal may further include: generating second data that has a complementary relation with regard to brightness represented by the first data, by determining time lengths in which the first and second luminance values are maintained in data in which the first and second luminance values alternately appear along the time axis, in accordance with a second method according to a signal to be transmitted; and when the visible light signal is generated, generating the visible light signal by combining the preamble, the first data, and the second data in the order of the first data, the preamble, and the second data.
For example, the second data corresponds to data L described above.
Furthermore, when a and b denote constants, a numerical value included in the signal to be transmitted is denoted by n, and a constant which is the maximum value of the numerical value n is denoted by m, the first method may be a method of determining, based on a+b×n, a time length in which the first or second luminance value is maintained in the first data, and the second method may be a method of determining, based on a+b×(m−n), a time length in which the first or second luminance value is maintained in the second data.
For example, a is 120 μs, b is 20 μs, n is an integer (the numerical value indicated by signal xi) from among 0 to 15, and m is 15, as in the expressions described above.
In the complementary relation, a sum of a time length of the entire first data and a time length of the entire second data may be constant.
The method for generating the visible light signal may further include: generating a dimming portion which is data for adjusting the brightness represented by the visible light signal; and when the visible light signal is to be generated, generating the visible light signal by further combining the dimming portion.
The dimming portion is a signal (Dimming) which indicates a high luminance value for a time length C1, and a low luminance value for a time length C2, as described above.
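As a simple worked illustration, the average brightness contributed by the dimming portion is its high-time fraction, so adjusting C1 and C2 tunes the luminance without changing the data portions; the values below are examples.

```python
# Illustrative sketch: the dimming portion spends C1 at the high luminance
# value and C2 at the low one, so its duty cycle C1 / (C1 + C2) sets the
# perceived brightness.

def dimming_duty(c1_us, c2_us):
    """Fraction of time at the high luminance value."""
    return c1_us / (c1_us + c2_us)

print(dimming_duty(300, 100))  # -> 0.75 (brighter)
print(dimming_duty(100, 300))  # -> 0.25 (dimmer)
```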
A signal generation apparatus D10 according to the present embodiment is a signal generation apparatus which generates a visible light signal that is transmitted by changing luminance of a light source included in a transmitter, and includes a preamble generation unit D11, a data generation unit D12, and a combining unit D13. The preamble generation unit D11 generates a preamble which is data in which first and second luminance values that are different values alternately appear along the time axis for a predetermined time length.
The data generation unit D12 generates first data by determining, in accordance with a first method according to a signal to be transmitted, time lengths in which the first and second luminance values are maintained in data in which the first and second luminance values appear alternately along the time axis.
The combining unit D13 generates a visible light signal by combining a preamble and the first data.
By transmitting a visible light signal thus generated, the higher reliability described above can be achieved.
As in Variation 1 of Embodiment 20, the generation method for generating the visible light signal may further include: determining whether to divide source data according to the bit length of the source data; and generating one or more packets from the source data by performing processing according to the result of the determination. The one or more packets may be each converted into a signal to be transmitted.
In conversion to a signal to be transmitted, the data included in a target packet is assigned to the bits of the signs w1 to w4 described above.
When assigning the data, at least a portion of main data included in a target packet is assigned to the first bit string constituted by the first bits of the signs w1 to w4. The value of the stop bit included in the target packet is assigned to the second bit of the sign w1. A portion of main data included in the target packet or at least a portion of address data included in the target packet is assigned to the second bit string constituted by the second bits of the signs w2 to w4. Sub-data included in the target packet is assigned to the third bit string constituted by the third bits of the signs w1 to w4.
Here, the stop bit indicates whether the target packet, among the one or more generated packets, is at the end. The address data indicates, as an address, the position in order of the target packet among the one or more generated packets. The main data and the sub-data are for restoring the source data.
When a and b denote constants, and W1, W2, W3, and W4 denote the numerical values indicated by the signs w1, w2, w3, and w4, the first method described above is a method for determining the time lengths in which the first or second luminance value is maintained in the first data, based on a + b × W1, a + b × W2, a + b × W3, and a + b × W4.
For example, in each of the signs w1 to w4, the value of the first bit is b1, the value of the second bit is b2, and the value of the third bit is b3. In this case, each of the values W1 to W4 indicated by the signs w1 to w4 is, for example, b1 × 2^0 + b2 × 2^1 + b3 × 2^2. Accordingly, for the signs w1 to w4, the values W1 to W4 are greater when the second bit is 1 than when the first bit is 1, and greater when the third bit is 1 than when the second bit is 1. If the values W1 to W4 indicated by the signs w1 to w4 are great, the time lengths (for example, DRi) in which the above-mentioned first and second luminance values are maintained are increased; thus, the luminance of a visible light signal is prevented from being incorrectly detected, and errors in reception can be reduced. On the contrary, when the values W1 to W4 indicated by the signs w1 to w4 are small, the time lengths in which the above-mentioned first and second luminance values are maintained are decreased, and thus incorrect detection of the luminance of a visible light signal occurs comparatively easily.
In view of this, in Variation 1 of Embodiment 20, the stop bit and address which are important in order to receive source data are preferentially assigned to the second bits of the signs w1 to w4, thus error in reception can be reduced. The sign w1 defines a time length in which a high or low luminance value closest to the preamble is maintained. In other words, the sign w1 is closer to the preamble than the other signs w2 to w4, and thus is likely to be received more appropriately than the other signs. In view of this, in Variation 1 of Embodiment 20, error in reception can be further suppressed by assigning a stop bit to the second bit of the sign w1.
In Variation 1 of Embodiment 20, main data is preferentially assigned to the first bit string, for which incorrect detection tends to occur comparatively easily. However, if an error correcting code (parity) is included in the main data, errors in reception of the main data can be suppressed.
Furthermore, in Variation 1 of Embodiment 20, sub-data is assigned to the third bit string constituted by the third bits of the signs w1 to w4. Thus, if sub-data is 0, time lengths in which the high and low luminance values defined by the signs w1 to w4 are maintained can be greatly shortened. As a result, a so-called short mode which greatly reduces time for transmitting a visible light signal per one packet can be achieved. In such a short mode, a transmission time is short as described above, and thus a packet can be readily received even from a distance. Accordingly, the communication range for a visible light communication can be increased.
According to Variation 1 of Embodiment 20, a packet which is not at the end (for example, Packet 1 in the examples described above) does not need to include address data.
Accordingly, if source data is divided into two packets, address data is unnecessary for a packet which is not at the end, namely, the first packet, as long as a start bit (S=0) is included in the packet. Thus, all the bits of the second bit string can be used for main data, and the amount of data included in the packet can be increased.
When data is assigned in Variation 1 of Embodiment 20, among the three bits included in the second bit string, a bit on the leading side in the arrangement order is preferentially used for assigning address data, and if the entire address data is assigned to one or two bits on the leading side of the second bit string, a portion of main data is assigned to the 1 or 2 bits in the second bit string to which address data is not assigned. For example, in Packet 1 in the example of dividing source data into three described above, the address bit A1 is assigned to the leading bit of the second bit string, and the main data bits Da6 and Da5 are assigned to the remaining bits.
Accordingly, the second bit string can be shared by the address data and a portion of the main data, and thus the flexibility of a packet configuration can be increased.
When data in Variation 1 of Embodiment 20 is assigned, if the entire address data cannot be assigned to the second bit string, the remaining portion of the address data other than the portion assigned to the second bit string is assigned to a bit of the third bit string. For example, the entirety of the 4-bit address data A1 to A4 cannot be assigned to the second bit string in Packet 3 in the example described above; thus, the address bit A4 is assigned to the third bit of the sign w4, which is a bit of the third bit string.
In this manner, address data can be assigned appropriately to the signs w1 to w4.
When data is assigned in Variation 1 of Embodiment 20, if a packet at the end among one or more packets is converted into a signal to be transmitted as a target packet, address data is assigned to the second bit string and any one bit included in the third bit string. For example, the address data of the packet at the end is constituted by the 4 bits A1 to A4; A1 to A3 are assigned to the second bit string, and A4 is assigned to a bit of the third bit string.
Accordingly, address data can be appropriately assigned to the signs w1 to w4.
In Variation 1 of Embodiment 20, when generating one or more packets, two divided source data are generated by dividing the source data into two, and error correcting codes for the two divided source data are generated. Two or more packets are generated using the two divided source data and the error correcting codes generated for the two divided source data. When the error correcting codes for the two pieces of divided source data are generated, if the number of bits of either of the two divided source data is less than the number of bits for generating an error correcting code, that divided source data is padded, and an error correcting code for the padded divided source data is generated. For example, as described above, 15-bit divided source data is padded to 16 bits, and an error correcting code is generated for the padded data by Reed-Solomon coding.
Accordingly, even if the number of bits of divided source data is less than the number of bits for generating an error correcting code, an appropriate error correcting code can be generated.
When data is assigned in Variation 1 of Embodiment 20, if sub-data indicates 0, 0 is assigned to all the bits included in the third bit string. Accordingly, the short mode described above can be achieved, and a communication range for a visible light communication can be increased.
When a receiver is to receive a high frequency visible light signal, the receiver adds guard time (guard intervals) to the portions where the visible light signal rises and falls, as illustrated in part (a) of the drawing.
When the receiver separates a high frequency signal indicating a high luminance value and a high frequency signal indicating a low luminance value from a high frequency visible light signal, the receiver adjusts the gains of the high frequency signals automatically (automatic gain control). Accordingly, the gains (luminance values) of the high frequency signals are equalized.
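A minimal sketch of such gain equalization follows, assuming simple peak normalization, since the text does not specify the control law.

```python
# Hedged sketch of the automatic gain control described above: the two
# separated high-frequency streams are rescaled so their amplitudes match.
# Normalizing each stream to a peak of 1.0 is an illustrative choice.

def equalize_gains(high_stream, low_stream):
    """Scale each stream by its peak amplitude so both become comparable."""
    def normalize(samples):
        peak = max(abs(s) for s in samples) or 1.0
        return [s / peak for s in samples]
    return normalize(high_stream), normalize(low_stream)

h, l = equalize_gains([0.8, 0.9, 0.7], [0.2, 0.25, 0.15])
print(max(h), max(l))  # both peaks are now 1.0
```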
The receiver which receives a high frequency visible light signal includes an image sensor similarly to the above embodiments, and further includes a digital micromirror device (DMD) element and photosensors. The photosensors are photodiodes or avalanche photodiodes.
The receiver captures an image of a transmitter (light source) which transmits a high frequency visible light signal, using the image sensor. The receiver thus obtains a bright line image which includes a bright-line striped pattern. The bright-line striped pattern appears due to the luminance change of a signal other than the high frequency signal in the high frequency visible light signal, that is, the visible light signal described above.
The receiver may include half mirrors and light emitting elements as illustrated in the drawing.
Accordingly, even if there are a plurality of transmitters (light sources) whose images are captured by the image sensor, the receiver can bidirectionally communicate with the transmitters simultaneously at high speed. For example, if the receiver includes 100 photosensors which can each receive data at 10 Gbps and the receiver communicates with 100 transmitters, a total transmission speed of 1 Tbps can be achieved.
For example, a receiver includes lenses L1 and L2, a plurality of half mirrors, a DMD element, an image sensor, a light absorbing portion (black body), a processing unit, a DMD control unit, photosensors 1 and 2, and light emitting elements 1 and 2.
Such a receiver bidirectionally communicates with two cars, according to a principle similar to that of the example described above.
The image sensor receives high frequency visible light signals and normal light via the lens L1. Accordingly, a bright line image which includes bright-line striped patterns formed by the high frequency visible light signals is obtained, similarly to the example described above.
In this manner, the high frequency visible light signals from the two cars which have passed through the lens L1 and the half mirror are reflected off the micro mirrors of the DMD element and guided to the lens L2. In contrast, the normal light from the headlights of a car does not form a bright-line striped pattern, and thus even though the normal light has passed through the lens L1 and the half mirror, the normal light is reflected off a micro mirror in the off state of the DMD element. The light reflected off a micro mirror in the off state is absorbed by the light absorbing portion (black body).
The high frequency visible light signals which have passed through the lens L2 each pass through a half mirror, and are received by the photosensors 1 and 2. Accordingly, the high frequency visible light signals from the cars can be received. If the light emitting elements 1 and 2 output visible light signals (or high frequency visible light signals) to the half mirrors, the visible light signals are reflected off the half mirrors, pass through the lens L2, and are further reflected off micro mirrors in the on state on the DMD element. As a result, the visible light signals from the light emitting elements 1 and 2 are transmitted, via the half mirror and the lens L1, to the cars which have transmitted the high frequency visible light signals. In other words, the receiver can bidirectionally communicate with a plurality of cars which transmit high frequency visible light signals.
Accordingly, the receiver according to the present embodiment obtains a bright line image using the image sensor, and determines the position of a bright-line striped pattern in the bright line image. The receiver identifies a micro mirror corresponding to the position of the striped pattern, from among micro mirrors included in the DMD element. The receiver receives, using a photosensor, a high frequency visible light signal by bringing the micro mirror into the on state. Further, the receiver causes a light emitting element to output a visible light signal, and causes the micro mirror in the on state to reflect the visible light signal, thus transmitting the visible light signal to the transmitter.
A signal output apparatus which outputs a high frequency signal to be superimposed on the visible light signal described above may also be provided.
The present embodiment describes an autonomous flight device (also referred to as a drone) achieved using the visible light communication according to the above embodiments.
An autonomous flight device 1921 according to the present embodiment is housed inside a monitoring camera 1922. For example, if the monitoring camera 1922 captures an image of a suspicious person, a door of the monitoring camera 1922 opens, and the autonomous flight device 1921 housed inside takes off from the monitoring camera 1922 and starts tracking the suspicious person. The autonomous flight device 1921 includes a small camera, and tracks the suspicious person so that the small camera also captures an image of the suspicious person as captured by the monitoring camera 1922. Furthermore, if the autonomous flight device 1921 detects that its remaining power is insufficient for flight, for instance, the autonomous flight device 1921 returns to the monitoring camera 1922 and is housed in the monitoring camera 1922. At this time, if another autonomous flight device 1921 is housed in the monitoring camera 1922, the other autonomous flight device 1921 starts tracking the suspicious person instead of the autonomous flight device 1921 which does not have sufficient power left. The autonomous flight device 1921 which does not have sufficient power left receives a power supply from a wireless power feeder 1921a included in the monitoring camera 1922. Note that power is supplied from the wireless power feeder 1921a in accordance with the Qi standard, for example.
The small camera of the autonomous flight device 1921 and the monitoring camera 1922 can receive the visible light signals described in the above embodiments, and can operate according to the received visible light signals. If at least one of the autonomous flight device 1921 and the monitoring camera 1922 includes a transmitter which transmits a visible light signal, the autonomous flight device 1921 and the monitoring camera 1922 can communicate with each other by visible light communication. As a result, the suspicious person can be tracked more efficiently.
The present embodiment describes, for instance, a display method which achieves augmented reality (AR) using light IDs.
A receiver 200 according to the present embodiment is the receiver according to any of Embodiments 1 to 22 described above which includes an image sensor and a display 201, and is configured as a smartphone, for example. The receiver 200 obtains a captured display image Pa which is a normal captured image described above and a decode target image which is a visible light communication image or a bright line image described above, by an image sensor included in the receiver 200 capturing an image of a subject.
Specifically, the image sensor of the receiver 200 captures an image of a transmitter 100 configured as a station sign. The transmitter 100 is the transmitter according to any of Embodiments 1 to 22 above, and includes one or more light emitting elements (for example, LEDs). The transmitter 100 changes luminance by causing the one or more light emitting elements to blink, and transmits a light ID (light identification information) through the luminance change. The light ID is a visible light signal described above.
The receiver 200 obtains a captured display image Pa in which the transmitter 100 is shown by capturing an image of the transmitter 100 for a normal exposure time, and also obtains a decode target image by capturing an image of the transmitter 100 for a communication exposure time shorter than the normal exposure time. Note that the normal exposure time is a time for exposure in the normal imaging mode described above, and the communication exposure time is a time for exposure in the visible light communication mode described above.
The receiver 200 obtains a light ID by decoding the decode target image. In other words, the receiver 200 receives a light ID from the transmitter 100. The receiver 200 transmits the light ID to a server. Then, the receiver 200 obtains an AR image P1 and recognition information associated with the light ID from the server. The receiver 200 recognizes a region according to the recognition information as a target region, from the captured display image Pa. For example, the receiver 200 recognizes, as a target region, a region in which a station sign which is the transmitter 100 is shown. The receiver 200 superimposes the AR image P1 on the target region, and displays, on the display 201, the captured display image Pa on which the AR image P1 is superimposed. For example, if the station sign which is the transmitter 100 shows “Kyoto Eki” in Japanese which is the name of the station, the receiver 200 obtains the AR image P1 showing the name of the station in English, that is, “Kyoto Station”. In this case, the AR image P1 is superimposed on the target region of the captured display image Pa, and thus the captured display image Pa can be displayed as if a station sign showing the English name of the station were actually present. As a result, by looking at the captured display image Pa, a user who knows English can readily know the name of the station shown by the station sign which is the transmitter 100, even if the user cannot read Japanese.
For example, recognition information may be an image to be recognized (for example, an image of the above station sign) or may indicate feature points and a feature quantity of the image. Feature points and a feature quantity can be obtained by image processing such as scale-invariant feature transform (SIFT), speeded-up robust features (SURF), oriented FAST and rotated BRIEF (ORB), or accelerated KAZE (AKAZE), for example. Alternatively, recognition information may be a white quadrilateral image similar to the image to be recognized, and may further indicate an aspect ratio of the quadrilateral. Alternatively, recognition information may include random dots which appear in the image to be recognized. Furthermore, recognition information may indicate the orientation of the white quadrilateral or random dots mentioned above relative to a predetermined direction. The predetermined direction is the gravity direction, for example.
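As an illustration, feature points and descriptors could be extracted and matched with ORB through OpenCV, sketched below; the file names are placeholders and the matching criterion is simplified.

```python
# Hedged sketch of extracting feature points and a feature quantity with
# ORB (one of the algorithms named above) via OpenCV, then matching the
# reference image against the captured display image.

import cv2

reference = cv2.imread("station_sign.png", cv2.IMREAD_GRAYSCALE)
captured = cv2.imread("captured_display_image.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp_ref, desc_ref = orb.detectAndCompute(reference, None)
kp_cap, desc_cap = orb.detectAndCompute(captured, None)

# Match descriptors; a dense cluster of good matches suggests the region
# showing the reference (for example, the station sign).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desc_ref, desc_cap), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance}")
```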
The receiver 200 recognizes, as a target region, a region according to such recognition information from the captured display image Pa. Specifically, if the recognition information indicates an image, the receiver 200 recognizes, as a target region, a region similar to the image indicated by the recognition information. If the recognition information indicates feature points and a feature quantity obtained by image processing, the receiver 200 detects feature points and extracts a feature quantity by performing the image processing on the captured display image Pa. The receiver 200 then recognizes, as a target region, a region which has feature points and a feature quantity similar to those indicated by the recognition information. If the recognition information indicates a white quadrilateral and its orientation, the receiver 200 first detects the gravity direction using an acceleration sensor included in the receiver 200. The receiver 200 then recognizes, as a target region, a region similar to the white quadrilateral arranged in the orientation indicated by the recognition information, from the captured display image Pa oriented based on the gravity direction.
Here, the recognition information may include reference information for locating a reference region of the captured display image Pa, and target information indicating a relative position of the target region with respect to the reference region. Examples of the reference information include an image to be recognized, feature points and a feature quantity, a white quadrilateral image, and random dots, as mentioned above. In this case, when the receiver 200 is to recognize a target region, the receiver 200 first locates a reference region from the captured display image Pa, based on the reference information. Then, the receiver 200 recognizes, as a target region, a region in the relative position indicated by the target information based on the position of the reference region, from the captured display image Pa. Note that the target information may indicate that a target region is in the same position as the reference region. Since the recognition information includes reference information and target information, a target region can be recognized from various aspects, and the server can freely set a spot where an AR image is superimposed and inform the receiver 200 of the spot.
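A minimal sketch of this two-step recognition, with hypothetical pixel coordinates, might look as follows.

```python
# Sketch: locate the reference region first, then offset by the relative
# position from the target information to get the target region. All
# coordinate values below are illustrative.

from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int

def locate_target(reference: Region, dx: int, dy: int) -> Region:
    """Target region at relative position (dx, dy) from the reference;
    keeping the reference region's size is an assumption."""
    return Region(reference.x + dx, reference.y + dy, reference.w, reference.h)

ref = Region(x=120, y=80, w=200, h=60)  # located from the reference info
print(locate_target(ref, dx=0, dy=70))  # e.g., a spot just below the sign
```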
Reference information may indicate that the reference region in the captured display image Pa is a region in which a display is shown in the captured display image. In this case, if the transmitter 100 is configured as, for example, a display of a TV, a target region can be recognized based on a region in which the display is shown.
In other words, the receiver 200 according to the present embodiment identifies a reference image and an image recognition method, based on a light ID. The image recognition method is a method for recognizing a captured display image Pa, and examples of the method include, for instance, geometric feature quantity extraction, spectrum feature quantity extraction, and texture feature quantity extraction. The reference image is data which indicates a feature quantity used as the basis. The feature quantity may be a feature quantity of a white outer frame of an image, for example, or specifically, data showing features of the image represented in vector form. The receiver 200 extracts a feature quantity from the captured display image Pa in accordance with the image recognition method, and detects an above-mentioned reference region or target region from the captured display image Pa, by comparing the extracted feature quantity and a feature quantity of a reference image.
Examples of the image recognition method may include a location utilizing method, a marker utilizing method, and a marker free method. The location utilizing method is a method in which positional information provided by the global positioning system (GPS) (namely, the position of the receiver 200) is utilized, and a target region is recognized from the captured display image Pa, based on the positional information. The marker utilizing method is a method in which a marker which includes a white and black pattern, such as a two-dimensional barcode, is used as a mark for target identification. In other words, according to the marker utilizing method, a target region is recognized based on a marker shown in the captured display image Pa. According to the marker free method, feature points or a feature quantity are extracted from the captured display image Pa through image analysis, and the target region and its position are located based on the extracted feature points or feature quantity. In other words, if the image recognition method is the marker free method, the image recognition method is, for instance, the geometric feature quantity extraction, spectrum feature quantity extraction, or texture feature quantity extraction mentioned above.
The receiver 200 may identify a reference image and an image recognition method, by receiving a light ID from the transmitter 100, and obtaining, from the server, a reference image and an image recognition method associated with the light ID (hereinafter, received light ID). In other words, a plurality of sets each including a reference image and an image recognition method are stored in the server, and associated with different light IDs. This allows identifying one set associated with the received light ID, from among the plurality of sets stored in the server. Accordingly, the speed of image processing for superimposing an AR image can be improved. Furthermore, the receiver 200 may obtain a reference image associated with a received light ID by making an inquiry to the server, or may obtain a reference image associated with the received light ID, from among a plurality of reference images prestored in the receiver 200.
The server may store, for each light ID, relative positional information associated with the light ID, together with a reference image, an image recognition method, and an AR image. The relative positional information indicates a relative positional relationship of the above reference region and a target region, for example. In this manner, when the receiver 200 transmits the received light ID to the server to make an inquiry, the receiver 200 obtains the reference image, the image recognition method, the AR image, and the relative positional information associated with the received light ID. In this case, the receiver 200 locates the above reference region from the captured display image Pa, based on the reference image and the image recognition method. The receiver 200 recognizes, as a target region mentioned above, a region in the direction and at the distance indicated by the above relative positional information from the position of the reference region, and superimposes an AR image on the target region. Alternatively, if the receiver 200 does not have relative positional information, the receiver 200 may recognize, as a target region, a reference region as mentioned above, and superimpose an AR image on the reference region. In other words, the receiver 200 may prestore a program for displaying an AR image, based on a reference image, instead of obtaining relative positional information, and may display an AR image within the white frame which is a reference region, for example. In this case, relative positional information is unnecessary.
There are the following four variations (1) to (4) of storing and obtaining a reference image, relative positional information, an AR image, and an image recognition method.
(1) The server stores a plurality of sets each including a reference image, relative positional information, an AR image, and an image recognition method. The receiver 200 obtains one set associated with a received light ID from among the plurality of sets.
(2) The server stores a plurality of sets each including a reference image and an AR image. The receiver 200 obtains one set associated with a received light ID from among the plurality of sets, using predetermined relative positional information and a predetermined image recognition method. Alternatively, the receiver 200 prestores a plurality of sets each including relative positional information and an image recognition method, and may select one set associated with a received light ID, from among the plurality of sets. In this case, the receiver 200 may transmit a received light ID to the server to make an inquiry, and obtain information for identifying relative positional information and an image recognition method associated with the received light ID, from the server. The receiver 200 selects one set, based on information obtained from the server, from among the prestored plurality of sets each including relative positional information and an image recognition method. Alternatively, the receiver 200 may select one set associated with a received light ID, from among the prestored plurality of sets each including relative positional information and an image recognition method, without making an inquiry to the server.
(3) The receiver 200 stores a plurality of sets each including a reference image, relative positional information, an AR image, and an image recognition method, and selects one set from among the plurality of sets. The receiver 200 may select one set by making an inquiry to the server or may select one set associated with a received light ID, similarly to (2) above.
(4) The receiver 200 stores a plurality of sets each including a reference image and an AR image, and selects one set associated with a received light ID. The receiver 200 uses a predetermined image recognition method and predetermined relative positional information.
The display system according to the present embodiment includes the transmitter 100 which is a station sign as mentioned above, the receiver 200, and a server 300, for example.
The receiver 200 first receives a light ID from the transmitter 100 in order to display the captured display image on which an AR image is superimposed as described above. Next, the receiver 200 transmits the light ID to the server 300.
The server 300 stores, for each light ID, an AR image and recognition information associated with the light ID. Upon reception of a light ID from the receiver 200, the server 300 selects an AR image and recognition information associated with the received light ID, and transmits the AR image and the recognition information that are selected to the receiver 200. Accordingly, the receiver 200 receives the AR image and the recognition information transmitted from the server 300, and displays a captured display image on which the AR image is superimposed.
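On the server side, this selection amounts to a lookup keyed by the light ID, as in the following sketch; the storage layout, names, and values are assumptions.

```python
# Hedged sketch of the server 300's per-light-ID storage: each entry pairs
# an AR image with its recognition information, and a received light ID
# selects the pair to send back. All names and values are illustrative.

AR_DATABASE = {
    "light_id_001": {
        "ar_image": "kyoto_station_en.png",
        "recognition": {"type": "feature_points", "data": "descriptor-bytes"},
    },
}

def handle_light_id(light_id):
    """Return (AR image, recognition information) for a light ID, or None."""
    entry = AR_DATABASE.get(light_id)
    if entry is None:
        return None  # unknown light ID
    return entry["ar_image"], entry["recognition"]

print(handle_light_id("light_id_001"))
```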
The display system according to the present embodiment includes, for example, the transmitter 100 which is a station sign mentioned above, the receiver 200, a first server 301, and a second server 302.
The receiver 200 first receives a light ID from the transmitter 100, in order to display a captured display image on which an AR image is superimposed as described above. Next, the receiver 200 transmits the light ID to the first server 301.
Upon reception of the light ID from the receiver 200, the first server 301 notifies the receiver 200 of a uniform resource locator (URL) and a key which are associated with the received light ID. The receiver 200 which has received such a notification accesses the second server 302 based on the URL, and delivers the key to the second server 302.
The second server 302 stores, for each key, an AR image and recognition information associated with the key. Upon reception of the key from the receiver 200, the second server 302 selects an AR image and recognition information associated with the key, and transmits the selected AR image and recognition information to the receiver 200. Accordingly, the receiver 200 receives the AR image and the recognition information transmitted from the second server 302, and displays a captured display image on which the AR image is superimposed.
The display system according to the present embodiment includes the transmitter 100 which is a station sign mentioned above, the receiver 200, the first server 301, and the second server 302, for example.
The receiver 200 first receives a light ID from the transmitter 100, in order to display a captured display image on which an AR image is superimposed as described above. Next, the receiver 200 transmits the light ID to the first server 301.
Upon reception of the light ID from the receiver 200, the first server 301 notifies the second server 302 of a key associated with the received light ID.
The second server 302 stores, for each key, an AR image and recognition information associated with the key. Upon reception of the key from the first server 301, the second server 302 selects an AR image and recognition information which are associated with the key, and transmits the selected AR image and the selected recognition information to the first server 301. Upon reception of the AR image and the recognition information from the second server 302, the first server 301 transmits the AR image and the recognition information to the receiver 200. Accordingly, the receiver 200 receives the AR image and the recognition information transmitted from the first server 301, and displays a captured display image on which the AR image is superimposed.
Note that although the second server 302 transmits the AR image and the recognition information to the first server 301 in the above example, the second server 302 may instead transmit them directly to the receiver 200, bypassing the first server 301.
First, the receiver 200 starts image capturing for the normal exposure time and the communication exposure time described above (step S101). Then, the receiver 200 obtains a light ID by decoding a decode target image obtained by image capturing for the communication exposure time (step S102). Next, the receiver 200 transmits the light ID to the server (step S103).
The receiver 200 obtains an AR image and recognition information associated with the transmitted light ID from the server (step S104). Next, the receiver 200 recognizes, as a target region, a region according to the recognition information, from a captured display image obtained by image capturing for the normal exposure time (step S105). The receiver 200 superimposes the AR image on the target region, and displays the captured display image on which the AR image is superimposed (step S106).
Next, the receiver 200 determines whether to terminate image capturing and displaying the captured display image (step S107). Here, if the receiver 200 determines that image capturing and displaying the captured display image are not to be terminated (N in step S107), the receiver 200 further determines whether the acceleration of the receiver 200 is greater than or equal to a threshold (step S108). An acceleration sensor included in the receiver 200 measures the acceleration. If the receiver 200 determines that the acceleration is less than the threshold (N in step S108), the receiver 200 executes processing from step S105. Accordingly, even if the captured display image displayed on the display 201 of the receiver 200 is displaced, the AR image can be caused to follow the target region of the captured display image. If the receiver 200 determines that the acceleration is greater than or equal to the threshold (Y in step S108), the receiver 200 executes processing from step S102. Accordingly, if the captured display image stops showing the transmitter 100, the receiver 200 can be prevented from incorrectly recognizing, as a target region, a region in which a subject different from the transmitter 100 is shown.
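The control flow of steps S101 to S108 can be sketched as follows. This is a schematic only: the camera, server, sensor, and display are replaced by stub functions, and the threshold value is made up.

```python
import random

ACCEL_THRESHOLD = 5.0  # hypothetical threshold, in m/s^2

# --- stubs standing in for the camera, server, sensor, and display ---
def get_decode_target_image():        return "decode-target"
def get_captured_display_image():     return "frame"
def decode(image):                    return 0x1234            # the light ID
def query_server(light_id):           return ("AR", {"kind": "sign"})
def recognize_target_region(f, info): return (10, 10, 100, 40) # x, y, w, h
def superimpose(frame, ar, region):   return f"{frame}+{ar}@{region}"
def accel_magnitude():                return random.uniform(0.0, 10.0)

def main_loop(max_frames=5):
    """Decode once, then keep tracking the target region in new frames;
    re-decode only after a sharp movement of the receiver (step S108)."""
    frames = 0
    while frames < max_frames:                               # S101: capture runs throughout
        light_id = decode(get_decode_target_image())         # S102
        ar_image, recognition_info = query_server(light_id)  # S103, S104
        while frames < max_frames:
            frames += 1
            frame = get_captured_display_image()
            region = recognize_target_region(frame, recognition_info)  # S105
            print(superimpose(frame, ar_image, region))      # S106: display
            if accel_magnitude() >= ACCEL_THRESHOLD:         # S108
                break                                        # back to S102

main_loop()
```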
As described above, in the present embodiment, an AR image is displayed superimposed on a captured display image, and thus an image useful to a user can be displayed. Furthermore, an AR image can be superimposed on an appropriate target region while keeping the processing load light.
Specifically, in typical augmented reality (namely, AR), a captured display image is compared with a huge number of prestored recognition target images to determine whether the captured display image includes any of the recognition target images. Then, if the captured display image is determined to include a recognition target image, an AR image associated with the recognition target image is superimposed on the captured display image. At this time, the AR image is positioned based on the recognition target image. Accordingly, in such typical augmented reality, a captured display image is compared with a huge number of recognition target images, and the position of a recognition target image in the captured display image also needs to be detected when the AR image is positioned. Thus, a large amount of calculation is involved and the processing load is heavy, which is a problem.
However, with the display method according to the present embodiment, a light ID is obtained by decoding a decode target image which is obtained by capturing an image of a subject. Specifically, a light ID transmitted from a transmitter which is a subject is received. Furthermore, an AR image and recognition information associated with the light ID are obtained from a server. Accordingly, the server does not need to compare a captured display image with a huge number of recognition target images, and can select an AR image associated in advance with the light ID, and transmit the AR image to a display apparatus. In this manner, a processing load can be greatly reduced by decreasing the amount of calculation. Processing of displaying an AR image can be performed at high speed.
In the present embodiment, recognition information associated with the light ID is obtained from the server. Recognition information is for recognizing, from a captured display image, a target region on which an AR image is to be superimposed. This recognition information may indicate that a white quadrilateral, for example, is a target region. In this case, a target region can be readily recognized and the processing load can be further reduced. Specifically, the processing load can be further reduced depending on the content of the recognition information. The server can arbitrarily set the content of the recognition information according to a light ID, and thus the balance between processing load and recognition precision can be kept appropriate.
Note that in the present embodiment, the receiver 200 transmits a light ID to the server, and thereafter obtains an AR image and recognition information associated with the light ID from the server. Yet, at least one of the AR image and the recognition information may be obtained in advance. Specifically, the receiver 200 obtains from the server, at one time, and stores a plurality of AR images and a plurality of pieces of recognition information associated with the plurality of light IDs which may be received. Thereafter, upon reception of a light ID, the receiver 200 selects the AR image and the recognition information associated with that light ID from among the plurality of AR images and pieces of recognition information stored in the receiver 200. Accordingly, processing of displaying an AR image can be performed at higher speed.
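A sketch of this pre-fetching variant, again with the server as an in-memory table and all IDs and contents hypothetical:

```python
SERVER = {1: ("AR-1", {"kind": "a"}), 2: ("AR-2", {"kind": "b"})}

def prefetch(server, candidate_light_ids):
    """Obtain every candidate (AR image, recognition info) set at one time."""
    return {lid: server[lid] for lid in candidate_light_ids if lid in server}

cache = prefetch(SERVER, [1, 2, 3])   # ID 3 is unknown to the server

def on_light_id(light_id):
    """At display time, resolve the received light ID locally,
    with no server round trip."""
    return cache.get(light_id)

print(on_light_id(2))   # ('AR-2', {'kind': 'b'})
```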
The transmitter 100 is configured as, for example, a lighting apparatus as illustrated in
The receiver 200 obtains a captured display image Pb and a decode target image by capturing an image of the guideboard 101 illuminated by the transmitter 100, similarly to the above. The receiver 200 obtains a light ID by decoding the decode target image. In other words, the receiver 200 receives a light ID from the guideboard 101. The receiver 200 transmits the light ID to a server. The receiver 200 obtains an AR image P2 and recognition information associated with the light ID from the server. The receiver 200 recognizes a region according to the recognition information as a target region from the captured display image Pb. For example, the receiver 200 recognizes a region in which a frame 102 in the guideboard 101 is shown as a target region. The frame 102 is for showing the waiting time of the facility. The receiver 200 superimposes the AR image P2 on the target region, and displays, on the display 201, the captured display image Pb on which the AR image P2 is superimposed. For example, the AR image P2 is an image which includes a character string “30 min.”. In this case, the AR image P2 is superimposed on the target region of the captured display image Pb, and thus the receiver 200 can display the captured display image Pb as if the guideboard 101 showing the waiting time “30 min.” were actually present. In this manner, the user of the receiver 200 can be readily and concisely informed of a waiting time without providing the guideboard 101 with a special display apparatus.
The transmitters 100 are achieved by two lighting apparatuses, as illustrated in
The receiver 200 obtains a captured display image Pc and a decode target image by capturing an image of the guideboard 104 illuminated by the transmitters 100. The receiver 200 obtains a light ID by decoding the decode target image. In other words, the receiver 200 receives a light ID from the guideboard 104. The receiver 200 transmits the light ID to a server. Then, the receiver 200 obtains, from the server, an AR image P3 and recognition information associated with the light ID. The receiver 200 recognizes, as a target region, a region according to the recognition information from the captured display image Pc. For example, the receiver 200 recognizes a region in which the guideboard 104 is shown as a target region. Then, the receiver 200 superimposes the AR image P3 on the target region, and displays, on the display 201, the captured display image Pc on which the AR image P3 is superimposed. For example, the AR image P3 shows the names of a plurality of facilities. On the AR image P3, the longer the waiting time of a facility is, the smaller the name of the facility is displayed. Conversely, the shorter the waiting time of a facility is, the larger the name of the facility is displayed. In this case, the AR image P3 is superimposed on the target region of the captured display image Pc, and thus the receiver 200 can display the captured display image Pc as if the guideboard 104 showing the names of the facilities in sizes according to waiting time were actually present. Accordingly, the user of the receiver 200 can be readily and concisely informed of the waiting time of the facilities without providing the guideboard 104 with a special display apparatus.
The transmitters 100 are achieved by two lighting apparatuses, as illustrated in
The receiver 200 obtains a captured display image Pd and a decode target image by capturing an image of the rampart 105 illuminated by the transmitters 100, similarly to the above. The receiver 200 obtains a light ID by decoding the decode target image. In other words, the receiver 200 receives a light ID from the rampart 105. The receiver 200 transmits the light ID to a server. Then, the receiver 200 obtains an AR image P4 and recognition information associated with the light ID from the server. The receiver 200 recognizes a region according to the recognition information as a target region from the captured display image Pd. For example, the receiver 200 recognizes, as a target region, a region in which an area of the rampart 105 that includes the hidden character 106 is shown. The receiver 200 superimposes the AR image P4 on the target region, and displays, on the display 201, the captured display image Pd on which the AR image P4 is superimposed. For example, the AR image P4 is an imitation of the face of a character. The AR image P4 is sufficiently larger than the hidden character 106 shown in the captured display image Pd. In this case, the AR image P4 is superimposed on the target region of the captured display image Pd, and thus the receiver 200 can display the captured display image Pd as if the rampart 105 were actually carved with a large mark imitating the face of the character. Accordingly, the user of the receiver 200 can be readily informed of the position of the hidden character 106.
The transmitters 100 are achieved by two lighting apparatuses as illustrated in
The receiver 200 obtains a captured display image Pe and a decode target image, by capturing an image of the guideboard 107 illuminated by the transmitters 100, similarly to the above. The receiver 200 obtains a light ID by decoding the decode target image. In other words, the receiver 200 receives a light ID from the guideboard 107. The receiver 200 transmits the light ID to a server. Then, the receiver 200 obtains an AR image P5 and recognition information associated with the light ID from the server. The receiver 200 recognizes a region according to the recognition information as a target region from the captured display image Pe. For example, the receiver 200 recognizes, as a target region, a region in which the guideboard 107 is shown.
Specifically, the recognition information indicates that a quadrilateral circumscribing the plurality of spots to which the infrared barrier coating 108 is applied is a target region. Furthermore, the infrared barrier coating 108 blocks infrared radiation included in the light emitted from the transmitters 100. Accordingly, the image sensor of the receiver 200 captures the spots to which the infrared barrier coating 108 is applied as images darker than their surroundings. The receiver 200 recognizes, as a target region, a quadrilateral circumscribing the plurality of spots to which the infrared barrier coating 108 is applied and which appear as dark images.
The receiver 200 superimposes the AR image P5 on the target region, and displays, on the display 201, the captured display image Pe on which the AR image P5 is superimposed. For example, the AR image P5 shows a schedule of events which take place at the facility indicated by the guideboard 107. In this case, the AR image P5 is superimposed on the target region of the captured display image Pe, and thus the receiver 200 can display the captured display image Pe as if the guideboard 107 showing the schedule of events were actually present. Accordingly, the user of the receiver 200 can be concisely informed of the schedule of events at the facility, without providing the guideboard 107 with a special display apparatus.
Note that infrared reflective paint may be applied to the guideboard 107, instead of the infrared barrier coating 108. The infrared reflective paint reflects infrared radiation included in the light emitted from the transmitters 100. Thus, the image sensor of the receiver 200 captures the spots to which the infrared reflective paint is applied as images brighter than their surroundings. Specifically, in this case, the receiver 200 recognizes, as a target region, a quadrilateral circumscribing the spots to which the infrared reflective paint is applied and which appear as bright images.
The transmitter 100 is configured as a station sign, and is disposed near a station exit guide 110. The station exit guide 110 includes a light source and emits light, but does not transmit a light ID, unlike the transmitter 100.
The receiver 200 obtains a captured display image Ppre and a decode target image Pdec by capturing an image which includes the transmitter 100 and the station exit guide 110. Because the transmitter 100 changes luminance while the station exit guide 110 merely emits light, a bright line pattern region Pdec1 corresponding to the transmitter 100 and a bright region Pdec2 corresponding to the station exit guide 110 appear in the decode target image Pdec. The bright line pattern region Pdec1 includes a pattern formed by a plurality of bright lines which appear due to a plurality of exposure lines included in the image sensor of the receiver 200 being exposed for the communication exposure time.
Here, the recognition information includes, as described above, reference information for locating a reference region Pbas of the captured display image Ppre, and target information which indicates a relative position of a target region Ptar with reference to the reference region Pbas. For example, the reference information indicates that the position of the reference region Pbas in the captured display image Ppre matches the position of the bright line pattern region Pdec1 in the decode target image Pdec. Furthermore, the target information indicates that the position of the target region Ptar is the position of the reference region Pbas.
Thus, the receiver 200 locates the reference region Pbas from the captured display image Ppre, based on the reference information. Specifically, the receiver 200 locates, as the reference region Pbas, a region of the captured display image Ppre which is in the same position as the position of the bright line pattern region Pdec1 in the decode target image Pdec. Furthermore, the receiver 200 recognizes, as the target region Ptar, a region of the captured display image Ppre which is in the relative position indicated by the target information with respect to the position of the reference region Pbas. In the above example, the target information indicates that the position of the target region Ptar is the position of the reference region Pbas. Thus, the receiver 200 recognizes the reference region Pbas of the captured display image Ppre as the target region Ptar.
The receiver 200 superimposes the AR image P1 on the target region Ptar in the captured display image Ppre.
Accordingly, in the above example, the receiver 200 uses the bright line pattern region Pdec1 to recognize the target region Ptar. On the other hand, if a region in which the transmitter 100 is shown is to be recognized as the target region Ptar only from the captured display image Ppre, without using the bright line pattern region Pdec1, the receiver 200 may recognize the region incorrectly. Specifically, in the captured display image Ppre, the receiver 200 may incorrectly recognize a region in which the station exit guide 110 is shown, rather than a region in which the transmitter 100 is shown, as the target region Ptar. This is because the image of the transmitter 100 and the image of the station exit guide 110 in the captured display image Ppre are similar to each other. However, if the bright line pattern region Pdec1 is used as in the above example, the receiver 200 can accurately recognize the target region Ptar while preventing incorrect recognition.
In the example illustrated in
In the example illustrated in
In such a case, the receiver 200 locates the reference region Pbas from the captured display image Ppre, based on the reference information. Specifically, the receiver 200 locates, as the reference region Pbas, a region of the captured display image Ppre which is in the same position as the position of the bright line pattern region Pdec1 in the decode target image Pdec. In this example, the receiver 200 locates the reference region Pbas as a quadrilateral which is horizontally long and vertically short. Furthermore, the receiver 200 recognizes, as the target region Ptar, a region of the captured display image Ppre which is in the relative position indicated by the target information with respect to the position of the reference region Pbas. Specifically, the receiver 200 recognizes a region of the captured display image Ppre which is above the reference region Pbas as the target region Ptar. Note that at this time, the receiver 200 determines the upward direction from the reference region Pbas, based on the gravity direction measured by the acceleration sensor included in the receiver 200.
Note that the target information may indicate the size, the shape, and the aspect ratio of the target region Ptar, rather than just the relative position of the target region Ptar. In this case, the receiver 200 recognizes the target region Ptar having the size, the shape, and the aspect ratio indicated by the target information. The receiver 200 may determine the size of the target region Ptar, based on the size of the reference region Pbas.
The receiver 200 executes processing of steps S101 to S104, similarly to the example illustrated in
Next, the receiver 200 locates the bright line pattern region Pdec1 from the decode target image Pdec (step S111). Next, the receiver 200 locates the reference region Pbas corresponding to the bright line pattern region Pdec1 from the captured display image Ppre (step S112). Then, the receiver 200 recognizes the target region Ptar from the captured display image Ppre, based on recognition information (specifically, target information) and the reference region Pbas (step S113).
Next, the receiver 200 superimposes an AR image on the target region Ptar of the captured display image Ppre, and displays the captured display image Ppre on which the AR image is superimposed, similarly to the example illustrated in
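The region-recognition part of these steps (S111 to S113) reduces to simple geometry if, as an assumption for illustration, regions are taken to be axis-aligned rectangles (x, y, width, height) and the captured display image and the decode target image share one pixel coordinate system:

```python
def locate_reference_region(bright_line_pattern_region):
    """S112: the reference region Pbas sits at the same position in the
    captured display image as the bright line pattern region Pdec1 does
    in the decode target image."""
    return bright_line_pattern_region

def recognize_target_region(reference_region, target_info):
    """S113: place the target region Ptar at the relative position that
    the target information gives with respect to the reference region."""
    x, y, w, h = reference_region
    dx, dy = target_info.get("offset", (0, 0))  # (0, 0): Ptar coincides with Pbas
    tw, th = target_info.get("size", (w, h))    # target info may also fix the size
    return (x + dx, y + dy, tw, th)

pdec1 = (40, 120, 200, 60)             # S111: located in the decode target image
pbas = locate_reference_region(pdec1)
# "above the reference region": a negative vertical offset of one region height
print(recognize_target_region(pbas, {"offset": (0, -60)}))   # (40, 60, 200, 60)
```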
The receiver 200 enlarges and displays the AR image P1 if the user taps the AR image P1 in the displayed captured display image Ppre. Furthermore, if the user taps the AR image P1, the receiver 200 may display, instead of the AR image P1, a new AR image showing content in more detail than the content shown by the AR image P1. If the AR image P1 shows one page's worth of information from a guide magazine which includes a plurality of pages, the receiver 200 may display, instead of the AR image P1, a new AR image showing the information of the page following the page shown by the AR image P1. Alternatively, when the user taps the AR image P1, the receiver 200 may display, as a new AR image, a video relevant to the AR image P1, instead of the AR image P1. At this time, the receiver 200 may display a video showing that, for instance, an object (autumn leaves in the example of
While capturing images, the receiver 200 obtains captured images such as captured display images Ppre and decode target images Pdec at a frame rate of 30 fps, as illustrated in (a1) in
When displaying captured images, the receiver 200 displays only the captured display images Ppre among the captured images, and does not display the decode target images Pdec. Specifically, when the receiver 200 is to obtain a decode target image Pdec, the receiver 200 displays a captured display image Ppre obtained immediately before the decode target image Pdec, as illustrated in (a2) of
Here, in the example illustrated in (a1) of
Further, the receiver 200 needs to switch a captured image to be obtained between the captured display image Ppre and the decode target image Pdec, and the switching may take time. In view of this, as illustrated in (b1) of
If switching periods are provided in such a manner, the receiver 200 displays, in a switching period, a captured display image Ppre obtained immediately before, as illustrated in (b2) of
The receiver 200 displays, on the display 201, a captured display image Ppre obtained by image capturing, as illustrated in (a) of
The receiver 200 first superimposes an AR image on a target region Ptar of a captured display image Ppre, and causes the AR image to follow the target region Ptar similarly to the above (step S121). Specifically, the receiver 200 displays an AR image which moves together with the target region Ptar of the captured display image Ppre. Then, the receiver 200 determines whether to maintain the display of the AR image (step S122). Here, if the receiver 200 determines that the display of the AR image is not to be maintained (N in step S122), and if the receiver 200 obtains a new light ID by image capturing, the receiver 200 displays the captured display image Ppre on which a new AR image associated with the new light ID is superimposed (step S123).
On the other hand, if the receiver 200 determines to maintain the display of the AR image (Y in step S122), the receiver 200 repeatedly executes processing from step S121. At this time, even if the receiver 200 has obtained another AR image, the receiver 200 does not display the other AR image. Furthermore, even if the receiver 200 has obtained a new decode target image Pdec, the receiver 200 does not obtain a light ID by decoding the decode target image Pdec. This reduces the power consumption involved in decoding.
Accordingly, maintaining the display of an AR image prevents the displayed AR image from disappearing, or from becoming difficult to view, because another AR image is displayed. In other words, the displayed AR image can be readily viewed by the user.
For example, in step S122, the receiver 200 determines to maintain the display of an AR image until a predetermined period (certain period) elapses after the AR image is displayed. Specifically, when displaying the captured display image Ppre, the receiver 200 displays the first AR image superimposed in step S121 for a predetermined display period, while preventing a second AR image different from the first AR image from being displayed. The receiver 200 may prohibit decoding of a newly obtained decode target image Pdec during the display period.
Accordingly, when the user is looking at the first AR image once displayed, the first AR image is prevented from being immediately replaced with the second AR image different from the first AR image. Furthermore, decoding a newly obtained decode target image Pdec is wasteful processing when the display of the second AR image is prevented, and thus prohibiting such decoding can reduce power consumption.
Alternatively, in step S122, if the receiver 200 includes a face camera, and detects that the face of a user is approaching, based on the result of image capturing by the face camera, the receiver 200 may determine to maintain the display of the AR image. Specifically, when the receiver 200 displays the captured display image Ppre, the receiver 200 further determines whether the face of the user is approaching the receiver 200, based on image capturing by the face camera included in the receiver 200. Then, when the receiver 200 determines that the face is approaching, the receiver 200 displays the first AR image superimposed in step S121 while preventing the display of the second AR image different from the first AR image.
Alternatively, in step S122, if the receiver 200 includes an acceleration sensor and detects that the face of the user is approaching, based on the result of measurement by the acceleration sensor, the receiver 200 may determine to maintain the display of the AR image. Specifically, when the receiver 200 is to display the captured display image Ppre, the receiver 200 further determines whether the face of the user is approaching the receiver 200, based on the acceleration of the receiver 200 measured by the acceleration sensor. For example, if the acceleration of the receiver 200 measured by the acceleration sensor indicates a positive value in the direction outward and perpendicular to the display 201 of the receiver 200, the receiver 200 determines that the face of the user is approaching. If the receiver 200 determines that the face of the user is approaching, the receiver 200 displays the first AR image, which is the AR image superimposed in step S121, while preventing the display of a second AR image different from the first AR image.
In this manner, when the user brings his/her face closer to the receiver 200 to look at the first AR image, the first AR image can be prevented from being replaced with the second AR image different from the first AR image.
Alternatively, in step S122, the receiver 200 may determine that display of the AR image is to be maintained if a lock button included in the receiver 200 is pressed.
In step S122, the receiver 200 may determine that display of the AR image is not to be maintained after the above-mentioned certain period (namely, display period) elapses. Even before the above-mentioned certain period has elapsed, the receiver 200 may determine that display of the AR image is not to be maintained if the acceleration sensor measures an acceleration greater than or equal to the threshold. Specifically, when the receiver 200 is to display the captured display image Ppre, the receiver 200 further measures the acceleration of the receiver 200 using the acceleration sensor in the above-mentioned display period, and determines whether the measured acceleration is greater than or equal to the threshold. When the receiver 200 determines that the acceleration is greater than or equal to the threshold, the receiver 200 displays, in step S123, the second AR image instead of the first AR image, by no longer preventing display of the second AR image.
Accordingly, when an acceleration of the display apparatus greater than or equal to the threshold is measured, the display of the second AR image is no longer prevented. Thus, for example, when the user moves the receiver 200 greatly to direct the image sensor at another subject, the receiver 200 can immediately display the second AR image.
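The display-maintenance decision of step S122, combining the hold period, the face-approach and lock-button conditions, and the acceleration-based release, might be sketched as follows; the period and threshold values are invented for illustration:

```python
import time

DISPLAY_HOLD_SECONDS = 3.0   # hypothetical "certain period"
ACCEL_THRESHOLD = 5.0        # hypothetical threshold, in m/s^2

class ARDisplayHold:
    """Keeps the first AR image on screen, suppressing a second AR image,
    until the hold period elapses or the receiver is moved sharply."""

    def __init__(self):
        self.shown_at = None

    def show_first(self):
        self.shown_at = time.monotonic()

    def maintain(self, acceleration, face_approaching=False, lock_pressed=False):
        """Step S122: True keeps the first AR image displayed."""
        if lock_pressed or face_approaching:
            return True
        if acceleration >= ACCEL_THRESHOLD:   # user re-aimed the receiver
            return False
        return (time.monotonic() - self.shown_at) < DISPLAY_HOLD_SECONDS

hold = ARDisplayHold()
hold.show_first()
print(hold.maintain(acceleration=0.2))   # True: keep the first AR image
print(hold.maintain(acceleration=9.0))   # False: allow the second AR image
```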
As illustrated in
The two receivers 200 capture images of the stage 111 illuminated by the transmitter 100 from lateral sides.
The receiver 200 on the left among the two receivers 200 obtains a captured display image Pf and a decode target image similarly to the above, by capturing an image of the stage 111 illuminated by the transmitter 100 from the left. The left receiver 200 obtains a light ID by decoding the decode target image. In other words, the left receiver 200 receives a light ID from the stage 111. The left receiver 200 transmits the light ID to the server. Then, the left receiver 200 obtains a three-dimensional AR image and recognition information associated with the light ID from the server. The three-dimensional AR image is for displaying a doll three-dimensionally, for example. The left receiver 200 recognizes a region according to the recognition information as a target region, from the captured display image Pf. For example, the left receiver 200 recognizes a region above the center of the stage 111 as a target region.
Next, based on the orientation of the stage 111 shown in the captured display image Pf, the left receiver 200 generates a two-dimensional AR image P6a according to the orientation from the three-dimensional AR image. The left receiver 200 superimposes the two-dimensional AR image P6a on the target region, and displays, on the display 201, the captured display image Pf on which the AR image P6a is superimposed. In this case, the two-dimensional AR image P6a is superimposed on the target region of the captured display image Pf, and thus the left receiver 200 can display the captured display image Pf as if a doll were actually present on the stage 111.
Similarly, the receiver 200 on the right among the two receivers 200 obtains a captured display image Pg and a decode target image similarly to the above, by capturing an image of the stage 111 illuminated by the transmitter 100 from the right side. The right receiver 200 obtains a light ID by decoding the decode target image. In other words, the right receiver 200 receives a light ID from the stage 111. The right receiver 200 transmits the light ID to the server. The right receiver 200 obtains a three-dimensional AR image and recognition information associated with the light ID from the server. The right receiver 200 recognizes a region according to the recognition information as a target region from the captured display image Pg. For example, the right receiver 200 recognizes a region above the center of the stage 111 as a target region.
Next, based on an orientation of the stage 111 shown in the captured display image Pg, the right receiver 200 generates a two-dimensional AR image P6b according to the orientation from the three-dimensional AR image. The right receiver 200 superimposes the two-dimensional AR image P6b on the target region, and displays, on the display 201, the captured display image Pg on which the AR image P6b is superimposed. In this case, the two-dimensional AR image P6b is superimposed on the target region of the captured display image Pg, and thus the right receiver 200 can display the captured display image Pg as if a doll were actually present on the stage 111.
Accordingly, the two receivers 200 display the AR images P6a and P6b at the same position on the stage 111. The AR images P6a and P6b are generated according to the orientation of the receiver 200, as if a virtual doll were actually facing in a predetermined direction. Accordingly, no matter what direction an image of the stage 111 is captured from, a captured display image can be displayed as if a doll were actually present on the stage 111.
Note that in the above example, the receiver 200 generates a two-dimensional AR image from a three-dimensional AR image according to the positional relationship between the receiver 200 and the stage 111, but may instead obtain the two-dimensional AR image from the server. Specifically, the receiver 200 transmits information indicating the positional relationship to the server together with the light ID, and obtains the two-dimensional AR image from the server, instead of the three-dimensional AR image. Accordingly, the burden on the receiver 200 is reduced.
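As a toy illustration of deriving a viewpoint-dependent two-dimensional AR image from a three-dimensional one: rotate the model by the viewing angle inferred from the stage orientation, then project away the depth coordinate. The model points and angles are made up; a real implementation would use a full perspective projection.

```python
import math

def project_2d(points3d, view_angle_deg):
    """Rotate about the vertical (y) axis, then drop the depth coordinate
    (orthographic projection)."""
    a = math.radians(view_angle_deg)
    return [(round(x * math.cos(a) + z * math.sin(a), 3), y)
            for x, y, z in points3d]

doll = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 1.0)]
print(project_2d(doll, -30.0))   # as seen by the left receiver (AR image P6a)
print(project_2d(doll, +30.0))   # as seen by the right receiver (AR image P6b)
```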
The transmitter 100 is configured as a lighting apparatus, and transmits a light ID by changing luminance while illuminating a cylindrical structure 112 as illustrated in
The receiver 200 obtains a captured display image Ph and a decode target image, by capturing an image of the structure 112 illuminated by the transmitter 100, similarly to the above. The receiver 200 obtains a light ID by decoding the decode target image. Specifically, the receiver 200 receives a light ID from the structure 112. The receiver 200 transmits the light ID to a server. Then, the receiver 200 obtains an AR image P7 and recognition information associated with the light ID from the server. The receiver 200 recognizes a region according to the recognition information as a target region, from the captured display image Ph. For example, the receiver 200 recognizes a region in which the center portion of the structure 112 is shown, as a target region. The receiver 200 superimposes the AR image P7 on the target region, and displays, on the display 201, the captured display image Ph on which the AR image P7 is superimposed. For example, the AR image P7 is an image which includes the character string “ABCD”, and the character string is warped according to the curved surface of the center portion of the structure 112. In this case, the AR image P7 which includes the warped character string is superimposed on the target region of the captured display image Ph, and thus the receiver 200 can display the captured display image Ph as if the character string drawn on the structure 112 were actually present.
The transmitter 100 transmits a light ID by changing luminance while illuminating a menu 113 of a restaurant, as illustrated in
The receiver 200 obtains a captured display image Pi and a decode target image, by capturing an image of the menu 113 illuminated by the transmitter 100, similarly to the above. The receiver 200 obtains a light ID by decoding the decode target image. In other words, the receiver 200 receives a light ID from the menu 113. The receiver 200 transmits the light ID to a server. Then, the receiver 200 obtains an AR image P8 and recognition information associated with the light ID from the server. The receiver 200 recognizes a region according to the recognition information as a target region, from the captured display image Pi. For example, the receiver 200 recognizes a region in which the menu 113 is shown as a target region. Then, the receiver 200 superimposes the AR image P8 on the target region, and displays, on the display 201, the captured display image Pi on which the AR image P8 is superimposed. For example, the AR image P8 shows food ingredients used for the dishes, using marks. For example, the AR image P8 shows a mark imitating an egg for the dish “XYZ salad” in which eggs are used, and shows a mark imitating a pig for the dish “KLM lunch” in which pork is used. In this case, the AR image P8 is superimposed on the target region in the captured display image Pi, and thus the receiver 200 can display the captured display image Pi as if the menu 113 having marks showing food ingredients were actually present. Accordingly, the user of the receiver 200 can be readily and concisely informed of food ingredients of the dishes, without providing the menu 113 with a special display apparatus.
The receiver 200 may obtain a plurality of AR images, select an AR image suitable for the user from among the AR images, based on user information set by the user, and superimpose the selected AR image. For example, if user information indicates that the user is allergic to eggs, the receiver 200 selects an AR image having an egg mark given to the dish in which eggs are used. Furthermore, if user information indicates that eating pork is prohibited, the receiver 200 selects an AR image having a pig mark given to the dish in which pork is used. Furthermore, the receiver 200 may transmit the user information to the server together with the light ID, and may obtain an AR image according to the light ID and the user information from the server. In this manner, for each user, a menu which prompts the user to pay attention can be displayed.
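Selecting among candidate AR images based on user information might look like the following sketch; the tags and user-information format are hypothetical:

```python
# Each candidate AR image is tagged with the dietary concern it highlights.
AR_CANDIDATES = [
    {"image": "menu_egg_marks",  "concern": "egg"},
    {"image": "menu_pork_marks", "concern": "pork"},
    {"image": "menu_plain",      "concern": None},
]

def select_ar_image(user_info):
    """Pick the first AR image whose concern matches the user information."""
    for candidate in AR_CANDIDATES:
        if candidate["concern"] in user_info.get("avoid", []):
            return candidate["image"]
    return "menu_plain"

print(select_ar_image({"avoid": ["egg"]}))    # menu_egg_marks
print(select_ar_image({"avoid": ["pork"]}))   # menu_pork_marks
```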
The transmitter 100 is configured as a TV, as illustrated in FIG. 254, for example, and transmits a light ID by changing luminance while displaying a video on the display. Furthermore, a typical TV 114 is disposed near the transmitter 100. The TV 114 shows a video on the display, but does not transmit a light ID.
The receiver 200 obtains a captured display image Pj and a decode target image by, for example, capturing an image which includes the transmitter 100 and also the TV 114, similarly to the above. The receiver 200 obtains a light ID by decoding the decode target image. In other words, the receiver 200 receives a light ID from the transmitter 100. The receiver 200 transmits the light ID to a server. Then, the receiver 200 obtains an AR image P9 and recognition information associated with the light ID from the server. The receiver 200 recognizes a region according to the recognition information as a target region, from the captured display image Pj.
For example, the receiver 200 recognizes, as a first target region, a lower portion of a region of the captured display image Pj in which the transmitter 100 transmitting a light ID is shown, using a bright line pattern region of the decode target image. Note that at this time, reference information included in the recognition information indicates that the position of the reference region in the captured display image Pj matches the position of the bright line pattern region in the decode target image. Furthermore, target information included in the recognition information indicates that a target region is below the reference region. The receiver 200 recognizes the first target region mentioned above, using such recognition information.
Furthermore, the receiver 200 recognizes, as a second target region, a region whose position is fixed in advance in a lower portion of the captured display image Pj. The second target region is larger than the first target region. Note that target information included in the recognition information further indicates not only the position of the first target region, but also the position and size of the second target region. The receiver 200 recognizes the second target region mentioned above, using such recognition information.
The receiver 200 superimposes the AR image P9 on each of the first target region and the second target region, and displays, on the display 201, the captured display image Pj on which the AR images P9 are superimposed. When the AR images P9 are to be superimposed, the receiver 200 adjusts the size of the AR image P9 to the size of the first target region, and superimposes the AR image P9 whose size has been adjusted on the first target region. Furthermore, the receiver 200 adjusts the size of the AR image P9 to the size of the second target region, and superimposes the AR image P9 whose size has been adjusted on the second target region.
For example, the AR images P9 each indicate subtitles of the video on the transmitter 100. Furthermore, the language of the subtitles shown by the AR images P9 depends on user information set and registered in the receiver 200. Specifically, when the receiver 200 transmits a light ID to the server, the receiver 200 also transmits to the server the user information (for example, information indicating, for instance, nationality of the user or the language that the user uses). Then, the receiver 200 obtains the AR image P9 showing subtitles in the language according to the user information. Alternatively, the receiver 200 may obtain a plurality of AR images P9 showing subtitles in different languages, and select, according to the user information set and registered, an AR image P9 to be used and superimposed, from among the AR images P9.
In other words, in the example illustrated in
Accordingly, the receiver 200 can display the captured display image Pj as if subtitles were actually present in the video on the transmitter 100. Furthermore, the receiver 200 superimposes large subtitles on the lower portion of the captured display image Pj, and thus the subtitles can be made legible even if the subtitles given to the video on the transmitter 100 are small. Note that if no subtitles were given to the video on the transmitter 100 and only enlarged subtitles were superimposed on the lower portion of the captured display image Pj, it would be difficult to determine whether the superimposed subtitles are for the video on the transmitter 100 or for the video on the TV 114. However, in the present embodiment, subtitles are also given to the video on the transmitter 100 which transmits a light ID, and thus the user can readily determine whether the superimposed subtitles are for the video on the transmitter 100 or for the video on the TV 114.
The receiver 200 may determine whether information obtained from the server includes sound information, when the captured display image Pj is to be displayed. When the receiver 200 determines that sound information is included, the receiver 200 preferentially outputs the sound indicated by the sound information over the first and second subtitles. In this manner, since sound is output preferentially, a burden on the user to read subtitles is reduced.
In the above example, the language of the subtitles is changed according to the user information (namely, the attribute of the user), yet the video displayed on the transmitter 100 (that is, the content) itself may be changed. For example, if the video displayed on the transmitter 100 is news, and if the user information indicates that the user is Japanese, the receiver 200 obtains news broadcast in Japan as an AR image. The receiver 200 superimposes the news video on the region (namely, the target region) in which the display of the transmitter 100 is shown. On the other hand, if the user information indicates that the user is American, the receiver 200 obtains news broadcast in the U.S. as an AR image. Then, the receiver 200 superimposes the news video on the region (namely, the target region) in which the display of the transmitter 100 is shown. Accordingly, a video suitable for the user can be displayed. Note that user information indicates, for example, nationality or the language that the user uses as the attribute of the user, and the receiver 200 obtains an AR image as mentioned above, based on the attribute.
Even if recognition information is, for example, feature points or a feature quantity as described above, incorrect recognition may occur. For example, transmitters 100a and 100b are configured as station signs, as with the transmitter 100. If the transmitters 100a and 100b are located near each other even though they are different station signs, they may be recognized incorrectly due to their similarity.
For each of the transmitters 100a and 100b, recognition information of the transmitter may indicate a distinctive portion of an image of the transmitter, rather than feature points and a feature quantity of the entire image.
For example, a portion a1 of the transmitter 100a and a portion b1 of the transmitter 100b are greatly different, and a portion a2 of the transmitter 100a and a portion b2 of the transmitter 100b are greatly different. If the transmitters 100a and 100b are installed within a predetermined range (namely, at a short distance from each other), the server stores feature points and feature quantities of images of the portions a1 and a2 as recognition information associated with the transmitter 100a. Similarly, the server stores feature points and feature quantities of images of the portions b1 and b2 as recognition information associated with the transmitter 100b.
Accordingly, the receiver 200 can appropriately recognize target regions using the recognition information associated with the transmitters 100a and 100b, even if the transmitters 100a and 100b, which are similar to each other, are close to each other (within the predetermined range mentioned above).
The receiver 200 first determines whether the user has a visual impairment, based on user information set and registered in the receiver 200 (step S131). Here, if the receiver 200 determines that the user has a visual impairment (Y in step S131), the receiver 200 audibly outputs the words on the AR image superimposed and displayed (step S132). On the other hand, if the receiver 200 determines that the user has no visual impairment (N in step S131), the receiver 200 further determines whether the user has a hearing impairment, based on the user information (step S133). Here, if the receiver 200 determines that the user has a hearing impairment (Y in step S133), the receiver 200 stops outputting sound (step S134). At this time, the receiver 200 stops all sound output by any of its functions.
Note that when the receiver 200 determines in step S131 that the user has a visual impairment (Y in step S131), the receiver 200 may also perform the processing in step S133. Specifically, when the receiver 200 determines that the user has a visual impairment but no hearing impairment, the receiver 200 may audibly output the words on the AR image superimposed and displayed.
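Steps S131 to S134, including the variant in the note above, can be condensed into one branching function (the speech and mute callbacks are placeholders):

```python
def present_ar_words(user_info, words, speak, mute_all_sound):
    """S131-S134, with the variant from the note: a user who has both a
    visual and a hearing impairment gets sound stopped, not speech."""
    if user_info.get("hearing_impairment"):      # S133
        mute_all_sound()                         # S134
    elif user_info.get("visual_impairment"):     # S131 (visual, no hearing)
        speak(words)                             # S132

present_ar_words({"visual_impairment": True}, "Exit A, turn left",
                 speak=print, mute_all_sound=lambda: print("(sound off)"))
```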
The receiver 200 first obtains a decode target image by capturing an image which includes two transmitters each transmitting a light ID, and obtains light IDs by decoding a decode target image, as illustrated in (e) of
Even if the receiver 200 has once obtained the light IDs, or in other words, the receiver 200 has already known the light IDs, the receiver 200 may confront, during image capturing, a situation in which the receiver 200 does not know from which of the bright line pattern regions the light IDs are obtained. In such a case, the receiver 200 can readily determine, for each of the known light IDs, from which of the bright line pattern regions the light ID has been obtained, by performing processing illustrated in (a) to (d) of
Specifically, the receiver 200 first obtains a decode target image Pdec11, and obtains the numerical values for the address 0 of the light IDs of the bright line pattern regions X and Y, by decoding the decode target image Pdec11, as illustrated in (a) of
In view of this, the receiver 200 obtains a decode target image Pdec12 as illustrated in (b) of
Accordingly, the receiver 200 further obtains a decode target image Pdec13 as illustrated in (c) of
However, in order to increase reliability, as illustrated in (d) of
As described above, in the present embodiment, the numerical values for at least one address are re-obtained rather than again obtaining the numerical values (namely, data) for all the addresses of the light IDs. Accordingly, the receiver 200 can readily determine from which of the bright line pattern regions the known light IDs are obtained.
Note that in the above examples illustrated in (c) and (d) of
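The re-identification idea can be sketched by modeling each known light ID as a list of per-address numerical values and matching regions against them one address at a time (all values below are invented):

```python
KNOWN_IDS = {"X": [3, 7, 1, 4], "Y": [3, 7, 5, 4]}   # hypothetical per-address values

def identify(regions_read, known_ids):
    """regions_read: {region: {address: value}} re-obtained so far.
    A region is resolved once it matches exactly one known light ID."""
    result = {}
    for region, read in regions_read.items():
        matches = [name for name, vals in known_ids.items()
                   if all(vals[a] == v for a, v in read.items())]
        if len(matches) == 1:
            result[region] = matches[0]
    return result

# Address 0 is identical for both IDs, so it cannot tell the regions apart;
# address 2 differs (1 vs 5) and resolves the ambiguity in one extra step.
print(identify({"P": {0: 3}, "Q": {0: 3}}, KNOWN_IDS))              # {}
print(identify({"P": {0: 3, 2: 1}, "Q": {0: 3, 2: 5}}, KNOWN_IDS))  # {'P': 'X', 'Q': 'Y'}
```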
The receiver 200 is configured as a smartphone in the above examples, yet may be configured as a head-mounted display (also referred to as glasses) which includes the image sensor, as with the example illustrated in
Power consumption increases if a processing circuit for displaying AR images as described above (hereinafter, referred to as AR processing circuit) is kept running at all times, and thus the receiver 200 may start the AR processing circuit when a predetermined signal is detected.
For example, the receiver 200 includes a touch sensor 202. If a user's finger, for instance, touches the touch sensor 202, the touch sensor 202 outputs a touch signal. The receiver 200 starts the AR processing circuit when the touch signal is detected.
Furthermore, the receiver 200 may start the AR processing circuit when a radio wave signal transmitted via, for instance, Bluetooth (registered trademark) or Wi-Fi (registered trademark) is detected.
Furthermore, the receiver 200 may include an acceleration sensor, and start the AR processing circuit when the acceleration sensor measures acceleration greater than or equal to a threshold in a direction opposite the direction of gravity. Specifically, the receiver 200 starts the AR processing circuit when a signal indicating the above acceleration is detected. For example, if the user pushes up a nose-pad portion of the receiver 200 configured as glasses with a fingertip from below, the receiver 200 detects a signal indicating the above acceleration, and starts the AR processing circuit.
Furthermore, the receiver 200 may start the AR processing circuit when the receiver 200 detects that the image sensor is directed to the transmitter 100, according to the GPS or a 9-axis sensor, for instance. Specifically, the receiver 200 starts the AR processing circuit, when a signal indicating that the receiver 200 is directed to a given direction is detected. In this case, if the transmitter 100 is, for instance, a Japanese station sign described above, the receiver 200 superimposes an AR image showing the name of the station in English on the station sign, and displays the image.
If the receiver 200 obtains a light ID from the transmitter 100 (step S141), the receiver 200 switches the noise cancellation mode (step S142). The receiver 200 determines whether to terminate such mode-switching processing (step S143), and if the receiver 200 determines not to terminate the processing (N in step S143), the receiver 200 repeatedly executes the processing from step S141. The noise cancellation modes are, for example, a mode (ON) for cancelling noise from, for instance, the engine when the user is on an airplane, and a mode (OFF) for not cancelling such noise. Specifically, the user carrying the receiver 200 is listening to sound such as music output from the receiver 200 while wearing earphones connected to the receiver 200. If such a user boards an airplane, the receiver 200 obtains a light ID. As a result, the receiver 200 switches the noise cancellation mode from OFF to ON. In this manner, even while on the airplane, the user can listen to sound which does not include noise such as engine noise. When the user gets off the airplane, the receiver 200 again obtains a light ID. The receiver 200 which has obtained the light ID switches the noise cancellation mode from ON to OFF. Note that the noise to be cancelled may be any sound such as human voices, not only engine noise.
This transmission system includes a plurality of transmitters 120 arranged in a predetermined order. The transmitters 120 are each one of the transmitters according to any of Embodiments 1 to 22 above like the transmitter 100, and each include one or more light emitting elements (for example, LEDs). The leading transmitter 120 transmits a light ID by changing luminance of one or more light emitting elements according to a predetermined frequency (carrier frequency). Furthermore, the leading transmitter 120 outputs a signal indicating a change in luminance to the succeeding transmitter 120, as a synchronization signal. Upon receipt of the synchronization signal, the succeeding transmitter 120 changes the luminance of one or more light emitting elements according to the synchronization signal, to transmit a light ID. Furthermore, the succeeding transmitter 120 outputs a signal indicating the change in luminance as a synchronization signal to the next succeeding transmitter 120. In this manner, all the transmitters 120 included in the transmission system transmit the light ID in synchronization.
Here, the synchronization signal is delivered from the leading transmitter 120 to the succeeding transmitter 120, and further from the succeeding transmitter 120 to the next succeeding transmitter 120, and so on until it reaches the last transmitter 120. It takes, for example, about 1 μs to deliver the synchronization signal from one transmitter to the next. Accordingly, if the transmission system includes N transmitters 120 (N is an integer of 2 or more), it will take 1×N μs for the synchronization signal to reach the last transmitter 120 from the leading transmitter 120. As a result, the timing of transmitting the light ID will be delayed by a maximum of N μs. For example, even if the N transmitters 120 transmit a light ID according to a frequency of 9.6 kHz and the receiver 200 is to receive the light ID at a frequency of 9.6 kHz, the receiver 200 receives a light ID delayed by up to N μs, and thus may not properly receive the light ID.
In view of this, in the present embodiment, the leading transmitter 120 transmits a light ID at a higher speed depending on the number of transmitters 120 included in the transmission system. For example, the leading transmitter 120 transmits a light ID according to a frequency of 9.605 kHz. On the other hand, the receiver 200 receives the light ID at a frequency of 9.6 kHz. At this time, even if the receiver 200 receives the light ID delayed for N μs, the frequency at which the leading transmitter 120 has transmitted the light ID is higher than the frequency at which the receiver 200 has received the light ID by 0.005 kHz, and thus the occurrence of an error in reception due to the delay of the light ID can be prevented.
The leading transmitter 120 may control the amount by which the frequency is adjusted, by having the last transmitter 120 feed back the synchronization signal. For example, the leading transmitter 120 measures the time from when the leading transmitter 120 outputs the synchronization signal until when the leading transmitter 120 receives the synchronization signal fed back from the last transmitter 120. Then, the longer the measured time is, the higher above a reference frequency (for example, 9.6 kHz) the frequency at which the leading transmitter 120 transmits the light ID.
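As a numerical sketch of this adjustment, the transmit frequency can be raised in proportion to the accumulated delay. The proportionality constant below is invented, chosen only so that a 100-transmitter chain reproduces the 9.605 kHz example above.

```python
REFERENCE_FREQ_KHZ = 9.6

def adjusted_frequency(n_transmitters, per_hop_delay_us=1.0,
                       khz_per_us=0.00005):
    """The longer the worst-case delay over the chain, the higher above
    the reference frequency the leading transmitter transmits."""
    total_delay_us = n_transmitters * per_hop_delay_us
    return REFERENCE_FREQ_KHZ + total_delay_us * khz_per_us

print(adjusted_frequency(100))   # ~9.605 kHz for a 100-transmitter chain
```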
The transmission system includes two transmitters 120 and the receiver 200, for example. One of the two transmitters 120 transmits a light ID according to a frequency of 9.599 kHz, whereas the other transmitter 120 transmits a light ID according to a frequency of 9.601 kHz. In such a case, the two transmitters 120 each notify the receiver 200 of a frequency at which the light ID is transmitted, by means of a radio wave signal.
Upon receipt of the notification of the frequencies, the receiver 200 attempts decoding according to each of the notified frequencies. Specifically, the receiver 200 attempts to decode a decode target image according to a frequency of 9.599 kHz, and if the receiver 200 cannot receive a light ID by that decoding, the receiver 200 attempts to decode the decode target image according to a frequency of 9.601 kHz. In this way, the receiver 200 attempts to decode the decode target image according to every notified frequency. Alternatively, the receiver 200 may attempt decoding according to the average of all the notified frequencies. Specifically, the receiver 200 attempts decoding according to 9.6 kHz, which is the average of 9.599 kHz and 9.601 kHz.
In this manner, the rate of occurrence of an error in reception caused by a difference in frequency between the receiver 200 and the transmitter 120 can be reduced.
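A sketch of this multi-frequency reception (the decoding function is a stub that only "succeeds" at one frequency, standing in for real decoding of the decode target image):

```python
def try_decode(image, freq_khz):
    """Stub: return a light ID if decoding at this assumed carrier
    frequency succeeds, else None."""
    return 0x1234 if abs(freq_khz - 9.601) < 0.0005 else None

def receive_light_id(image, notified_freqs):
    for f in notified_freqs:              # attempt each notified frequency
        light_id = try_decode(image, f)
        if light_id is not None:
            return light_id
    avg = sum(notified_freqs) / len(notified_freqs)
    return try_decode(image, avg)         # fall back to the average frequency

print(receive_light_id("decode-target", [9.599, 9.601]))   # 4660 (= 0x1234)
```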
First, the receiver 200 starts image capturing (step S151), and initializes the parameter N to 1 (step S152). Next, the receiver 200 decodes a decode target image obtained by the image capturing, according to the frequency associated with the parameter N, and calculates an evaluation value for the decoding result (step S153). For example, the values 1, 2, 3, 4, and 5 of the parameter N are associated in advance with frequencies such as 9.6 kHz, 9.601 kHz, 9.599 kHz, and 9.602 kHz. The evaluation value is higher the more similar the decoding result is to a correct light ID.
Next, the receiver 200 determines whether the numerical value of the parameter N is equal to Nmax, which is a predetermined integer of 1 or more (step S154). Here, if the receiver 200 determines that the numerical value of the parameter N is not equal to Nmax (N in step S154), the receiver 200 increments the parameter N (step S155), and repeatedly executes processing from step S153. On the other hand, if the receiver 200 determines that the numerical value of the parameter N is equal to Nmax (Y in step S154), the receiver 200 registers, in the server, the frequency with which the greatest evaluation value was calculated, as an optimum frequency, in association with location information indicating the location of the receiver 200. The optimum frequency and location information registered in this manner are then used when a receiver 200 which has moved to the location indicated by the location information receives a light ID. Note that the location information may indicate the position measured by the GPS, for example, or may be identification information of an access point in a wireless local area network (LAN) (for example, a service set identifier (SSID)).
The receiver 200 which has registered such a frequency in a server displays the above AR images, for example, according to a light ID obtained by decoding according to the optimum frequency.
After the optimum frequency has been registered in the server illustrated in
Next, the receiver 200 starts image capturing (step S163), and decodes a decode target image obtained by the image capturing, according to the optimum frequency obtained in step S162 (step S164). The receiver 200 displays an AR image as mentioned above, according to a light ID obtained by the decoding, for example.
In this way, after the optimum frequency has been registered in the server, the receiver 200 obtains the optimum frequency and receives a light ID, without executing processing illustrated in
The display method according to the present embodiment is a display method for a display apparatus which is the receiver 200 described above to display an image, and includes steps SL11 to SL16.
In step SL11, the display apparatus obtains a captured display image and a decode target image by the image sensor capturing an image of a subject. In step SL12, the display apparatus obtains a light ID by decoding the decode target image. In step SL13, the display apparatus transmits the light ID to the server. In step SL14, the display apparatus obtains an AR image and recognition information associated with the light ID from the server. In step SL15, the display apparatus recognizes a region according to the recognition information as a target region, from the captured display image. In step SL16, the display apparatus displays the captured display image in which an AR image is superimposed on the target region.
Accordingly, the AR image is superimposed on the captured display image and displayed, and thus an image useful to a user can be displayed. Furthermore, the AR image can be superimposed on an appropriate target region, while preventing an increase in processing load.
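Read as code, steps SL11 to SL16 form the following pipeline (a non-authoritative sketch; the sensor, server, and helper interfaces are assumptions, not defined by this disclosure):

    # Sketch of steps SL11-SL16; decode, recognize, and superimpose are assumed helpers.
    def display_with_ar(image_sensor, server, display, decode, recognize, superimpose):
        captured, decode_target = image_sensor.capture()              # SL11
        light_id = decode(decode_target)                              # SL12
        server.send(light_id)                                         # SL13
        ar_image, recognition_info = server.fetch(light_id)           # SL14
        target_region = recognize(captured, recognition_info)         # SL15
        display.show(superimpose(captured, ar_image, target_region))  # SL16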
Specifically, according to typical augmented reality (namely, AR), it is determined, by comparing a captured display image with a huge number of prestored recognition target images, whether the captured display image includes any of the recognition target images. If it is determined that the captured display image includes a recognition target image, an AR image corresponding to the recognition target image is superimposed on the captured display image. At this time, the AR image is aligned based on the recognition target image. In this manner, according to such typical AR, a huge number of recognition target images are compared against a captured display image, and furthermore, the position of a recognition target image needs to be detected from the captured display image when an AR image is aligned; thus, a large amount of calculation is involved and the processing load is high, which is a problem.
However, with the display method according to the present embodiment, a light ID is obtained by decoding a decode target image obtained by capturing an image of a subject, as illustrated also in
Furthermore, with the display method according to the present embodiment, recognition information associated with the light ID is obtained from the server. Recognition information is for recognizing, from a captured display image, a target region on which an AR image is superimposed. The recognition information may indicate that a white quadrilateral is a target region, for example. In this case, the target region can be recognized easily, and processing load can be further reduced. Specifically, processing load can be further reduced according to the content of recognition information. In the server, the content of the recognition information can be arbitrarily determined according to a light ID, and thus balance between processing load and recognition accuracy can be maintained appropriately.
Here, the recognition information may be reference information for locating a reference region of the captured display image, and in (e), the reference region may be located from the captured display image, based on the reference information, and the target region may be recognized from the captured display image, based on a position of the reference region.
The recognition information may include reference information for locating a reference region of the captured display image, and target information indicating a relative position of the target region with respect to the reference region. In this case, in (e), the reference region is located from the captured display image, based on the reference information, and a region in the relative position indicated by the target information is recognized as the target region from the captured display image, based on a position of the reference region.
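For example, if the reference region and the relative position are expressed as a rectangle and normalized offsets, the target region could be computed as follows (a sketch; the field names are illustrative, not defined by this disclosure):

    from collections import namedtuple

    Rect = namedtuple("Rect", "x y w h")  # pixel coordinates in the captured display image

    # Sketch: derive the target region from the reference region plus target information.
    def locate_target_region(reference, target_info):
        # target_info holds offsets and sizes relative to the reference region,
        # e.g. {"dx": 1.0, "dy": 0.0, "scale_w": 1.0, "scale_h": 1.0} places a
        # region of the same size immediately to the right of the reference.
        return Rect(
            x=reference.x + int(target_info["dx"] * reference.w),
            y=reference.y + int(target_info["dy"] * reference.h),
            w=int(target_info["scale_w"] * reference.w),
            h=int(target_info["scale_h"] * reference.h),
        )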
In this manner, as illustrated in
The reference information may indicate that the position of the reference region in the captured display image matches a position of a bright line pattern region in the decode target image, the bright line pattern region including a pattern formed by bright lines which appear due to exposure lines included in the image sensor being exposed.
In this manner, as illustrated in
The reference information may indicate that the reference region in the captured display image is a region in which a display is shown in the captured display image.
In this manner, if a station sign is a display, a target region can be recognized based on a region in which the display is shown, as illustrated in
In (f), a first AR image which is the AR image may be displayed for a predetermined display period, while preventing display of a second AR image different from the first AR image.
In this manner, when the user is looking at a first AR image displayed once, the first AR image can be prevented from being immediately replaced with a second AR image different from the first AR image, as illustrated in
In (f), decoding a decode target image newly obtained may be prohibited during the predetermined display period.
Accordingly, as illustrated in
Moreover, (f) may further include: measuring an acceleration of the display apparatus using an acceleration sensor during the display period; determining whether the measured acceleration is greater than or equal to a threshold; and displaying the second AR image instead of the first AR image by no longer preventing the display of the second AR image, if the measured acceleration is determined to be greater than or equal to the threshold.
In this manner, as illustrated in
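A minimal sketch of this release condition, assuming a three-axis accelerometer read-out and an illustrative threshold (neither is specified by the disclosure):

    # Sketch: release the first AR image only when the acceleration is large enough.
    def should_release_first_ar(acceleration_sensor, threshold=2.0):  # threshold is illustrative
        ax, ay, az = acceleration_sensor.read()  # acceleration components
        magnitude = (ax * ax + ay * ay + az * az) ** 0.5
        return magnitude >= threshold  # True: stop preventing display of the second AR image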
Moreover, (f) may further include: determining whether a face of a user is approaching the display apparatus, based on image capturing by a face camera included in the display apparatus; and displaying a first AR image while preventing display of a second AR image different from the first AR image, if the face is determined to be approaching. Alternatively, (f) may further include: determining whether a face of a user is approaching the display apparatus, based on an acceleration of the display apparatus measured by an acceleration sensor; and displaying a first AR image while preventing display of a second AR image different from the first AR image, if the face is determined to be approaching.
In this manner, the first AR image can be prevented from being replaced with the second AR image different from the first AR image when the user is bringing his/her face close to the display apparatus to look at the first AR image, as illustrated in
Furthermore, as illustrated in
In this manner, the first subtitles are superimposed on the image of the transmission display, and thus a user can be readily informed of which of a plurality of displays the first subtitles are for. The second subtitles obtained by enlarging the first subtitles are also displayed, and thus even if the first subtitles are small and hard to read, they can be readily read by displaying the second subtitles.
Moreover, (f) may further include: determining whether information obtained from the server includes sound information; and preferentially outputting sound indicated by the sound information over the first subtitles and the second subtitles, if the sound information is determined to be included.
Accordingly, sound is preferentially output, and thus the burden on a user of reading subtitles is reduced.
A display apparatus 10 according to the present embodiment is a display apparatus which displays an image, and includes an image sensor 11, a decoding unit 12, a transmission unit 13, an obtaining unit 14, a recognition unit 15, and a display unit 16. Note that the display apparatus 10 corresponds to the receiver 200 described above.
The image sensor 11 obtains a captured display image and a decode target image by capturing an image of a subject. The decoding unit 12 obtains a light ID by decoding the decode target image. The transmission unit 13 transmits the light ID to a server. The obtaining unit 14 obtains an AR image and recognition information associated with the light ID from the server. The recognition unit 15 recognizes a region according to the recognition information as a target region, from the captured display image. The display unit 16 displays a captured display image in which the AR image is superimposed on the target region.
Accordingly, the AR image is superimposed on the captured display image and displayed, and thus an image useful to a user can be displayed. Furthermore, processing load can be reduced and the AR image can be superimposed on an appropriate target region.
Note that in the present embodiment, each of the elements may be constituted by dedicated hardware, or may be obtained by executing a software program suitable for the element. Each element may be obtained by a program execution unit such as a CPU or a processor reading and executing a software program stored in a hard disk or a recording medium such as semiconductor memory. Here, software which achieves the receiver 200 or the display apparatus 10 according to the present embodiment is a program which causes a computer to execute the steps included in the flowcharts illustrated in
The following describes Variation 1 of Embodiment 23, that is, Variation 1 of the display method which achieves AR using a light ID.
The receiver 200 obtains, by the image sensor capturing an image of a subject, a captured display image Pk which is a normal captured image described above and a decode target image which is a visible light communication image or bright line image described above.
Specifically, the image sensor of the receiver 200 captures an image that includes a transmitter 100c configured as a robot and a person 21 next to the transmitter 100c. The transmitter 100c is any of the transmitters according to Embodiments 1 to 22 above, and includes one or more light emitting elements (for example, LEDs) 131. The transmitter 100c changes luminance by causing one or more of the light emitting elements 131 to blink, and transmits a light ID (light identification information) by the luminance change. The light ID is the above-described visible light signal.
The receiver 200 obtains the captured display image Pk in which the transmitter 100c and the person 21 are shown, by capturing an image that includes the transmitter 100c and the person 21 for a normal exposure time. Furthermore, the receiver 200 obtains a decode target image by capturing an image that includes the transmitter 100c and the person 21, for a communication exposure time shorter than the normal exposure time.
The receiver 200 obtains a light ID by decoding the decode target image. Specifically, the receiver 200 receives a light ID from the transmitter 100c. The receiver 200 transmits the light ID to a server. Then, the receiver 200 obtains an AR image P10 and recognition information associated with the light ID from the server. The receiver 200 recognizes a region according to the recognition information as a target region from the captured display image Pk. For example, the receiver 200 recognizes, as a target region, a region on the right of the region in which the robot which is the transmitter 100c is shown. Specifically, the receiver 200 identifies the distance between two markers 132a and 132b of the transmitter 100c shown in the captured display image Pk. Then, the receiver 200 recognizes, as a target region, a region having the width and the height according to the distance. Specifically, recognition information indicates the shapes of the markers 132a and 132b and the location and the size of a target region based on the markers 132a and 132b.
The receiver 200 superimposes the AR image P10 on the target region, and displays, on the display 201, the captured display image Pk on which the AR image P10 is superimposed. For example, the receiver 200 obtains the AR image P10 showing another robot different from the transmitter 100c. In this case, the AR image P10 is superimposed on the target region of the captured display image Pk, and thus the captured display image Pk can be displayed as if the other robot is actually present next to the transmitter 100c. As a result, the person 21 can have his/her picture taken together with the other robot, as well as the transmitter 100c, even if the other robot does not really exist.
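The marker-based recognition could be sketched as follows; the scale factors stand in for the location and size that the recognition information carries, and the placement rule is an assumption based on the example above:

    import math

    # Sketch: size and place the target region from the two markers 132a and 132b.
    def target_region_from_markers(marker_a, marker_b, width_per_unit, height_per_unit):
        # marker_a, marker_b: (x, y) pixel positions of the markers in the captured image
        d = math.dist(marker_a, marker_b)  # distance between the two markers
        w = width_per_unit * d             # width and height scale with the marker distance,
        h = height_per_unit * d            # as dictated by the recognition information
        x = max(marker_a[0], marker_b[0])  # a region to the right of the robot, per the example
        y = min(marker_a[1], marker_b[1])
        return (x, y, w, h)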
The transmitter 100 is configured as an image display apparatus which includes a display panel, as illustrated in, for example,
The receiver 200 obtains a captured display image Pm and a decode target image by capturing an image of the transmitter 100, in the same manner as the above. The receiver 200 obtains a light ID by decoding the decode target image. Specifically, the receiver 200 receives a light ID from the transmitter 100. The receiver 200 transmits the light ID to a server. Then, the receiver 200 obtains an AR image P11 and recognition information associated with the light ID from the server. The receiver 200 recognizes a region according to the recognition information as a target region, from the captured display image Pm. For example, the receiver 200 recognizes a region in which the display panel of the transmitter 100 is shown as a target region. The receiver 200 superimposes the AR image P11 on the target region, and displays, on the display 201, the captured display image Pm on which the AR image P11 is superimposed. For example, the AR image P11 is a video having a picture which is the same or substantially the same as the still picture PS displayed on the display panel of the transmitter 100, as a leading picture in the display order. Specifically, the AR image P11 is a video which starts moving from the still picture PS.
In this case, the AR image P11 is superimposed on a target region of the captured display image Pm, and thus the receiver 200 can display the captured display image Pm, as if an image display apparatus which displays the video is actually present.
The transmitter 100 is configured as a station sign, as illustrated in, for example,
The receiver 200 captures an image of the transmitter 100 from a location away from the transmitter 100, as illustrated in (a) of
In this case, the AR image P12 is superimposed on the first target region of the captured display image Pn and displayed, and thus the user approaches the transmitter 100 with the receiver 200 facing the transmitter 100. Such approach of the receiver 200 to the transmitter 100 increases a region of the captured display image Pn in which the transmitter 100 is shown (corresponding to the reference region as described above). If the size of the region is greater than or equal to a first threshold, the receiver 200 further superimposes the AR image P13 on a second target region that is a region in which the transmitter 100 is shown, as illustrated in, for example, (b) of
Also in this case, the AR image P12 which is an arrow is superimposed on the first target region of the captured display image Pn and displayed, and thus the user approaches the transmitter 100 with the receiver 200 facing the transmitter 100. Such approach of the receiver 200 to the transmitter 100 further increases a region of the captured display image Pn in which the transmitter 100 is shown (corresponding to the reference region as described above). If the size of the region is greater than or equal to a second threshold, the receiver 200 changes the AR image P13 superimposed on the second target region to the AR image P14, as illustrated in, for example, (c) of
Specifically, the receiver 200 displays, on the display 201, the captured display image Pn on which the AR image P14 is superimposed. For example, the AR image P14 is a message informing a user of detailed information on the vicinity of the station shown on the station sign. The AR image P14 has the same size as a region of the captured display image Pn in which the transmitter 100 is shown. The closer the receiver 200 is to the transmitter 100, the larger the region in which the transmitter 100 is shown. Accordingly, the AR image P14 is larger than the AR image P13.
Accordingly, the receiver 200 enlarges the AR image as the receiver 200 approaches the transmitter 100, and displays more information. In addition, an arrow such as the AR image P12, which prompts the user to bring the receiver 200 closer, is displayed, and thus the user can be readily informed that the closer the user brings the receiver 200, the more information is displayed.
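The progression from the arrow to the detailed message can be sketched as a two-threshold rule on the size of the reference region (the threshold names and return labels are illustrative):

    # Sketch: pick the AR image according to how large the transmitter appears.
    def choose_ar_image(reference_region_area, first_threshold, second_threshold):
        # second_threshold > first_threshold is assumed
        if reference_region_area >= second_threshold:
            return "P14"  # detailed information, sized to the station-sign region
        if reference_region_area >= first_threshold:
            return "P13"  # information superimposed on the station sign
        return "P12"      # arrow guiding the user to approach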
The receiver 200 displays more information if the receiver 200 approaches the transmitter 100 in the example illustrated in
Specifically, the receiver 200 obtains a captured display image Po and a decode target image, by capturing an image of the transmitter 100 as illustrated in
In this case, the AR image P15 is superimposed on the target region of the captured display image Po, and thus the user of the receiver 200 can display a lot of information on the receiver 200, without approaching the transmitter 100.
The receiver 200 is configured as a smartphone in the above example, yet may be configured as a head-mounted display (also referred to as glasses) which includes an image sensor, as with the examples illustrated in
Such a receiver 200 obtains a light ID by decoding only a partial decoding target region of a decode target image. For example, the receiver 200 includes an eye gaze detection camera 203 as illustrated in (a) of
The receiver 200 displays a gaze frame 204 in such a manner that, for example, the gaze frame 204 appears in a region to which the detected gaze is directed in the user's view, as illustrated in (b) of
If the decode target image includes a plurality of bright line pattern regions each for outputting sound, the receiver 200 may decode only a bright line pattern region within a decoding target region, and output only sound for the bright line pattern region. Alternatively, the receiver 200 may decode the plurality of bright line pattern regions included in the decode target image, output sound for the bright line pattern region within the decoding target region at high volume, and output sound for a bright line pattern region outside the decoding target region at low volume. Further, if the plurality of bright line pattern regions are outside the decoding target region, the receiver 200 may output sound for a bright line pattern region at higher volume as the bright line pattern region is closer to the decoding target region.
The transmitter 100 is configured as an image display apparatus which includes a display panel as illustrated in, for example,
The receiver 200 obtains a captured display image Pp and a decode target image by capturing an image of the transmitter 100, similarly to the above.
At this time, the receiver 200 locates, from the captured display image Pp, a region which is in the same position as the bright line pattern region in a decode target image, and has the same size as the bright line pattern region. Then, the receiver 200 may display a scanning line P100 which repeatedly moves from one edge of the region toward the other edge.
While displaying the scanning line P100, the receiver 200 obtains a light ID by decoding a decode target image, and transmits the light ID to a server. The receiver 200 obtains an AR image and recognition information associated with the light ID from the server. The receiver 200 recognizes a region according to the recognition information as a target region, from the captured display image Pp.
If the receiver 200 recognizes such a target region, the receiver 200 terminates the display of the scanning line P100, superimposes an AR image on the target region, and displays, on the display 201, the captured display image Pp on which the AR image is superimposed.
Accordingly, after the receiver 200 has captured an image of the transmitter 100, the receiver 200 displays the scanning line P100 which moves until the AR image is displayed. Thus, a user can be informed that processing of, for instance, reading a light ID and an AR image is being performed.
Two transmitters 100 are each configured as an image display apparatus which includes a display panel, as illustrated in, for example,
The receiver 200 obtains a captured display image Pq and a decode target image by capturing an image that includes the two transmitters 100, similarly to the example illustrated in
The receiver 200 recognizes regions according to those pieces of recognition information as target regions from the captured display image Pq. For example, the receiver 200 recognizes the regions in which the display panels of the two transmitters 100 are shown as target regions. The receiver 200 superimposes the AR image P16 on the target region corresponding to the light ID “01” and superimposes the AR image P17 on the target region corresponding to the light ID “02”. Then, the receiver 200 displays a captured display image Pq on which the AR images P16 and P17 are superimposed, on the display 201. For example, the AR image P16 is a video having, as a leading picture in the display order, a picture which is the same or substantially the same as a still picture PS displayed on the display panel of the transmitter 100 corresponding to the light ID “01”. The AR image P17 is a video having, as the leading picture in the display order, a picture which is the same or substantially the same as a still picture PS displayed on the display panel of the transmitter 100 corresponding to the light ID “02”. Specifically, the leading pictures of the AR images P16 and P17 which are videos are the same. However, the AR images P16 and P17 are different videos, and have different pictures except the leading pictures.
Accordingly, such AR images P16 and P17 are superimposed on the captured display image Pq, and thus the receiver 200 can display the captured display image Pq as if the image display apparatuses which display different videos whose playback starts from the same picture were actually present.
First, the receiver 200 obtains a first light ID by capturing an image of a first transmitter 100 as a first subject (step S201). Next, the receiver 200 recognizes the first subject from the captured display image (step S202). Specifically, the receiver 200 obtains a first AR image and first recognition information associated with the first light ID from a server, and recognizes the first subject, based on the first recognition information. Then, the receiver 200 starts playing a first video which is the first AR image from the beginning (step S203). Specifically, the receiver 200 starts the playback from the leading picture of the first video.
Here, the receiver 200 determines whether the first subject has gone out of the captured display image (step S204). Specifically, the receiver 200 determines whether the receiver 200 is unable to recognize the first subject from the captured display image. Here, if the receiver 200 determines that the first subject has gone out of the captured display image (Y in step S204), the receiver 200 interrupts playback of the first video which is the first AR image (step S205).
Next, by capturing an image of a second transmitter 100 different from the first transmitter 100 as a second subject, the receiver 200 determines whether the receiver 200 has obtained a second light ID different from the first light ID obtained in step S201 (step S206). Here, if the receiver 200 determines that the receiver 200 has obtained the second light ID (Y in step S206), the receiver 200 performs processing similar to the processing in steps S202 to S203 performed after the first light ID is obtained. Specifically, the receiver 200 recognizes the second subject from the captured display image (step S207). Then, the receiver 200 starts playing the second video which is the second AR image corresponding to the second light ID from the beginning (step S208). Specifically, the receiver 200 starts the playback from the leading picture of the second video.
On the other hand, if the receiver 200 determines that the receiver 200 has not obtained the second light ID in step S206 (N in step S206), the receiver 200 determines whether the first subject has come into the captured display image again (step S209). Specifically, the receiver 200 determines whether the receiver 200 again recognizes the first subject from the captured display image. Here, if the receiver 200 determines that the first subject has come into the captured display image (Y in step S209), the receiver 200 further determines whether the elapsed time is less than a previously determined time period (namely, a predetermined time period) (step S210). In other words, the receiver 200 determines whether the predetermined time period has elapsed from when the first subject went out of the captured display image until the first subject came into the captured display image again. Here, if the receiver 200 determines that the elapsed time is less than the predetermined time period (Y in step S210), the receiver 200 resumes playback of the interrupted first video from a point other than the beginning (step S211). Note that a playback resumption leading picture, which is the picture of the first video displayed first when playback resumes, may be the picture next in the display order to the picture displayed last when playback of the first video was interrupted. Alternatively, the playback resumption leading picture may be a picture that is n pictures (n being an integer of 1 or more) earlier in the display order than the picture displayed last.
On the other hand, if the receiver 200 determines that the predetermined time period has elapsed (N in step S210), the receiver 200 starts playing the interrupted first video from the beginning (step S212).
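Steps S204 to S212 can be summarized by a small controller; the following sketch assumes frame-indexed video playback and uses the host clock for the elapsed time (both are assumptions):

    import time

    # Sketch of steps S204-S212: resume or restart the first video.
    class ArVideoController:
        def __init__(self, resume_window_s, rewind_frames=0):
            self.resume_window_s = resume_window_s  # the predetermined time period
            self.rewind_frames = rewind_frames      # n pictures to rewind on resumption
            self.lost_at = None
            self.paused_frame = 0

        def on_subject_lost(self, current_frame):
            self.lost_at = time.monotonic()         # step S205: interrupt playback
            self.paused_frame = current_frame

        def resume_frame(self):
            # Returns the frame index from which playback starts again (steps S210-S212).
            elapsed = time.monotonic() - self.lost_at
            if elapsed < self.resume_window_s:      # Y in step S210: not from the beginning
                return max(0, self.paused_frame - self.rewind_frames)
            return 0                                # N in step S210: from the beginning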
The receiver 200 superimposes an AR image on a target region of a captured display image in the above example, and may adjust the brightness of the AR image at this time. Specifically, the receiver 200 determines whether the brightness of an AR image obtained from the server matches the brightness of a target region of a captured display image. Then, if the receiver 200 determines that the brightness does not match, the receiver 200 causes the brightness of the AR image to match the brightness of the target region by adjusting the brightness of the AR image. Then, the receiver 200 superimposes the AR image whose brightness has been adjusted onto the target region of the captured display image. This makes the superimposed AR image look more like an image of an object that is actually present, and reduces the sense of incongruity the user may feel toward the AR image. Note that the brightness of an AR image is the average spatial brightness of the AR image, and likewise the brightness of the target region is the average spatial brightness of the target region.
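A sketch of this brightness adjustment, treating brightness as the average spatial luminance of 8-bit pixel arrays (NumPy is used here purely for illustration):

    import numpy as np

    # Sketch: scale the AR image so its average brightness matches the target region.
    def match_brightness(ar_image, target_region_pixels):
        ar = ar_image.astype(np.float32)
        gain = float(target_region_pixels.mean()) / max(float(ar.mean()), 1e-6)
        return np.clip(ar * gain, 0, 255).astype(np.uint8)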
The receiver 200 may enlarge an AR image when the user taps the AR image, and display the enlarged AR image on the entire display 201, as illustrated in
The following describes Variation 2 of Embodiment 23, specifically, Variation 2 of the display method which achieves AR using a light ID.
For example, the receiver 200 according to Embodiment 23 or Variation 1 of Embodiment 23 captures an image of a subject at time t1. Note that the above subject is a transmitter such as a TV which transmits a light ID by changing luminance, a poster illuminated with light from the transmitter, a guideboard, or a signboard, for instance. As a result, the receiver 200 displays, as a captured display image, the entire image obtained through an effective pixel region of an image sensor (hereinafter, referred to as entire captured image) on the display 201. At this time, the receiver 200 recognizes, as a target region on which an AR image is to be superimposed, a region according to recognition information obtained based on the light ID, from the captured display image. The target region is a region in which an image of a transmitter such as a TV, or an image of a poster, is shown, for example. The receiver 200 superimposes the AR image on the target region of the captured display image, and displays, on the display 201, the captured display image on which the AR image is superimposed. Note that the AR image may be a still image or a video, or may be a character string which includes one or more characters or symbols.
Here, if the user of the receiver 200 approaches a subject in order to display the AR image in a larger size, a region (hereinafter, referred to as a recognition region) on an image sensor corresponding to the target region protrudes off the effective pixel region at time t2. Note that the recognition region is a region where an image shown in the target region of the captured display image is projected in the effective pixel region of the image sensor. Specifically, the effective pixel region and the recognition region of the image sensor correspond to the captured display image and the target region of the display 201, respectively.
Due to the recognition region protruding off the effective pixel region, the receiver 200 cannot recognize the target region from the captured display image, and cannot display an AR image.
In view of this, the receiver 200 according to this variation obtains, as an entire captured image, an image corresponding to a wider angle of view than that for a captured display image displayed on the entire display 201.
The angle of view for the entire captured image obtained by the receiver 200 according to this variation, that is, the angle of view for the effective pixel region of the image sensor is wider than the angle of view for the captured display image displayed on the entire display 201. Note that in an image sensor, a region corresponding to an image area displayed on the display 201 is hereinafter referred to as a display region.
For example, the receiver 200 captures an image of a subject at time t1. As a result, the receiver 200 displays, on the display 201 as a captured display image, only an image obtained through the display region that is smaller than the effective pixel region of the image sensor, out of the entire captured image obtained through the effective pixel region. At this time, the receiver 200 recognizes, as a target region on which an AR image is to be superimposed, a region according to the recognition information obtained based on the light ID, from the entire captured image, similarly to the above. Then, the receiver 200 superimposes the AR image on the target region of the captured display image, and displays, on the display 201, the captured display image on which the AR image is superimposed.
Here, if the user of the receiver 200 approaches a subject in order to display the AR image in a larger size, the recognition region on the image sensor expands. Then, at time t2, the recognition region protrudes off the display region on the image sensor. Specifically, an image shown in the target region (for example, an image of a poster) protrudes off the captured display image displayed on the display 201. However, the recognition region on the image sensor is not protruding off the effective pixel region. Specifically, the receiver 200 has obtained the entire captured image which includes a target region also at time t2. As a result, the receiver 200 can recognize the target region from the entire captured image. The receiver 200 superimposes, only on a partial region within the target region in the captured display image, a portion of the AR image corresponding to the region, and displays the images on the display 201.
Accordingly, even if the user approaches the subject in order to display the AR image in a greater size and the target region protrudes off the captured display image, the display of the AR image can be continued.
The receiver 200 obtains an entire captured image and a decode target image by the image sensor capturing an image of a subject (step S301). Next, the receiver 200 obtains a light ID by decoding the decode target image (step S302). Next, the receiver 200 transmits the light ID to the server (step S303). Next, the receiver 200 obtains an AR image and recognition information associated with the light ID from the server (step S304). Next, the receiver 200 recognizes a region according to the recognition information as a target region, from the entire captured image (step S305).
Here, the receiver 200 determines whether a recognition region, in the effective pixel region of the image sensor, corresponding to an image shown in the target region protrudes off the display region (step S306). Here, if the receiver 200 determines that the recognition region is protruding off (Yes in step S306), the receiver 200 displays, on only a partial region of the target region in the captured display image, a portion of the AR image corresponding to the partial region (step S307). On the other hand, if the receiver 200 determines that the recognition region is not protruding off (No in step S306), the receiver 200 superimposes the AR image on the target region of the captured display image, and displays the captured display image on which the AR image is superimposed (step S308).
Then, the receiver 200 determines whether processing of displaying the AR image is to be terminated (step S309), and if the receiver 200 determines that the processing is not to be terminated (No in step S309), the receiver 200 repeatedly executes the processing from step S305.
The receiver 200 may switch between screen displays of AR images according to the ratio of the size of the recognition region relative to the display region stated above.
When the horizontal width of the display region of the image sensor is w1, the vertical width is h1, the horizontal width of the recognition region is w2, and the vertical width is h2, the receiver 200 compares the greater one of the ratios (h2/h1) and (w2/w1) with a threshold.
For example, the receiver 200 compares the ratio of the greater one with a first threshold (for example, 0.9) when a captured display image in which an AR image is superimposed on a target region is displayed as shown by (Screen Display 1) in
The receiver 200 compares the greater one of the ratios with a second threshold (for example, 0.7) when, for example, the receiver 200 enlarges the AR image and displays the enlarged AR image over the entire display 201, as shown by (Screen Display 2) in
The receiver 200 first performs light ID processing (step S301a). The light ID processing includes steps S301 to S304 illustrated in
Next, the receiver 200 determines whether the greater one of the ratios of the recognition region, namely, the ratios (h2/h1) and (w2/w1), is greater than or equal to a first threshold K (for example, K = 0.9) (step S313). Here, if the receiver 200 determines that the greater ratio is not greater than or equal to the first threshold K (No in step S313), the receiver 200 repeatedly executes the processing from step S311. On the other hand, if the receiver 200 determines that the greater ratio is greater than or equal to the first threshold K (Yes in step S313), the receiver 200 enlarges the AR image and displays the enlarged AR image over the entire display 201 (step S314). At this time, the receiver 200 periodically switches the power of the image sensor on and off; turning the power of the image sensor off periodically reduces power consumption of the receiver 200.
Next, the receiver 200 determines whether the greater one of the ratios of the recognition region is equal to or smaller than a second threshold L (for example, L = 0.7) when the power of the image sensor is periodically turned on (step S315). Here, if the receiver 200 determines that the greater one of the ratios of the recognition region is not equal to or smaller than the second threshold L (No in step S315), the receiver 200 repeatedly executes the processing from step S314. On the other hand, if the receiver 200 determines that the ratio of the recognition region is equal to or smaller than the second threshold L (Yes in step S315), the receiver 200 superimposes the AR image on the target region of the captured display image, and displays the captured display image on which the AR image is superimposed (step S316).
Then, the receiver 200 determines whether processing of displaying an AR image is to be terminated (step S317), and if the receiver 200 determines that the processing is not to be terminated (No in step S317), the receiver 200 repeatedly executes the processing from step S313.
Accordingly, by setting the second threshold L to a value smaller than the first threshold K, the screen display of the receiver 200 is prevented from being frequently switched between (Screen Display 1) and (Screen Display 2), and the state of the screen display can be stabilized.
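The two thresholds of steps S313 to S316 implement a hysteresis; a sketch, with K and L taken from the example values in the text:

    # Sketch: switch between (Screen Display 1) and (Screen Display 2) with hysteresis.
    def update_screen_display(mode, w1, h1, w2, h2, K=0.9, L=0.7):
        # mode 1: AR image superimposed on the target region
        # mode 2: AR image enlarged over the entire display
        ratio = max(h2 / h1, w2 / w1)  # greater one of the recognition-region ratios
        if mode == 1 and ratio >= K:   # step S313 -> step S314
            return 2
        if mode == 2 and ratio <= L:   # step S315 -> step S316
            return 1
        return mode                    # otherwise, keep the current screen display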
Note that the display region and the effective pixel region may be the same or may be different in the example illustrated in
In the example illustrated in
For example, the receiver 200 captures an image of a subject at time t1. As a result, the receiver 200 displays, on the display 201 as a captured display image, only an image obtained through the display region smaller than the effective pixel region, out of the entire captured image obtained through the effective pixel region of the image sensor. At this time, the receiver 200 recognizes, as a target region on which an AR image is to be superimposed, a region according to recognition information obtained based on a light ID, from the entire captured image, similarly to the above. Then, the receiver 200 superimposes the AR image on the target region of the captured display image, and displays, on the display 201, the captured display image on which the AR image is superimposed.
Here, if the user changes the orientation of the receiver 200 (specifically, the image sensor), the recognition region of the image sensor moves to, for example, the upper left in
When the recognition region protrudes off the display region as described above, the receiver 200 compares, with a threshold, the pixel count for the distance between the edge of the recognition region and the edge of the effective pixel region (hereinafter referred to as an interregional distance).
For example, dh denotes the pixel count for the shorter one (hereinafter referred to as a first distance) of the distance between the upper sides of the effective pixel region and the recognition region and the distance between the lower sides of the effective pixel region and the recognition region. Furthermore, dw denotes the pixel count for the shorter one (hereinafter referred to as a second distance) of the distance between the left sides of the effective pixel region and the recognition region and the distance between the right sides of the effective pixel region and the recognition region. At this time, the above interregional distance is the shorter one of the first and second distances.
Specifically, the receiver 200 compares the smaller one of the pixel counts dw and dh with a threshold N. If the smaller pixel count falls below the threshold N at, for example, time t2, the receiver 200 fixes the size and the position of the displayed portion of the AR image, according to the position of the recognition region of the image sensor. Accordingly, the receiver 200 switches between screen displays of the AR image. For example, the receiver 200 fixes the size and the position of the displayed portion of the AR image to the size and the position of the portion of the AR image displayed on the display 201 at the moment the smaller pixel count reaches the threshold N.
Accordingly, even if the recognition region further moves and protrudes off the effective pixel region at time t3, the receiver 200 continues displaying a portion of the AR image in the same manner as at time t2. Specifically, as long as a smaller one of the pixel counts dw and dh is equal to or less than the threshold N, the receiver 200 superimposes a portion of the AR image whose size and position are fixed on the captured display image in the same manner as at time t2, and continues displaying the images.
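With both regions expressed as pixel rectangles, the interregional distance and the freeze condition can be sketched as follows (the coordinate convention is an assumption):

    # Sketch: freeze the AR portion when the recognition region nears the edge
    # of the effective pixel region. Regions are (left, top, right, bottom) tuples.
    def should_freeze_ar(effective, recognition, threshold_n):
        dh = min(recognition[1] - effective[1], effective[3] - recognition[3])  # first distance
        dw = min(recognition[0] - effective[0], effective[2] - recognition[2])  # second distance
        return min(dw, dh) <= threshold_n  # True: fix the size and position of the AR portion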
In the example illustrated in
For example, similarly to the example illustrated in
In view of this, in the example illustrated in
As described above, when the recognition region protrudes off the display region, the receiver 200 compares the smaller one of the pixel counts dw and dh with the threshold N. Then, if the smaller pixel count falls below the threshold N at, for example, time t2, the receiver 200 fixes the display magnification and the position of the AR image, rather than changing them according to the position of the recognition region of the image sensor. Specifically, the receiver 200 switches between screen displays of the AR image. For example, the receiver 200 fixes the display magnification and the position of the displayed AR image to the display magnification and the position of the AR image displayed on the display 201 at the moment the smaller pixel count reaches the threshold N.
Accordingly, even if the recognition region further moves and protrudes off the effective pixel region at time t3, the receiver 200 continues displaying the AR image in the same manner as at time t2. In other words, as long as the smaller one of the pixel counts dw and dh is equal to or smaller than the threshold N, the receiver 200 superimposes, on the captured display image, the AR image whose display magnification and position are fixed, and continues displaying the images in the same manner as at time t2.
Note that in the above example, a smaller one of the pixel counts dw and dh is compared with the threshold, yet the ratio of the smaller pixel count may be compared with the threshold. The ratio of the pixel count dw is, for example, a ratio (dw/w0) of the pixel count dw relative to the horizontal pixel count w0 of the effective pixel region. Similarly, the ratio of the pixel count dh is, for example, a ratio (dh/h0) of the pixel count dh relative to the vertical pixel count h0 of the effective pixel region. Alternatively, instead of the horizontal or vertical pixel count of the effective pixel region, the ratios of the pixel counts dw and dh may be represented using the horizontal or vertical pixel count of the display region. The threshold compared with the ratios of the pixel counts dw and dh is 0.05, for example.
The angle of view corresponding to the smaller one of the pixel counts dw and dh may be compared with the threshold. If the pixel count along the diagonal line of the effective pixel region is m, and the angle of view corresponding to the diagonal line is θ (for example, 55 degrees), the angle of view corresponding to the pixel count dw is θ×dw/m, and the angle of view corresponding to the pixel count dh is θ×dh/m.
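The conversion from a pixel count to an angle of view, written out (θ = 55 degrees over the diagonal, as in the text; the diagonal pixel count m used in the example is illustrative):

    # Sketch: linear pixel-to-angle conversion over the sensor diagonal.
    def angle_of_view_deg(pixel_count, m, theta_deg=55.0):
        return theta_deg * pixel_count / m

    # For example, with m = 2200 diagonal pixels, dw = 200 pixels corresponds
    # to an angle of view of 55 * 200 / 2200 = 5 degrees.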
In the example illustrated in
For example, the receiver 200 captures an image of a subject at time t1. As a result, the receiver 200 displays, on the display 201 as a captured display image, only an image obtained through the display region smaller than the effective pixel region, out of the entire captured image obtained through the effective pixel region of the image sensor. At this time, the receiver 200 recognizes, as a target region on which an AR image is to be superimposed, a region according to the recognition information obtained based on a light ID, from the entire captured image, similarly to the above. The receiver 200 superimposes an AR image on the target region of the captured display image, and displays, on the display 201, the captured display image on which the AR image is superimposed.
Here, if the user changes the orientation of the receiver 200, the receiver 200 changes the position of the AR image to be displayed, according to the movement of the recognition region of the image sensor. For example, the recognition region of the image sensor moves to the upper left in
When the recognition region further moves and protrudes off the display region, the receiver 200 fixes the size and the position of the AR image displayed at time t2, without changing the size and the position. Specifically, the receiver 200 switches between the screen displays of the AR image.
Thus, even if the recognition region further moves, and protrudes off the effective pixel region at time t3, the receiver 200 continues displaying the AR image in the same manner as at time t2. Specifically, as long as the recognition region is off the display region, the receiver 200 superimposes the AR image on the captured display image in the same size as at time t2 and in the same position as at time t2, and continues displaying the images.
Accordingly, in the example illustrated in
Although the above is a description of the screen display of the AR image with reference to
Note that in the example illustrated in
In view of this, as illustrated in
The display method according to an aspect of the present disclosure includes steps S41 to S43.
In step S41, a captured image is obtained by an image sensor capturing an image of, as a subject, an object illuminated by a transmitter which transmits a signal by changing luminance. In step S42, the signal is decoded from the captured image. In step S43, a video corresponding to the decoded signal is read from a memory, the video is superimposed on a target region corresponding to the subject in the captured image, and the captured image in which the video is superimposed on the target region is displayed on a display. Here, in step S43, the video is displayed, starting with one of, among the images included in the video, an image which includes the object and a predetermined number of images which are to be displayed around the time at which the image which includes the object is to be displayed. The predetermined number of images are, for example, ten frames. Alternatively, the object is a still image, and in step S43, the video is displayed, starting with an image that is the same as the still image. Note that the image with which the display of the video starts is not limited to the same image as the still image, and may be an image located a predetermined number of frames before or after, in the display order, the image that is the same as the still image (that is, the image which includes the object). The object is not limited to a still image, and may be a doll, for instance.
Note that the image sensor and the captured image are the image sensor and the entire captured image in Embodiment 23, for example. Furthermore, an illuminated still image may be a still image displayed on the display panel of the image display apparatus, and may also be a poster, a guideboard, or a signboard illuminated with light from a transmitter.
Such a display method may further include a transmission step of transmitting a signal to a server, and a receiving step of receiving a video corresponding to the signal from the server.
In this manner, as illustrated in, for example,
The still image may include an outer frame having a predetermined color, and the display method according to an aspect of the present disclosure may include recognizing the target region from the captured image, based on the predetermined color. In this case, in step S43, the video may be resized to a size of the recognized target region, the resized video may be superimposed on the target region in the captured image, and the captured image in which the resized video is superimposed on the target region may be displayed on the display. For example, the outer frame having a predetermined color is a white or black quadrilateral frame surrounding a still image, and is indicated by recognition information in Embodiment 23. Then, the AR image in Embodiment 23 is resized as a video and superimposed.
Accordingly, a video can be displayed more realistically as if the video were actually present as a subject.
Out of an imaging region of the image sensor, only an image to be projected in the display region smaller than the imaging region is displayed on a display. In this case, in step S43, if a projection region in which a subject is projected in the imaging region is larger than the display region, an image obtained through a portion of the projection region beyond the display region may not be displayed on the display. Here, for example, as illustrated in
In this manner, for example, as illustrated in
For example, the horizontal and vertical widths of the display region are w1 and h1, and the horizontal and vertical widths of the projection region are w2 and h2. In this case, in step S43, if a greater value of h2/h1 and w2/w1 is greater than or equal to a predetermined value, a video is displayed on the entire screen of the display, and if a greater value of h2/h1 and w2/w1 is smaller than the predetermined value, a video may be superimposed on the target region of the captured image, and displayed on the display.
Accordingly, as illustrated in, for example,
The display method according to an aspect of the present disclosure may further include a control step of turning off the operation of the image sensor if a video is displayed on the entire screen of the display.
Accordingly, for example, as illustrated in step S314 in
In step S43, if a target region cannot be recognized from a captured image due to the movement of the image sensor, a video may be displayed in the same size as the size of the target region recognized immediately before the target region is unable to be recognized. Note that the case in which the target region cannot be recognized from a captured image is a state in which, for example, at least a portion of a target region corresponding to a still image which is a subject is not included in a captured image. If a target region cannot be thus recognized, a video having the same size as the size of the target region recognized immediately before is displayed, as with the case at time t3 in
In step S43, if the movement of the image sensor brings only a portion of the target region into a region of the captured image which is to be displayed on the display, a portion of a spatial region of a video corresponding to the portion of the target region may be superimposed on the portion of the target region and displayed on the display. Note that the portion of the spatial region of the video is a portion of each of the pictures which constitute the video.
Accordingly, for example, as at time t2 in
In step S43, if the movement of the image sensor makes the target region unable to be recognized from the captured image, a portion of a spatial region of a video corresponding to a portion of the target region which has been displayed immediately before the target region becomes unable to be recognized may be continuously displayed.
In this manner, for example, as at time t3 in
Furthermore, in step S43, if the horizontal and vertical widths of the imaging region of the image sensor are w0 and h0, and the distances in the horizontal and vertical directions between the imaging region and a projection region of the imaging region, in which the subject is projected, are dw and dh, respectively, it may be determined that the target region cannot be recognized when the smaller value of dw/w0 and dh/h0 is equal to or less than a predetermined value. Note that the projection region is the recognition region illustrated in
Accordingly, whether the target region can be recognized can be appropriately determined.
A display apparatus A10 according to an aspect of the present disclosure includes an image sensor A11, a decoding unit A12, and a display control unit A13.
The image sensor A11 obtains a captured image by capturing, as a subject, an image of a still image illuminated by a transmitter which transmits a signal by changing luminance.
The decoding unit A12 decodes a signal from the captured image.
The display control unit A13 reads a video corresponding to the decoded signal from a memory, superimposes the video on a target region corresponding to the subject in the captured image, and displays the images on the display. Here, the display control unit A13 displays a plurality of images in order, starting from a leading image which is the same image as a still image among a plurality of images included in the video.
Accordingly, the same advantageous effects as those obtained by the display method described above can be produced.
The image sensor A11 may include a plurality of micro mirrors and a photosensor, and the display apparatus A10 may further include an imaging controller which controls the image sensor. In this case, the imaging controller locates a region which includes a signal as a signal region, from the captured image, and controls the angle of a micro mirror corresponding to the located signal region, among the plurality of micro mirrors. The imaging controller causes the photosensor to receive only light reflected off the micro mirror whose angle has been controlled, among the plurality of micro mirrors.
In this manner, as illustrated in, for example,
It should be noted that in the embodiments and the variations described above, each of the elements may be constituted by dedicated hardware or may be obtained by executing a software program suitable for the element. Each element may be obtained by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or semiconductor memory. For example, the program causes a computer to execute the display method shown by the flowcharts in
The above is a description of the display method according to one or more aspects, based on the embodiments and the variations, yet the present disclosure is not limited to such embodiments. The present disclosure may also include embodiments as a result of adding, to the embodiments, various modifications that may be conceived by those skilled in the art, and embodiments obtained by combining constituent elements in the embodiments without departing from the spirit of the present disclosure.
The following describes Variation 3 of Embodiment 23, that is, Variation 3 of the display method which achieves AR using a light ID.
The receiver 200 superimposes an AR image P21 on a target region of a captured display image Ppre as illustrated in (a) of
Here, upon reception of a resizing instruction, the receiver 200 resizes the AR image P21 according to the instruction, as illustrated in (b) of
Furthermore, upon reception of a position change instruction as illustrated in (c) of
Thus, enlarging an AR image which is a video can make the AR image readily viewed, and also reducing or moving an AR image which is a video can allow a region of the captured display image Ppre covered by the AR image to be displayed to the user.
The receiver 200 superimposes an AR image P22 on the target region of a captured display image Ppre as illustrated in (a) in
Here, upon reception of a resizing instruction, the receiver 200 resizes the AR image P22 according to the instruction, as illustrated in (b) of
Upon further reception of a resizing instruction, the receiver 200 resizes the AR image P22 according to the instruction as illustrated in (c) of
Note that when an enlargement instruction is received, if the enlargement ratio of the AR image according to the instruction will be greater than or equal to a threshold, the receiver 200 may obtain a high-resolution AR image. In this case, instead of the original AR image already displayed, the receiver 200 may enlarge and display the high-resolution AR image at that enlargement ratio. For example, the receiver 200 displays an AR image having 1920×1080 pixels, instead of an AR image having 640×480 pixels. In this manner, the AR image can be enlarged as if the AR image were actually being captured as a subject, and a high-resolution image which cannot be obtained by optical zoom can also be displayed.
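This reobtaining rule (steps S406 and S407 in the flow described next) amounts to the following sketch; fetch_high_res stands for a hypothetical server request:

    # Sketch: reobtain a high-resolution AR image when the enlargement is large.
    def ar_image_for_zoom(current_ar, enlargement_ratio, threshold, fetch_high_res):
        if enlargement_ratio >= threshold:
            return fetch_high_res()  # e.g. a 1920x1080 version replacing 640x480
        return current_ar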
First, the receiver 200 starts image capturing for a normal exposure time and a communication exposure time similarly to step S101 illustrated in the flowchart in
Next, the receiver 200 performs AR image superimposing processing which includes processing in steps S102 to S106 illustrated in the flowchart in
Next, the receiver 200 determines whether a resizing instruction has been received (step S404). Here, if the receiver 200 determines that a resizing instruction has been received (Yes in step S404), the receiver 200 further determines whether the resizing instruction is an enlargement instruction (step S405). If the receiver 200 determines that the resizing instruction is an enlargement instruction (Yes in step S405), the receiver 200 determines whether the AR image needs to be reobtained (step S406). For example, if the receiver 200 determines that the enlargement ratio of the AR image according to the enlargement instruction will be greater than or equal to a threshold, the receiver 200 determines that the AR image needs to be reobtained. Here, if the receiver 200 determines that the AR image needs to be reobtained (Yes in step S406), the receiver 200 obtains a high-resolution AR image from a server, and replaces the AR image superimposed and displayed with the high-resolution AR image (step S407).
Then, the receiver 200 resizes the AR image according to the received resizing instruction (step S408). Specifically, if a high-resolution AR image is obtained in step S407, the receiver 200 enlarges the high-resolution AR image. If the receiver 200 determines in step S406 that an AR image does not need to be reobtained (No in step S406), the receiver 200 enlarges the AR image superimposed. If the receiver 200 determines in step S405 that the resizing instruction is a reduction instruction (No in step S405), the receiver 200 reduces the AR image superimposed and displayed, according to the received resizing instruction, namely, the reduction instruction.
On the other hand, if the receiver 200 determines in step S404 that the resizing instruction has not been received (No in step S404), the receiver 200 determines whether a position change instruction has been received (step S409). Here, if the receiver 200 determines that a position change instruction has been received (Yes in step S409), the receiver 200 changes the position of the AR image superimposed and displayed, according to the position change instruction (step S410). Specifically, the receiver 200 moves the AR image. Furthermore, if the receiver 200 determines that the position change instruction has not been received (No in step S409), the receiver 200 repeatedly executes processing from step S404.
If the receiver 200 has changed the size of the AR image in step S408 or has changed the position of the AR image in step S410, the receiver 200 determines whether the light ID that has been periodically obtained since step S401 is still being obtained (step S411). Here, if the receiver 200 determines that the light ID is no longer obtained (No in step S411), the receiver 200 terminates the processing operation with regard to enlargement and movement of the AR image. On the other hand, if the receiver 200 determines that the light ID is still being obtained (Yes in step S411), the receiver 200 repeatedly executes the processing from step S404.
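The loop of steps S404 to S411 may be summarized in code as follows; this is a minimal sketch in which `get_instruction`, `fetch_high_res_ar`, `light_id_alive`, and the enlargement threshold are hypothetical placeholders, not names from this disclosure.

```python
ENLARGE_THRESHOLD = 2.0  # assumed ratio at which a high-resolution image is needed

def ar_resize_move_loop(ar, get_instruction, fetch_high_res_ar, light_id_alive):
    """Steps S404-S411: resize or move the superimposed AR image until the
    light ID is no longer being obtained. `ar` is a dict holding the image,
    its scale, and its position; the three callbacks are placeholders."""
    while True:
        instr = get_instruction()                      # S404 / S409
        if instr is None:
            continue                                   # no instruction: poll again
        if instr["type"] == "resize":                  # Yes in S404
            ratio = instr["ratio"]
            if ratio > 1.0 and ar["scale"] * ratio >= ENLARGE_THRESHOLD:
                ar["image"] = fetch_high_res_ar()      # S406-S407: reobtain
            ar["scale"] *= ratio                       # S408: enlarge or reduce
        elif instr["type"] == "move":                  # Yes in S409
            ar["pos"] = instr["pos"]                   # S410
        if not light_id_alive():                       # S411
            break                                      # light ID lost: terminate
```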
The receiver 200 superimposes an AR image P23 on a target region of a captured display image Ppre, as described above. Here, as illustrated in
For example, if the AR image P23 has a quadrilateral shape, the closer a portion of the AR image P23 is to an upper edge, a lower edge, a left edge, or a right edge of the quadrilateral, the higher the transmittance of the portion is. More specifically, the transmittance of the portions at the edges is 100%. Furthermore, the AR image P23 includes, in the center portion, a quadrilateral area which has a transmittance of 0% and is smaller than the AR image P23. The quadrilateral area shows, for example, “Kyoto Station” in English. Specifically, the transmittance changes gradually from 0% to 100% like gradations at the edge portions of the AR image P23.
The receiver 200 superimposes the AR image P23 on the target region of the captured display image Ppre, as illustrated in
Here, as described above, the closer portions of the AR image P23 are to the edges of the AR image P23, the higher the transmittance of the portions is. Accordingly, when the AR image P23 is superimposed on the target region, even if a quadrilateral area in the center portion of the AR image P23 is displayed, the edges of the AR image P23 are not displayed, and the edges of the target region, namely, the edges of the image of the station sign are displayed.
This makes misalignment between the AR image P23 and the target region less noticeable. Specifically, even when the AR image P23 is superimposed on a target region, the movement of the receiver 200, for instance, may cause misalignment between the AR image P23 and the target region. In this case, if the transmittance of the entire AR image P23 is 0%, the edges of the AR image P23 and the edges of the target region are displayed and thus the misalignment will be noticeable. However, with regard to the AR image P23 according to the variation, the closer a portion is to an edge, the higher the transmittance of the portion is, and thus the edges of the AR image P23 are less likely to appear, and as a result, misalignment between the AR image P23 and the target region can be made less noticeable. Furthermore, the transmittance of the AR image P23 changes like gradations at the edge portions of the AR image P23, and thus superimposition of the AR image P23 on the target region can be made less noticeable.
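The edge transmittance described above can be modeled as an alpha (opacity) mask that is fully opaque in the center and fades linearly to fully transparent at the edges. The following NumPy sketch builds such a mask; the border width is an assumed parameter.

```python
import numpy as np

def edge_fade_alpha(height: int, width: int, border: int) -> np.ndarray:
    """Alpha (opacity) mask: 1.0 (0% transmittance) in the center,
    falling linearly to 0.0 (100% transmittance) at the outer edges,
    like the gradations described for the AR image P23."""
    y = np.arange(height)[:, None]
    x = np.arange(width)[None, :]
    # Per-pixel distance (in pixels) from the nearest image edge.
    dist = np.minimum(np.minimum(y, height - 1 - y),
                      np.minimum(x, width - 1 - x))
    return np.clip(dist / border, 0.0, 1.0)

alpha = edge_fade_alpha(480, 640, border=40)
# Compositing: out = alpha * AR image + (1 - alpha) * captured display image.
```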
The receiver 200 superimposes an AR image P24 on a target region of a captured display image Ppre as described above. Here, as illustrated in
The receiver 200 recognizes, as a target region, a region larger than the white-framed image and smaller than the black-framed image, within the captured display images Ppre. Then, the receiver 200 adjusts the size of the AR image P24 to the size of the target region and superimposes the resized AR image P24 on the target region.
In this manner, even if the superimposed AR image P24 is misaligned from the target region due to, for instance, the movement of the receiver 200, the AR image P24 can be continuously displayed being surrounded by the black frame. Accordingly, the misalignment between the AR image P24 and the target region can be made less noticeable.
Note that the colors of the frames are black and white in the example illustrated in
For example, the receiver 200 captures, as a subject, an image of a poster in which a castle illuminated in the night sky is drawn. For example, the poster is illuminated by the above-described transmitter 100 achieved as a backlight device, and transmits a visible light signal (namely, a light ID) using the backlight. The receiver 200 obtains, by the image capturing, a captured display image Ppre which includes an image of the subject which is the poster, and an AR image P25 associated with the light ID. Here, the AR image P25 has the same shape as an image of the poster from which the region in which the above-mentioned castle is drawn has been extracted. Stated differently, the region corresponding to the castle in the image of the poster is masked in the AR image P25. Furthermore, the AR image P25 is obtained such that the closer a portion is to an edge, the higher the transmittance of the portion is, as with the AR image P23 described above. In the center portion of the AR image P25, whose transmittance is 0%, fireworks set off in the night sky are displayed as a video.
The receiver 200 adjusts the size of the AR image P25 to the size of the target region which is the image of the subject, and superimposes the resized AR image P25 on the target region. As a result, the castle drawn on the poster is displayed not as an AR image, but as an image of the subject, and a video of the fireworks is displayed as an AR image.
Accordingly, the captured display image Ppre can be displayed as if the fireworks were actually set off in the poster. The closer a portion of the AR image P25 is to an edge, the higher the transmittance of the portion is. Accordingly, when the AR image P25 is superimposed on the target region, the center portion of the AR image P25 is displayed, but the edges of the AR image P25 are not displayed, and the edges of the target region are displayed. As a result, misalignment between the AR image P25 and the target region can be made less noticeable. Furthermore, at the edge portions of the AR image P25, the transmittance changes like gradations, and thus superimposition of the AR image P25 on the target region can be made less noticeable.
For example, the receiver 200 captures, as a subject, an image of the transmitter 100 achieved as a TV. Specifically, the transmitter 100 displays a castle illuminated in the night sky on its display, and also transmits a visible light signal (namely, a light ID). The receiver 200 obtains, by image capturing, a captured display image Ppre in which the transmitter 100 is shown and an AR image P26 associated with the light ID. Here, the receiver 200 first displays the captured display image Ppre on the display 201. At this time, the receiver 200 displays, on the display 201, a message m which prompts a user to turn off the light. Specifically, the message m indicates, for example, “Please turn off the light and darken the room”.
The display of the message m prompts the user to turn off the light so that the room in which the transmitter 100 is placed becomes dark, and the receiver 200 superimposes an AR image P26 on the captured display image Ppre, and displays the images. Here, the AR image P26 has the same size as the captured display image Ppre, and a region of the AR image P26 corresponding to the castle in the captured display image Ppre is extracted from the AR image P26. Stated differently, the region of the AR image P26 corresponding to the castle of the captured display image Ppre is masked. Accordingly, the castle of the captured display image Ppre can be shown to the user through the region. At the edge portions of the region of the AR image P26, transmittance may gradually change from 0% to 100% like gradations, similarly to the above. In this case, misalignment between the captured display image Ppre and the AR image P26 can be made less noticeable.
In the above-mentioned example, an AR image having high transmittance at the edge portions is superimposed on the target region of the captured display image Ppre, and thus the misalignment between the AR image and the target region is made less noticeable. However, instead of such an AR image, an AR image which has the same size as the captured display image Ppre and the entirety of which is semi-transparent (that is, has a transmittance of 50%) may be superimposed on the captured display image Ppre. Even in such a case, misalignment between the AR image and the target region can be made less noticeable. If the entire captured display image Ppre is bright, an AR image having uniformly low transmittance may be superimposed on the captured display image Ppre, whereas if the entire captured display image Ppre is dark, an AR image having uniformly high transmittance may be superimposed on the captured display image Ppre.
Note that objects such as fireworks in the AR image P25 and the AR image P26 may be represented using computer graphics (CG). In this case, masking will be unnecessary. In the example illustrated in
For example, the transmitter 100 is configured as a large display installed in a stadium. The transmitter 100 displays a message indicating that, for example, fast food and drinks can be ordered using a light ID, and furthermore transmits a visible light signal (namely, a light ID). If such a message is displayed, a user directs the receiver 200 to the transmitter 100 and captures an image of the transmitter 100. Specifically, the receiver 200 captures, as a subject, an image of the transmitter 100 configured as a large display installed in the stadium.
The receiver 200 obtains a captured display image Ppre and a decode target image Pdec through the image capturing. Then, the receiver 200 obtains a light ID by decoding the decode target image Pdec, and transmits the light ID and the captured display image Ppre to a server.
From among pieces of installation information associated with light IDs, the server identifies the installation information associated with the light ID transmitted from the receiver 200, that is, the installation information of the large display whose image has been captured. For example, the installation information indicates the position and orientation in which the large display is installed, the size of the large display, and the like. Furthermore, the server determines the seat number in the stadium at which the captured display image Ppre has been captured, based on the installation information and the size and orientation of the large display shown in the captured display image Ppre. Then, the server causes the receiver 200 to display a menu screen which includes the seat number.
A menu screen m1 includes, for example, for each item, an input column ma1 into which the number of the items to be ordered is input, a seat column mb1 indicating the seat number in the stadium determined by the server, and an order button mc1. The user inputs the number of the items to be ordered into the input column ma1 for a desired item by operating the receiver 200, and selects the order button mc1. Accordingly, the order is settled, and the receiver 200 transmits the detailed order according to the input result to the server.
Upon reception of the detailed order, the server gives an instruction to the staff of the stadium to deliver the ordered item(s), the number of which is based on the detailed order, to the seat having the number determined as described above.
The receiver 200 first captures an image of the transmitter 100 configured as a large display of the stadium (step S421). The receiver 200 obtains a light ID transmitted from the transmitter 100, by decoding a decode target image Pdec obtained by the image capturing (step S422). The receiver 200 transmits, to a server, the light ID obtained in step S422 and the captured display image Ppre obtained by the image capturing in step S421 (step S423).
Upon reception of the light ID and the captured display image Ppre (step S424), the server identifies, based on the light ID, installation information of the large display installed at the stadium (step S425). For example, the server holds a table indicating, for each light ID, installation information of a large display associated with the light ID, and identifies installation information by retrieving, from the table, installation information associated with the light ID transmitted from the receiver 200.
Next, based on the identified installation information and the size and the orientation of the large display shown in the captured display image Ppre, the server identifies the seat number in the stadium at which the captured display image Ppre is obtained (namely, captured) (step S426). Then, the server transmits, to the receiver 200, the uniform resource locator (URL) of the menu screen m1 which includes the number of the identified seat (step S427).
Upon reception of the URL of the menu screen m1 transmitted from the server (step S428), the receiver 200 accesses the URL and displays the menu screen m1 (step S429). Here, the user inputs the details of the order to the menu screen m1 by operating the receiver 200, and settles the order by selecting the order button mc1. Accordingly, the receiver 200 transmits the details of the order to the server (step S430).
Upon reception of the detailed order transmitted from the receiver 200, the server performs processing of accepting the order according to the details of the order (step S431). At this time, for example, the server instructs the staff of the stadium to deliver one or more items according to the number indicated in the details of the order to the seat number identified in step S426.
Accordingly, the seat number is identified based on the captured display image Ppre obtained by image capturing by the receiver 200, and thus the user of the receiver 200 does not need to specially input his/her seat number when placing an order for items. Thus, the user can skip inputting the seat number and order items easily.
Note that although the server identifies the seat number in the above example, the receiver 200 may identify the seat number. In this case, the receiver 200 obtains installation information from the server, and identifies the seat number, based on the installation information and the size and the orientation of the large display shown in the captured display image Ppre.
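A rough server-side sketch of steps S425 to S427 follows. The installation table, the pinhole-model seat estimation, and the URL format are all assumptions standing in for whatever geometry and data the server actually uses.

```python
INSTALLATION = {  # hypothetical table: light ID -> installation information
    "lid-42": {"height_m": 10.0},
}

def estimate_seat(install, apparent_height_px, focal_px, skew_deg):
    """Stand-in for step S426: the apparent height of the display gives a
    viewing distance under a pinhole model, and the skew of its outline
    gives a viewing angle; both then index a seat. The 5 m rows and
    10-degree sections are invented for the sketch."""
    distance_m = install["height_m"] * focal_px / apparent_height_px
    row = int(distance_m // 5)
    section = int(skew_deg // 10)
    return f"S{section}-R{row}"

def handle_order_request(light_id, apparent_height_px, focal_px, skew_deg):
    install = INSTALLATION[light_id]                                       # S425
    seat = estimate_seat(install, apparent_height_px, focal_px, skew_deg)  # S426
    return f"https://example.com/menu?seat={seat}"                         # S427

print(handle_order_request("lid-42", 200.0, 1000.0, 12.0))  # seat S1-R10
```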
The receiver 1800a receives a light ID (visible light signal) transmitted from a transmitter 1800b configured as, for example, street digital signage, similarly to the example indicated in
Here, when playing sound as described above, the receiver 1800a adjusts the volume of the sound according to the distance to the transmitter 1800b. Specifically, the receiver 1800a decreases the volume as the distance to the transmitter 1800b increases, and conversely increases the volume as the distance to the transmitter 1800b decreases.
The receiver 1800a may determine the distance to the transmitter 1800b using the global positioning system (GPS), for instance. Specifically, the receiver 1800a obtains positional information of the transmitter 1800b associated with a light ID from the server, for instance, and further locates the position of the receiver 1800a by the GPS. Then, the receiver 1800a determines a distance between the position of the transmitter 1800b indicated by the positional information obtained from the server and the determined position of the receiver 1800a to be the distance to the transmitter 1800b described above. Note that the receiver 1800a may determine the distance to the transmitter 1800b, using, for instance, Bluetooth (registered trademark), instead of the GPS.
The receiver 1800a may determine the distance to the transmitter 1800b, based on the size of a bright line pattern region of the above-described decode target image Pdec obtained by image capturing. The bright line pattern region is a region which includes a pattern formed by a plurality of bright lines which appear due to a plurality of exposure lines included in the image sensor of the receiver 1800a being exposed for the communication exposure time, similarly to the example shown in
Accordingly, the volume is adjusted according to the distance to the transmitter 1800b, and thus the user of the receiver 1800a can catch the sound played by the receiver 1800a, as if the sound were actually played by the transmitter 1800b.
For example, if the distance to the transmitter 1800b is between L1 and L2 [m], the volume increases or decreases in a range of Vmin to Vmax [dB] in proportion to the distance. Specifically, the receiver 1800a linearly decreases the volume from Vmax [dB] to Vmin [dB] as the distance to the transmitter 1800b increases from L1 [m] to L2 [m]. Furthermore, when the distance to the transmitter 1800b is shorter than L1 [m], the receiver 1800a maintains the volume at Vmax [dB], and when the distance to the transmitter 1800b is longer than L2 [m], the receiver 1800a maintains the volume at Vmin [dB].
Accordingly, the receiver 1800a stores the maximum volume Vmax, the longest distance L1 at which sound of the maximum volume Vmax is output, the minimum volume Vmin, and the shortest distance L2 at which sound of the minimum volume Vmin is output. The receiver 1800a may change the maximum volume Vmax, the minimum volume Vmin, the longest distance L1, and the shortest distance L2, according to an attribute set in the receiver 1800a. For example, if the attribute is the age of the user and the age indicates that the user is an elderly person, the receiver 1800a may set the maximum volume Vmax to a volume higher than a reference maximum volume, and may set the minimum volume Vmin to a volume higher than a reference minimum volume. Furthermore, the attribute may be information indicating whether sound is output from a speaker or from an earphone.
As described above, the minimum volume Vmin is set in the receiver 1800a, which prevents the sound from becoming inaudible when the receiver 1800a is too far from the transmitter 1800b. Furthermore, the maximum volume Vmax is set in the receiver 1800a, which prevents unnecessarily loud sound from being output when the receiver 1800a is very near the transmitter 1800b.
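The clamped linear volume control described above can be written directly as a function of distance; the values of L1, L2, Vmin, and Vmax below are arbitrary examples.

```python
def volume_for_distance(d_m, l1=5.0, l2=50.0, v_min=30.0, v_max=80.0):
    """Volume in dB as a function of the distance to the transmitter:
    held at v_max up to l1, linear from v_max down to v_min between l1
    and l2, and held at v_min beyond l2."""
    if d_m <= l1:
        return v_max
    if d_m >= l2:
        return v_min
    t = (d_m - l1) / (l2 - l1)
    return v_max + t * (v_min - v_max)

for d in (1, 5, 27.5, 50, 100):
    print(d, volume_for_distance(d))  # 80, 80, 55, 30, 30 dB
```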
The receiver 200 captures an image of an illuminated signboard. Here, the signboard is illuminated by a lighting apparatus which is the above-described transmitter 100 which transmits a light ID. Accordingly, the receiver 200 obtains a captured display image Ppre and a decode target image Pdec by the image capturing. Then, the receiver 200 obtains a light ID by decoding the decode target image Pdec, and obtains, from a server, AR images P27a to P27c and recognition information which are associated with the light ID. Based on the recognition information, the receiver 200 recognizes, as a target region, the periphery of a region m2 in which the signboard is shown in the captured display image Ppre.
Specifically, the receiver 200 recognizes a region in contact with the left portion of the region m2 as a first target region, and superimposes an AR image P27a on the first target region, as illustrated in (a) of
Next, the receiver 200 recognizes a region which includes a lower portion of the region m2 as a second target region, and superimposes an AR image P27b on the second target region, as illustrated in (b) of
Next, the receiver 200 recognizes a region in contact with the upper portion of the region m2 as a third target region, and superimposes an AR image P27c on the third target region, as illustrated in (c) of
Here, the AR images P27a to P27c may each be a video showing an image of a character of an abominable snowman, for example.
While continuously and repeatedly obtaining a light ID, the receiver 200 may switch the target region to be recognized to one of the first to third target regions in a predetermined order and at predetermined timings. Specifically, the receiver 200 may switch a target region to be recognized in the order of the first target region, the second target region, and the third target region. Alternatively, the receiver 200 may switch the target region to be recognized to one of the first to third target regions in a predetermined order, each time the receiver 200 obtains a light ID as described above. Specifically, while the receiver 200 continuously and repeatedly obtains a light ID after the receiver 200 first obtains the light ID, the receiver 200 recognizes the first target region and superimposes the AR image P27a on the first target region, as illustrated in (a) of
If the receiver 200 switches between target regions to be recognized each time the receiver 200 obtains a light ID as described above, the receiver 200 may change the color of the AR image to be displayed, at a frequency of once in N times (N is an integer of 2 or more). N may be the number of times an AR image is displayed, and is, for example, 200. Specifically, the AR images P27a to P27c are all images of the same white character, but an AR image showing, for example, a pink character is displayed at a frequency of once in 200 times. The receiver 200 may give points to the user if user operation directed to the AR image is received while such an AR image showing the pink character is displayed.
Accordingly, switching between the target regions on which an AR image is superimposed and changing the color of the AR image at a predetermined frequency can attract the user to capturing an image of the signboard illuminated by the transmitter 100, thus prompting the user to repeatedly obtain a light ID.
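As a minimal sketch, the region cycling and the once-in-N color change might be implemented as follows; the region names and the random draw are assumptions, while N = 200 follows the example above.

```python
import random

REGIONS = ("first", "second", "third")  # target regions cycled in order
RARE_ONE_IN_N = 200                     # assumed frequency of the rare color

def next_display(count: int):
    """Pick the target region in rotating order, and show the pink
    character roughly once in N displays (a deterministic counter
    would work equally well as the random draw used here)."""
    region = REGIONS[count % len(REGIONS)]
    color = "pink" if random.randrange(RARE_ONE_IN_N) == 0 else "white"
    return region, color

print(next_display(0))  # e.g. ('first', 'white')
```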
The receiver 200 has a so-called wayfinding function of presenting the route for a user to take, which works by capturing an image of a mark M4 drawn on the floor at a position where, for example, a plurality of passages cross in a building. The building is, for example, a hotel, and the presented route is for the user who has checked in to get to his/her room.
The mark M4 is illuminated by a lighting apparatus which is the above-described transmitter 100 which transmits a light ID by changing luminance. Accordingly, the receiver 200 obtains a captured display image Ppre and a decode target image Pdec by capturing an image of the mark M4. The receiver 200 obtains a light ID by decoding the decode target image Pdec, and transmits the light ID and terminal information of the receiver 200 to a server. The receiver 200 obtains, from the server, a plurality of AR images P28 and recognition information associated with the light ID and terminal information. Note that the light ID and the terminal information are stored in the server, in association with the AR images P28 and the recognition information when the user has checked in.
The receiver 200 recognizes, based on recognition information, a plurality of target regions from a region m4 in which the mark M4 is shown and a periphery of the region m4 in the captured display image Ppre. Then, as illustrated in
Specifically, recognition information indicates the route showing that the user is to turn right at the position of the mark M4. The receiver 200 determines a path on the captured display image Ppre, based on such recognition information, and recognizes a plurality of target regions arranged along the path. This path extends from the lower portion of the display 201 to the region m4, and turns right at the region m4. The receiver 200 disposes the AR images P28 at the plurality of recognized target regions as if an animal walked along the path.
Here, the receiver 200 may use the earth's magnetic field detected by a 9-axis sensor included in the receiver 200, when the path on the captured display image Ppre is to be determined. In this case, recognition information indicates the direction to which the user is to proceed from the position of the mark M4, based on the direction of the earth's magnetic field. For example, recognition information indicates west as a direction in which the user is to proceed at the position of the mark M4. Based on such recognition information, the receiver 200 determines a path that extends from the lower portion of the display 201 to the region m4 and extends to the west at the region m4, in the captured display image Ppre. Then, the receiver 200 recognizes a plurality of target regions arranged along the path. Note that the receiver 200 determines the lower side of the display 201 by the 9-axis sensor detecting the gravitational acceleration.
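One way to realize this geomagnetic variant is sketched below: the recognition information supplies a target bearing (for example, west = 270 degrees), the 9-axis sensor supplies the device heading, and their difference decides which way the on-screen path turns at the mark. The function name and the 30-degree tolerance are assumptions.

```python
def turn_direction(target_bearing_deg, device_heading_deg, tolerance_deg=30.0):
    """Relative bearing from the device's heading to the direction the
    user must proceed; decides how the path drawn on the captured
    display image turns at the mark."""
    rel = (target_bearing_deg - device_heading_deg) % 360.0
    if rel <= tolerance_deg or rel >= 360.0 - tolerance_deg:
        return "straight"
    return "right" if rel < 180.0 else "left"

# Facing north (0 deg) while told to proceed west (270 deg): turn left.
print(turn_direction(270.0, 0.0))
```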
Accordingly, the receiver 200 presents the user's route, and thus the user can readily arrive at the destination by proceeding along the route. Furthermore, the route is displayed as an AR image on the captured display image Ppre, and thus the route can be clearly presented to the user.
Note that the lighting apparatus which is the transmitter 100 illuminates the mark M4 with short pulses of light, thus appropriately transmitting a light ID while keeping the brightness from becoming too high. Although the receiver 200 captures an image of the mark M4 in this example, the receiver 200 may instead capture an image of the lighting apparatus, using a camera disposed on the display 201 side (a so-called front camera). The receiver 200 may also capture images of both the mark M4 and the lighting apparatus.
The receiver 200 decodes a decode target image Pdec using a line scanning time. The line scanning time is the time from when exposure of one exposure line included in the image sensor starts until when exposure of the next exposure line starts. If the line scanning time is known, the receiver 200 decodes the decode target image Pdec using the known line scanning time. However, if the line scanning time is not known, the receiver 200 calculates the line scanning time from the decode target image Pdec.
For example, the receiver 200 detects a line having the narrowest width as illustrated in
Once the receiver 200 finds the line having the narrowest width, the receiver 200 determines the number of exposure lines corresponding to the line having the narrowest width, or in other words, the pixel count. If a carrier frequency at which the transmitter 100 changes luminance in order to transmit a light ID is 9.6 kHz, the shortest time when luminance of the transmitter 100 is high or low is 104 μs. Accordingly, the receiver 200 calculates a line scanning time by dividing 104 μs by the pixel count for the determined narrowest width.
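In code, this narrowest-line method reduces to a single division, since one period of the 9.6 kHz carrier is approximately 104 μs. A minimal sketch:

```python
def line_scan_time_us(carrier_hz, narrowest_width_px):
    """Line scanning time from the narrowest bright or dark line: the
    shortest luminance pulse lasts one carrier period (about 104 us at
    9.6 kHz) and spans `narrowest_width_px` exposure lines."""
    shortest_pulse_us = 1e6 / carrier_hz
    return shortest_pulse_us / narrowest_width_px

print(line_scan_time_us(9600, 10))  # about 10.4 us per exposure line
```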
The receiver 200 may Fourier-transform the bright line pattern of the decode target image Pdec, and calculate the line scanning time, based on a spatial frequency obtained by the Fourier transform.
For example, as illustrated in
In order to select a maximum likelihood candidate, the receiver 200 calculates an acceptable range of the line scanning time, based on the imaging frame rate and the number of exposure lines included in the image sensor. Specifically, the receiver 200 calculates the largest value of the line scanning time as 1×10^6 [μs]/{(frame rate)×(the number of exposure lines)}. Then, the receiver 200 determines the range from the largest value × constant K (K < 1) to the largest value to be the acceptable range of the line scanning time. The constant K is, for example, 0.9 or 0.8.
From among the plurality of line scanning time candidates, the receiver 200 selects a candidate within the acceptable range as a maximum likelihood candidate, namely, a line scanning time.
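A minimal sketch of this candidate selection follows; K = 0.9 follows the example in the text, while the candidate values and sensor parameters are arbitrary examples.

```python
def acceptable_range_us(frame_rate, num_exposure_lines, k=0.9):
    """Upper bound: one frame period divided across all exposure lines;
    the acceptable range runs from K times that bound up to the bound."""
    largest = 1e6 / (frame_rate * num_exposure_lines)
    return k * largest, largest

def pick_line_scan_time(candidates_us, frame_rate, num_exposure_lines):
    lo, hi = acceptable_range_us(frame_rate, num_exposure_lines)
    for c in candidates_us:
        if lo <= c <= hi:
            return c  # maximum likelihood candidate
    return None

# Bound is about 30.9 us at 30 fps with 1080 exposure lines.
print(pick_line_scan_time([5.2, 10.1, 30.0], 30.0, 1080))  # -> 30.0
```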
Note that the receiver 200 may evaluate the reliability of the calculated line scanning time, based on whether the line scanning time calculated in the example shown in
The receiver 200 may obtain a line scanning time by attempting to decode a decode target image Pdec. Specifically, the receiver 200 first starts image capturing (step S441). Next, the receiver 200 determines whether the line scanning time is known (step S442). For example, the receiver 200 may notify the server of the type and the model of the receiver 200 and inquire about the line scanning time for that type and model, thus determining whether the line scanning time is known. Here, if the receiver 200 determines that the line scanning time is known (Yes in step S442), the receiver 200 sets the reference number of acquisition times for a light ID to n (n is an integer of 2 or more, and is, for example, 4) (step S443). Next, the receiver 200 obtains a light ID by decoding the decode target image Pdec using the known line scanning time (step S444). At this time, the receiver 200 obtains a plurality of light IDs, by decoding each of a plurality of decode target images Pdec sequentially obtained through the image capturing started in step S441. Here, the receiver 200 determines whether the same light ID has been obtained for the reference number of acquisition times (namely, n times) (step S445). If the receiver 200 determines that the same light ID has been obtained n times (Yes in step S445), the receiver 200 trusts the light ID, and starts processing (for example, superimposing an AR image) using the light ID (step S446). On the other hand, if the receiver 200 determines that the same light ID has not been obtained n times (No in step S445), the receiver 200 does not trust the light ID, and terminates the processing.
In step S442, if the receiver 200 determines that the line scanning time is not known (No in step S442), the receiver 200 sets the reference number of acquisition times for a light ID to n+k (k is an integer of 1 or more) (step S447). Specifically, if the line scanning time is not known, the receiver 200 sets a larger reference number of acquisition times than when the line scanning time is known. Next, the receiver 200 determines a temporary line scanning time (step S448). Then, the receiver 200 obtains a light ID by decoding the decode target image Pdec using the determined temporary line scanning time (step S449). At this time, similarly to the above, the receiver 200 obtains a plurality of light IDs, by decoding each of a plurality of decode target images Pdec sequentially obtained through the image capturing started in step S441. Here, the receiver 200 determines whether the same light ID has been obtained for the reference number of acquisition times (that is, (n+k) times) (step S450).
If the receiver 200 determines that the same light ID has been obtained (n+k) times (Yes in step S450), the receiver 200 determines that the determined temporary line scanning time is the right line scanning time. Then, the receiver 200 notifies the server of the type and the model of the receiver 200, and of the line scanning time (step S451). Accordingly, the server stores, for each receiver, the type and the model of the receiver and the line scanning time suitable for the receiver, in association with each other. Thus, once another receiver of the same type and model starts image capturing, that receiver can determine its line scanning time by making an inquiry to the server. Specifically, that receiver can determine that the line scanning time is known in the determination of step S442.
Then, the receiver 200 trusts the light ID obtained (n+k) times, and starts processing (for example, superimposing an AR image) using the light ID (step S446).
In step S450, if the receiver 200 determines that the same light ID has not been obtained (n+k) times (No in step S450), the receiver 200 further determines whether a terminating condition has been satisfied (step S452). The terminating condition is, for example, that a predetermined time has elapsed since image capturing started, or that a light ID has been obtained more than the maximum number of acquisition times. If the receiver 200 determines that such a terminating condition has been satisfied (Yes in step S452), the receiver 200 terminates the processing. On the other hand, if the receiver 200 determines that such a terminating condition has not been satisfied (No in step S452), the receiver 200 changes the temporary line scanning time (step S453). Then, the receiver 200 repeatedly executes the processing from step S449, using the changed temporary line scanning time.
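The flow of steps S442 to S453 can be condensed into the following sketch, where `decode_next(t)` is a placeholder for decoding the next decode target image Pdec with line scanning time t, and the candidate times are arbitrary examples.

```python
def acquire_light_id(decode_next, known_time=None, n=4, k=2,
                     candidate_times_us=(8.0, 10.4, 16.7)):
    """Steps S442-S453: with a known line scanning time, trust a light ID
    obtained n times in a row; with an unknown one, try temporary line
    scanning times and demand n+k repeats before trusting the result."""
    def same_id_repeated(t, needed):
        ids = [decode_next(t) for _ in range(needed)]
        return ids[0] if ids[0] is not None and len(set(ids)) == 1 else None

    if known_time is not None:                     # Yes in S442
        return same_id_repeated(known_time, n)     # S443-S446
    for t in candidate_times_us:                   # S448 and S453
        light_id = same_id_repeated(t, n + k)      # S449-S450
        if light_id is not None:
            # S451 would report the type, model, and t to the server here.
            return light_id
    return None                                    # terminating condition met
```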
Accordingly, the receiver 200 can obtain the line scanning time even if the line scanning time is not known, as in the examples shown in
The receiver 200 captures an image of the transmitter 100 configured as a TV. The transmitter 100 periodically transmits a light ID and a time code by changing luminance while displaying, for example, a TV program. The time code may be information indicating, each time it is transmitted, the time at which it is transmitted, and may be the time packet shown in
The receiver 200 periodically obtains a captured display image Ppre and a decode target image Pdec by image capturing described above. The receiver 200 obtains a light ID and a time code as described above, by decoding a decode target image Pdec while displaying, on the display 201, the captured display image Ppre periodically obtained. Next, the receiver 200 transmits the light ID to the server 300. Upon reception of the light ID, the server 300 transmits sound data, AR start time information, an AR image P29, and recognition information associated with the light ID to the receiver 200.
On obtaining the sound data, the receiver 200 plays the sound data in synchronization with the video of the TV program shown by the transmitter 100. Specifically, the sound data includes pieces of sound unit data each including a time code. The receiver 200 starts playback of the pieces of sound unit data from the piece of sound unit data which includes a time code indicating the same time as the time code obtained from the transmitter 100 together with the light ID. Accordingly, the playback of the sound data is synchronized with the video of the TV program. Note that such synchronization of sound with video may be achieved by the same method as or a similar method to the audio synchronous reproduction shown in
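As a minimal sketch of this synchronization, playback can simply seek to the sound unit whose time code equals the code received with the light ID; the data layout below is an assumption.

```python
def playback_start_index(sound_units, received_time_code):
    """Each piece of sound unit data carries a time code; playback starts
    from the unit whose code equals the one received with the light ID,
    so the audio lines up with the video shown by the transmitter."""
    for i, unit in enumerate(sound_units):
        if unit["time_code"] == received_time_code:
            return i
    return 0  # fallback: no matching unit, play from the beginning

units = [{"time_code": t, "pcm": b""} for t in (0, 1, 2, 3)]
print(playback_start_index(units, 2))  # -> 2
```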
On obtaining the AR image P29 and the recognition information, the receiver 200 recognizes, from the captured display images Ppre, a region according to the recognition information as a target region, and superimposes the AR image P29 on the target region. For example, the AR image P29 shows cracks in the display 201 of the receiver 200, and the target region is a region of the captured display image Ppre, which lies across the image of the transmitter 100.
Here, the receiver 200 displays the captured display image Ppre on which the AR image P29 as mentioned above is superimposed, at the timing according to the AR start time information. The AR start time information indicates the time at which the AR image P29 is to be displayed. Specifically, the receiver 200 displays the captured display image Ppre on which the above AR image P29 is superimposed at the timing when a time code indicating the same time as the AR start time information is received, among the time codes periodically transmitted from the transmitter 100. For example, the time indicated by the AR start time information is when the TV program comes to a scene in which a witch girl uses ice magic. At this time, the receiver 200 may output the sound of the cracks in the AR image P29 being generated, through the speaker of the receiver 200, by playing the sound data.
Accordingly, the user can view the scene of the TV program, as if the user were actually in the scene.
Furthermore, at the time indicated by the AR start time information, the receiver 200 may vibrate a vibrator included in the receiver 200, cause the light source to emit light like a flash, make the display 201 bright momentarily, or cause the display 201 to blink. Furthermore, the AR image P29 may include not only an image showing cracks, but also a state in which dew condensation on the display 201 has frozen.
The receiver 200 captures an image of the transmitter 100 configured as, for example, a toy cane. The transmitter 100 includes a light source, and transmits a light ID by the light source changing luminance.
The receiver 200 periodically obtains a captured display image Ppre and a decode target image Pdec by the image capturing described above. The receiver 200 obtains a light ID as described above, by decoding a decode target image Pdec while displaying the captured display image Ppre obtained periodically on the display 201. Next, the receiver 200 transmits the light ID to the server 300. Upon reception of the light ID, the server 300 transmits an AR image P30 and recognition information which are associated with the light ID to the receiver 200.
Here, recognition information further includes gesture information indicating a gesture (namely, movement) of a person holding the transmitter 100. The gesture information indicates a gesture of the person moving the transmitter 100 from the right to the left, for example. The receiver 200 compares a gesture of the person holding the transmitter 100 shown in the captured display image Ppre with a gesture indicated by the gesture information. If the gestures match, the receiver 200 superimposes AR images P30 each having a star shape on the captured display image Ppre such that, for example, many of the AR images P30 are arranged along the trajectory of the transmitter 100 moved according to the gesture.
The receiver 200 captures an image of the transmitter 100 configured as, for example, a toy cane, similarly to the above description.
The receiver 200 periodically obtains a captured display image Ppre and a decode target image Pdec by the image capturing. The receiver 200 obtains a light ID as described above, by decoding a decode target image Pdec while displaying the captured display image Ppre obtained periodically on the display 201. Next, the receiver 200 transmits the light ID to the server 300. Upon reception of the light ID, the server 300 transmits an AR image P31 and recognition information which are associated with the light ID to the receiver 200.
Here, the recognition information includes gesture information indicating a gesture of a person holding the transmitter 100, as with the above description. The gesture information indicates a gesture of a person moving the transmitter 100 from the right to the left, for example. The receiver 200 compares a gesture of the person holding the transmitter 100 shown in the captured display image Ppre with a gesture indicated by the gesture information. If the gestures match, the receiver 200 superimposes, on a target region of the captured display image Ppre in which the person holding the transmitter 100 is shown, the AR image P31 showing a dress costume, for example.
Accordingly, with the display method according to the variation, gesture information associated with a light ID is obtained from the server. Next, it is determined whether a movement of a subject shown by captured display images periodically obtained matches a movement indicated by gesture information obtained from the server. Then, when it is determined that the movements match, a captured display image Ppre on which an AR image is superimposed is displayed.
Accordingly, an AR image can be displayed according to, for example, the movement of a subject such as a person. Specifically, an AR image can be displayed at an appropriate timing.
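A gesture match of this kind can be approximated by comparing the transmitter's tracked horizontal trajectory with the direction named in the gesture information, as in the sketch below; the travel threshold and gesture labels are assumptions.

```python
def matches_gesture(x_positions, gesture, min_travel_px=100.0):
    """True if the transmitter's tracked x-coordinates move far enough in
    the direction indicated by the gesture information."""
    if len(x_positions) < 2:
        return False
    travel = x_positions[-1] - x_positions[0]
    if gesture == "right_to_left":
        return travel <= -min_travel_px
    if gesture == "left_to_right":
        return travel >= min_travel_px
    return False

print(matches_gesture([620, 480, 300, 150], "right_to_left"))  # True
```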
For example, as illustrated in (a) of
For example, the user changes the orientation of the receiver 200 from the lateral orientation to the longitudinal orientation, as illustrated in (b) of
Accordingly, a light ID may not be appropriately obtained depending on the orientation of the receiver 200, and thus when causing the receiver 200 to obtain a light ID, the orientation of the receiver 200 during image capture may be changed as appropriate. While the orientation is being changed, the receiver 200 can appropriately obtain a light ID at a timing when the receiver 200 is in an orientation in which it readily obtains a light ID.
For example, the transmitter 100 is configured as digital signage of a coffee shop, displays an image showing an advertisement of the coffee shop during an image display period, and transmits a light ID by changing luminance during a light ID transmission period. Specifically, the transmitter 100 alternately and repeatedly executes display of the image during the image display period and transmission of the light ID during the light ID transmission period.
The receiver 200 periodically obtains a captured display image Ppre and a decode target image Pdec by capturing an image of the transmitter 100. At this time, a decode target image Pdec which includes a bright line pattern region may not be obtained due to synchronization of a repeating cycle of the image display period and the light ID transmission period of the transmitter 100 and a repeating cycle of obtaining a captured display image Ppre and a decode target image Pdec by the receiver 200. Furthermore, a decode target image Pdec which includes a bright line pattern region may not be obtained depending on the orientation of the receiver 200.
For example, the receiver 200 captures an image of the transmitter 100 in the orientation as illustrated in (a) of
Here, if a timing at which the receiver 200 obtains the captured display image Ppre is in the image display period of the transmitter 100, the receiver 200 appropriately obtains the captured display image Ppre in which the transmitter 100 is shown.
Even if the timing at which the receiver 200 obtains the decode target image Pdec overlaps both the image display period and the light ID transmission period of the transmitter 100, the receiver 200 can obtain the decode target image Pdec which includes a bright line pattern region Z1.
Specifically, exposure of the exposure lines included in the image sensor starts sequentially from the top exposure line to the bottom exposure line. Accordingly, even if the receiver 200 starts exposing the image sensor during the image display period in order to obtain a decode target image Pdec, the receiver 200 cannot obtain a bright line pattern region from the lines exposed during that period. However, when the image display period switches to the light ID transmission period, the receiver 200 can obtain a bright line pattern region corresponding to the exposure lines exposed during the light ID transmission period.
Here, the receiver 200 captures an image of the transmitter 100 in the orientation as illustrated in (b) of
On the other hand, the receiver 200 captures an image of the transmitter 100 while being away from the transmitter 100, such that the image of the transmitter 100 is projected only on a lower region of the image sensor of the receiver 200, as illustrated in (c) of
As described above, a light ID may not be appropriately obtained depending on the orientation of the receiver 200, and thus when the receiver 200 obtains a light ID, the receiver 200 may prompt the user to change the orientation of the receiver 200. Specifically, when the receiver 200 starts image capturing, the receiver 200 displays or audibly outputs a message such as, for example, “Please move” or “Please shake” so that the orientation of the receiver 200 will be changed. In this manner, the receiver 200 captures images while its orientation changes, and thus can obtain a light ID appropriately.
For example, the receiver 200 determines whether the receiver 200 is being shaken while capturing an image (step S461). Specifically, the receiver 200 determines whether it is being shaken based on the output of the 9-axis sensor included in the receiver 200. Here, if the receiver 200 determines that it is being shaken while capturing an image (Yes in step S461), the receiver 200 increases the rate at which a light ID is obtained (step S462). Specifically, the receiver 200 obtains, as decode target images (that is, bright line images) Pdec, all the captured images obtained per unit time during image capturing, and decodes all the obtained decode target images. Furthermore, if all the captured images are currently being obtained as captured display images Ppre, that is, if obtaining and decoding of decode target images Pdec has been stopped, the receiver 200 starts obtaining and decoding decode target images Pdec.
On the other hand, if the receiver 200 determines that it is not being shaken while capturing an image (No in step S461), the receiver 200 obtains decode target images Pdec at a low light ID acquisition rate (step S463). Specifically, if the rate at which a light ID is obtained was increased in step S462 and is still high, the receiver 200 decreases the rate. This lowers the frequency at which the receiver 200 performs decoding processing on a decode target image Pdec, and thus power consumption can be kept low.
Then, the receiver 200 determines whether a terminating condition for terminating processing for adjusting a rate at which a light ID is obtained is satisfied (step S464), and if the receiver 200 determines that the terminating condition is not satisfied (No in step S464), the receiver 200 repeatedly executes processing from step S461. On the other hand, if the receiver 200 determines that the terminating condition is satisfied (Yes in step S464), the receiver 200 terminates the processing of adjusting the rate at which a light ID is obtained.
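A condensed sketch of this rate adjustment follows; judging shaking from the spread of recent accelerometer magnitudes, and the threshold and rates used, are assumptions rather than the disclosed criteria.

```python
import math

SHAKE_THRESHOLD = 1.5  # assumed spread of acceleration magnitudes (m/s^2)

def decode_rate_fps(accel_samples, high_rate=30.0, low_rate=5.0):
    """Steps S461-S463: decode every frame while the terminal is being
    shaken (high rate); otherwise decode only occasionally to keep power
    consumption low (low rate)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_samples]
    shaking = (max(mags) - min(mags)) > SHAKE_THRESHOLD
    return high_rate if shaking else low_rate

print(decode_rate_fps([(0, 0, 9.8), (4, 0, 9.8), (0, 0, 8.0)]))  # -> 30.0
```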
The receiver 200 may include a wide-angle lens 211 and a telephoto lens 212 as camera lenses. A captured image obtained by the image capturing using the wide-angle lens 211 is an image corresponding to a wide angle of view, and shows a small subject in the image. On the other hand, a captured image obtained by the image capturing using the telephoto lens 212 is an image corresponding to a narrow angle of view, and shows a large subject in the image.
The receiver 200 as described above may switch between camera lenses used for image capturing, according to one of the uses A to E illustrated in
According to the use A, when the receiver 200 is to capture an image, the receiver 200 uses the telephoto lens 212 at all times, for both normal imaging and receiving a light ID. Here, normal imaging is the case where all captured images are obtained as captured display images Ppre by image capturing. Also, receiving a light ID is the case where a captured display image Ppre and a decode target image Pdec are periodically obtained by image capturing.
According to the use B, the receiver 200 uses the wide-angle lens 211 for normal imaging. On the other hand, when the receiver 200 is to receive a light ID, the receiver 200 first uses the wide-angle lens 211. The receiver 200 switches the camera lens from the wide-angle lens 211 to the telephoto lens 212, if a bright line pattern region is included in a decode target image Pdec obtained when the wide-angle lens 211 is used. After such switching, the receiver 200 can obtain a decode target image Pdec corresponding to a narrow angle of view and thus showing a large bright line pattern.
According to the use C, the receiver 200 uses the wide-angle lens 211 for normal imaging. On the other hand, when the receiver 200 is to receive a light ID, the receiver 200 switches the camera lens between the wide-angle lens 211 and the telephoto lens 212. Specifically, the receiver 200 obtains a captured display image Ppre using the wide-angle lens 211, and obtains a decode target image Pdec using the telephoto lens 212.
According to the use D, the receiver 200 switches the camera lens between the wide-angle lens 211 and the telephoto lens 212 for both normal imaging and receiving a light ID, according to user operation.
According to the use E, when the receiver 200 is to receive a light ID, the receiver 200 decodes a decode target image Pdec obtained using the wide-angle lens 211. If the receiver 200 cannot appropriately decode the decode target image Pdec, the receiver 200 switches the camera lens from the wide-angle lens 211 to the telephoto lens 212. The receiver 200 then decodes a decode target image Pdec obtained using the telephoto lens 212, and if the receiver 200 cannot appropriately decode that decode target image Pdec either, the receiver 200 switches the camera lens from the telephoto lens 212 back to the wide-angle lens 211. Note that to determine whether a decode target image Pdec has been appropriately decoded, the receiver 200 first transmits, to a server, the light ID obtained by decoding the decode target image Pdec. If the light ID matches a light ID registered in the server, the server notifies the receiver 200 of matching information indicating that the light ID matches a registered light ID, and if the light ID does not match any registered light ID, the server notifies the receiver 200 of non-matching information indicating that the light ID does not match any registered light ID. The receiver 200 determines that the decode target image Pdec has been appropriately decoded if the information notified from the server is the matching information, whereas if the information notified from the server is the non-matching information, the receiver 200 determines that the decode target image Pdec has not been appropriately decoded. Alternatively, the receiver 200 may determine that the decode target image Pdec has been appropriately decoded if the light ID obtained by decoding the decode target image Pdec satisfies a predetermined condition, and determine that it has failed to appropriately decode the decode target image Pdec if the light ID does not satisfy the predetermined condition.
Such switching between the camera lenses allows an appropriate decode target image Pdec to be obtained.
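Use E may be sketched as the following retry loop, where `capture_decode(lens)` and `server_validates(light_id)` are placeholders for the capture/decode step and the server-side matching check described above.

```python
def receive_with_lens_switching(capture_decode, server_validates, max_switches=4):
    """Use E: try the wide-angle lens first; if its decode target image
    cannot be decoded to a light ID the server recognizes, switch to the
    telephoto lens, and switch back again on further failure."""
    lens = "wide"
    for _ in range(max_switches):
        light_id = capture_decode(lens)
        if light_id is not None and server_validates(light_id):
            return lens, light_id
        lens = "tele" if lens == "wide" else "wide"
    return None, None
```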
For example, the receiver 200 includes an in-camera 213 and an out-camera (not illustrated in
Such a receiver 200 captures an image of the transmitter 100 configured as a lighting apparatus by the in-camera 213 while the in-camera 213 is facing up. The receiver 200 obtains a decode target image Pdec by the image capturing, and obtains a light ID transmitted from the transmitter 100 by decoding the decode target image Pdec.
Next, the receiver 200 obtains, from a server, an AR image and recognition information associated with the light ID, by transmitting the obtained light ID to the server. The receiver 200 starts processing of recognizing a target region according to the recognition information, from the captured display images Ppre obtained by the out-camera and the in-camera 213. Here, if the receiver 200 does not recognize a target region from any of the captured display images Ppre obtained by the out-camera and the in-camera 213, the receiver 200 prompts a user to move the receiver 200. The user prompted by the receiver 200 moves the receiver 200. Specifically, the user moves the receiver 200 so that the in-camera 213 faces backward of the user and the out-camera faces forward of the user. As a result, the receiver 200 recognizes a target region from a captured display image Ppre obtained by the out-camera. Specifically, the receiver 200 recognizes, as a target region, a region in which a person is shown, superimposes an AR image on the target region of the captured display image Ppre, and displays the captured display image Ppre on which the AR image is superimposed.
The receiver 200 obtains a light ID transmitted from the transmitter 100 by the in-camera 213 capturing an image of the transmitter 100 which is a lighting apparatus, and transmits the light ID to the server (step S471). The server receives the light ID from the receiver 200 (step S472), and estimates the position of the receiver 200, based on the light ID (step S473). For example, the server has stored a table indicating, for each light ID, a room, a building, or a space in which the transmitter 100 which transmits the light ID is disposed. The server estimates, as the position of the receiver 200, a room or the like associated with the light ID transmitted from the receiver 200, from the table. Furthermore, the server transmits an AR image and recognition information associated with the estimated position to the receiver 200 (step S474).
The receiver 200 obtains the AR image and the recognition information transmitted from the server (step S475). Here, the receiver 200 starts processing of recognizing a target region according to the recognition information, from captured display images Ppre obtained by the out-camera and the in-camera 213. The receiver 200 recognizes a target region from, for example, a captured display image Ppre obtained by the out-camera (step S476). The receiver 200 superimposes an AR image on a target region of the captured display image Ppre, and displays the captured display image Ppre on which the AR image is superimposed (step S477).
Note that in the above example, if the receiver 200 obtains an AR image and recognition information transmitted from the server, the receiver 200 starts processing of recognizing a target region from captured display images Ppre obtained by the out-camera and the in-camera 213 in step S476. However, the receiver 200 may start processing of recognizing a target region from a captured display image Ppre obtained by the out-camera only, in step S476. Specifically, a camera for obtaining a light ID (the in-camera 213 in the above example) and a camera for obtaining a captured display image Ppre on which an AR image is to be superimposed (the out-camera in the above example) may play different roles at all times.
In the above example, the receiver 200 captures an image of the transmitter 100 which is a lighting apparatus using the in-camera 213, yet the receiver 200 may instead capture an image of the floor illuminated by the transmitter 100 using the out-camera. The receiver 200 can obtain the light ID transmitted from the transmitter 100 even by such image capturing using the out-camera.
The receiver 200 captures an image of the transmitter 100 configured as a microwave provided in, for example, a store such as a convenience store. The transmitter 100 includes a camera for capturing an image of the inside of the microwave and a lighting apparatus which illuminates the inside of the microwave. The transmitter 100 recognizes food/drink (namely, an object to be heated) in the microwave by image capturing using the camera. When heating the food/drink, the transmitter 100 causes the above lighting apparatus to emit light while changing luminance, whereby the transmitter 100 transmits a light ID indicating the recognized food/drink. Note that although the lighting apparatus illuminates the inside of the microwave, light from the lighting apparatus exits the microwave through a light-transmissive window portion of the microwave. Accordingly, the light ID is transmitted from the lighting apparatus to the outside of the microwave through the window portion.
Here, a user purchases food/drink at a convenience store, and puts the food/drink in the transmitter 100, which is a microwave, to heat it. At this time, the transmitter 100 recognizes the food/drink using the camera, and starts heating the food/drink while transmitting a light ID indicating the recognized food/drink.
The receiver 200 obtains a light ID transmitted from the transmitter 100, by capturing an image of the transmitter 100 which has started heating, and transmits the light ID to a server. Next, the receiver 200 obtains, from the server, AR images, sound data, and recognition information associated with the light ID.
The AR images include an AR image P32a, which is a video showing a virtual state of the inside of the transmitter 100; an AR image P32b, which shows details of the food/drink in the microwave; an AR image P32c, which is a video showing steam rising from the transmitter 100; and an AR image P32d, which is a video showing the remaining time until heating of the food/drink is completed.
For example, if the food in the microwave is a pizza, the AR image P32a is a video showing a rotating turntable on which the pizza is placed and a plurality of dwarves dancing around the pizza. Likewise, if the food in the microwave is a pizza, the AR image P32b is an image showing the item name "pizza" and the ingredients of the pizza.
The receiver 200 recognizes, as the target region of the AR image P32a, the region of the captured display image Ppre showing the window portion of the transmitter 100, based on the recognition information, and superimposes the AR image P32a on that target region. Furthermore, the receiver 200 recognizes, as the target region of the AR image P32b, a region above the region in which the transmitter 100 is shown in the captured display image Ppre, and superimposes the AR image P32b on that target region. Furthermore, the receiver 200 recognizes, as the target region of the AR image P32c, the region between the target region of the AR image P32a and the target region of the AR image P32b in the captured display image Ppre, and superimposes the AR image P32c on that target region. Furthermore, the receiver 200 recognizes, as the target region of the AR image P32d, a region below the region in which the transmitter 100 is shown in the captured display image Ppre, and superimposes the AR image P32d on that target region. The sketch below illustrates one possible layout of these four regions.
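Assuming the recognition step yields axis-aligned bounding boxes (x, y, width, height) in image coordinates with y growing downward, a minimal layout computation might look as follows; the band height and all coordinates are illustrative, not values from the patent.

```python
# A minimal sketch of the region layout for AR images P32a-P32d, given the
# bounding boxes of the microwave and of its window portion in Ppre.

def layout_regions(microwave, window):
    mx, my, mw, mh = microwave
    wx, wy, ww, wh = window
    band = 40  # illustrative height of the bands above/below the microwave
    return {
        "P32a": window,                     # video on the window portion
        "P32b": (mx, my - band, mw, band),  # item details above the microwave
        "P32c": (wx, my, ww, wy - my),      # steam between P32b and the window
        "P32d": (mx, my + mh, mw, band),    # remaining time below the microwave
    }

regions = layout_regions(microwave=(100, 200, 300, 220),
                         window=(140, 240, 220, 140))
for name, rect in regions.items():
    print(name, rect)
```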
Furthermore, the receiver 200 outputs the sound generated when the food is heated, by playing the sound data.
Since the receiver 200 displays the AR images P32a to P32d and also outputs sound as described above, the user's attention can be held by the receiver 200 until heating of the food is completed. As a result, the burden on the user waiting for heating to complete can be reduced. Furthermore, the AR image P32c showing steam and the like is displayed, and the sound generated when food/drink is heated is output, thus stimulating the user's appetite. The display of the AR image P32d readily informs the user of the remaining time until heating of the food/drink is completed. Accordingly, the user can, for instance, browse a book elsewhere in the store, away from the transmitter 100 which is a microwave. Furthermore, the receiver 200 can inform the user of the completion of heating when the remaining time reaches 0.
Note that although in the above example the AR image P32a is a video showing a rotating turntable on which a pizza is placed and a plurality of dwarves dancing around the pizza, the AR image P32a may instead be an image, for example, virtually showing the temperature distribution inside the microwave. Furthermore, although the AR image P32b shows the name and ingredients of the food/drink in the microwave, it may show nutritional information or calories. Alternatively, the AR image P32b may show a discount coupon.
As described above, with the display method according to this variation, the subject is a microwave that includes the lighting apparatus, and the lighting apparatus illuminates the inside of the microwave and transmits a light ID to the outside of the microwave by changing luminance. In the obtaining of a captured display image Ppre and a decode target image Pdec, the captured display image Ppre and the decode target image Pdec are obtained by capturing an image of the microwave transmitting the light ID. In the recognizing of a target region, the window portion of the microwave shown in the captured display image Ppre is recognized as the target region. In the displaying of the captured display image Ppre, the captured display image Ppre on which an AR image showing a change in the state of the inside of the microwave is superimposed is displayed.
In this manner, the change in the state of the inside of the microwave is displayed as an AR image, and thus the user of the microwave can be readily informed of the state of the inside of the microwave.
First, the microwave recognizes the food/drink inside the microwave, using the camera (step S481). Next, the microwave transmits a light ID indicating the recognized food/drink to the receiver 200 by changing the luminance of the lighting apparatus (step S482).
The receiver 200 receives the light ID transmitted from the microwave by capturing an image of the microwave (step S483), and transmits the light ID and card information to the relay server (step S484). The card information is, for instance, credit card information necessary for electronic payment, and is stored in advance in the receiver 200.
The relay server stores a table indicating, for each light ID, the AR image, recognition information, and item information associated with the light ID. The item information indicates, for instance, the price of the food/drink indicated by the light ID. Upon receipt of the light ID and the card information transmitted from the receiver 200 (step S485), the relay server finds the item information associated with the light ID in the above table. The relay server then transmits the item information and the card information to the electronic payment server (step S486). Upon receipt of the item information and the card information transmitted from the relay server (step S487), the electronic payment server processes an electronic payment based on the item information and the card information (step S488). Upon completion of the processing of the electronic payment, the electronic payment server notifies the relay server of the completion (step S489).
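The relay server's side of this exchange can be pictured with the minimal sketch below; the table contents, the stubbed payment call, and all names are hypothetical, and error handling is reduced to a single failure path.

```python
# A minimal sketch of steps S485-S493 on the relay server, with the
# electronic payment server stubbed out.

ITEM_TABLE = {  # per light ID: AR images, recognition info, and item info
    "0x0C21": {"ar_images": ["P32a", "P32b", "P32c", "P32d"],
               "recognition_info": {"marker": "microwave-window"},
               "item_info": {"name": "pizza", "price": 480}},
}

def electronic_payment(item_info, card_info):
    return True  # stub for steps S487-S489 on the electronic payment server

def handle_receiver_request(light_id, card_info, start_heating):
    entry = ITEM_TABLE[light_id]                               # step S485
    if not electronic_payment(entry["item_info"], card_info):  # step S486
        raise RuntimeError("payment failed; heating is not started")
    start_heating()                                            # step S491
    return entry["ar_images"], entry["recognition_info"]       # step S493

ar_images, info = handle_receiver_request(
    "0x0C21", {"number": "****"},
    lambda: print("microwave: heating started"))               # step S492
print(ar_images, info)
```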
When the relay server confirms the notification of the completion of the payment from the electronic payment server (step S490), the relay server instructs the microwave to start heating the food/drink (step S491). Furthermore, the relay server transmits, to the receiver 200, the AR image and the recognition information associated, in the above-mentioned table, with the light ID received in step S485 (step S493).
Upon receipt of the instruction to start heating from the relay server, the microwave starts heating the food/drink inside it (step S492). Upon receipt of the AR image and the recognition information transmitted from the relay server, the receiver 200 recognizes a target region according to the recognition information from the captured display images Ppre periodically obtained by the image capturing started in step S483, and superimposes the AR image on the target region (step S494).
Accordingly, by putting food/drink in the microwave and capturing an image of it, the user of the receiver 200 can readily make the payment and start heating the food/drink. If the payment cannot be made, the user can be prohibited from heating the food/drink. Furthermore, when heating is started, the AR image P32a and the other AR images described above can be displayed.
First, the user of the receiver 200 selects, at a store, food/drink as an item, and goes to the spot where the POS terminal is provided to purchase the food/drink. A salesclerk of the store operates the POS terminal and receives payment for the food/drink from the user. Through the salesclerk's operation, the POS terminal obtains operation input data and sales information (step S501). The sales information indicates, for example, the name and price of the item, the number of items sold, and when and where the items were sold. The operation input data indicates, for example, the user's gender and age as input by the salesclerk. The POS terminal transmits the operation input data and the sales information to the server (step S502). The server receives the operation input data and the sales information transmitted from the POS terminal (step S503).
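For concreteness, the records exchanged in step S502 might be structured as below; every field name here is an assumption for illustration, not a format defined by the patent.

```python
# A minimal sketch of the sales information and operation input data that
# the POS terminal sends to the server in step S502.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class SalesInformation:
    item_name: str     # name of the item
    price: int         # price of the item
    quantity: int      # number of items sold
    sold_at: datetime  # when the item was sold
    store: str         # where the item was sold

@dataclass
class OperationInputData:
    gender: str        # user's gender, input by the salesclerk
    age_group: str     # user's age, input by the salesclerk

sale = SalesInformation("pizza", 480, 1, datetime.now(), "store-042")
op_data = OperationInputData(gender="female", age_group="20s")
print(sale, op_data)
```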
Meanwhile, after the user of the receiver 200 pays the salesclerk for the food/drink, the user puts the food/drink in the microwave in order to heat it. The microwave recognizes the food/drink inside the microwave, using the camera (step S504). Next, the microwave transmits a light ID indicating the recognized food/drink to the receiver 200 by changing the luminance of the lighting apparatus (step S505). Then, the microwave starts heating the food/drink (step S507).
The receiver 200 receives the light ID transmitted from the microwave by capturing an image of the microwave (step S508), and transmits the light ID and terminal information to the server (step S509). The terminal information is stored in advance in the receiver 200, and indicates, for example, the language (for example, English or Japanese) to be displayed on the display 201 of the receiver 200.
When the server is accessed by the receiver 200 and receives the light ID and the terminal information transmitted from the receiver 200, the server determines whether the access from the receiver 200 is the initial access (step S510). The initial access is the first access made within a predetermined period after the processing of step S503 is performed. Here, if the server determines that the access from the receiver 200 is the initial access (Yes in step S510), the server stores the operation input data and the terminal information in association with each other (step S511).
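One way to realize this check is sketched below, assuming the server timestamps the POS data from step S503 and treats the first receiver access within a fixed window as the initial access; the window length and all names are illustrative.

```python
# A minimal sketch of steps S510-S511: the first access made within a
# predetermined period after the POS data of step S503 arrives is treated
# as the initial access, and the operation input data is then stored in
# association with the terminal information.

import time

WINDOW_SECONDS = 15 * 60   # illustrative "predetermined period"
pending_pos = []           # [received_at, operation_input_data, consumed]
associations = []          # (operation_input_data, terminal_info), step S511

def on_pos_data(operation_input_data):  # called after step S503
    pending_pos.append([time.time(), operation_input_data, False])

def on_receiver_access(light_id, terminal_info):
    now = time.time()
    for record in pending_pos:
        received_at, op_data, consumed = record
        if not consumed and now - received_at <= WINDOW_SECONDS:
            record[2] = True                               # Yes in step S510
            associations.append((op_data, terminal_info))  # step S511
            return True
    return False                                           # No in step S510

on_pos_data({"gender": "female", "age_group": "20s"})
print(on_receiver_access("0x0C21", {"language": "Japanese"}))
print(associations)
```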
Note that although the server determines whether the access from the receiver 200 is the initial access, the server may instead determine whether the item indicated by the sales information matches the food/drink indicated by the light ID. Furthermore, in step S511, the server may store not only the operation input data and the terminal information in association with each other, but also the sales information in association with them.
(Indoor Utilization)
The receiver 200 receives a light ID transmitted by the transmitter 100 configured as a lighting apparatus, and estimates the current position of the receiver 200. Furthermore, the receiver 200 guides the user by displaying the current position on a map, or displays information on neighboring stores.
By transmitting disaster information and refuge information from the transmitter 100 in an emergency, the user can obtain such information even if a communication line is busy, a communication base station fails, or the receiver is at a spot that a radio wave from the communication base station cannot reach. This is effective when the user fails to catch an emergency broadcast, and is also effective for a hearing-impaired person who cannot hear an emergency broadcast.
The receiver 200 obtains a light ID transmitted from the transmitter 100 by image capturing, and further obtains, from the server, an AR image P33 and recognition information associated with the light ID. The receiver 200 recognizes a target region according to the recognition information from a captured display image Ppre obtained by the above image capturing, and superimposes the arrow-shaped AR image P33 on the target region. Accordingly, the receiver 200 can be used as the way finder described above.
(Display of Augmented Reality Object)
A stage 2718e for augmented reality display is configured as the transmitter 100 described above, and transmits, through a light emission pattern and a position pattern of light emitting units 2718a, 2718b, 2718c, and 2718d, information on an augmented reality object and a reference position at which the augmented reality object is to be displayed.
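As a rough illustration of using the position pattern, the sketch below derives a reference position from the detected pixel coordinates of the four light emitting units; taking their centroid is an assumption for illustration, and a real receiver would also estimate pose from the pattern.

```python
# A minimal sketch: detect the pixel coordinates of light emitting units
# 2718a-2718d in the captured image and derive the reference position at
# which the augmented reality object 2718f is drawn (here, their centroid).

def reference_position(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

detected = [(210, 340), (420, 335), (205, 520), (425, 515)]  # 2718a-2718d
print(reference_position(detected))
```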
Based on the received information, the receiver 200 superimposes an augmented reality object 2718f, which is an AR image, on a captured image, and displays the resulting image.
It should be noted that these general and specific aspects may be implemented using an apparatus, a system, a method, an integrated circuit, a computer program, a computer-readable recording medium such as a CD-ROM, or any combination of apparatuses, systems, methods, integrated circuits, computer programs, or recording media. A computer program for executing the method according to an embodiment may be stored in a recording medium of the server, and the method may be achieved in such a manner that the server delivers the program to a terminal in response to a request from the terminal.
Although exemplary embodiments have been described above, the scope of the claims of the present application is not limited to these embodiments. Without departing from the novel teachings and advantages of the subject matter described in the appended claims, various modifications may be made to the above embodiments, and elements of the above embodiments may be arbitrarily combined to achieve other embodiments, as is readily understood by a person skilled in the art. Therefore, such modifications and other embodiments are also included in the present disclosure.
The display method according to the present disclosure yields the advantageous effect of displaying an image useful to a user, and can be used for, for example, display apparatuses such as smartphones, glasses, and tablet terminals.
The present application is a continuation application of U.S. application Ser. No. 15/381,940, filed Dec. 16, 2016, which is a continuation-in-part of U.S. application Ser. No. 14/973,783 filed on Dec. 18, 2015, and claims the benefit of U.S. Provisional Patent Application No. 62/338,071 filed on May 18, 2016, U.S. Provisional Patent Application No. 62/276,454 filed on Jan. 8, 2016, Japanese Patent Application No. 2016-220024 filed on Nov. 10, 2016, Japanese Patent Application No. 2016-145845 filed on Jul. 25, 2016, Japanese Patent Application No. 2016-123067 filed on Jun. 21, 2016, and Japanese Patent Application No. 2016-100008 filed on May 18, 2016. U.S. application Ser. No. 14/973,783 filed on Dec. 18, 2015 is a continuation-in-part of U.S. application Ser. No. 14/582,751 filed on Dec. 24, 2014, and claims the benefit of U.S. Provisional Patent Application No. 62/251,980 filed on Nov. 6, 2015, Japanese Patent Application No. 2014-258111 filed on Dec. 19, 2014, Japanese Patent Application No. 2015-029096 filed on Feb. 17, 2015, Japanese Patent Application No. 2015-029104 filed on Feb. 17, 2015, Japanese Patent Application No. 2014-232187 filed on Nov. 14, 2014, and Japanese Patent Application No. 2015-245738 filed on Dec. 17, 2015. U.S. application Ser. No. 14/582,751 is a continuation-in-part of U.S. patent application Ser. No. 14/142,413 filed on Dec. 27, 2013, and claims benefit of U.S. Provisional Patent Application No. 62/028,991 filed on Jul. 25, 2014, U.S. Provisional Patent Application No. 62/019,515 filed on Jul. 1, 2014, and Japanese Patent Application No. 2014-192032 filed on Sep. 19, 2014. U.S. application Ser. No. 14/142,413 claims benefit of U.S. Provisional Patent Application No. 61/904,611 filed on Nov. 15, 2013, U.S. Provisional Patent Application No. 61/896,879 filed on Oct. 29, 2013, U.S. Provisional Patent Application No. 61/895,615 filed on Oct. 25, 2013, U.S. Provisional Patent Application No. 61/872,028 filed on Aug. 30, 2013, U.S. Provisional Patent Application No. 61/859,902 filed on Jul. 30, 2013, U.S. Provisional Patent Application No. 61/810,291 filed on Apr. 10, 2013, U.S. Provisional Patent Application No. 61/805,978 filed on Mar. 28, 2013, U.S. Provisional Patent Application No. 61/746,315 filed on Dec. 27, 2012, Japanese Patent Application No. 2013-242407 filed on Nov. 22, 2013, Japanese Patent Application No. 2013-237460 filed on Nov. 15, 2013, Japanese Patent Application No. 2013-224805 filed on Oct. 29, 2013, Japanese Patent Application No. 2013-222827 filed on Oct. 25, 2013, Japanese Patent Application No. 2013-180729 filed on Aug. 30, 2013, Japanese Patent Application No. 2013-158359 filed on Jul. 30, 2013, Japanese Patent Application No. 2013-110445 filed on May 24, 2013, Japanese Patent Application No. 2013-082546 filed on Apr. 10, 2013, Japanese Patent Application No. 2013-070740 filed on Mar. 28, 2013, and Japanese Patent Application No. 2012-286339 filed on Dec. 27, 2012. The entire disclosures of the above-identified applications, including the specifications, drawings and claims are incorporated herein by reference in their entireties.