1. Field of the Invention
The present invention relates to an image display device that can display an image in good timing with a person's viewing intention by detecting the viewing intention with a sensor or the like and finely controlling a display unit.
2. Description of the Related Art
Conventionally, as a device that detects a person with a sensor and controls a display unit, there has been a device that detects the existence of a person with an infrared sensor or the like and controls ON/OFF of the power of the display unit. For example, Reference 1 (Published Japanese patent application H07-44144) discloses a technique of attaching such a sensor in front of a display unit.
According to Reference 1, the sensor detects whether or not a person exists in front of the display unit. When no person exists in front of the display unit, the power of the display unit is set to OFF. Moreover, the sensor detects whether or not the distance between the front of the display unit and the person is shorter than a predetermined distance. When the distance is shorter than the predetermined distance, the power of the display unit is set to ON.
Reference 2 (Published Japanese patent application H11-288259) discloses a technique of detecting a person with a sensor and changing display contents. In other words, when the sensor detects a person, a display device reproduces image data, such as an advertisement. According to the technique, when the sensor detects a person, the display device displays an electronic advertising image that is stored in a storage unit beforehand.
Reference 3 (Published Japanese patent application 2002-123237) discloses a technique of detecting a movement of a person with a shooting device, and displaying an effective image according to the movement of the person. Then, whether or not the person is moving is detected by detecting the position of the person, and a composed image is generated according to a change in the position of the person.
Reference 4 (Published Japanese patent application H11-24603) discloses a technique of detecting, by an eye-direction detecting device, an attention point on a display unit where a person is looking at, and displaying predetermined information (for example, an electronic advertising image etc.) around the attention point.
However, in References 1-3, ON/OFF of the power of the display unit and the display content of the display unit are controlled by the existence/non-existence and motion of a person. Therefore, even when there is no viewing intention (for example, when the person is just passing by), the power of the display unit is switched on and off, or the display content is changed.
Therefore, in an environment where a person frequently moves near a display unit (for example, a living room at home, a store, an exhibition hall, etc.), ON/OFF of the power or ON/OFF of the power-saving mode is frequently repeated, or the display content is frequently changed. Consequently, against the intention of these References, neither the power-saving effect nor the effect of changing the image display in accordance with the movement of the person can be obtained.
In Reference 2, image data stored in the storage unit are reproduced. However, if the sensor detects a person in the middle of reproduction, the image data are not reproduced to the end; instead, reproduction returns to the beginning and the same image data are reproduced again. Since this phenomenon occurs every time the sensor detects a person, it is hard to believe that the intended advertising effect can be obtained by installing such a device as described in Reference 2 in places where many people pass by (the very places where the advertising effect is generally thought to be high).
In an information display device using the attention point as described in Reference 4, information is unilaterally displayed around the attention point; however, the device does not consider an interface with a person. For example, an operation menu is always displayed around the attention point and moves in accordance with the movement of the attention point. This inconveniences the person when the person wants to look at an image other than the operation menu, since the operation menu hides the image. In short, it is hard to say that effective display control is performed.
An object of the present invention is to provide an image display device, which can display an interface timely in accordance with a viewing intention of a person.
A first aspect of the present invention provides an image display device, comprising: an input image control unit operable to control an input image from the exterior; an operation input unit operable to handle an operation input; an element-image generating/selecting unit operable to generate a user interface element image, and further operable to handle a selection input for the generated user interface element image; a composing unit operable to compose the user interface element image generated by the element-image generating/selecting unit and the input image inputted by the input image control unit, thereby generating a composed image; a display unit operable to display the composed image generated by the composing unit; a person state acquiring unit operable to acquire information on a state of a person near the display unit, the person state acquiring unit being related to the display unit; a viewing intention judging unit operable to judge viewing intention of the person near the display unit according to the information acquired by the person state acquiring unit, thereby generating a judgment result; and an operation control unit operable to control the element-image generating/selecting unit and the operation input unit when the judgment result of the viewing intention judging unit indicates that the viewing intention of the person has changed from a state with viewing intention to a state without viewing intention or when the judgment result of the viewing intention judging unit indicates that the viewing intention of the person has changed from the state without viewing intention to the state with viewing intention.
The term “user interface element image” described in the present specification means a graphical image relevant to a user interface displayed through a display unit, such as an operation menu and a personified computer graphics character.
According to this structure of the present invention, when an audience shows a viewing intention, an interface for operation can be displayed in good timing, whereby the image display device enters a state where operation input is possible. Thus, an image display device having an easy-to-use interface can be provided.
A second aspect of the present invention provides the image display device as defined in the first aspect, wherein the operation control unit releases control that has been exercised over the element-image generating/selecting unit and the operation input unit when the operation input unit has not received the operation input for a predetermined time since a time when the judgment result of the viewing intention judging unit indicates that the viewing intention of the person has changed from the state without viewing intention to the state with viewing intention.
According to this structure of the present invention, in a case where a plurality of audiences exist, if one of the audiences stops operating and the others do not perform any operation for a certain time, the interface automatically disappears. Therefore, the interface does not disturb the viewing of the other audiences.
A third aspect of the present invention provides the image display device as defined in the first aspect, wherein the person state acquiring unit acquires information indicating whether a person exists in a predetermined area and whether the person in the predetermined area keeps still, and wherein the viewing intention judging unit judges that the person in the predetermined area is in the state with viewing intention when the information acquired by the person state acquiring unit indicates that the person exists in the predetermined area and further that the person in the predetermined area keeps still.
According to this structure of the present invention, since it can be judged that there is no viewing intention even when a person is merely passing near the display unit, the power-saving effectiveness is improved and the operation screen can be shown effectively.
A fourth aspect of the present invention provides the image display device as defined in the third aspect, wherein the person state acquiring unit further acquires information indicating whether a face of the person in the predetermined area looks toward a predetermined direction, and wherein the viewing intention judging unit judges that the person in the predetermined area is in the state with viewing intention when the information further acquired by the person state acquiring unit indicates that the face of the person in the predetermined area looks toward the predetermined direction.
According to this structure of the present invention, in a case where a plurality of display units exist around an audience, if the audience turns his/her face to the display unit which he/she wants to view, the display unit to which his/her viewing intention is directed is thereby specified. Therefore, the operation menu can be displayed exactly to that audience.
The above, and other objects, features and advantages of the present invention will become apparent from the following description read in conjunction with the accompanying drawings, in which like reference numerals designate the same elements.
FIGS. 3 (a) and (b) illustrate examples of a display in Embodiment 1 of the present invention;
FIGS. 9 (a) and (b) are illustrations showing frame images in Embodiment 1 of the present invention;
FIGS. 9 (d) to (f) explain how the difference image in Embodiment 1 of the present invention is processed;
FIGS. 10 (a) and (b) explain how the difference image in Embodiment 1 of the present invention is processed;
Hereinafter, a description is given of embodiments of the invention with reference to the accompanying drawings.
In
A person state acquiring unit 103 comprises an infrared sensor, an ultrasonic sensor, a camera, or the like; it senses a state of a person 100 and outputs information showing the sensed result. In the present embodiment, the following explanation focuses mainly on the camera, and partly on the infrared sensor and the ultrasonic sensor, where necessary.
A viewing intention judging unit 104 analyzes the sensed result, such as a camera image, which the person state acquiring unit 103 has acquired. Then the viewing intention judging unit 104 judges whether or not a person exists and whether the person has a viewing intention. The details of the viewing intention judging unit 104 are explained later, referring to
An operation control unit 105 controls decision, generation, start, end, discontinuation, and resumption of an interface for operation, and controls both an element-image generating/selecting unit 107 and an operation input unit 106, which are mentioned later.
The operation input unit 106 is an element by which a person can actually perform operation input. Specifically, the operation input unit 106 is constituted by a touch panel operable on the display unit 102, a button, a remote controller for remote control, a portable terminal, etc. The operation control unit 105 controls the operational start/end of the operation input unit 106.
In response to an order from the operation control unit 105, the element-image generating/selecting unit 107 generates a screen for a user interface, such as an operation menu and a personified agent (an example of computer graphics characters), and handles a selection of the generated user interface.
A composing unit 108 composes the image inputted by the input image control unit 101 and the user interface screen created by the element-image generating/selecting unit 107, and thereby outputs a composed image. The composed image is displayed on the display unit 102.
Based on the image inputted with the camera 202, the viewing intention judging unit 104 analyzes existence, movement, a face direction, etc. of a person and judges if there is a viewing intention of the person. Incidentally, the camera 202 and the display unit 201 may be constituted as one piece. Alternatively, the camera 202 and the display unit 201 may be connected through a network. Moreover, the camera 202 and the display unit 201 may not be necessarily related by one-to-one correspondence, but may be related by one-to-many correspondence, or by many-to-one correspondence.
When the viewing intention judging unit 104 judges that there is a new viewing intention based on the information that the person state acquiring unit 103 has acquired, the display contents of the display unit 201 become as follows.
As shown in
In addition, as shown in FIGS. 3 (a) and (b), it is desirable that the operation menu 301 and the personified agent 302 be superposed on images already displayed on the display unit 201 so that they are displayed distinctly. It is also desirable that the operation menu 301 and the personified agent 302 be arranged near the edge of the display unit 201.
In addition to the operation menu 301 and the personified agent 302, attribute information of contents (a title, remaining time, a caption, etc.) or other additional information useful for the users (a weather forecast, date, time, etc.) may be displayed simultaneously with or separately from the operation menu 301 and the personified agent 302. Such cases are included in the present invention, as long as the present state can be changed to a state where the operations can be handled.
Moreover, the sound volume may be adjusted simultaneously. For example, it is useful to turn down the sound volume while the menu is displayed in order to let the audience concentrate on the operation more easily. Alternatively, a voice message may be played to encourage the audience to operate.
The operation input unit 106 handles input such as sound input, input by a button or a touch panel located near the display unit 201 (when the display unit 201 can directly detect that the audience touches the panel with a finger), and input by operation on a remote controller or a portable terminal held by or located near the person.
The operation menu 301 shown in
Next, operation of the image display device of the present embodiment is explained, referring to
In the present embodiment, whether or not a person has a viewing intention is defined as follows. “The state with viewing intention” is a state where the person 100 enters the definition area 502 and then keeps still, with the face of the person 100 turned to the display unit 201; otherwise, it is “the state without viewing intention”.
Next, the flow of processing of the image display device in the present embodiment is explained, referring to
First, the person state acquiring unit 103 acquires information on the state of a person (Step S401). The viewing intention judging unit 104 judges whether there is a viewing intention, based on the acquired person state (Step S402). The operation control unit 105 changes the processing to be performed, based on the result of the transition of the viewing intention (Step S403).
When the state of viewing intention has transited from the state without viewing intention to the state with viewing intention, or from the state with viewing intention to the state without viewing intention, the operation control unit 105 controls the element-image generating/selecting unit 107 to generate a screen on which operation is performed (Step S404). At the same time, the operation control unit 105 sets the operation input unit 106 ready to handle operation input (Step S405). When the state transits from the state without viewing intention to the state with viewing intention and the power of the display unit 201 is in a first state (the power of the display unit 201 is OFF or the display unit 201 is in a power-saving mode) (Step S410), the state of the display unit 201 is changed to a second state (the power of the display unit is ON or the display unit is in a regular mode) (Step S411). The composing unit 108 composes the image inputted by the input image control unit 101 and the screen generated by the element-image generating/selecting unit 107. Then, the composed image is displayed on the display unit 102. The person 100 thus becomes able to operate the image display device using the operation input unit 106.
On the other hand, when no transition of viewing intention is confirmed, the operation control unit 105 continuously checks, during a certain period of time after the last transition of viewing intention (Step S406), whether no operation input has been received (Step S407).
If the checked result is “Yes”, the operation control unit 105 orders the element-image generating/selecting unit 107 to end the operation screen (Step S408), and at the same time sets the operation input unit 106 not ready to handle operation input (Step S409). Then, the operation control unit 105 returns the processing to Step S401.
If the checked result is “No”, the operation control unit 105 immediately returns the processing to Step S401.
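The control flow of Steps S401-S409 described above can be sketched as a small state machine. The following Python fragment is purely illustrative and not part of the claimed invention; all class, attribute, and parameter names, as well as the timeout value, are assumptions introduced for explanation.

```python
class OperationControl:
    """Illustrative sketch of the operation control flow (Steps S401-S409)."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout          # "certain period of time" of Step S406 (assumed value)
        self.screen_shown = False       # state of the element-image generating/selecting unit
        self.input_ready = False        # state of the operation input unit
        self.last_transition = None     # time of the last viewing-intention transition

    def on_judgment(self, transited, now, last_input_time=None):
        """transited: the viewing-intention state changed (Step S403)."""
        if transited:
            self.last_transition = now
            self.screen_shown = True    # generate the operation screen (Step S404)
            self.input_ready = True     # ready to handle operation input (Step S405)
        elif (self.last_transition is not None
              and now - self.last_transition > self.timeout
              and (last_input_time is None
                   or now - last_input_time > self.timeout)):
            self.screen_shown = False   # end the operation screen (Step S408)
            self.input_ready = False    # not ready to handle input (Step S409)
```

For example, a transition at time 0 shows the screen, and the absence of any operation input past the timeout hides it again, matching the automatic disappearance of the interface described in the second aspect.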
Next, the person state acquiring unit 103 and the viewing intention judging unit 104 are explained more concretely.
Next, operation of the person state acquiring unit 103 is explained. First, the camera unit 601 shoots the shooting area 501 shown in
The standstill judging unit 603 does not necessarily judge a state where the body of the person keeps completely still. The standstill judging unit 603 judges the movement of the entire body, that is, whether the entire body is moving, for example by walking. In other words, the standstill judging unit 603 does not judge whether or not a part of the body, such as the head or the upper half of the body, is in motion.
Hereafter, using
First, when the power of the device is turned on, the camera unit 601 and the person detecting unit 602 start (Step S701). The person detecting unit 602 detects whether or not the person 100 exists within the definition area 502 (Step S702).
Next, the operation control unit 105 changes the processing based on the transition between existence and non-existence of the person 100 (Step S703). First, when the state of the person 100 transits from non-existence to existence, the operation control unit 105 starts the standstill judging unit 603 (Step S704). The standstill judging unit 603 judges the standstill state of the person 100 (Step S705). When the person 100 continuously exists in the definition area, the standstill judging unit 603 has already been started and judges the standstill state of the person (Step S705). On the other hand, when the person does not exist, the operation control unit 105 returns the processing to Step S702.
When the standstill judgment result indicates that the state of the person 100 has changed from moving to standstill, the operation control unit 105 performs the following processing (Step S706). The operation control unit 105 starts the face-direction judging unit 604 (Step S707). The face-direction judging unit 604 judges whether or not the face looks toward the predetermined direction (Step S708). When the person continuously remains in the standstill state, the face-direction judging unit 604 has already been started and judges whether or not the face looks toward the predetermined direction (Step S708). On the other hand, when the person is in the moving state, the operation control unit 105 returns the processing to Step S702.
The face-direction judging unit 604 judges whether or not the face looks toward the predetermined direction (Step S709), and judges whether or not the face direction has transited (Steps S710 and S711). The viewing intention judging unit 104 judges whether the state has transited to the state with viewing intention (Step S712), whether the state remains in the state with viewing intention (Step S713), whether the state has transited to the state without viewing intention (Step S715), or whether the state remains in the state without viewing intention (Step S714). Then, the viewing intention judging unit 104 outputs the result. The information indicating which of the four states holds is outputted to the operation control unit 105, which uses the information for operation control.
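The judgment of Steps S702-S715 can be condensed into two small functions: one combining the three conditions into a viewing-intention flag, and one classifying the transition into the four output states. This is an illustrative sketch only; the function names and the string labels for the four states are assumptions, not terminology of the invention.

```python
def judge_viewing_intention(exists, standstill, facing):
    """Viewing intention holds only when the person exists in the definition
    area, keeps still, and faces the predetermined direction (Steps S702-S709)."""
    return exists and standstill and facing

def classify_transition(prev, curr):
    """Classify the four outputs of Steps S712-S715 (labels are illustrative)."""
    if curr and not prev:
        return "transited to with intention"      # Step S712
    if curr and prev:
        return "remains with intention"           # Step S713
    if prev and not curr:
        return "transited to without intention"   # Step S715
    return "remains without intention"            # Step S714
```

The classification result corresponds to the information that the viewing intention judging unit 104 outputs to the operation control unit 105.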
Next, more details of the person detecting unit 602, the standstill judging unit 603, and the face-direction judging unit 604 are explained, referring to
The standstill judging unit 603 obtains two images from the moving image acquired by the camera unit 601. One of the two images is an image F (t) in a marked frame at time t, as shown in
As shown in
The person detecting unit 602 obtains a ridgeline image (or an envelope image) by removing small noise from the edge image. Specifically, the person detecting unit 602 performs the same processing as that of the standstill judging unit 603 from Step S801 to Step S802, as shown in
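The frame-difference processing underlying the standstill judgment can be sketched as follows. This fragment is purely illustrative: the function names, the binarization threshold, and the changed-pixel ratio are assumed values for explanation, and frames are represented as plain lists of grayscale pixel values rather than any particular image format.

```python
def difference_image(frame_a, frame_b, threshold=16):
    """Per-pixel absolute difference between two frames, binarized
    (a sketch of Steps S801-S802; the threshold is an assumed value)."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

def is_standstill(frame_t, frame_prev, ratio=0.01):
    """Judge standstill when the fraction of changed pixels is small.
    Whole-body motion such as walking changes many pixels; a small head
    movement changes few, matching the whole-body criterion of the
    standstill judging unit 603."""
    diff = difference_image(frame_t, frame_prev)
    changed = sum(map(sum, diff))
    total = len(diff) * len(diff[0])
    return changed / total < ratio
```

Two identical frames yield a standstill judgment, while a frame pair with large differences (a walking person) does not.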
Furthermore, the person detecting unit 602 judges whether or not the person 100 exists within the definition area 502. If the person exists within the definition area 502, the person detecting unit 602 judges that the person 100 is at the position where the person 100 can look at the display unit 201 (Step S805).
In addition, the person detecting unit 602 and the standstill judging unit 603 do not necessarily have to use the image acquired by the camera unit 601. Instead, the person detecting unit 602 and the standstill judging unit 603 may use a photoelectric sensor or an ultrasonic sensor. For example, when a photoelectric sensor comprising a light emitting unit and a light receiving unit is used, a light beam, such as an infrared light beam, emitted by the light emitting unit toward the predetermined area is reflected and received by the light receiving unit. The light receiving unit outputs the received light as a change in output voltage, output current, or resistance value. If the change is greater than a predetermined value, it is possible to consider that a person has been detected. Moreover, by using a plurality of such sensors as an array sensor, the resultant outputs can be integrated to judge whether a person is in a standstill state.
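The photoelectric-sensor alternative can be sketched in a few lines. This is an illustrative sketch under assumed names and threshold values; real sensor outputs and suitable thresholds depend on the device used.

```python
def person_detected(output_change, threshold=0.5):
    """A change in the light receiving unit's output (voltage, current, or
    resistance) greater than a predetermined value is taken as detection of
    a person. The threshold is an assumed value."""
    return abs(output_change) > threshold

def array_standstill(outputs_t, outputs_prev, eps=0.05):
    """Integrate the outputs of an array of such sensors: if no element's
    output changed appreciably between samples, the person is judged to be
    in a standstill state."""
    return all(abs(a - b) <= eps for a, b in zip(outputs_t, outputs_prev))
```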
Alternatively, the person detecting unit 602 and the standstill judging unit 603 may be constituted by suitably selecting other sensing devices suited to the installation environment. Such other sensing devices include a pressure sensor, which is buried in the floor surface, an infrared area sensor, a thermal image sensor, a beat sensor, an ultrasonic sensor, etc., the latter three sensors being set on the ceiling.
Next, the face-direction judging unit 604 is explained. The face-direction judging unit 604 judges whether or not the face of the person 100 looks toward the predetermined direction. To realize this judgment, it may be sufficient to set an infrared light emitting unit on the head of the person 100 and to judge whether or not the infrared light emitting unit is turned toward the predetermined direction. Alternatively, the face direction can be judged from an image of the person 100 acquired by the camera unit 601. In this case, the person 100 does not need to wear the infrared light emitting unit, and the face-direction judging unit 604 can be constructed simply. Especially when the camera unit 601 can shoot facial parts of the person, such as the eyes, nose, and mouth, the face direction can be detected using a template image of the facial parts in the face-direction judging unit 604.
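Template-based face-direction detection can be sketched with a simple sum-of-absolute-differences match. This fragment is illustrative only: the function names, the SAD threshold, and the use of an eye-region patch are assumptions, and a practical implementation would search the template over the image rather than compare a single aligned patch.

```python
def sad(patch, template):
    """Sum of absolute differences between a candidate patch and a template
    image, both given as lists of grayscale rows of equal size."""
    return sum(abs(p - t)
               for row_p, row_t in zip(patch, template)
               for p, t in zip(row_p, row_t))

def face_looks_forward(facial_region, frontal_template, max_sad=50):
    """Judge that the face looks toward the predetermined direction when a
    facial-part region (e.g. the eyes) matches a frontal template closely
    enough. The threshold is an assumed value."""
    return sad(facial_region, frontal_template) <= max_sad
```

A region closely matching the frontal template is judged as facing the display; a region that matches poorly (face turned away, parts occluded) is not.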
In the present embodiment, all of the person detecting unit 602, the standstill judging unit 603, and the face-direction judging unit 604 receive the image shot by the camera unit 601. However, as mentioned above, this point is not requisite. As demand for safety and security increases, the camera unit 601 and the display unit 201 may be installed separately in physically distant locations. In this case, the camera unit 601 and the display unit 201 may be connected via a network. When people do not come and go in the building where the camera unit 601 is installed, the camera unit 601 can be used as a surveillance camera. Moreover, when people come and go, the camera unit 601 can be used as a component of the person state acquiring unit 103 of the present embodiment. Thus, the camera unit 601 can be used for a purpose (a surveillance camera) different from the purpose of the present embodiment. As long as such a case includes a structure equivalent to the present invention, the case is included in the present invention.
Next, the operation menu 301 (refer to
As shown in
When the operation control unit 105 requests the element-image generating/selecting unit 107 to generate an operation screen, the element-image generating/selecting unit 107 defines, as a home position, the hierarchical layer that possesses the current pointer as one of its options, and generates a screen of that position.
For example, in the case of
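The home-position behavior of the hierarchical operation menu can be illustrated with a small tree structure. The menu items and class names below are hypothetical examples, not contents of the actual operation menu 301.

```python
class MenuNode:
    """A node of a hierarchical operation menu (names are illustrative)."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.parent = None
        for child in self.children:
            child.parent = self

def home_position(pointer):
    """The home position is the hierarchical layer that possesses the
    current pointer as one of its options, i.e. the pointer's parent layer."""
    return pointer.parent

# A hypothetical menu tree with the current pointer on a leaf item.
root = MenuNode("top menu", [
    MenuNode("channel", [MenuNode("ch1"), MenuNode("ch2")]),
    MenuNode("volume"),
])
pointer = root.children[0].children[1]   # current pointer on "ch2"
```

When the operation screen is regenerated, the layer returned by `home_position` ("channel" in this sketch) is displayed first, so the audience resumes from where the previous selection left off.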
On the other hand, in a case of the personified agent 302 shown in
For example, several patterns of movement and expression for the moment when the personified agent 302 appears are stored, and one of the patterns is preferably selected to change the movement, expression, and contents of the voice according to the time when the personified agent 302 appears (for example, “Good morning” if it is in the morning). Furthermore, even in the case of the personified agent 302, if the current state is managed by the home-position type menu, it is advantageous when selecting a word or a series of words that the personified agent 302 pronounces, or when dynamically changing the voice of the personified agent 302.
As mentioned above, the operation control unit 105 controls the element-image generating/selecting unit 107 and also controls the waiting state of the operation input unit 106. As already stated, the input by the operation input unit 106 may be touch panel input, sound input, button input, operation by a portable terminal, or input by a remote controller. Whichever input is chosen and used, it is included in the present invention.
Embodiment 1 basically treats one display unit 201 in operation; the operation input thereto and the display thereon are controlled according to the state of viewing intention acquired by at least one camera.
The present embodiment assumes a case where a plurality of display units are used to view an image inputted to the input image control unit. In this case, the one of the plurality of display units which a person is going to look at is turned ON, and the image is controlled to be displayed on the display unit whose power is ON. The controls necessary for this case are explained as a series of flows starting from the state where the power is OFF. In the present embodiment, elements possessing the same functions as those of Embodiment 1 are given the same names as in Embodiment 1.
Next, the flow from the state when the power is turned on is explained, referring to
For example, when a “security-ON mode” is selected by a user via the entire control unit 1320, the cameras 1204 and 1205 are used as cameras and sensors for surveillance. When a “security-OFF mode” is selected, the cameras 1204 and 1205 are used for the purpose of the present invention.
The initial state of the “security-OFF mode” is a “standby mode”. At that time, only the person detecting sensor 1209 and the server unit 1207 are activated. When a person comes into the room, the person detecting sensor 1209 detects the person. Then, the person detecting sensor 1209 notifies the entire control unit 1320, via the network interface 1318, that the person has been detected. As a result, the entire control unit 1320 turns on the cameras 1204 and 1205 and the viewing intention judging unit 1206 (this state is called a “camera-ON mode”). The cameras 1204 and 1205 acquire the person state. When the viewing intention judging unit 1206 judges that there is a viewing intention, the entire control unit 1320 turns on the power of the display unit to which the viewing intention is directed (either the first display unit 1201 or the second display unit 1202), and sets the operation control unit of that display unit to wait for operation input.
In addition, when the person detecting sensor 1209 does not detect a person for a certain time, it is judged that there is no person in the room. Then, the entire control unit 1320 changes the current state to the “standby mode”.
Next, the flow of processing after the “camera-ON mode” is explained using
The viewing intention judging unit core 1307 performs existence/non-existence judgment, standstill judgment, and face-direction judgment of the person in the defined area, thereby generating the judgment results, as described in Embodiment 1. The viewing intention judging unit core 1307 further determines which camera of which display unit each judgment result corresponds to. Although the face-direction judging in the description of Embodiment 1 assumes that the facial parts (for example, the eyes, nose, mouth, etc.) are visible, face-direction judging in an arbitrary direction is necessary in the present embodiment. The face-direction judging in an arbitrary direction can be realized by the structure described in Embodiment 1 of Published Japanese patent application 2003-44853. For the details, refer to that publication.
The viewing intention judging unit 1206 judges which display unit the person is going to look at, based on information such as the person-state results acquired by each of the cameras 1204 and 1205 and the size and direction of each display unit. For this purpose, the viewing intention judging unit core 1307 generates a map as shown in
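One way to use such a map is to compare the person's face direction with the bearing of each display unit from the person's position. The sketch below is purely illustrative: the coordinate system, the tolerance cone, and all names are assumptions introduced for explanation, not the actual map processing of the viewing intention judging unit core 1307.

```python
import math

def facing_display(person_pos, face_dir_deg, displays, half_angle=30.0):
    """Return the display the person's face is turned toward: the nearest
    display whose bearing from the person lies within a tolerance cone
    around the face direction (angles in degrees, map coordinates assumed)."""
    px, py = person_pos
    best = None
    for name, (dx, dy) in displays.items():
        bearing = math.degrees(math.atan2(dy - py, dx - px)) % 360.0
        delta = abs(bearing - face_dir_deg) % 360.0
        delta = min(delta, 360.0 - delta)
        if delta <= half_angle:
            dist = math.hypot(dx - px, dy - py)
            if best is None or dist < best[1]:
                best = (name, dist)
    return best[0] if best else None
```

With a person at the origin facing 90 degrees and a first display placed in that direction, the first display is selected and a display off to the side is not, corresponding to turning on only the display to which the viewing intention is directed.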
Based on the result, the viewing intention judging unit core 1307 judges that the person has a viewing intention to the first display unit 1201, not to the third display unit 1203. The viewing intention judging unit core 1307 turns on the first display unit 1201, and notifies the first display unit 1201 to perform operation control at the same time.
The first display unit 1201, upon being notified, sets the state of the operation control unit 1303 ready to handle operation input by the operation input unit 1301, and requests, via the network, the server unit 1207 to transmit the composed image of the input image and the operation element image.
The server unit 1207 receives the request. The composing unit 1311 composes the operation element image generated by the element-image generating/selecting unit 1310 and the image acquired by the input image control unit. The composing unit 1311 transmits the composed image, via the network interfaces 1312 and 1304, to the display unit 1302, which receives and displays the composed image.
As described above, even in a case where there are a plurality of display units, the first display unit 1201, to which the viewing intention of the person is directed, internally operates the server unit 1207 to acquire the desired result. This operation is performed without the person being especially conscious of it. Thereby, the usability of the interface can be remarkably improved.
Furthermore, as shown in
For example, in a case where a first person 1501 and a second person 1502 have turned to the direction as shown in
However, in the case shown in
In the state shown in
According to the present invention, the interface for operation can be displayed in good timing when a person shows a viewing intention, and the image display unit becomes ready to handle operation input. Therefore, even in a location where people often move near the display unit (for example, a living room at home, a store, an exhibition hall, etc.), the operation interface is displayed only when a person shows a viewing intention. In addition, the operation interface does not disturb the viewing and listening of other audiences. As a result, unnecessary operation input can be prevented. Furthermore, unnecessary ON/OFF switching of the display unit can be avoided, thereby reducing power consumption.
Having described preferred embodiments of the invention with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention as defined in the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
2004-199021 | Jul 2004 | JP | national |