This application claims the benefit of Japanese Priority Patent Application JP 2013-008050 filed Jan. 21, 2013, the entire contents of which are incorporated herein by reference.
The present technology disclosed in this specification relates to an image display device which is used while mounted on the head or face of a user, and to an image display method thereof, and more particularly relates to an image display device which displays a result that is estimated or diagnosed with respect to a target in the field of vision of the user, for example, and to an image display method thereof.
“Fortune-telling,” which estimates or diagnoses the nature or fortune of a person based on the shapes, sizes, or colors (biological information) of visible parts of the person such as the face, ears, hands, and nails, has been familiar for a long time. For example, chirognomy is popular since it can tell a character or fortune from a plurality of types of palm lines formed on the palm, such as the line of life, the line of fate, the intelligence line, the feelings line, and the marriage line.
A palm line itself is visual biological information which even an amateur can easily extract by sight, and among the many types of fortune-telling, palmistry enjoys high popularity. However, it can still be difficult for an amateur to read somebody's palm, since there is much information to handle: each palm line has a different meaning concerning character or fortune, two or more palm lines may have to be judged in correlation with each other, and so on.
For example, a palmistry system has been proposed in which an image of palm lines transmitted from a camera-equipped mobile terminal is received through a network, palm line data is extracted from the received image, fortune-telling data denoting a fortune-telling result is obtained based on the extracted palm line data, and the data is transmitted back to the camera-equipped mobile terminal (for example, refer to Japanese Unexamined Patent Application Publication No. 2007-183). This palmistry system can provide palmistry in circumstances in which a computer is not available.
However, the fortune-telling data correlated with the palm line data is simply returned to the mobile terminal. For this reason, it is difficult for the user to visually recognize which of the plurality of palm lines on his or her own palm (in the field of vision at that time) is the grounds of the fortune-telling data.
In addition, a target characteristic line specification device which accurately specifies a target characteristic line or the like in an image of palm lines has been proposed (for example, refer to Japanese Unexamined Patent Application Publication No. 2010-123020). For example, an information output device can perform palmistry by comparing the target characteristic line data specified by the target characteristic line specification device with a plurality of palm line data items obtained from a database, specifying the palm line data which denotes the palm line closest to the target characteristic line, and outputting the reading data which is correlated with the specified palm line data.
However, the data output from such an information output device is text-format data in which contents judged in advance by a palmist or the like, such as a person's fortune (fortune in marriage, fortune in love, fortune in work, fortune in money, or the like), talents, and health conditions (good or bad), are defined based on the types, shapes, lengths, and positional relationships of palm lines. That is, the user is not able to visually recognize which of the plurality of palm lines on his or her own palm (in his or her own field of vision) is the grounds of the palmistry data.
In addition, there is an opinion that the palm lines on the left and right hands have different meanings (for example, that the palm lines on a person's dominant hand tell the acquired talent or future of the person, while the palm lines on the other hand tell the inborn talent or past of the person). For this reason, it is preferable to read the palm lines on both hands in palmistry. However, in a system in which an image of palm lines is photographed using a camera-equipped mobile terminal, this takes twice the labor, since the user has to photograph and transmit images of the left and right hands separately while holding the camera.
It is desirable to provide an excellent image display device which can suitably display a result that is estimated or diagnosed with respect to a target in the field of vision of a user, and an image display method thereof.
It is further desirable to provide an image display device which can visually display a result that is estimated or diagnosed based on an image of a specified portion of the body of a person, for example, and an image display method thereof.
According to an embodiment of the present technology, an image display device which is used by being mounted on the head or a face includes an image display unit which displays an image; an image input unit which inputs an image; a judging unit which judges an image which is input to the image input unit, or obtains a judging result with respect to the input image; and a control unit which controls the image display unit based on the judging result of the judging unit.
The image display device according to the embodiment may further include a camera. In addition, the image input unit may be configured so as to input an image which is photographed by the camera.
In the image display device according to the embodiment, the camera may be configured so as to photograph a gaze direction of a user, or at least a part of a body of the user.
The image display device according to the embodiment may further include a storage unit which stores an image. In addition, the image input unit may be configured so as to input an image which is read out from the storage unit.
The image display device according to the embodiment may further include a communication unit which communicates with an external device. In addition, the image input unit may be configured so as to input an image which is obtained from the external device through the communication unit.
In the image display device according to the embodiment, the image display unit may display the image in a see-through mode, the image input unit may input a photographed image in which the gaze direction of a user is photographed using a camera, the judging unit may judge a specified portion in the photographed image, and the control unit may display the judging result on the image display unit so that the judging result is overlapped with the specified portion on a field of vision of the user.
In the image display device according to the embodiment, the image display unit may display the photographed image in which the gaze direction of the user is photographed using the camera, the image input unit may input the photographed image, the judging unit may judge the specified portion in the photographed image, and the control unit may display the judging result by overlapping the result with the specified portion in the photographed image.
The image display device according to the embodiment may further include a storage unit which stores a judging result of the judging unit, or an image which is controlled based on the judging result.
In the image display device according to the embodiment, the judging unit may perform a diagnosis based on characteristics of the specified portion in the input image, and the control unit may display a result of the diagnosis by overlapping the result with a location corresponding to the specified portion in an image which is displayed by the image display unit.
In the image display device according to the embodiment, the judging unit may perform palm reading with respect to palm lines on a palm which is included in the input image, and the control unit may display a result of the palm reading by overlapping the result with a location corresponding to the palm lines in an image which is displayed by the image display unit.
The image display device according to the embodiment may further include a feature amount extraction unit which extracts palm lines from a palm which is included in the input image, and the judging unit may perform palm reading based on the palm lines which are extracted by the feature amount extraction unit.
In the image display device according to the embodiment, the control unit may be configured so as to display the palm lines which are extracted by the feature amount extraction unit by overlapping the palm lines with an image which is displayed by the image display unit.
In the image display device according to the embodiment, the image input unit may input an image including the left and right palms, and the judging unit may perform palm reading on the left and right palms from the input image.
The image display device according to the embodiment may further include the feature amount extraction unit which extracts palm lines from a palm which is included in the input image, and the control unit may display the palm lines which are extracted from one of the left and right palms by overlapping the palm lines with palm lines on the other palm, by inverting the palm lines from left to right.
In the image display device according to the embodiment, the judging unit may diagnose a palm hill of a base of at least one finger of a hand which is included in the input image, and the control unit may display a result of the diagnosis by overlapping the result with a location corresponding to the palm hill in an image which is displayed by the image display unit.
In the image display device according to the embodiment, the judging unit may diagnose length of at least one finger of the hand which is included in the input image, and the control unit may display a result of the diagnosis by overlapping the result with a corresponding finger in an image which is displayed by the image display unit.
In the image display device according to the embodiment, the judging unit may perform face reading with respect to a face image which is included in the input image, and the control unit may display a result of the face reading by overlapping the result with a location which becomes grounds of the face reading in an image which is displayed by the image display unit.
In the image display device according to the embodiment, the judging unit may perform a skin diagnosis with respect to the face image which is included in the input image, and the control unit may display a result of the skin diagnosis by overlapping the result with a location which becomes grounds of the skin diagnosis in an image which is displayed by the image display unit.
In the image display device according to the embodiment, the judging unit may specify a position of an acupuncture point from a body of a person which is included in the input photographed image, and the control unit may display the specified position of the acupuncture point by overlapping the position with a corresponding location in an image which is displayed by the image display unit.
According to another embodiment of the present technology, there is provided an image display method which displays an image in an image display device which is used by being mounted on the head or a face, and the method includes inputting of an image; judging an image which is input in the inputting of the image, or obtaining a judging result with respect to the input image; and controlling an image to be displayed based on the judging result of the judging.
According to the embodiment of the present technology disclosed in this specification, since a result which is estimated or diagnosed with respect to a target in the field of vision of a user is displayed by being overlapped with the field of vision of the user, the user can easily understand the grounds of the result which is estimated or diagnosed.
In addition, according to the embodiment of the present technology disclosed in this specification, since a result which is estimated or diagnosed based on an image of a specified portion of the body of a person is displayed by being overlapped with the field of vision in which the portion is viewed, the user can easily understand the grounds of the result which is estimated or diagnosed.
Other objects, characteristics, and advantages of the technology disclosed in this specification will become clear from more detailed descriptions based on the embodiments described later and the accompanying drawings.
Hereinafter, embodiments of the technology disclosed in this specification will be described in detail with reference to the drawings.
A. Configuration of Device
The image display device 100 can be used when displaying a result which is estimated or diagnosed with respect to a target in the field of vision of a user; this point will be described in detail later.
The illustrated image display device 100 has a structure which is similar to glasses for eyesight correction. Virtual image optical units 101L and 101R, which are formed by transparent light guiding units or the like, are arranged at positions on the main body of the image display device 100 facing the left and right eyes of a user, and an image observed by the user is displayed in each of the virtual image optical units 101L and 101R. The virtual image optical units 101L and 101R are supported by a glasses frame-shaped support body 102, for example.
An external camera 512 for inputting a surrounding image (the field of vision of the user) is arranged substantially in the center of the glasses frame-shaped support body 102. It is more preferable to configure the external camera 512 using a plurality of cameras, so that three-dimensional information of the surrounding image can be obtained using parallax information. In addition, microphones 103L and 103R are arranged in the vicinity of the left and right ends of the support body 102. Since the microphones 103L and 103R are arranged approximately symmetrically on the left and right, only a voice oriented toward the center (the voice of the user) is recognized; this makes it possible to separate the voice from ambient noise or the talking voices of other people, and to prevent, for example, a malfunction during operation by voice input.
In addition, the image display device 300, which is of an immersive type, can also be used when displaying a result which is estimated or diagnosed with respect to a target in the field of vision of a user; this point will be described in detail later.
The illustrated image display device 300 has a structure which is similar to a hat shape, and is configured so as to directly cover the left and right eyes of a user who is wearing the device. Display panels which are observed by the user (not shown in the drawings) are arranged inside the main body at positions facing the left and right eyes.
An external camera 512 for inputting a surrounding image (the field of vision of the user) is arranged approximately in the center of the front face of the main body of the image display device 300, which has a hat-like shape. In addition, microphones 303L and 303R are arranged in the vicinity of the left and right ends of the main body of the image display device 300, respectively. Since the microphones 303L and 303R are arranged approximately symmetrically on the left and right, only a voice oriented toward the center (the voice of the user) is recognized; this makes it possible to separate the voice from ambient noise or the talking voices of other people, and to prevent, for example, a malfunction during operation by voice input.
A control unit 501 includes a Read Only Memory (ROM) 501A and a Random Access Memory (RAM) 501B. Program code to be executed by the control unit 501 and various data items are stored in the ROM 501A. The control unit 501 integrally controls the whole operation of the image display device 100, including display control of images, by executing a program loaded into the RAM 501B. The programs and data items stored in the ROM 501A include an image display control program, an image processing program for an image photographed using the external camera 512 (for example, an image in which the gaze direction of the user is photographed), a communication processing program for an external device such as a server on the Internet (not shown), and identification information unique to the device 100. The image processing program performs, for example, an analysis of the photographed image and display control of the analysis result. The analysis of the photographed image includes an estimating process or a diagnosis process with respect to a target in the field of vision of the user, such as a diagnosis or fortune-telling based on physical features, for example, palm reading when photographing a palm, face reading when photographing a face, a fair skin diagnosis based on a face image, Chinese geomancy in indoor photographing, and other layout diagnoses. In addition, the image processing program performs display control of the analysis result by overlapping the result with the field of vision of the user (including see-through and video see-through display). The image processing will be described in detail later.
An input operation unit 502 includes one or more operation elements, such as keys, buttons, or switches, with which the user performs input operations; it receives instructions from the user through these elements and outputs them to the control unit 501. In addition, the input operation unit 502 similarly receives instructions from the user in the form of remote-control commands received by a remote control reception unit 503, and outputs them to the control unit 501.
A posture/position detection unit 504 is a unit which detects the posture of the head of a user who is wearing the image display device 100. The posture/position detection unit 504 is configured by any one of a gyro sensor, an acceleration sensor, a Global Positioning System (GPS) sensor, and a magnetic field sensor, or by a combination of two or more of these sensors in consideration of the strengths and weaknesses of each sensor.
A state detection unit 511 obtains state information on the state of a user who is wearing the image display device 100, and outputs the information to the control unit 501. As state information, the unit obtains, for example, a work state of the user (whether or not the user is wearing the image display device 100), an action state of the user (a moving state such as stopped, walking, or running, the open or shut state of the eyelids, and the gaze direction), a mental state (the degree of excitement or awakening, feelings, emotions, and the like, for example, whether the user is devoted to or concentrating on the display image while viewing it), and a physical state. In addition, the state detection unit 511 may include various state sensors (none of which is illustrated in the figures), such as a wearing sensor formed by a mechanical switch or the like, an internal camera which photographs the face of the user, a gyro sensor, an acceleration sensor, a speed sensor, a pressure sensor, a body temperature sensor, a sweat sensor, a myoelectricity sensor, an ocular potential sensor, and an electroencephalographic sensor, in order to obtain these pieces of state information from the user.
The external camera 512 is arranged approximately in the center of the front face of the main body of the image display device 100, which has a glasses shape, for example (refer to the drawings). By performing posture control of the external camera 512 according to the gaze direction of the user which is detected by the state detection unit 511, an image in the gaze direction of the user can be photographed.
A communication unit 505 performs communication processing with an external device such as a server on the Internet (not shown), as well as modulation and demodulation, and encoding and decoding, of communication signals. In addition, the control unit 501 sends out transmission data to the external device from the communication unit 505. The configuration of the communication unit 505 is arbitrary. For example, the communication unit 505 can be configured according to the communication standard used in transceiving operations with the external device as a communication partner. The communication standard may be either wired or wireless. Examples of the communication standard include Mobile High-definition Link (MHL), Universal Serial Bus (USB), High Definition Multimedia Interface (HDMI), Wi-Fi (registered trademark), Bluetooth (registered trademark) communication, infrared communication, and the like.
The storage unit 506 is a mass storage device formed by a Solid State Drive (SSD) or the like. The storage unit 506 stores the application programs executed by the control unit 501, and data such as images photographed using the external camera 512 (which will be described later) or images obtained from a network through the communication unit 505. In addition, the storage unit 506 may accumulate a diagnosis database (which will be described later) relating to a diagnosis or fortune-telling based on physical characteristics such as palm lines or physiognomy, a skin diagnosis, and the positions of foot acupuncture points, for performing an estimating process or a diagnosis process with respect to an image photographed by the external camera 512. In addition, the diagnosis result obtained by performing the estimating process or the diagnosis process with respect to the image (including an image with which the diagnosis result is overlapped) may be accumulated in the storage unit 506 for reuse or for other purposes.
An image processing unit 507 further performs signal processing, such as image quality correction, on the image signal output from the control unit 501, and converts the image signal to a resolution corresponding to the screen of the display unit 509. In addition, a display driving unit 508 sequentially selects the pixels of the display unit 509 row by row, performs line sequential scanning of the pixels, and supplies pixel signals based on the image signal which has been subjected to the signal processing.
The display unit 509 includes a display panel which is configured of a micro display such as an organic electro-luminescence (EL) device, or a liquid crystal display, for example. A virtual image optical unit 510 projects a display image of the display unit 509 by enlarging the image, and causes a user to observe the image as an enlarged virtual image.
A sound processing unit 513 performs sound quality correction or sound amplification on the sound signal output from the control unit 501, and further performs signal processing of an input sound signal or the like. In addition, a sound input/output unit 514 outputs the sound which has been subjected to the sound processing to the outside, and inputs sound from the microphones (described above).
B. Presentation of Information Using Image Display Device
The image display device 100, which is used while mounted on the head or face of a user, can present information to the user in a form in which a virtual display image is overlapped with the scenery of the real world which the user actually views. For example, when a virtual image denoting information relating to an object which exists in the field of vision of the user is displayed by being overlapped with the object, the user can understand exactly what the information refers to.
In addition, even in the case of the immersive-type image display device 300 (which is not a see-through type), it is possible to perform the same information presentation as described above by photographing the scenery of the real world in the field of vision of the user using the external camera 512, and displaying a virtual image overlapped with that image in a video see-through mode.
Meanwhile, methods have been known in which an estimating process or a diagnosis process based on visual biological information extracted from a specified portion of a person's body, including a diagnosis or fortune-telling based on physical characteristics such as palm lines or physiognomy, is performed using image analysis.
Accordingly, in the image display device 100 (and 300) according to the embodiment, an image in the gaze direction of the user is photographed using the external camera 512, and an image analysis is performed on a target included in the photographed image (a specified portion of a person's body, such as the palm or face of the user, for example). By performing a see-through display (including a video see-through display) of the result of the estimation or the diagnosis obtained in this manner, overlapped with the scenery of the real world which the user views, it is possible to present the result of an estimation or a diagnosis of a target in the user's own field of vision in a form which is easy to understand.
An image input unit 601 inputs, for example, a photographed image from the external camera 512, a past photographed image stored in the storage unit 506, or an image taken in from the outside through the communication unit 505 (for example, an image in the gaze direction of another user, or an image available on a network).
As described above, the external camera 512 photographs an image in the gaze direction of the user by performing posture control according to the gaze direction of the user which is detected using the state detection unit 511. In this case, a live image in the gaze direction of the user is input to the image input unit 601. As a matter of course, it is also possible to input to the image input unit 601 a past photographed image in the gaze direction of the user which is stored in the storage unit 506.
The feature amount extraction unit 602 extracts, from the image input to the image input unit 601, a feature amount which is used in the diagnosis processing of the subsequent stage. In addition, a diagnosis unit 603 collates the feature amount extracted by the feature amount extraction unit 602 with the feature amounts for diagnosis in a diagnosis database 604, performs diagnosis processing, and generates a diagnosis result. The diagnosis unit 603 outputs the diagnosis result to a compositing unit 605 in the subsequent stage. In addition, the diagnosis unit 603 may store the diagnosis result in the storage unit 506 for the purpose of reuse or the like.
In addition, the diagnosis unit 603 and the diagnosis database 604 (the portion surrounded by the dotted line in the figure) do not have to be mounted on the image display device 100 itself; they may be implemented on an external device, such as a server on the Internet which is accessed through the communication unit 505, in which case the judging result is obtained from the external device.
The compositing unit 605 composites a virtual image in which the diagnosis result generated by the diagnosis unit 603 is overlapped with the input image of the image input unit 601. The compositing unit then displays and outputs the virtual image to the display unit 509, displaying the diagnosis result in a see-through mode (including a video see-through display) overlapped with the input image (the scenery of the real world which the user views). In addition, when the diagnosis unit 603 also outputs the diagnosis result with respect to the input image as sound information, the result is output to the outside from the sound input/output unit 514. The image composited by the compositing unit 605 may also be stored in the storage unit 506 for the purpose of reuse or the like.
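To make the division of roles among the functional blocks concrete, the following is a minimal sketch, in Python, of how the image input path, the feature amount extraction unit 602, the diagnosis unit 603, the diagnosis database 604, and the compositing unit 605 could be wired together. All class, function, and variable names are illustrative assumptions, not part of the specification, and the stub bodies only mark where real extraction and collation logic would go.

```python
# Illustrative sketch of the functional blocks described above; every name
# here is hypothetical and the "diagnosis" is a placeholder.
from dataclasses import dataclass

import cv2
import numpy as np


@dataclass
class DiagnosisResult:
    label: str                  # e.g. "line of life: long and unbroken"
    anchor_xy: tuple[int, int]  # image location that grounds the result


class DiagnosisDatabase:
    """Stand-in for diagnosis database 604: maps features to readings."""

    def collate(self, features):
        # A real database would match extracted feature amounts against
        # stored ones; here every feature gets a placeholder reading.
        return [DiagnosisResult("placeholder reading", xy) for xy in features]


def extract_features(image: np.ndarray):
    """Stand-in for feature amount extraction unit 602."""
    h, w = image.shape[:2]
    return [(w // 2, h // 2)]  # pretend one feature was found at the centre


def composite(image: np.ndarray, results) -> np.ndarray:
    """Stand-in for compositing unit 605: draw each result at the location
    that grounds it, so the annotation lands on the target itself."""
    out = image.copy()
    for r in results:
        cv2.circle(out, r.anchor_xy, 6, (0, 0, 255), 2)
        cv2.putText(out, r.label, (r.anchor_xy[0] + 10, r.anchor_xy[1]),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return out


def run_pipeline(image: np.ndarray, db: DiagnosisDatabase) -> np.ndarray:
    features = extract_features(image)  # unit 602
    results = db.collate(features)      # units 603 and 604
    return composite(image, results)    # unit 605, bound for display 509
```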
In addition, the processing routine described below is started along with power-up of the image display device 100, for example, or is started in response to an instruction to start the processing given by the user through the input operation unit 502 or the like.
While the processing is running, the image input unit 601 inputs a live image photographed using the external camera 512 (step S701). The external camera 512 photographs an image in the gaze direction of the user, for example. However, the image input unit 601 may instead input a past photographed image stored in the storage unit 506, or an image taken in from the outside through the communication unit 505. The input image is then displayed and output to the display unit 509 (step S702).
In addition, the feature amount extraction unit 602 extracts a feature amount which will be used in the diagnosis processing in the subsequent stage from the image which is input to the image input unit 601 (step S703).
Subsequently, the diagnosis unit 603 performs diagnosis processing by collating the feature amount extracted by the feature amount extraction unit 602 with the feature amounts for diagnosis in the diagnosis database 604, and generates a diagnosis result (step S704). The obtained diagnosis result may be stored in the storage unit 506.
Subsequently, the compositing unit 605 composites a virtual image in which the diagnosis result generated by the diagnosis unit 603 is overlapped with the input image of the image input unit 601. The compositing unit then displays and outputs the virtual image to the display unit 509, displaying the diagnosis result in a see-through mode (including a video see-through display) overlapped with the input image (the scenery of the real world which the user views) (step S705). In addition, when the diagnosis unit 603 also outputs the diagnosis result with respect to the input image as sound information, the result is output to the outside from the sound input/output unit 514. The image in which the diagnosis result is composited, or the sound information, may be stored in the storage unit 506 for the purpose of reuse or the like.
Thereafter, ending processing is executed in response to an instruction to end the processing given by the user through the input operation unit 502 or the like (step S706), and the process routine is ended.
The ending processing may include storing in the storage unit 506, or uploading to the server, information such as the diagnosis result obtained in step S704 and the virtual image composited by the compositing unit 605 in step S705, and may also include updating processing of the diagnosis database 604, or the like.
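For reference, the following is a minimal sketch of how the routine of steps S701 to S706 could be realized around an ordinary camera capture loop. The three injected callables stand in for units 602, 603/604, and 605 of the sketch above; treating the external camera 512 as an OpenCV capture device is an assumption for illustration only.

```python
# Hypothetical rendering of the process routine S701-S706; the callables
# stand in for the feature extraction, diagnosis, and compositing units.
import cv2


def process_routine(extract_features, diagnose, composite) -> None:
    cap = cv2.VideoCapture(0)                    # external camera 512
    try:
        while True:
            ok, frame = cap.read()               # S701: input live image
            if not ok:
                break
            cv2.imshow("display unit 509", frame)     # S702: show input
            features = extract_features(frame)        # S703: feature amounts
            result = diagnose(features)               # S704: diagnosis
            shown = composite(frame, result)          # S705: overlap result
            cv2.imshow("display unit 509", shown)
            if cv2.waitKey(1) & 0xFF == ord("q"):     # S706: user ends
                break
    finally:
        cap.release()                            # part of ending processing
        cv2.destroyAllWindows()
```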
The diagnosis processing performed by the diagnosis unit 603 includes estimating processing or diagnosis processing with respect to a target in the field of vision of the user, such as a diagnosis or fortune-telling based on physical characteristics: for example, palm reading when the user wearing the image display device 100 views a palm of his own or of another person; face reading when the user views the face of another person (or his own face reflected in a mirror); a diagnosis of the physiognomy of a house when observing a house or a building; a fair skin diagnosis when viewing the face of another person (or his own face reflected in a mirror); fortune-telling by Chinese geomancy when viewing a room; and other layout diagnoses. According to the embodiment, the diagnosis unit 603 performs the above-described diagnoses with respect to the input image of the image input unit 601, such as an image photographed by the external camera 512 in the gaze direction of the user. In addition, the compositing unit 605 composites the diagnosis result so as to be overlapped with the input image, and displays and outputs the result on the display unit 509. Accordingly, the user can understand the diagnosis contents more specifically and accurately, for example, which specific portion of the input image the diagnosis result is grounded on, rather than merely accepting the diagnosis result.
B-1. Palm Reading
First, a case of performing palm reading will be described as an example of a diagnosis or fortune-telling based on physical characteristics.
The feature amount extraction unit 602 extracts palm lines from the input palm image as a feature amount of the palm, and outputs information on each palm line to the diagnosis unit 603. In addition, the compositing unit 605 displays and outputs each palm line 901 extracted from the palm onto the display unit 509 by overlapping the palm line with the palm image 900, as a display image in the middle of the processing, that is, in the middle of the palm reading (refer to the drawings).
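The specification does not fix an extraction algorithm, so the following is only a sketch under assumptions: plain edge detection with a morphological close is used as a crude stand-in for palm-line extraction, and the extracted mask is traced over the palm image as in the mid-processing display.

```python
# Hedged sketch of palm-line extraction and the mid-processing overlay
# (palm lines 901 drawn over palm image 900); Canny is an assumed choice.
import cv2
import numpy as np


def extract_palm_lines(palm_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of candidate palm lines."""
    gray = cv2.cvtColor(palm_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)           # raise the contrast of creases
    edges = cv2.Canny(gray, 50, 150)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)


def overlay_palm_lines(palm_bgr: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Trace the extracted lines over the palm image in red."""
    out = palm_bgr.copy()
    out[mask > 0] = (0, 0, 255)
    return out
```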
In addition, the diagnosis unit 603 collates each palm line extracted by the feature amount extraction unit 602 with the palm line database in the diagnosis database 604, performs palm reading, and generates a diagnosis result. In addition, the compositing unit 605 composites a virtual image in which the diagnosis result generated by the diagnosis unit 603 is overlapped with the input image of the image input unit 601, and, as illustrated in the drawings, displays and outputs the result onto the display unit 509 by overlapping the result with the locations corresponding to the palm lines.
In addition, there is an opinion that the palm lines on the left and right hands have different meanings (for example, that the palm lines on a person's dominant hand tell the acquired talent or future of the person, while the palm lines on the other hand tell the inborn talent or past of the person). For this reason, it is preferable to perform palm reading on both hands. However, in the case of a system which is subject to the condition that the user photographs his own palm using a camera-equipped mobile terminal or a digital camera (for example, refer to Japanese Unexamined Patent Application Publication No. 2007-183 and Japanese Unexamined Patent Application Publication No. 2010-123020), the user has to photograph the left and right hands separately while holding the camera, and this takes twice the labor.
In contrast to this, according to the embodiment, the image input is performed using the external camera 512 of the image display device 100 which is mounted on the head or face of the user. That is, it is possible to input the left and right palm images 1100 of the user at the same time, as illustrated in the drawings.
In addition, the feature amount extraction unit 602 extracts palm lines 1201L and 1201R from the left and right palms at the same time, and displays and outputs the palm lines onto the display unit 509 by overlapping them with the respective palm images 1200L and 1200R (refer to the drawings).
In addition, the diagnosis unit 603 performs palm reading by collating each of the left and right palm lines 1301L and 1301R extracted by the feature amount extraction unit 602 with the palm line database in the diagnosis database 604, and displays the meaning of each of the palm lines 1301L and 1301R in the form of balloons 1302 to 1305 which appear from the palm lines (refer to the drawings).
In addition, as illustrated in the drawings, the palm lines extracted from one of the left and right palms may be displayed by being inverted from left to right and overlapped with the palm lines on the other palm, so that the user can compare the palm lines of both hands at a glance.
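A minimal sketch of this left-right comparison display follows, assuming the line-mask representation of the earlier sketch; alignment between the two palms is ignored for brevity, and all names are illustrative.

```python
# Mirror the line mask extracted from one palm and blend it over the image
# of the other palm, so both sets of palm lines can be read together.
import cv2
import numpy as np


def overlay_mirrored_lines(other_palm_bgr: np.ndarray,
                           one_palm_mask: np.ndarray) -> np.ndarray:
    mirrored = cv2.flip(one_palm_mask, 1)   # invert from left to right
    mirrored = cv2.resize(
        mirrored, (other_palm_bgr.shape[1], other_palm_bgr.shape[0]))
    out = other_palm_bgr.copy()
    out[mirrored > 0] = (255, 0, 0)         # mirrored lines drawn in blue
    return out
```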
In addition, a method has also been used in which a talent or a character is read based on the raised portions at the bases of the five fingers, that is, the "palm hills". Accordingly, the feature amount extraction unit 602 may extract information relating to the irregularities of the palm from a palm image input from the image input unit 601 as a feature amount. For example, when the external camera 512 is configured of a plurality of cameras, it is possible to obtain three-dimensional information of the palm based on the parallax information obtained from the photographed images of the respective cameras, as illustrated in the drawings.
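As one concrete way of turning that parallax into relief information, the following sketch uses OpenCV block matching on a stereo pair from the external cameras 512; it assumes the pair is already calibrated and rectified, which the specification does not address.

```python
# Sketch: disparity map from the stereo pair of external cameras 512;
# larger disparity means closer to the cameras, i.e. a higher "palm hill".
import cv2
import numpy as np


def palm_disparity(left_bgr: np.ndarray, right_bgr: np.ndarray) -> np.ndarray:
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # compute() returns fixed-point values scaled by 16
    return stereo.compute(left, right).astype(np.float32) / 16.0
```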
In addition, the diagnosis unit 603 collates the palm hill at the base of each finger, extracted by the feature amount extraction unit 602, with the palm hill database in the diagnosis database 604, and generates a diagnosis result. In addition, the compositing unit 605 composites a virtual image in which the diagnosis result generated by the diagnosis unit 603 is overlapped with the input image of the image input unit 601, and, as illustrated in the drawings, displays and outputs the virtual image onto the display unit 509 by overlapping it with the original palm image 1600.
In addition, although it is not chirognomy in the strict sense, a method has also been used in which a talent or a character of a person is diagnosed based on the lengths of the fingers as visual information of the palm. In this case, the feature amount extraction unit 602 calculates, as a feature amount, information on the lengths of all five fingers, or of a part of the fingers which is focused on, from a palm image input from the image input unit 601, as illustrated in the drawings.
In addition, the diagnosis unit 603 collates the length of each finger extracted by the feature amount extraction unit 602 with the finger length database in the diagnosis database 604, performs a diagnosis of the talent or character of the person, and generates a diagnosis result. In addition, the compositing unit 605 composites a virtual image in which the diagnosis result generated by the diagnosis unit 603 is overlapped with the input image of the image input unit 601, and, as illustrated in the drawings, displays and outputs the virtual image onto the display unit 509 by overlapping the image with the original palm image 1800.
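The finger-length feature amount itself reduces to simple geometry once fingertip and base coordinates are available (from any hand-landmark detector; the specification does not name one). The sketch below, with illustrative names, measures the lengths that the diagnosis unit would collate with the finger length database.

```python
# Hypothetical finger-length measurement from fingertip/base coordinates.
import math


def finger_length(tip: tuple,
                  base: tuple) -> float:
    return math.dist(tip, base)


def finger_features(landmarks: dict) -> dict:
    """landmarks maps e.g. 'index_tip' / 'index_base' to (x, y) pixels.
    Ratios such as index/ring length could then be collated with the
    finger length database."""
    fingers = ("thumb", "index", "middle", "ring", "little")
    return {f: finger_length(landmarks[f + "_tip"], landmarks[f + "_base"])
            for f in fingers}
```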
B-2. Face Reading
Subsequently, a case of performing face reading will be described as another example of a diagnosis or fortune telling based on physical characteristics.
The image input unit 601 inputs a face image of the user photographed using the internal camera (described above) included in the state detection unit 511, or a face image of a person in front of the user's eyes photographed using the external camera 512. Alternatively, a face image photographed in the past using the internal camera or the external camera 512 may be taken in from the storage unit 506, or a face image downloaded from an external server through the communication unit 505 may be input.
The feature amount extraction unit 602 extracts a face feature amount, such as the outline, size, or hue of a portion which is a diagnosis target, from the input face image, and outputs the face feature amount to the diagnosis unit 603. In addition, the diagnosis unit 603 performs face reading by collating the face feature amount extracted by the feature amount extraction unit 602 with the physiognomy database in the diagnosis database 604.
When performing face reading, the shape of the whole face, and the shapes and sizes of the parts of the face, such as the eyes, nose, ears, mouth, eyebrows, chin, and the like, are generally referred to.
For example, in the face reading using a nose, a size of the whole nose, the height of the nose, color of the nose, a shape of a tip of the nose, nostrils and a vertical groove thereof, and the like, are used.
The feature amount extraction unit 602 extracts face feature amounts, such as the size of the whole nose, the height of the nose, the color of the nose, the shape of the tip of the nose, and the nostrils and the vertical groove thereof, from the face image input to the image input unit 601. The compositing unit 605 displays an image 2001 (denoted by a dotted line in the figure) of a virtual nose which has a standard size, height, shape, and the like, by overlapping the image with the input face image 2000, as illustrated in the drawings, so that the difference from the standard can be observed.
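The following is only a sketch of that comparison display: a "standard" nose outline, scaled to the detected face bounding box, is drawn as a dotted shape over the face image. The standard proportions and the face-box source are assumptions for illustration.

```python
# Draw a dotted "standard nose" outline (cf. image 2001) over a face image
# (cf. image 2000); the proportions below are placeholders, not standards.
import cv2
import numpy as np

STD_NOSE_W, STD_NOSE_H = 0.20, 0.30  # assumed fractions of the face box


def draw_standard_nose(face_bgr: np.ndarray,
                       face_box: tuple) -> np.ndarray:
    x, y, w, h = face_box                    # detected face bounding box
    cx, cy = x + w // 2, y + int(h * 0.55)   # rough nose centre
    axes = (int(w * STD_NOSE_W / 2), int(h * STD_NOSE_H / 2))
    out = face_bgr.copy()
    # approximate a dotted outline with short elliptical arcs
    for start in range(0, 360, 30):
        cv2.ellipse(out, (cx, cy), axes, 0, start, start + 15,
                    (0, 255, 0), 1)
    return out
```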
In addition, the diagnosis unit 603 collates each feature amount of the face, such as the size of the whole nose, the height of the nose, the color of the nose, the shape of the tip of the nose, and the nostrils and the vertical groove thereof, with the physiognomy database in the diagnosis database 604, and diagnoses fortune, such as fortune in money or family fortune, or a character trait such as the power of action. In addition, the compositing unit 605 composites a virtual image in which the diagnosis result generated by the diagnosis unit 603 is overlapped with the input image of the image input unit 601, and, as illustrated in the drawings, displays and outputs the result onto the display unit 509 by overlapping the result with the face image 2100.
B-3. Fair Skin Diagnosis
In addition, it is possible to perform a skin diagnosis in addition to the physiognomy. For example, the following technologies have been known: a technology in which a texture analysis, a blemish analysis, an analysis of skin color, a sebum analysis, or the like, is performed using a measurement of a partial image of a face; a technology in which a wrinkle/pore analysis, a blemish analysis, a porphyrin analysis, or the like, is performed using a measurement of an image of the whole face (for example, refer to Japanese Unexamined Patent Application Publication No. 2007-130104); and a technology in which a highly precise skin analysis, such as of moisture retention, is performed using a near-infrared camera (for example, refer to Japanese Unexamined Patent Application Publication No. 2011-206513).
Accordingly, the feature amount extraction unit 602 performs the texture analysis, the blemish analysis, the analysis of skin color, the sebum analysis, or the like, using a measurement of a partial image of the user's own face photographed using the internal camera, or of the face of another person photographed using the external camera 512, or performs the wrinkle/pore analysis, the blemish analysis, the porphyrin analysis, or the like, using a measurement of an image of the whole face, and outputs the analysis result to the diagnosis unit 603. Alternatively, the feature amount extraction unit 602 performs a skin analysis using the near-infrared region of a face image photographed using the internal camera or the external camera 512, and outputs the analysis result to the diagnosis unit 603. The diagnosis unit 603 can specify a portion of particularly fair skin in the face, or, on the contrary, a portion at which the skin is seriously damaged, by collating the face feature amount extracted by the feature amount extraction unit 602 with the fair skin database in the diagnosis database 604 and performing a fair skin diagnosis. In addition, the compositing unit 605 composites a virtual image in which the diagnosis result generated by the diagnosis unit 603 is overlapped with the input image of the image input unit 601, and, as illustrated in the drawings, displays and outputs the result onto the display unit 509 by overlapping the result with the face image 2200.
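As a crude, purely illustrative stand-in for those dedicated analyses, the sketch below scores local brightness variance per patch; its only purpose is to show how a skin-condition measure can be localized so that the compositing unit 605 can highlight specific regions of the face.

```python
# Local-variance "texture map" as a toy skin-roughness score; the real
# analyses cited above are far more elaborate.
import cv2
import numpy as np


def texture_map(face_bgr: np.ndarray, patch: int = 16) -> np.ndarray:
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    mean = cv2.blur(gray, (patch, patch))
    sq_mean = cv2.blur(gray * gray, (patch, patch))
    var = np.clip(sq_mean - mean * mean, 0, None)   # local variance
    return var   # high-variance patches = rough-texture candidates
```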
B-4. Display of Acupuncture Point
Since there are a plurality of face parts, such as the eyes, eyebrows, nose, mouth, and the like, on a face, it is relatively easy to specify a location on the face by using a face part as a target. In contrast, it is difficult to specify a location on the sole of the foot, since approximately uniform skin spreads over the sole surface and there is no such target on it.
For example, acupuncture points for the whole body, such as those for the digestive system, the bronchial system, the brain, and the joints, are gathered on the sole of the foot, and it is possible to promote recovery by performing massage or a finger-pressure treatment on the acupuncture point corresponding to a portion whose function is degraded.
However, since there are so many acupuncture points on the sole of the foot, it is difficult for a general user, unlike an expert, to remember the acupuncture points and the correlation between each acupuncture point and the portion for which it is effective. In addition, although foot acupuncture point tables exist, it is difficult to locate an acupuncture point from such a table exactly on a real sole, since there are individual differences in the shape and aspect ratio of the sole of the foot. As a matter of course, it is also difficult to remember the acupuncture points at other locations such as the back, not only those on the sole of the foot.
Therefore, according to the embodiment, when an image in the gaze direction of a user who is viewing the sole of the foot is taken in from the image input unit 601 through the external camera 512, and the diagnosis unit 603 obtains the acupuncture point corresponding to a portion whose function is desired to be improved, the diagnosis unit displays and outputs the diagnosis result onto the display unit 509 so as to be overlapped with the input image. Accordingly, the user can accurately locate the acupuncture point and accurately perform massage or a finger-pressure treatment, even without remembering, or referring to, an acupuncture point table.
Here, the user inputs an instruction designating the portion (for example, the stomach) whose corresponding foot acupuncture point is being looked for, through a sound input or an operation on the input operation unit 502, for example.
The feature amount extraction unit 602 extracts the outline of the sole of the foot 2301 from the input image 2300 as a feature amount, and outputs the outline to the diagnosis unit 603. There are individual differences in the size, shape, and aspect ratio of the sole of the foot 2301 extracted by the feature amount extraction unit 602.
On the other hand, a table of the sole of the foot (not shown) which denotes the position of each foot acupuncture point on a sole of normalized shape is stored in the diagnosis database 604.
When the foot acupuncture point 2401 corresponding to the portion designated by the user (for example, the stomach) is found by referring to the foot acupuncture point table 2400 in the diagnosis database 604, the diagnosis unit 603 performs a projection conversion of its location onto the outline of the sole of the foot 2402 (in the field of vision of the user) which is extracted by the feature amount extraction unit 602 (refer to the drawings). This projection conversion absorbs the individual differences in the size, shape, and aspect ratio of the sole.
In addition, the compositing unit 605 displays and outputs the foot acupuncture point 2501, which is obtained as the diagnosis result, onto the display unit 509 by overlapping the point with the image 2500 of the original sole of the foot (refer to the drawings).
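The projection conversion can be as simple as mapping normalized table coordinates onto the bounding box of the extracted sole outline. The following sketch assumes the acupuncture point is stored in normalized [0, 1] coordinates of the foot table 2400 and that an axis-aligned scaling suffices; both are illustrative simplifications.

```python
# Map a normalized foot-table coordinate onto the user's own sole outline,
# absorbing individual differences in size and aspect ratio.
import cv2
import numpy as np


def project_point(norm_xy: tuple,
                  sole_contour: np.ndarray) -> tuple:
    u, v = norm_xy                               # point in table 2400, in [0, 1]
    x, y, w, h = cv2.boundingRect(sole_contour)  # extracted outline 2402
    return (x + int(u * w), y + int(v * h))


def draw_point(image_bgr: np.ndarray, pt: tuple) -> np.ndarray:
    out = image_bgr.copy()
    cv2.circle(out, pt, 8, (0, 0, 255), 2)       # highlight on display 509
    return out
```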
In addition, only the acupuncture points on the sole of the foot are described above; however, the positions of acupuncture points at other locations on the body, such as the back, can be specified and displayed in the same manner.
(1) An image display device which is used by being mounted on the head or a face includes an image display unit which displays an image; an image input unit which inputs an image; a judging unit which judges an image which is input to the image input unit, or obtains a judging result with respect to the input image; and a control unit which controls the image display unit based on the judging result of the judging unit.
(2) The image display device which is described in (1) further including a camera, in which the image input unit inputs an image which is photographed by the camera.
(3) The image display device which is described in (2), in which the camera photographs a gaze direction of a user, or at least a part of a body of the user.
(4) The image display device which is described in (1) further including a storage unit which stores an image, and in which the image input unit inputs an image which is read out from the storage unit.
(5) The image display device which is described in (1) further including a communication unit which communicates with an external device, and in which the image input unit inputs an image which is obtained from the external device through the communication unit.
(6) The image display device which is described in (1), in which the control unit displays an image denoting the judging result by overlapping the image with a corresponding location of the image which is displayed by the image display unit.
(7) The image display device which is described in (1), in which the image input unit inputs an image which is displayed by the image display unit, the judging unit judges a specified portion in the input image, and the control unit displays the judging result by overlapping the result with a location corresponding to the specified portion in the image which is displayed by the image display unit.
(8) The image display device which is described in (1), in which the image display unit displays the image in a see-through mode, the image input unit inputs a photographed image in which the gaze direction of a user is photographed using a camera, the judging unit judges a specified portion in the photographed image, and the control unit displays the judging result on the image display unit so that the judging result is overlapped with the specified portion on a field of vision of the user.
(9) The image display device which is described in (1), in which the image display unit displays the photographed image in which the gaze direction of the user is photographed using the camera, the image input unit inputs the photographed image, the judging unit judges the specified portion in the photographed image, and the control unit displays the judging result by overlapping the result with the specified portion in the photographed image.
(10) The image display device which is described in (1) further including a storage unit which stores a judging result of the judging unit, or an image which is controlled based on the judging result.
(11) The image display device which is described in any one of (6) to (9), in which the judging unit performs a diagnosis based on characteristics of a specified portion in the input image, and the control unit displays a result of the diagnosis by overlapping the result with a location corresponding to the specified portion in an image which is displayed by the image display unit.
(12) The image display device which is described in any one of (6) to (9), in which the judging unit performs palm reading with respect to palm lines on a palm which is included in the input image, and the control unit displays a result of the palm reading by overlapping the result with a location corresponding to the palm lines in an image which is displayed by the image display unit.
(13) The image display device which is described in (12) further including a feature amount extraction unit which extracts palm lines from a palm which is included in the input image, and in which the judging unit performs palm reading based on the palm lines which are extracted by the feature amount extraction unit.
(14) The image display device which is described in (13), in which the control unit displays the palm lines which are extracted by the feature amount extraction unit by overlapping the palm lines with an image which is displayed by the image display unit.
(15) The image display device which is described in (12), in which the image input unit inputs an image including left and right palms, and the judging unit performs palm reading on the left and right palms from the input image.
(16) The image display device which is described in (15) further including a feature amount extraction unit which extracts palm lines from a palm which is included in the input image, and in which the control unit displays palm lines which are extracted from one of the left and right palms by overlapping the palm lines with palm lines on the other palm, by inverting the palm lines from left to right.
(17) The image display device which is described in any one of (6) to (9), in which the judging unit diagnoses a palm hill of a base of at least one finger of a hand which is included in the input image, and the control unit displays a result of the diagnosis by overlapping the result with a location corresponding to the palm hill in an image which is displayed by the image display unit.
(18) The image display device which is described in any one of (6) to (9), in which the judging unit diagnoses a length of at least one finger of the hand which is included in the input image, and the control unit displays a result of the diagnosis by overlapping the result with a corresponding finger in an image which is displayed by the image display unit.
(19) The image display device which is described in any one of (6) to (9), in which the judging unit performs face reading with respect to a face image which is included in the input image, and the control unit displays a result of the face reading by overlapping the result with a location which becomes grounds of the face reading in an image which is displayed by the image display unit.
(20) The image display device which is described in any one of (6) to (9), in which the judging unit performs a skin diagnosis with respect to the face image which is included in the input image, and the control unit displays a result of the skin diagnosis by overlapping the result with a location which becomes grounds of the skin diagnosis in an image which is displayed by the image display unit.
(21) The image display device which is described in any one of (6) to (9), in which the judging unit specifies a position of an acupuncture point from a body of a person which is included in the input photographed image, and the control unit displays the specified position of the acupuncture point by overlapping the position with a corresponding location in an image which is displayed by the image display unit.
(22) An image display method which displays an image in an image display device which is used by being mounted on the head or a face, the method including inputting of an image; judging an image which is input in the inputting of the image, or obtaining a judging result with respect to the input image; and controlling an image which will be displayed based on the judging result of the judging.
(23) A computer program in which processing for displaying an image in a head-mounted image display device, or a face-mounted image display device is described in a computer-readable format, the program causing a computer to function as an image display unit which displays an image; an image input unit which inputs an image; a judging unit which judges an image which is input to the image input unit, or obtains a judging result with respect to the input image; and a control unit which controls an image which will be displayed based on the judging result of the judging unit.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.