Aspects of the disclosure relate to a head-mounted display which can visually present a content image indicated by content data to a user so that the user can recognize the content image.
A technique related to a head-mounted display has been proposed which can visually present a content image based on content data to a user so that the user can recognize the content image. For example, a technique has been proposed to manage the supply of power to a head-mounted display to prevent a user from forgetting to turn off the head-mounted display, and particularly, a technique to automatically turn off the head-mounted display and an image presentation device, which presents a content image for the head-mounted display, after a predetermined time period from when the user takes off the head-mounted display.
A head-mounted display is expected to be used in, for example, various product manufacturing places. Specifically, for example, in a situation where an operator assembles a product, it is expected that various operation instructions will be presented to the operator by the head-mounted display. In this example, even when the presentation of an image of operation instructions is stopped, the operator may continue to work with the head-mounted display on his head. Then, to start a subsequent operation, an image related to instructions for the subsequent operation will be presented through the head-mounted display. While the presentation of an image is stopped, only the image transmission process is stopped (i.e., the head-mounted display continues to display a blank image thereon), and power continues to be supplied to each component of the head-mounted display.
Accordingly, it is an aspect of the disclosure to provide a head-mounted display which can appropriately manage the supply of power thereto, and particularly, a head-mounted display capable of reducing power consumption.
Another aspect of the disclosure provides a head-mounted display which, while a content image obtained by playing content data is presented by an image presentation unit, detects a segmentation point in the content of the played content data, and, when a segmentation point is detected, stops playing the content data and cuts off power supplied to the image presentation unit.
According to this configuration, it may be possible to provide a head-mounted display capable of appropriately managing power supply with a novel method, and particularly, a head-mounted display capable of reducing power consumption.
According to an illustrative embodiment of the disclosure, there is provided a head-mounted display including an image presentation unit, a control unit, a power supply management unit, and a detection unit. The image presentation unit is configured to present an image to an eye of a user. The control unit is configured to play content data and control the image presentation unit to present an image obtained by playing the content data. The power supply management unit is configured to manage supply and cutoff of power to the image presentation unit. The detection unit is configured to detect a segmentation point of content indicated by the content data played by the control unit. The control unit is configured to stop playing the content data in response to the detection unit detecting the segmentation point. The power supply management unit is configured to cut off the power supplied to the image presentation unit in response to the detection unit detecting the segmentation point.
According to another illustrative embodiment of the disclosure, there is provided a power supply management method for a head-mounted display. The method includes: detecting a segmentation point of content indicated by content data played by an image presentation unit configured to visually present an image to an eye of a user; in response to detecting the segmentation point in the detecting step, stopping the playing of the content data; and in response to detecting the segmentation point in the detecting step, cutting off power supplied to the image presentation unit.
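The three claimed steps (detect a segmentation point, stop playing, cut off power) can be pictured with a minimal Python sketch. This is an illustration only, not the disclosed implementation; the class names `Presenter` and `Content` and all method names are hypothetical stand-ins for the image presentation unit, control unit, power supply management unit, and detection unit:

```python
class Presenter:
    """Hypothetical stand-in for the image presentation unit."""
    def __init__(self):
        self.powered = True   # power supply state managed by the sketch
        self.playing = False
        self.shown = []       # frames actually presented to the user's eye

    def show(self, frame):
        self.playing = True
        self.shown.append(frame)


class Content:
    """Content data whose header lists segmentation points by frame index."""
    def __init__(self, frames, segmentation_points):
        self.frames = frames
        self.segmentation_points = set(segmentation_points)


def manage_power(content, presenter):
    """Play frames; on detecting a segmentation point, stop playing
    and cut off power to the presentation unit (sketch only)."""
    for position, frame in enumerate(content.frames):
        presenter.show(frame)
        if position in content.segmentation_points:
            presenter.playing = False   # stop playing the content data
            presenter.powered = False   # cut off power to the unit
            return position             # point where playing stopped
    return None
```

In this toy model the returned position plays the role of the stop point that the embodiments below store for later resumption.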
The above and other aspects of the disclosure will become more apparent and more readily appreciated from the following description of illustrative embodiments of the disclosure taken in conjunction with the attached drawings, in which:
Illustrative embodiments of the disclosure will be described in the following. It is noted that embodiments of the disclosure are not limited to the specific configurations described below, and various changes and modifications may be made within the technical concept of the disclosure. For example, the following description is provided taking, as an example, a head-mounted display including a head-mounted display main body and a control box connected to the head-mounted display main body and configured to provide a content image obtained by playing content data to the head-mounted display main body. However, the head-mounted display may be configured such that the head-mounted display main body and the control box are structured as one body. Hereinafter, the head-mounted display main body may be referred to simply as an HMD.
Referring to
An image presentation device 114 is attached to the framework of the HMD 100 by an attachment unit 122 that is provided near the end piece 106A. When attached to the HMD 100 by the attachment unit 122, the image presentation device 114 is placed at the same level as a left eye 118 of a user wearing the HMD 100. Referring to
The content image, i.e., image beams 120a and 120b, is reflected by the half mirror 116. The reflected content image is incident upon the left eye 118. That is, the content image is presented to or projected onto the left eye 118 so that it can be viewed by the user. Accordingly, the user can recognize the content image. Referring to
Referring to
The optical scanning unit 500 includes an image beam generator 720. The image beam generator 720 reads the image signal provided by the control unit 600 on a dot clock basis, generates an intensity-modulated image beam based on the read image signal, and emits the generated image beam. A collimate optical system 761, a horizontal scanner 770, a vertical scanner 780, a relay optical system 775, and a relay optical system 790 are provided between the image beam generator 720 and the left eye 118. The collimate optical system 761 transforms a laser beam emitted from the image beam generator 720 through an optical fiber 800 into a parallel beam. The horizontal scanner 770 serves as a first optical scanner that scans the parallel beam provided by the collimate optical system 761 reciprocally in a first direction, e.g., a horizontal direction, to display an image. The vertical scanner 780 serves as a second optical scanner that scans an image beam scanned horizontally by the horizontal scanner 770 reciprocally in a second direction substantially perpendicular to the first direction, e.g., a vertical direction. The relay optical system 775 is provided between the horizontal and vertical scanners 770 and 780. The relay optical system 790 emits image beams that are scanned in the horizontal and vertical directions, i.e., image beams that are scanned two-dimensionally, toward a pupil Ea of the left eye 118.
The image beam generator 720 includes a signal processing circuit 721. The signal processing circuit 721 receives a content image signal from the control box 200 via the I/O interface 650 and the control unit 600. The signal processing circuit 721 generates various signals for synthesizing a content image based on the content image signal. For example, the signal processing circuit 721 generates and outputs blue (B), green (G), and red (R) image signals 722a, 722b, and 722c. The signal processing circuit 721 also outputs a horizontal driving signal 723 for use in the horizontal scanner 770 and a vertical driving signal 724 for use in the vertical scanner 780.
The image beam generator 720 includes a light source unit 730 and an optical synthesis unit 740. The light source unit 730 serves as an image beam output unit which emits the R, G, and B image signals 722a, 722b, and 722c output at every dot clock by the signal processing circuit 721 as image beams. The optical synthesis unit 740 synthesizes the three image beams output by the light source unit 730 into a single image beam to generate image light.
The light source unit 730 includes a B laser 734 which generates blue image light, a B laser driver 731 which drives the B laser 734, a G laser 735 which generates green image light, a G laser driver 732 which drives the G laser 735, an R laser 736 which generates red image light, and an R laser driver 733 which drives the R laser 736. For example, the B, G, and R lasers 734, 735, and 736 may be implemented as semiconductor lasers or solid lasers equipped with a harmonic wave generator. When employing semiconductor lasers as the B, G, and R lasers 734, 735, and 736, the intensity of image light can be modulated by directly modulating a driving current. When employing solid lasers as the B, G, and R lasers 734, 735, and 736, it is necessary to provide an external modulator for each of the lasers to modulate the intensity of image light.
The optical synthesis unit 740 includes collimate optical systems 741, 742, and 743, dichroic mirrors 744, 745, and 746, and a combining optical system 747. The collimate optical systems 741, 742, and 743 respectively collimate image beams incident thereupon from the light source unit 730, thereby obtaining parallel beams. The dichroic mirrors 744, 745, and 746 respectively synthesize the parallel beams provided by the collimate optical systems 741, 742, and 743. The combining optical system 747 directs a synthesized image beam obtained by synthesizing the parallel beams provided by the collimate optical systems 741, 742, and 743 into the optical fiber 800.
Laser beams emitted from the B, G, and R lasers 734, 735, and 736 are collimated by the collimate optical systems 741, 742, and 743, and incident upon the dichroic mirrors 744, 745, and 746. The laser beams incident upon the dichroic mirrors 744, 745, and 746 are selectively reflected by or transmitted through the dichroic mirrors 744, 745, and 746 according to their wavelengths.
Specifically, a B image beam emitted from the B laser 734 is collimated by the collimate optical system 741, and the collimated B image beam is incident upon the dichroic mirror 744. A G image beam emitted from the G laser 735 is incident upon the dichroic mirror 745 through the collimate optical system 742. An R image beam emitted from the R laser 736 is incident upon the dichroic mirror 746 through the collimate optical system 743.
The three primary-color image beams respectively incident upon the dichroic mirrors 744, 745, and 746 reach the combining optical system 747 by being reflected by or transmitting through the dichroic mirrors 744, 745, and 746, are combined by the combining optical system 747, and then output to the optical fiber 800.
The horizontal and vertical scanners 770 and 780 scan an image beam incident thereupon from the optical fiber 800 in the horizontal and vertical directions, respectively, such that the incident image beam can become projectable as an image.
The horizontal scanner 770 includes a resonant-type deflector 771, a horizontal scanning control circuit 772, and a horizontal scanning angle detection circuit 773. The resonant-type deflector 771 has a reflective surface for scanning an image beam in the horizontal direction. The horizontal scanning control circuit 772 serves as a driving signal generator which generates a driving signal for resonating the resonant-type deflector 771 to vibrate the reflective surface of the resonant-type deflector 771. The horizontal scanning angle detection circuit 773 detects a state of the vibration of the resonant-type deflector 771, such as the degree and frequency of the vibration of the reflective surface of the resonant-type deflector 771, based on a displacement signal output from the resonant-type deflector 771.
In this illustrative embodiment, the horizontal scanning angle detection circuit 773 outputs a signal indicating the state of the vibration of the resonant deflector 771 to the control unit 600.
The vertical scanner 780 includes a deflector 781, a vertical scanning control circuit 782, and a vertical scanning angle detection circuit 783. The deflector 781 scans an image beam in the vertical direction. The vertical scanning control circuit 782 drives the deflector 781. The vertical scanning angle detection circuit 783 detects a state of the vibration of the deflector 781, such as the degree and frequency of the vibration of the reflective surface of the deflector 781.
The horizontal scanning control circuit 772 is driven by the horizontal driving signal 723 that is output from the signal processing circuit 721, and the vertical scanning control circuit 782 is driven by the vertical driving signal 724 that is output from the signal processing circuit 721. The vertical scanning angle detection circuit 783 outputs a signal indicating the state of the vibration of the deflector 781 to the control unit 600.
The control unit 600 adjusts the horizontal and vertical driving signals 723 and 724 by controlling the operation of the signal processing circuit 721. By adjusting the horizontal and vertical driving signals 723 and 724, the control unit 600 can vary the scanning angles of the horizontal and vertical scanners 770 and 780 and thus adjust the brightness of an image to be displayed.
The control unit 600 detects variations in the scanning angles of the horizontal and vertical scanners 770 and 780 based on detection signals provided by the horizontal and vertical scanning angle detection circuits 773 and 783. The results of the detection of the variations in the scanning angles of the horizontal and vertical scanners 770 and 780 are fed back to the horizontal driving signal 723 through the signal processing circuit 721 and the horizontal scanning control circuit 772, and are also fed back to the vertical driving signal 724 through the signal processing unit 721 and the vertical scanning control circuit 782.
The relay optical system 775 relays image light between the horizontal and vertical scanners 770 and 780. For example, an image beam scanned horizontally by the resonant deflector 771 is converged on the reflective surface of the deflector 781 by the relay optical system 775. The image beam converged on the reflective surface of the deflector 781 is scanned vertically by the deflector 781, and is emitted toward the relay optical system 790 as a two-dimensionally scanned image beam.
The relay optical system 790 includes lens systems 791 and 794 having a positive refractive power. The scanned image beams emitted from the vertical scanner 780 are collimated by the lens system 791 such that the center lines of the scanned image beams become parallel to each other, and are converted into converged image beams. The converged image beams are made substantially parallel by the lens system 794 and are converted such that their center lines converge on the pupil Ea.
It is noted that in this illustrative embodiment, light emitted from the optical fiber 800 is scanned horizontally by the horizontal scanner 770 and then scanned vertically by the vertical scanner 780. However, the order of scanning may be reversed. That is, the light emitted from the optical fiber 800 may be scanned vertically by the vertical scanner 780 and then scanned horizontally by the horizontal scanner 770.
(Configuration of Control Box)
The control box 200 is attached to, for example, the waist of the user. Referring to
The memory unit 208 is implemented by a hard disk, for example. The content data 2082 stored in the memory unit 208 indicates the content of a manual for the assembly of a product, for example. The operation unit 212 includes one or more keys, and receives instructions to start and end the playing of the content data 2082. For example, the ID acquisition unit 216 is configured by a radio-frequency identification (RFID) reader which reads information from an RFID tag embedded in an IC card using electromagnetic induction or radio waves. Alternatively, the ID acquisition unit 216 may be configured to read ID information through direct contact with an IC card, or configured to read ID information from a magnetic stripe card by using a magnetic head.
The CPU 202 acquires a content image by executing, in the RAM 206, a program in the ROM 204 for playing (or rendering) the content data 2082. The CPU 202 outputs a content image signal including the content image to the HMD 100 via the I/O interface 210 by executing, in the RAM 206, a program in the ROM 204 for controlling the I/O interface 210. The CPU 202 causes the power supply controller 300 to manage the supply and cutoff of power to the HMD 100 by executing, in the RAM 206, a program in the ROM 204 for controlling the power supply controller 300 based on segmentation information related to a segmentation point of the played content. Further, the CPU 202 controls the operation of the HMD 100, based on instructions input through the operation unit 212 and based on the segmentation information, by executing, in the RAM 206, a program in the ROM 204 for controlling the HMD 100. Accordingly, by the CPU 202 executing, in the RAM 206, the programs in the ROM 204 using various data such as the content data 2082, various function units, e.g., a control unit, a power supply management unit, and a detection unit, are realized.
(Process Executed by the Control Box)
In the following, each of three processes which are performed by the CPU 202 executing the programs in the ROM 204 will be further described.
(First Process)
A first process is executed when the content data itself includes segmentation information indicating a segmentation point in the content indicated by the content data 2082. In the first process shown in
Referring to
The image presentation device 114 of the HMD 100 receives the content image signal from the control box 200 via the I/O interface 650 illustrated in
The CPU 202 determines whether the playing of the content data 2082 is completed, that is, whether the content data 2082 has been played to an end point thereof (S104). If the content data 2082 has been played to the end point (S104: Yes), the CPU 202 cuts off the power to the HMD 100 by reading out a program for controlling the power supply controller 300 from the ROM 204 and executing the program in the RAM 206. Then, this process ends. On the other hand, if the content data 2082 has not yet been played to the end point (S104: No), the CPU 202 causes this process to proceed to S106.
At S106, the CPU 202 determines whether a segmentation point in the content indicated by the content data 2082 is detected based on the segmentation information that is included in the content data 2082 as header information. Specifically, the CPU 202 determines whether the content data 2082 has been played to a segmentation point defined by the segmentation information. If no segmentation point is detected (S106: No), the CPU 202 causes the process to return to S104, and continues to play the content data 2082. On the other hand, if a segmentation point is detected (S106: Yes), the CPU 202 causes the process to proceed to S108.
At S108, the CPU 202 reads out, into the RAM 206, a program in the ROM 204 for controlling the power supply controller 300, and executes the program in the RAM 206 (S108). Based on the program, the CPU 202 cuts off the power supplied by the power supply controller 300 to the HMD 100, and more particularly, the power supplied by the power supply controller 300 to the image presentation device 114. When cutting off the power supply, the CPU 202 stops playing the content data 2082, and stores, in the RAM 206, the point where the playing of the content data 2082 is stopped. Here, the CPU 202 uses the timer 214 to measure an elapsed time from the detection of the segmentation point in the content data 2082.
Then, the CPU 202, which has measured the elapsed time with the timer 214, determines whether a time period designated by time information included in the content data 2082 has elapsed (S110). The time information is included in the content data 2082 in association with the segmentation point detected at S106, and indicates a time when the playing of the content data 2082 is to be resumed, more specifically, a time during which the playing of the content data 2082 is stopped. In other words, the time information relates to a time period from a time when the power supply to the image presentation device 114 is cut off to a time when the power supply is to be resumed. If the designated time period has not elapsed yet (S110: No), the CPU 202 waits until the designated time period elapses. On the other hand, if the designated time period has elapsed (S110: Yes), the CPU 202 executes, in the RAM 206, a program in the ROM 204 for controlling the power supply controller 300, and controls the power supply controller 300 to resume the supply of power to the image presentation device 114 (S112). Then, the CPU 202 causes the process to return to S102, and resumes the playing of the content data 2082 from the point stored in the RAM 206 at S108, i.e., the point where the playing of the content data 2082 was stopped.
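The loop of S102 through S112 might be sketched as follows. This is a hedged illustration under stated assumptions: the simulated clock stands in for the timer 214, the event log stands in for the actions of the power supply controller 300, and the function name and data shapes are hypothetical:

```python
def first_process(segments, designated_waits):
    """Sketch of S102-S112.

    segments is a list of content segments (each segment is played to the
    next segmentation point); designated_waits[i] is the designated stop
    period (in seconds) after segment i. Returns an event log.
    """
    log = []        # records of play / power_off / power_on events
    clock = 0.0     # simulated elapsed time in place of timer 214
    for i, segment in enumerate(segments):
        log.append(("play", i))              # S102: play to next point
        if i == len(segments) - 1:
            log.append(("power_off", clock))  # S104: end point reached
            break
        log.append(("power_off", clock))      # S108: cut power at point
        clock += designated_waits[i]          # S110: wait designated time
        log.append(("power_on", clock))       # S112: resume power supply
    return log
```

Running `first_process([["a"], ["b"], ["c"]], [5.0, 3.0])` yields a log that alternates playing with power-off intervals of 5.0 and 3.0 simulated seconds, ending with the final cutoff at the end point of the content.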
In the above illustrative embodiment, if a segmentation point in the content data 2082 is detected by the CPU 202 (S106: Yes), the CPU 202 cuts off the power to the image presentation device 114. However, alternatively, the CPU 202 may cut off the power to the whole HMD 100 if a segmentation point in the content data 2082 is detected by the CPU 202 (S106: Yes). That is, the power to components including the image presentation device 114 and the control box 200 may be cut off. Further, it may be determined whether to cut off the power to the image presentation device 114 or the power to the whole HMD 100 according to the remaining power of the battery 400. Specifically, in this case, the CPU 202 may cut off the power to only the image presentation device 114 if the remaining power of the battery 400 exceeds a reference level, and may cut off the power to the whole HMD 100 if the remaining power of the battery 400 does not exceed the reference level. According to this configuration, it is possible to reduce the time required for the HMD 100 to resume operation while efficiently driving the HMD 100 for a long time.
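The battery-dependent choice described above could be sketched as a one-line decision. The reference level of 0.3 (30% remaining charge) and the function and label names are assumptions for illustration, not values from the disclosure:

```python
def cutoff_scope(battery_level, reference_level=0.3):
    """Decide what to power down at a segmentation point (sketch).

    If the remaining battery power exceeds the reference level, only the
    image presentation device is cut off (faster resume); otherwise the
    whole HMD is cut off (longer battery life). The 0.3 threshold is an
    assumed example value.
    """
    if battery_level > reference_level:
        return "image_presentation_device"
    return "whole_hmd"
```

Note that a level exactly equal to the reference "does not exceed" it, so the whole HMD is cut off in that boundary case, matching the wording above.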
In the following, an example will be described where the content data 2082 is 10-minute-long moving image data of a manual for assembling a product.
Referring to (a) of
Referring to (b) of
The user performs a first operation based on the content image corresponding to the first operation segment. A designated time period corresponding to the first time information is set in advance based on an amount of time required for the user to finish the first operation. The user performs a product assembly operation according to the first operation based on information obtained by viewing the content image, within the designated time period corresponding to the first time information. At this time, the user views the outside of the HMD 100 through the half mirror 116. Specifically, the user starts assembling a product while viewing component parts of the product also with the left eye 118 through the half mirror 116.
If the timer 214 measures that the designated time period corresponding to the first segmentation information has elapsed (refer to S110: Yes of
If the timer 214 measures that the designated time period corresponding to the second segmentation information has elapsed (refer to S110: Yes of
In the above, the CPU 202 resumes playing the content data 2082 after the elapse of the designated time period. However, for example, the playing of the content data 2082 may be resumed by an instruction input to the CPU 202 via the operation unit 212 (refer to S314 of
(Advantageous Effects of First Process)
According to the first process, the following advantageous effects would be obtained.
(1) In the first process, a segmentation point of the content is detected based on segmentation information included in the content data 2082 as header information (refer to S106 of
(2) In the first process, the supply of power to the image presentation device 114 is resumed after the preset designated time period from the detection of a segmentation point in the content (S112 of
(Second Process)
In the first process, the content data 2082 includes, as header information, segmentation information indicating a segmentation point in the content and time information corresponding to the segmentation information. In a second process, the segmentation information and the time information are included in a segmentation table that is separate from the content data 2082. A content data body, which together with header information constitutes the content data 2082, includes time stamp information. The time stamp information indicates a position (elapsed time) from the beginning of the content data body. During the playing of the content data 2082, the currently played position can be specified by the time stamp information. The second process differs from the first process only in this aspect. Thus, the second process will be described while focusing mainly on the differences from the first process, and overlapping details will be omitted. The segmentation table is stored in a region that can be accessed by the control box 200; in the following, the segmentation table is stored in the memory unit 208, for example.
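One way to picture the second process's separate segmentation table and its lookup against the time stamp of the currently played position. The field names, the example values, and the matching tolerance are all assumptions for illustration:

```python
# Hypothetical segmentation table kept apart from the content data body.
# Each entry pairs a segmentation time stamp (seconds from the beginning
# of the content data body) with a designated stop duration.
SEGMENTATION_TABLE = [
    {"segmentation_timestamp": 240.0, "stop_duration": 300.0},  # first point
    {"segmentation_timestamp": 480.0, "stop_duration": 180.0},  # second point
]


def segmentation_point_reached(current_timestamp, table, tolerance=0.04):
    """Return the matching table entry when the currently played time
    stamp reaches a registered segmentation point, else None (sketch;
    the tolerance absorbs frame-granularity differences)."""
    for entry in table:
        if abs(current_timestamp - entry["segmentation_timestamp"]) <= tolerance:
            return entry
    return None
```

During playing, the control box would call such a lookup with the time stamp of the currently rendered position; a non-None result corresponds to the positive determination at S206.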
Referring to
At S202, the CPU 202 reads out the first segmentation information, the first time information, the second segmentation information, and the second time information into the RAM 206, and performs S204 to S214. Herein, S204 to S214 correspond to S102 to S112 of
(Advantageous Effects of Second Process)
According to the second process, the following advantageous effects would be obtained in addition to those obtained in the first process.
(1) In the second process, the segmentation information and the time information are included in the segmentation table (
(Third Process)
A third process is the same as the second process in that a segmentation table is used. However, the third process differs from the second process in that a user who wears the HMD 100 is identified, in that time information is set differently for each user, and in the manner in which time information is registered in the segmentation table. Thus, the third process will be described while focusing mainly on the differences from the second process.
Firstly, examples of the segmentation table that can be used in the third process are described with reference to
Similarly to user A, a segmentation table shown in
The process flow of the third process executed by the CPU 202 will be described with reference to
At S302, the CPU 202 reads out a segmentation table corresponding to the read out identification information and basic time information from the memory unit 208. For example, if the identification information ‘CCC’ is acquired at S300, the CPU 202 reads out the segmentation table shown in
At S312, the CPU 202 calculates a timing to resume supplying power to the image presentation device 114. The specific calculation method is already described in the above with reference to
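The calculation at S312 can be sketched as scaling the basic time by the user's registered ratio. The units and the 60-minute basic time in the usage note are assumptions (the 60% / 36-minute example for user C below implies a 60-minute basic time, but the disclosure does not state it explicitly):

```python
def resume_delay(basic_time_minutes, user_ratio_percent):
    """S312 sketch: the designated stop period for a given user, computed
    as the basic time period scaled by that user's registered time
    information (a percentage ratio). Names and units are assumed."""
    return basic_time_minutes * user_ratio_percent / 100.0
```

For instance, with an assumed basic time of 60 minutes and a registered ratio of 60%, power to the image presentation device 114 would be resumed 36 minutes after the segmentation point is detected.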
If an instruction to resume playing the content data 2082 is input (S314: Yes), the CPU 202 causes the process to proceed to S318. On the other hand, if an instruction to resume playing is not input (S314: No), the CPU 202 determines whether a time period calculated based on the basic time information and the first time information or the second time information at S312 has elapsed (S316). Here, operation S316 corresponds to S212 of
At S318, the CPU 202 updates (or overwrites) the first or second time information with a ratio, with respect to the time period indicated by the basic time information, of the actual time period from a time when a segmentation point is detected to a time when it is determined to resume playing. Specifically, in a case where it is instructed to resume playing at S314 (S314: Yes), the CPU 202 updates the time information with new time information, which is the ratio of the time period from a time when a segmentation point is detected to a time when an instruction to resume playing is input, with respect to the time period indicated by the basic time information. For example, in a case where user C finishes a product assembly operation according to the first operation in 36 minutes and then inputs an instruction to resume playing at that time, the CPU 202 updates the first time information registered in the segmentation table for user C to 60%. It is noted that if the calculated time period has elapsed (S316: Yes), the CPU 202 updates the time information with the same value as that currently registered, or may not update or overwrite the time information.
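The update at S318 is the inverse of the S312 calculation: the actually elapsed stop period is re-expressed as a ratio of the basic time. A minimal sketch, with assumed names and units (the user C example matches under the inferred assumption of a 60-minute basic time):

```python
def updated_time_ratio(elapsed_minutes, basic_time_minutes):
    """S318 sketch: the new per-user time information, expressed as the
    percentage ratio of the actually elapsed stop period (detection of
    the segmentation point to the resume instruction) to the basic time."""
    return round(100.0 * elapsed_minutes / basic_time_minutes, 1)
```

With an assumed 60-minute basic time, a user who resumes after 36 minutes would have the registered time information overwritten with 60%, so that the next power-off interval for that user is shortened accordingly.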
After S318, the CPU 202 causes the process to proceed to S320. S320 corresponds to S214 of
(Advantageous Effects of Third Process)
According to the third process, the following advantageous effects would be obtained in addition to those obtained in the first and second processes.
(1) In the third process, the segmentation tables are respectively provided for a plurality of users of the HMD 100, and stored in the memory unit 208 together with basic time information. The CPU 202 acquires identification information of a current user at the time of input of an instruction to play the content (S300 of
(2) In the third process, an instruction to resume playing the content is allowed to be input within the designated time period, and the CPU 202 determines whether an instruction to resume playing is input via the operation unit 212 of the control box 200 (S314 of
In the first, second, and third processes, measurement of the elapsed time, which is the basis for determining when to resume supplying power to the image presentation device 114, is started in response to the detection of segmentation information. However, the measurement may instead be started in response to the stopping of the playing process on the content data 2082 or the stopping of the supply of power to the image presentation device 114.
In the first, second, and third processes, the detection of a segmentation point in the content (for example, S106 in the first process) is performed based on the segmentation information included in the content data 2082 as header information, or based on the segmentation information registered in a segmentation table and the time stamp information included in the content data body of the content data 2082. However, the disclosure is not limited thereto. A modified illustrative embodiment of the detection of a segmentation point will be described with reference to
Referring to
Herein, generation of a content image having a single color may be detected as a segmentation point. Specifically, in a case where a content image is expressed by R (red), G (green), and B (blue), if a content image having a value "R=0, G=0 and B=225" is generated by rendering in the playing process, the determination of S106 may be made positive based on the generation of the content image (S106: Yes). Alternatively, a predetermined image may be stored in the memory unit 208, and if a content image identical to the predetermined image is generated by rendering in the playing process, the determination of S106 may be made positive based on the generation of the content image (S106: Yes). According to this configuration, it is not necessary to set segmentation information for the content data 2082.
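This single-color detection might be sketched as a check over the rendered frame's pixels. The pixel representation (a flat list of RGB tuples) and the function name are assumptions; the marker color follows the "R=0, G=0 and B=225" example given in the text:

```python
def is_segmentation_frame(frame_pixels, marker=(0, 0, 225)):
    """Sketch of the modified detection: treat a rendered frame whose
    pixels are all one predetermined color as a segmentation point.
    frame_pixels is assumed to be a flat sequence of (R, G, B) tuples."""
    return all(pixel == marker for pixel in frame_pixels)
```

Any frame containing even one pixel of another color is treated as ordinary content, so no segmentation information needs to be authored into the content data itself; the marker frame embedded in the content serves that role.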
Number | Date | Country | Kind
---|---|---|---
2008-313795 | Dec 2008 | JP | national
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2009/006589 | Dec 2009 | US
Child | 13150792 | | US