The present invention relates to an information processing apparatus, an information processing method, and a program.
A tracker-type naked-eye stereoscopic display system is known that dynamically controls a viewing area by tracking the positions of the viewer's eyes. The viewing area is the spatial region in which a 3D video can be viewed stereoscopically.
The positions of the viewer's eyes are calculated by fitting measurement information of the viewer's face to a standard face model, so the calculation result includes an error that depends on individual differences among viewers. Patent Literature 1 proposes a technique in which the viewer adjusts the viewing area while observing a test pattern. The test pattern uses an image, such as a circle or a double circle, that indicates whether the viewer is at the center of the viewing area. However, since the test pattern changes only discontinuously with the viewing position, it is difficult to recognize at which position in the viewing area the viewer is located.
Therefore, the present disclosure proposes an information processing apparatus, an information processing method, and a program capable of recognizing in detail at which position in a viewing area a viewer is located.
According to the present disclosure, an information processing apparatus is provided that comprises a test pattern generation unit configured to generate a test pattern in which appearance of an image element continuously changes depending on a viewing position, the image element indicating a positional relationship between a viewer and a viewing area. According to the present disclosure, an information processing method in which an information process of the information processing apparatus is executed by a computer, and a program for causing the computer to execute the information process of the information processing apparatus, are provided.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. In the following embodiments, the same parts will be denoted by the same reference numerals, and redundant description will be omitted.
It is noted that a description will be given in the following order.
The display system 1 is a naked-eye stereoscopic display system that displays a 3D video in accordance with a viewing position. As illustrated in
The sensor unit 20 includes one or more sensors for sensing the outside world, for example, a visible light camera, a distance measurement sensor, and a line-of-sight detection sensor. The visible light camera captures a visible light image of the outside world. The distance measurement sensor detects the distance to a real object in the outside world using the time of flight of laser light or the like. The line-of-sight detection sensor detects the line of sight of the viewer AU directed toward the video display apparatus 40 using a known eye tracking technique.
The input apparatus 30 includes a plurality of input devices capable of inputting various types of information by an input operation of the viewer AU. Examples of the plurality of input devices include a touch panel, a keyboard, a mouse, a microphone, and a remote controller.
The video display apparatus 40 presents various types of information such as video information and audio information to the viewer AU. The video display apparatus 40 includes a display panel DP and an image separation unit LES. The display panel DP displays a multiplexed image obtained by multiplexing a plurality of viewpoint images corresponding to the right eye EP-R and the left eye EP-L of the viewer AU. The image separation unit LES spatially separates the plurality of viewpoint images displayed on the display panel DP, and supplies the viewpoint images to the right eye EP-R and the left eye EP-L of the viewer AU. The video display apparatus 40 presents the 3D video to the viewer AU by guiding a plurality of spatially separated viewpoint images to the right eye EP-R and the left eye EP-L of the viewer AU.
As the display panel DP, a known 2D display such as a liquid crystal display (LCD) or an organic light-emitting diode (OLED) is used.
Known image separation means, such as a parallax barrier or a lenticular lens, can be used as the image separation unit LES. In the example of
The image separation unit LES has a function of selecting a pixel PX visible from the viewer AU. For example, a plurality of viewpoints VP (refer to
In the example of
The information processing apparatus 10 processes various types of information and integrally controls the overall display system 1. For example, the information processing apparatus 10 tracks the viewer AU based on sensor information acquired from the sensor unit 20. The information processing apparatus 10 sets a viewing area VA (refer to
The information processing apparatus 10 includes a sensor information acquisition unit 11, a viewer detection unit 12, a coordinate calculation unit 13, a viewpoint image generation unit 14, a viewpoint index calculation unit 15, and an image multiplexing unit 16.
The sensor information acquisition unit 11 acquires sensor information detected by the sensor unit 20. For example, the sensor information acquisition unit 11 sequentially acquires a visible light image CI (refer to
The viewer detection unit 12 detects the viewer AU based on the sensor information. For example, the viewer detection unit 12 extracts a face part FP of the viewer AU from the visible light image CI sequentially acquired from the sensor information acquisition unit 11 using a known object recognition technique. The viewer detection unit 12 extracts, from the visible light image CI, coordinate information of the face part FP in a camera coordinate system.
The camera coordinate system is a coordinate system having an optical center CP of the visible light camera as an origin and an optical axis direction as a z direction. The visible light image CI is generated as an image obtained by projecting the face of the viewer AU on a plane (plane of z=f) separated from the optical center CP by a focal length f. The viewer detection unit 12 detects the position of the viewer AU in the camera coordinate system based on the coordinate information of the face part FP. The position of the viewer AU indicates, for example, the position of a contour or a feature point extracted from the face part FP. The viewer detection unit 12 supplies position information indicating the position of the viewer AU to the coordinate calculation unit 13.
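As an illustration of this projection, the following sketch maps a 3D point in the camera coordinate system onto the image plane z = f with a standard pinhole model; the coordinate values and focal length are hypothetical.

```python
import numpy as np

def project_to_image_plane(point_cam: np.ndarray, f: float) -> np.ndarray:
    """Project a 3D point (camera coordinates) onto the plane z = f.

    point_cam: (X, Y, Z) with Z > 0, in the camera coordinate system
    f: focal length of the visible light camera
    Returns the 2D image coordinates (x, y) on the plane z = f.
    """
    X, Y, Z = point_cam
    return np.array([f * X / Z, f * Y / Z])

# Example: a face feature point 600 mm in front of the camera (hypothetical values)
print(project_to_image_plane(np.array([30.0, -20.0, 600.0]), f=4.0))
```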
For example, a detection state of the viewer AU includes a tracking state and a search state. The tracking state is a state in which the most recent attempt to detect the face part FP succeeded. The search state is a state in which the face part FP should be detected but cannot be found.
When the face part FP is successfully detected, the detection state transitions to the tracking state. The viewer detection unit 12 tracks the face part FP in the tracking state. The tracking may be performed by extracting the face part FP for each visible light image CI, or may be performed using a known object tracking technique such as optical flow estimation. A post filter may be applied to the tracked face part FP for time stabilization.
When the face part FP moves out of the tracking range, is not found, or is determined to be lost based on the size or direction of the face or the like, the detection state shifts to the search state. In the search state, the viewer detection unit 12 repeats the search until the face part FP is detected.
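The two detection states and their transitions can be sketched as a small state machine; the lost-determination criteria (tracking range, face size, face direction) are condensed into a single flag here for illustration.

```python
from enum import Enum, auto

class DetectionState(Enum):
    SEARCH = auto()    # the face part FP should be detected but has not been found
    TRACKING = auto()  # the most recent detection attempt succeeded

def next_state(state: DetectionState, detected: bool, lost: bool = False) -> DetectionState:
    """Transition rule: detection success -> TRACKING; out of range / not found / lost -> SEARCH."""
    if detected and not lost:
        return DetectionState.TRACKING
    return DetectionState.SEARCH

# In SEARCH, detection is retried every frame until the face part FP is found
state = DetectionState.SEARCH
for detected in [False, True, True, False]:
    state = next_state(state, detected)
    print(state)
```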
When a valid face part FP is extracted, the viewer detection unit 12 associates the face part FP with a face model SFM, and estimates the posture and position of the face model SFM that best matches the face part FP (posture-position estimation). A 3D face part model of a standard human is used as the face model SFM. The viewer detection unit 12 calculates the position of the viewer AU in the camera coordinate system based on the estimated posture and position of the face model SFM. The posture-position estimation may use the coordinate information of all the face parts, or of only some of the face parts.
The coordinate calculation unit 13 calculates the viewing position EP based on the position information of the viewer AU acquired from the viewer detection unit 12. The viewing position EP is calculated as three-dimensional coordinates of the eyes of the viewer AU in a panel coordinate system. The panel coordinate system is a world coordinate system of the real space in which the display panel DP is installed. For example, the coordinate calculation unit 13 performs coordinate conversion from the camera coordinate system to the panel coordinate system using a known perspective projection model. The coordinate calculation unit 13 calculates the position of the eye of the viewer AU in the panel coordinate system based on the coordinate conversion.
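The document treats this conversion as a known technique; the sketch below illustrates it as a rigid transform with hypothetical extrinsic parameters R and t (the camera's pose relative to the panel).

```python
import numpy as np

def camera_to_panel(p_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Convert a point from camera coordinates to panel (world) coordinates.

    R: 3x3 rotation of the camera frame relative to the panel frame
    t: 3-vector position of the camera origin in the panel frame
    """
    return R @ p_cam + t

# Hypothetical extrinsics: camera mounted 200 mm above the panel origin, axes aligned
R = np.eye(3)
t = np.array([0.0, 200.0, 0.0])
eye_cam = np.array([10.0, -5.0, 600.0])   # eye position in camera coordinates
print(camera_to_panel(eye_cam, R, t))     # viewing position EP in panel coordinates
```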
The viewpoint image generation unit 14 generates a plurality of viewpoint images corresponding to the respective viewpoints VP based on content information 51. The content information 51 includes information on a video content and an audio content provided by the video display apparatus 40.
For example, the viewpoint image generation unit 14 sets the viewpoint VP of a virtual camera at the position of the eyes of the viewer AU, and localizes a virtual object presented as a 3D video in the display space. The eye position of the viewer AU is calculated based on the coordinate information acquired from the coordinate calculation unit 13. To prevent the coordinate values from oscillating over time, the viewpoint image generation unit 14 processes the coordinate information acquired from the coordinate calculation unit 13 with a digital filter (time stabilization filter) as necessary.
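The embodiment does not name a specific digital filter; an exponential moving average is one plausible time stabilization filter, sketched below with a hypothetical smoothing factor.

```python
import numpy as np

class TimeStabilizationFilter:
    """Exponential moving average over eye coordinates (one plausible time stabilizer)."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha      # 0 < alpha <= 1; smaller = smoother but more lag
        self._state = None

    def update(self, coords) -> np.ndarray:
        coords = np.asarray(coords, dtype=float)
        if self._state is None:
            self._state = coords
        else:
            self._state = self.alpha * coords + (1.0 - self.alpha) * self._state
        return self._state

filt = TimeStabilizationFilter(alpha=0.3)
for frame in [[0.0, 0.0, 600.0], [2.0, 0.5, 601.0], [1.5, -0.5, 599.0]]:
    print(filt.update(frame))   # stabilized eye coordinates per frame
```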
The viewpoint image generation unit 14 generates the viewpoint image in conjunction with the viewing position so that motion parallax is imparted according to the movement of the viewer AU. When the viewpoint image is generated in conjunction with the viewing position, an error in the viewing position distorts the perspective, so correction of the viewing area VA using a test pattern TP, described later, is effective. When motion parallax is not imparted, when the display is fixed to a specific viewing position, or when the image data of the viewpoint images is included in the content information 51, the viewpoint VP can be fixed or generation of the viewpoint images can be omitted.
The viewpoint image generation unit 14 can perform trapezoid correction on the viewpoint image as necessary. Trapezoid correction corrects the perspective in a case where the viewer AU and the display panel DP do not directly face each other, for example, in a case where the display panel DP is inclined obliquely.
The viewpoint index calculation unit 15 assigns a viewpoint index ID to each pixel PX of the display panel DP based on the viewing position EP acquired from the coordinate calculation unit 13. The viewpoint index ID is an index for identifying a viewpoint image. In the example of
The image multiplexing unit 16 assigns the viewpoint image of the viewpoint VP corresponding to the viewpoint index ID to each pixel PX of the display panel DP. As a result, the image multiplexing unit 16 generates a multiplexed image in which a plurality of viewpoint images are multiplexed. For example, in a case where 24 viewpoints VP are set, a right eye image is assigned to the pixel PX having the viewpoint indexes from 1 to 12, and a left eye image is assigned to the pixel PX having the viewpoint indexes from 13 to 24. The multiplexed image is output to the display panel DP.
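A sketch of this multiplexing step: each display pixel copies its value from the viewpoint image selected by its viewpoint index. The index map, which in the embodiment is derived from the viewing position and the lens geometry, is assumed given here, and the image sizes are hypothetical.

```python
import numpy as np

def multiplex(viewpoint_images: dict, index_map: np.ndarray) -> np.ndarray:
    """Build the multiplexed image: pixel (i, j) copies from viewpoint image index_map[i, j]."""
    h, w = index_map.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for vid, img in viewpoint_images.items():
        mask = index_map == vid
        out[mask] = img[mask]
    return out

# Toy example with 24 viewpoints: indices 1-12 get the right eye image, 13-24 the left
h, w = 4, 6
index_map = (np.arange(h * w).reshape(h, w) % 24) + 1
right = np.full((h, w, 3), 255, np.uint8)
left = np.zeros((h, w, 3), np.uint8)
images = {vid: (right if vid <= 12 else left) for vid in range(1, 25)}
print(multiplex(images, index_map)[..., 0])
```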
As described above, the 3D face part model of a standard human is used as the face model SFM because it is difficult to generate a face model for each viewer AU. Since the standard model substitutes for the viewer's actual face, an error occurs in the estimated position and posture of the face due to the difference in size between the actual face and the face model SFM, and this error propagates to the calculation result of the viewing position EP. The coordinates of the face part FP may also include an error, which likewise affects the calculation result of the viewing position EP.
When the error is included in the viewing position EP, the viewing area VA is not correctly set. When a positional relationship between the viewer AU and the viewing area VA deviates, crosstalk may be conspicuous and stereoscopic vision may be difficult. Therefore, in the present disclosure, the viewing position EP is corrected based on the measurement result using the test pattern TP.
The test pattern TP includes an image element indicating a positional relationship between the viewer AU and the viewing area VA. The appearance of the image element continuously changes according to the viewing position EP. In the example of
For example, the test pattern TP is generated by multiplexing a plurality of plane images PI. Information on the plane image PI is defined in test pattern information 52. The plane image PI is generated and multiplexed by the viewpoint image generation unit 14 and the image multiplexing unit 16. The viewpoint image generation unit 14 and the image multiplexing unit 16 function as a test pattern generation unit 17 that generates the test pattern TP.
The viewpoint image generation unit 14 generates the plane image PI for each of the plurality of viewpoints VP set in the viewing area VA. The plane image PI indicates the relative position of the viewpoint VP with respect to the center of the viewing area VA. The image multiplexing unit 16 multiplexes the plurality of generated plane images PI to generate the test pattern TP. The plane image PI is generated, for example, as an image in which a plurality of sub-plane images SPI, in which the rotation directions and rotation amounts of the rod-shaped portions RD are equal, are two-dimensionally arranged.
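As a rough sketch of this generation step, the following code assigns each viewpoint a needle rotation proportional to its deviation from the center of the viewing area and tiles identical sub-plane images into one plane image PI. The angle gain and image sizes are hypothetical values, not taken from the embodiment.

```python
import numpy as np

def needle_angle(viewpoint: int, center_viewpoint: int, gain_deg: float = 5.0) -> float:
    """Rotation of the rod-shaped portion RD for one viewpoint (degrees).

    Counterclockwise is positive. Viewpoints on the right side of the viewing
    area (indices below the center) get counterclockwise rotation, as described
    in the text; the gain of 5 degrees per viewpoint is a hypothetical value.
    """
    return gain_deg * (center_viewpoint - viewpoint)

def make_plane_image(sub_plane_image: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Tile identical sub-plane images SPI (equal rotation) into one plane image PI."""
    return np.tile(sub_plane_image, (rows, cols, 1))

for vp in (5, 7, 9):  # right of center, center, left of center (center viewpoint 7)
    print(vp, needle_angle(vp, center_viewpoint=7))

spi = np.zeros((16, 16, 3), dtype=np.uint8)   # placeholder sub-plane image
print(make_plane_image(spi, rows=4, cols=8).shape)
```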
The sub-plane image SPI of the present disclosure is generated as an image imitating a magnetic compass needle. For example, the sub-plane image SPI includes a fixed needle FX, a rod-shaped portion RD, a circular base BS, and a plain portion PL. The fixed needle FX is a needle whose position and direction are fixed. The rod-shaped portion RD includes a rotating needle ND that rotates about the center of the base BS as a rotation center CE. The plain portion PL forms the background of the base BS.
The base BS and the plain portion PL form background portions for the fixed needle FX and the rod-shaped portion RD. The fixed needle FX and the rod-shaped portion RD have a higher visual attraction degree than the background portions. The visual attraction degree indicates how strongly an image region attracts the viewer's gaze, and is adjusted by brightness, chroma, hue, transparency, and the like. For example, an image region having higher brightness and chroma has a higher visual attraction degree, a warm color has a higher visual attraction degree than a cool color, and lower transparency gives a higher visual attraction degree. The visual attraction degree increases in the order of the plain portion PL, the base BS, and the rotating needle ND. Since the fixed needle FX is generated as an image having the same brightness, chroma, hue, and transparency as the rotating needle ND, the fixed needle FX has the same visual attraction degree as the rotating needle ND.
For example, one end portion of the fixed needle FX is displayed in red, and the other end portion is displayed in blue. The rotating needle ND includes a red portion and a blue portion facing each other across the rotation center CE. When observed from the center of the viewing area VA, the rotating needle ND matches the fixed needle FX in both color and direction. The observed direction of the rotating needle ND varies depending on the viewing position EP, and its movement resembles that of a compass needle following a line of magnetic force. In the processing of adjusting the viewing area VA using the test pattern TP, the viewer AU adjusts the parameter values so that the colors and directions of the rotating needle ND and the fixed needle FX match each other.
The viewpoint image generation unit 14 sets, as a reference distance, the inter-viewpoint distance closest to the inter-pupillary distance, and causes the viewpoint images of two viewpoints VP separated by the reference distance to match each other. The inter-pupillary distance is the distance from the center of the iris of the right eye EP-R to the center of the iris of the left eye EP-L. The inter-pupillary distance of a typical adult male is 64 mm, and that of an adult female is 62 mm; for example, the viewpoint image generation unit 14 uses their midpoint, 63 mm, as the standard inter-pupillary distance.
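A sketch of this reference distance selection: among integer multiples of the viewpoint pitch, pick the one closest to the standard inter-pupillary distance of 63 mm. The pitch value in the example is hypothetical.

```python
def reference_distance(viewpoint_pitch_mm: float, ipd_mm: float = 63.0):
    """Return (k, k * pitch): the inter-viewpoint distance closest to the IPD."""
    k = max(1, round(ipd_mm / viewpoint_pitch_mm))
    return k, k * viewpoint_pitch_mm

# Hypothetical pitch of 5.5 mm between adjacent viewpoints
k, dist = reference_distance(5.5)
print(k, dist)   # viewpoint images k viewpoints apart are made identical (no parallax)
```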
In the example of
For example, the rod-shaped portion RD of the plane images PI (the 1st plane image PI-1 to the 6th plane image PI-6, and the 13th plane image PI-13 to the 18th plane image PI-18) corresponding to the viewpoints VP (the 1st viewpoint VP-1 to the 6th viewpoint VP-6, and the 13th viewpoint VP-13 to the 18th viewpoint VP-18: refer to
The rod-shaped portion RD of the plane images PI (the 8th plane image PI-8 to the 12th plane image PI-12, and the 20th plane image PI-20 to the 24th plane image PI-24) corresponding to the viewpoints VP (the 8th viewpoint VP-8 to the 12th viewpoint VP-12, and the 20th viewpoint VP-20 to the 24th viewpoint VP-24: refer to
When the coordinates of the viewing position EP deviate from the center of the viewing area VA to the right side, the viewer AU can see the plane image PI in which the rod-shaped portion RD is inclined to the left side. When the coordinates of the viewing position EP deviate from the center of the viewing area VA to the left side, the viewer AU can see the plane image PI in which the rod-shaped portion RD is inclined to the right side. The observed magnitude (rotation amount) of the inclination of the rod-shaped portion RD increases as deviation from the center of the viewing area VA increases. The viewer AU can recognize in which direction and how much the viewing position EP deviates from the center of the viewing area VA based on the rotation direction and the rotation amount of the rod-shaped portion RD with respect to the fixed needle FX.
[4-1. Example in which Viewing Position is Located at Center of Viewing Area]
The left eye EP-L of the viewer AU is disposed at the 19th viewpoint VP-19 at the central portion of a left eye viewing area VA-L. The right eye EP-R of the viewer AU is disposed at the 7th viewpoint VP-7 at the central portion of a right eye viewing area VA-R. In the upper portion of
If the viewpoints VP are not completely separated from each other, crosstalk may occur between the viewpoints VP. In this case, the plane images PI of the plurality of viewpoints VP having the 19th viewpoint VP-19 as a center thereof are observed in the left eye EP-L of the viewer AU. In the present disclosure, the image multiplexing unit 16 causes crosstalk between the adjacent viewpoints VP, thereby allowing each of the right eye EP-R and the left eye EP-L of the viewer AU to simultaneously observe the plane images PI of the two or more viewpoints VP at any position in the viewing area VA.
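Under this deliberate crosstalk, the rotation that each eye perceives is approximately a weighted average of the needle rotations of the neighboring viewpoints. The sketch below illustrates this with a hypothetical symmetric crosstalk profile and the same hypothetical angle gain as above.

```python
import numpy as np

def observed_needle_angle(eye_viewpoint: int, center_viewpoint: int,
                          weights=(0.1, 0.2, 0.4, 0.2, 0.1),
                          gain_deg: float = 5.0) -> float:
    """Average needle rotation observed by one eye under crosstalk.

    weights: hypothetical crosstalk profile over the viewpoints neighboring
    the viewpoint nearest the eye (center of the tuple = that viewpoint).
    """
    offsets = np.arange(len(weights)) - len(weights) // 2
    angles = gain_deg * (center_viewpoint - (eye_viewpoint + offsets))
    return float(np.average(angles, weights=weights))

# Eye at the center viewpoint (19): average rotation 0, needle matches the fixed needle FX
print(observed_needle_angle(19, center_viewpoint=19))
# Eye two viewpoints to the right (17): nonzero average rotation pointing toward the center
print(observed_needle_angle(17, center_viewpoint=19))
```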
In the example of
A plurality of rotating needles ND spreads in a fan shape at the end portion of the fixed needle FX. The center line of the fan indicates the average rotation direction of the rotating needle ND. The average rotation direction accurately suggests a deviation direction and a deviation amount of the viewing position EP from the center of the viewing area VA. In the example of
[4-2. Example in which Viewing Position Deviates from Center of Viewing Area to Right Side]
The left eye EP-L of the viewer AU is disposed at the 17th viewpoint VP-17 that has deviated from the center of the left eye viewing area VA-L to the right side. The right eye EP-R of the viewer AU is disposed at the 5th viewpoint VP-5 that has deviated from the center of the right eye viewing area VA-R to the right side. The observation pattern OP of the left eye EP-L of the viewer AU is a plane image PI corresponding to the 17th viewpoint VP-17. Although not illustrated, the observation pattern of the right eye EP-R of the viewer AU is also the same as the observation pattern OP of the left eye EP-L.
The plane images PI of a plurality of viewpoints VP having the 17th viewpoint VP-17 as a center thereof are observed in the left eye EP-L of the viewer AU. In the example of
The average rotation direction of the rotating needle ND is inclined to the left side from the long axis direction of the fixed needle FX. The average rotation direction is substantially equal throughout the display panel DP. Therefore, the viewer AU can recognize that the center of the viewing area VA exists on the left side, which is the inclination direction of the rotating needle ND, that is, the viewing position EP deviates from the center of the viewing area VA to the right side.
[4-3. Example in which Viewing Position Deviates from Center of Viewing Area to Left Side]
The left eye EP-L of the viewer AU is disposed at the 21st viewpoint VP-21 that has deviated from the center of the left eye viewing area VA-L to the left side. The right eye EP-R of the viewer AU is disposed at the 9th viewpoint VP-9 that has deviated from the center of the right eye viewing area VA-R to the left side. The observation pattern OP of the left eye EP-L of the viewer AU is a plane image PI corresponding to the 21st viewpoint VP-21. Although not illustrated, the observation pattern of the right eye EP-R of the viewer AU is also the same as the observation pattern OP of the left eye EP-L.
The plane images PI of a plurality of viewpoints VP having the 21st viewpoint VP-21 as a center thereof are observed in the left eye EP-L of the viewer AU. In the example of
The average rotation direction of the rotating needle ND is inclined to the right side from the long axis direction of the fixed needle FX. The average rotation direction is substantially equal throughout the display panel DP. Therefore, the viewer AU can recognize that the center of the viewing area VA exists on the right side, which is the inclination direction of the rotating needle ND, that is, the viewing position EP deviates from the center of the viewing area VA to the left side.
[4-4. Example in which Viewing Position Deviates from Center of Viewing Area to Front Side]
The left eye EP-L of the viewer AU is disposed at a position that has deviated to the −z side from the 19th viewpoint VP-19 at the central portion of the left eye viewing area VA-L. The right eye EP-R of the viewer AU is disposed at a position that has deviated to the −z side from the 7th viewpoint VP-7 at the central portion of the right eye viewing area VA-R.
The observation pattern OP of the left eye EP-L of the viewer AU varies depending on the position of the display panel DP. The observation pattern OP-C at the central portion of the display panel DP is a plane image PI corresponding to the 19th viewpoint VP-19. The observation pattern OP-L at the left end portion is the plane image PI of the viewpoint VP on the right side of the 19th viewpoint VP-19. The observation pattern OP-R at the right end portion is the plane image PI of the viewpoint VP on the left side of the 19th viewpoint VP-19. Although not illustrated, the observation pattern of the right eye EP-R of the viewer AU is also the same as the observation pattern OP of the left eye EP-L.
At the central portion of the display panel DP, the plane images PI of a plurality of viewpoints VP having the 19th viewpoint VP-19 as a center thereof are observed. In the example of
At the left end portion of the display panel DP, the plane images PI of a plurality of viewpoints VP having the 17th viewpoint VP-17 as a center thereof are observed. In the example of
At the right end portion of the display panel DP, the plane images PI of a plurality of viewpoints VP having the 21st viewpoint VP-21 as a center thereof are observed. In the example of
At the central portion of the display panel DP, the average rotation direction of the rotating needle ND coincides with the long axis direction of the fixed needle FX. At the left end portion of the display panel DP, the average rotation direction of the rotating needle ND is inclined to the left side from the long axis direction of the fixed needle FX. At the right end portion of the display panel DP, the average rotation direction of the rotating needle ND is inclined to the right side from the long axis direction of the fixed needle FX. Therefore, the viewer AU can recognize that the positional deviation in the horizontal direction (y direction) does not occur with respect to the center of the viewing area VA, and only the positional deviation in the forward-and-rearward direction (z direction) occurs.
[4-5. Example in which Viewing Position Deviates from Center of Viewing Area to Back Side]
The left eye EP-L of the viewer AU is disposed at a position that has deviated to the +z side from the 19th viewpoint VP-19 at the central portion of the left eye viewing area VA-L. The right eye EP-R of the viewer AU is disposed at a position that has deviated to the +z side from the 7th viewpoint VP-7 at the central portion of the right eye viewing area VA-R.
The observation pattern OP of the left eye EP-L of the viewer AU varies depending on the position of the display panel DP. The observation pattern OP-C at the central portion of the display panel DP is a plane image PI corresponding to the 19th viewpoint VP-19. The observation pattern OP-L at the left end portion is the plane image PI of the viewpoint VP on the left side of the 19th viewpoint VP-19. The observation pattern OP-R at the right end portion is the plane image PI of the viewpoint VP on the right side of the 19th viewpoint VP-19. Although not illustrated, the observation pattern of the right eye EP-R of the viewer AU is also the same as the observation pattern OP of the left eye EP-L.
At the central portion of the display panel DP, the plane images PI of a plurality of viewpoints VP having the 19th viewpoint VP-19 as a center thereof are observed. In the example of
At the left end portion of the display panel DP, the plane images PI of a plurality of viewpoints VP having the 21st viewpoint VP-21 as a center thereof are observed. In the example of
At the right end portion of the display panel DP, the plane images PI of a plurality of viewpoints VP having the 17th viewpoint VP-17 as a center thereof are observed. In the example of
At the central portion of the display panel DP, the average rotation direction of the rotating needle ND coincides with the long axis direction of the fixed needle FX. At the left end portion of the display panel DP, the average rotation direction of the rotating needle ND is inclined to the right side from the long axis direction of the fixed needle FX. At the right end portion of the display panel DP, the average rotation direction of the rotating needle ND is inclined to the left side from the long axis direction of the fixed needle FX. Therefore, the viewer AU can recognize that the positional deviation in the horizontal direction (y direction) does not occur with respect to the center of the viewing area VA, and only the positional deviation in the forward-and-rearward direction (z direction) occurs.
The viewing area VA is calculated using the viewing position EP. When an error is included in the viewing position EP, the viewing area VA is not correctly set. Therefore, the coordinate calculation unit 13 corrects the position information of the viewer AU acquired from the viewer detection unit 12 according to the following correction formulas.
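For example, assuming that the scale is applied before the offset (the exact composition order is an assumption here), the correction can take the following form:

$$X_1 = \mathrm{Scale} \cdot X_0 + X_{\mathrm{offset}}, \qquad Y_1 = \mathrm{Scale} \cdot Y_0 + Y_{\mathrm{offset}}, \qquad Z_1 = \mathrm{Scale} \cdot Z_0 + Z_{\mathrm{offset}}$$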
In the above-described correction formulas, (X0, Y0, Z0) are the coordinates of the viewer AU in the camera coordinate system before correction, and (X1, Y1, Z1) are the coordinates of the viewer AU in the camera coordinate system after correction. (Xoffset, Yoffset, Zoffset) are parameters indicating an offset amount, and Scale is a parameter indicating a scale value. The parameter values of the offset amount and the scale value are defined in parameter information 53.
In step S1, the coordinate calculation unit 13 reads the parameter values related to the offset amount and the scale value from the parameter information 53. In step S2, the coordinate calculation unit 13 determines whether a user input event has occurred. The user input event means an input operation for the purpose of correction processing. For example, in a case where an operation of activating a program for correction processing is received from the viewer AU, it is determined that the user input event has occurred.
In a case where it is determined in step S2 that the user input event has not occurred (step S2: No), the processing proceeds to step S4. In step S4, the coordinate calculation unit 13 corrects the coordinates of the viewing position EP based on the parameter values read from the parameter information 53.
In a case where it is determined in step S2 that the user input event has occurred (step S2: Yes), the processing proceeds to step S3. In step S3, the coordinate calculation unit 13 receives user input processing to be described later. The coordinate calculation unit 13 adjusts the parameter values based on error information obtained by the user input processing. Then, in step S4, the coordinate calculation unit 13 corrects the coordinates of the viewer AU based on the adjusted parameter values.
Thereafter, the processing returns to step S2, and the above-described processing is repeated until an end operation is performed.
In step S11, the coordinate calculation unit 13 determines whether an adjustment start event has occurred. The adjustment start event means an input operation for the purpose of adjustment processing of the viewing area VA. For example, in a case where an operation of an “F1” key on the keyboard is received from the viewer AU, it is determined that the adjustment start event has occurred.
In a case where it is determined in step S11 that the adjustment start event has occurred (step S11: Yes), the processing proceeds to step S12. In step S12, the coordinate calculation unit 13 sets the adjustment mode to 1 and an adjustment ID to 0. The image multiplexing unit 16 generates the test pattern TP and outputs the generated test pattern TP to the display panel DP. The assignment of the viewpoint index ID is performed based on the parameter values read from the parameter information 53. Then, the processing proceeds to step S13. In a case where it is determined in step S11 that no adjustment start event has occurred (step S11: No), the processing proceeds to step S13.
In step S13, the coordinate calculation unit 13 determines whether the adjustment mode is set to 1. In a case where it is determined in step S13 that the adjustment mode is not set to 1 (step S13: No), the processing returns to step S11, and the above-described processing is repeated until the adjustment start event occurs. In a case where it is determined in step S13 that the adjustment mode is set to 1 (step S13: Yes), the processing proceeds to step S14.
In step S14, the coordinate calculation unit 13 determines whether an initialization event has occurred. The initialization event means an input operation for the purpose of initializing the parameter values to default specified values. For example, in a case where an operation of an “R” key on the keyboard is received from the viewer AU, it is determined that the initialization event has occurred.
In a case where it is determined in step S14 that the initialization event has occurred (step S14: Yes), the processing proceeds to step S15. In step S15, the coordinate calculation unit 13 sets the offset amount and the scale value to prescribed values. The image multiplexing unit 16 updates the test pattern TP based on the initialized offset amount and scale value, and outputs the updated test pattern TP to the display panel DP. In a case where it is determined in step S14 that no initialization event has occurred (step S14: No), the processing proceeds to step S16.
In step S16, the coordinate calculation unit 13 determines whether an X offset selection event has occurred. The X offset selection event means an input operation for the purpose of adjusting the viewing area VA in the x direction. For example, in a case where an operation of an “X” key on the keyboard is received from the viewer AU, it is determined that the X offset selection event has occurred.
In a case where it is determined in step S16 that the X offset selection event has occurred (step S16: Yes), the processing proceeds to step S17. In step S17, the coordinate calculation unit 13 sets the adjustment ID to 1, and the processing proceeds to step S18. In a case where it is determined in step S16 that the X offset selection event has not occurred (step S16: No), the processing proceeds to step S18.
In step S18, the coordinate calculation unit 13 determines whether a Y offset selection event has occurred. The Y offset selection event means an input operation for the purpose of adjusting the viewing area VA in the y direction. For example, when an operation of a “Y” key on the keyboard is received from the viewer AU, it is determined that the Y offset selection event has occurred.
In a case where it is determined in step S18 that the Y offset selection event has occurred (step S18: Yes), the processing proceeds to step S19. In step S19, the coordinate calculation unit 13 sets the adjustment ID to 2, and the processing proceeds to step S20. In a case where it is determined in step S18 that the Y offset selection event has not occurred (step S18: No), the processing proceeds to step S20.
In step S20, the coordinate calculation unit 13 determines whether a Z offset selection event has occurred. The Z offset selection event means an input operation for the purpose of adjusting the viewing area VA in the z direction. For example, in a case where an operation of a “Z” key on the keyboard is received from the viewer AU, it is determined that the Z offset selection event has occurred.
In a case where it is determined in step S20 that the Z offset selection event has occurred (step S20: Yes), the processing proceeds to step S21. In step S21, the coordinate calculation unit 13 sets the adjustment ID to 3, and the processing proceeds to step S22. In a case where it is determined in step S20 that the Z offset selection event has not occurred (step S20: No), the processing proceeds to step S22.
In step S22, the coordinate calculation unit 13 determines whether a scale value selection event has occurred. The scale value selection event means an input operation for the purpose of increasing or reducing the scale value. For example, in a case where an operation of a “C” key on the keyboard is received from the viewer AU, it is determined that the scale value selection event has occurred.
In a case where it is determined in step S22 that the scale value selection event has occurred (step S22: Yes), the processing proceeds to step S23. In step S23, the coordinate calculation unit 13 sets the adjustment ID to 4, and the processing proceeds to step S24. In a case where it is determined in step S22 that no scale value selection event has occurred (step S22: No), the processing proceeds to step S24.
In step S24, the coordinate calculation unit 13 determines whether a saving event has occurred. The saving event means an input operation for the purpose of saving the offset amount and the scale value. For example, in a case where an operation of an “S” key on the keyboard is received from the viewer AU, it is determined that the saving event has occurred.
In a case where it is determined in step S24 that the saving event has occurred (step S24: Yes), the processing proceeds to step S25. In step S25, the coordinate calculation unit 13 saves the offset amount and the scale value, updates the parameter information 53, and the processing proceeds to step S26. In a case where it is determined in step S24 that no saving event has occurred (step S24: No), the processing proceeds to step S26.
In step S26, the coordinate calculation unit 13 determines whether an adjustment end event has occurred. The adjustment end event means an input operation for the purpose of ending the adjustment processing. For example, in a case where an operation of a “Q” key on the keyboard is received from the viewer AU, it is determined that the adjustment end event has occurred.
In a case where it is determined in step S26 that the adjustment end event has occurred (step S26: Yes), the processing proceeds to step S27. In step S27, the coordinate calculation unit 13 sets the adjustment mode to 0, and the processing proceeds to step S28. In a case where it is determined in step S26 that no adjustment end event has occurred (step S26: No), the processing proceeds to step S28.
In step S28, the coordinate calculation unit 13 determines whether an addition event has occurred. The addition event means an input operation for the purpose of defining adjustment of a parameter value as addition processing. For example, in a case where an operation of a “→” (rightward arrow) key of the keyboard is received from the viewer AU, it is determined that the addition event has occurred.
In a case where it is determined in step S28 that no addition event has occurred (step S28: No), the processing proceeds to step S37. In a case where it is determined in step S28 that the addition event has occurred (step S28: Yes), the processing proceeds to step S29.
In step S29, the coordinate calculation unit 13 determines whether the adjustment ID is 1. In a case where it is determined in step S29 that the adjustment ID is 1 (step S29: Yes), the processing proceeds to step S30. In step S30, the coordinate calculation unit 13 adds Δoffset to Xoffset. Δoffset means a unit adjustment amount. The image multiplexing unit 16 updates the test pattern TP based on the updated offset amount and outputs the updated test pattern TP to the display panel DP. Then, the processing proceeds to step S31. In a case where it is determined in step S29 that the adjustment ID is not 1 (step S29: No), the processing proceeds to step S31.
In step S31, the coordinate calculation unit 13 determines whether the adjustment ID is 2. In a case where it is determined in step S31 that the adjustment ID is 2 (step S31: Yes), the processing proceeds to step S32. In step S32, the coordinate calculation unit 13 adds Δoffset to Yoffset. The image multiplexing unit 16 updates the test pattern TP based on the updated offset amount and outputs the updated test pattern TP to the display panel DP. Then, the processing proceeds to step S33. In a case where it is determined in step S31 that the adjustment ID is not 2 (step S31: No), the processing proceeds to step S33.
In step S33, the coordinate calculation unit 13 determines whether the adjustment ID is 3. In a case where it is determined in step S33 that the adjustment ID is 3 (step S33: Yes), the processing proceeds to step S34. In step S34, the coordinate calculation unit 13 adds Δoffset to Zoffset. The image multiplexing unit 16 updates the test pattern TP based on the updated offset amount and outputs the updated test pattern TP to the display panel DP. Then, the processing proceeds to step S35. In a case where it is determined in step S33 that the adjustment ID is not 3 (step S33: No), the processing proceeds to step S35.
In step S35, the coordinate calculation unit 13 determines whether the adjustment ID is 4. In a case where it is determined in step S35 that the adjustment ID is 4 (step S35: Yes), the processing proceeds to step S36. In step S36, the coordinate calculation unit 13 adds Δscale to Scale. The image multiplexing unit 16 updates the test pattern TP based on the updated scale value, and outputs the updated test pattern TP to the display panel DP. Then, the processing proceeds to step S37. In a case where it is determined in step S35 that the adjustment ID is not 4 (step S35: No), the processing proceeds to step S37.
In step S37, the coordinate calculation unit 13 determines whether a subtraction event has occurred. The subtraction event means an input operation for the purpose of defining adjustment of a parameter value as subtraction processing. For example, in a case where an operation of a “←” (leftward arrow) key on the keyboard is received from the viewer AU, it is determined that the subtraction event has occurred.
In a case where it is determined in step S37 that the subtraction event has occurred (step S37: Yes), the processing proceeds to step S38. In step S38, the coordinate calculation unit 13 determines whether the adjustment ID is 1. In a case where it is determined in step S38 that the adjustment ID is 1 (step S38: Yes), the processing proceeds to step S39. In step S39, the coordinate calculation unit 13 subtracts Δoffset from Xoffset. The image multiplexing unit 16 updates the test pattern TP based on the updated offset amount and outputs the updated test pattern TP to the display panel DP. Then, the processing proceeds to step S40. In a case where it is determined in step S38 that the adjustment ID is not 1 (step S38: No), the processing proceeds to step S40.
In step S40, the coordinate calculation unit 13 determines whether the adjustment ID is 2. In a case where it is determined in step S40 that the adjustment ID is 2 (step S40: Yes), the processing proceeds to step S41. In step S41, the coordinate calculation unit 13 subtracts Δoffset from Yoffset. The image multiplexing unit 16 updates the test pattern TP based on the updated offset amount and outputs the updated test pattern TP to the display panel DP. Then, the processing proceeds to step S42. In a case where it is determined in step S40 that the adjustment ID is not 2 (step S40: No), the processing proceeds to step S42.
In step S42, the coordinate calculation unit 13 determines whether the adjustment ID is 3. In a case where it is determined in step S42 that the adjustment ID is 3 (step S42: Yes), the processing proceeds to step S43. In step S43, the coordinate calculation unit 13 subtracts Δoffset from Zoffset. The image multiplexing unit 16 updates the test pattern TP based on the updated offset amount and outputs the updated test pattern TP to the display panel DP. Then, the processing proceeds to step S44. In a case where it is determined in step S42 that the adjustment ID is not 3 (step S42: No), the processing proceeds to step S44.
In step S44, the coordinate calculation unit 13 determines whether the adjustment ID is 4. In a case where it is determined in step S44 that the adjustment ID is 4 (step S44: Yes), the processing proceeds to step S45. In step S45, the coordinate calculation unit 13 subtracts Δscale from Scale. The image multiplexing unit 16 updates the test pattern TP based on the updated scale value and outputs the updated test pattern TP to the display panel DP. Then, the processing returns to step S11. In a case where it is determined in step S44 that the adjustment ID is not 4 (step S44: No), the processing returns to step S11. Then, the above-described processing is repeated until an end operation is performed.
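The flow of steps S11 to S45 amounts to a small keyboard-driven event loop: select a parameter with a key, then nudge it with the arrow keys. The sketch below condenses the per-step branches into a key-to-action table. The key bindings follow the text (“R” reset, “X”/“Y”/“Z”/“C” select, “S” save, “→”/“←” add/subtract, “Q” end); the key tokens, default values, unit amounts Δoffset and Δscale, and the callback interfaces are hypothetical.

```python
DEFAULTS = {"Xoffset": 0.0, "Yoffset": 0.0, "Zoffset": 0.0, "Scale": 1.0}
DELTA = {"Xoffset": 1.0, "Yoffset": 1.0, "Zoffset": 1.0, "Scale": 0.01}  # hypothetical units
SELECT_KEYS = {"X": "Xoffset", "Y": "Yoffset", "Z": "Zoffset", "C": "Scale"}

def adjustment_loop(keys, params, update_test_pattern, save_params):
    """Condensed version of steps S11-S45: select a parameter, then add or subtract."""
    target = None                        # corresponds to adjustment ID 0 (nothing selected)
    for key in keys:
        if key == "R":                   # initialization event (S14-S15)
            params.update(DEFAULTS)
            update_test_pattern(params)
        elif key in SELECT_KEYS:         # selection events (S16-S23)
            target = SELECT_KEYS[key]
        elif key == "RIGHT" and target:  # addition event (S28-S36)
            params[target] += DELTA[target]
            update_test_pattern(params)
        elif key == "LEFT" and target:   # subtraction event (S37-S45)
            params[target] -= DELTA[target]
            update_test_pattern(params)
        elif key == "S":                 # saving event (S24-S25)
            save_params(params)
        elif key == "Q":                 # adjustment end event (S26-S27)
            break
    return params

params = dict(DEFAULTS)
keys = ["X", "RIGHT", "RIGHT", "C", "LEFT", "S", "Q"]
result = adjustment_loop(keys, params, lambda p: None, lambda p: print("saved", p))
print(result)   # Xoffset increased by two units, Scale decreased by one step
```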
By the above-described processing, each parameter (Xoffset, Yoffset, Zoffset, Scale) is corrected in steps of the unit adjustment amount. The coordinate calculation unit 13 treats the correction amount of each parameter accumulated during the user input processing as error information of the viewing position EP, and calculates the coordinates of the viewing position EP using this error information, which is acquired based on the observation result of the test pattern TP.
For example, the coordinate calculation unit 13 applies the offset amount and the scale value corrected based on the error information to the above-described formulas, and calculates coordinates (X1, Y1, Z1) of the viewer AU in the camera coordinate system. The coordinate calculation unit 13 converts the coordinates (X1, Y1, Z1) into coordinates of the panel coordinate system, and calculates the coordinates of the viewing position EP based on the converted coordinates. As a result, the coordinates of the viewing position EP corrected based on the error information are obtained. The viewpoint index calculation unit 15 assigns a viewpoint index ID based on the corrected coordinates of the viewing position EP.
Referring back to
The information processing apparatus 10 is, for example, a computer including a processor and a memory. The memory of the information processing apparatus 10 includes a random access memory (RAM) and a read only memory (ROM). The information processing apparatus 10 functions as the sensor information acquisition unit 11, the viewer detection unit 12, the coordinate calculation unit 13, the viewpoint image generation unit 14, the viewpoint index calculation unit 15, and the image multiplexing unit 16 by executing the program 59.
The information processing apparatus 10 includes the test pattern generation unit 17. The test pattern generation unit 17 generates the test pattern TP in which the appearance of an image element indicating a positional relationship between the viewer AU and the viewing area VA continuously changes depending on the viewing position EP. In the information processing method of the present disclosure, processing of the information processing apparatus 10 is executed by a computer. The program 59 of the present disclosure causes a computer to implement the processing of the information processing apparatus 10.
According to this configuration, the test pattern TP continuously changes depending on the viewing position EP. Therefore, it is possible to recognize in detail at which position in the viewing area VA the viewer AU is located.
The test pattern generation unit 17 generates the plane image PI for each of the plurality of viewpoints VP set in the viewing area VA. The plane image PI is an image indicating the relative position of the viewpoint VP with respect to the center of the viewing area VA. The test pattern generation unit 17 multiplexes the plurality of generated plane images PI to generate the test pattern TP.
According to this configuration, it is possible to easily recognize at which position the viewer AU is located with respect to the center of the viewing area VA.
The test pattern generation unit 17 causes each of the right eye EP-R and the left eye EP-L of the viewer AU to simultaneously observe the plane images PI of two or more viewpoints VP.
According to this configuration, the test pattern TP smoothly changes depending on the viewing position EP.
The plane image PI includes, as an image element indicating the positional relationship between the viewer AU and the viewing area VA, the rod-shaped portion RD, whose rotation direction and rotation amount correspond to the deviation direction and deviation magnitude of the viewpoint VP from the center of the viewing area VA.
According to this configuration, how much and in which direction the viewing position EP deviates from the center of the viewing area VA is quantitatively grasped based on the rotation direction of the rod-shaped portion RD.
The test pattern generation unit 17 sets, as a reference distance, the inter-viewpoint distance closest to the inter-pupillary distance, and causes the viewpoint images of two viewpoints VP separated by the reference distance to match each other.
According to this configuration, an image without parallax is observed when two viewpoint images are viewed with the left eye EP-L and the right eye EP-R. Therefore, there is no parallax inconsistency, and the visibility of the test pattern TP is enhanced.
The plane image PI is an image in which a plurality of sub-plane images SPI in which the rotation direction and the rotation amount of the rod-shaped portion RD are equal are two-dimensionally arranged.
According to this configuration, a two-dimensional distribution of the rotation direction of the rod-shaped portion RD is observed. Distributions of different rotation directions are observed in a case where the viewing position EP moves in the forward-and-rearward direction with respect to the center of the viewing area VA and in a case where the viewing position EP moves in the left-and-right direction with respect to the center of the viewing area VA. Therefore, by observing the distribution of the rotation direction, it is recognized whether the viewing position EP deviates in the forward-and-rearward direction or the left-and-right direction with respect to the center of the viewing area VA.
The rod-shaped portion RD has a higher visual attraction degree than that of a background portion serving as a background of the rod-shaped portion RD.
According to this configuration, the rotation direction of the rod-shaped portion RD is easily recognized.
The background portion includes a circular base BS and the plain portion PL forming the background of the base BS. The rod-shaped portion RD includes the rotating needle ND that rotates about the center of the base BS as the rotation center CE. The visual attraction degree increases in the order of the plain portion PL, the base BS, and the rotating needle ND.
According to this configuration, the rotation direction of the rod-shaped portion RD is easily recognized.
The rod-shaped portion RD of the plane image PI corresponding to the viewpoint VP deviating from the center of the viewing area VA to the right side has a posture rotated in a counterclockwise direction with respect to the vertical direction. The rod-shaped portion RD of the plane image PI corresponding to the viewpoint VP deviating from the center of the viewing area VA to the left side has a posture rotated in a clockwise direction with respect to the vertical direction.
According to this configuration, the rod-shaped portion RD rotates toward a direction in which the center of the viewing area VA is located. Therefore, it is easy to sensuously understand whether the center of the viewing area VA is on the right side or the left side with respect to the current viewing position EP.
The coordinate calculation unit 13 calculates the coordinates of the viewing position EP using the error information of the viewing position EP acquired based on the observation result of the test pattern TP. The viewpoint index calculation unit 15 assigns a viewpoint index ID based on the corrected coordinates.
According to this configuration, the viewing area VA is appropriately adjusted to the viewing position EP without requiring the viewer AU to move from the viewing position EP.
It is noted that the effects described in the present specification are merely examples and are not limited, and other effects may be obtained.
In the above embodiment, the viewing area VA is adjusted based on the observation result of the test pattern TP. However, the viewer AU may search for the center of the viewing area VA based on the observation result of the test pattern TP, and may move to the center thereof to perform viewing.
In the above embodiment, the parameter value corrected by the user input processing is stored in the parameter information 53. The corrected parameter value can be used as it is for the next viewing as a parameter value optimized for the viewer AU. In a case where the display system 1 is shared by a plurality of viewers AU, the corrected parameter value can be defined in the parameter information 53 in association with the viewer AU.
In the above-described user input processing, an appropriate parameter value is searched for while the change of the parameter value and the update of the test pattern TP are repeated; that is, the parameter value to be corrected is found through trial and error. To shorten the search time, it is also conceivable to provide a guideline as to which parameter value should be corrected. For example, the coordinate calculation unit 13 intentionally causes each parameter value to deviate, thereby allowing the viewer AU to recognize the relevance between a change in the parameter value and a change in the observation pattern OP. As a result, the viewer AU can easily narrow down the parameter value to be corrected.
In the above embodiment, in order to observe the two-dimensional distribution of the rotation direction of the rod-shaped portion RD, the plane image PI includes a plurality of sub-plane images SPI having the same rotation direction of the rod-shaped portion RD. In the example of
It is noted that the present technique can also have the following configurations.
(1)
An information processing apparatus comprising a test pattern generation unit configured to generate a test pattern in which appearance of an image element continuously changes depending on a viewing position, the image element indicating a positional relationship between a viewer and a viewing area.
(2)
The information processing apparatus according to (1), wherein
The information processing apparatus according to (2), wherein
The information processing apparatus according to (2) or (3), wherein
The information processing apparatus according to any one of (2) to (4), wherein
The information processing apparatus according to (5), wherein
The information processing apparatus according to (5) or (6), wherein
The information processing apparatus according to (7), wherein
The information processing apparatus according to any one of (5) to (8), wherein
The information processing apparatus according to any one of (1) to (9), further comprising:
An information processing method executed by a computer, the method comprising generating a test pattern in which appearance of an image element continuously changes depending on a viewing position, the image element indicating a positional relationship between a viewer and a viewing area.
(12)
A program for causing a computer to execute generating a test pattern in which appearance of an image element continuously changes depending on a viewing position, the image element indicating a positional relationship between a viewer and a viewing area.
Number | Date | Country | Kind |
---|---|---|---
2021-092099 | Jun 2021 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/JP2022/004615 | 2/7/2022 | WO |