The present invention relates to a video system, a video generating method, a video distribution method, a video generating program, and a video distribution program, particularly in the context of a video system comprising a head-mounted display and a gaze detection device.
Conventionally, when gaze detection for specifying the point at which a user is looking is performed, calibration is necessary. Here, calibration refers to causing the user to gaze at a specific indicator and specifying a positional relationship between the position at which the specific indicator is displayed and the corneal center of the user gazing at the specific indicator. A gaze detection system that performs such calibration can specify the point at which the user is looking.
However, the calibration is prepared under the premise that the user is gazing at the specific indicator. Accordingly, there is a problem in that, when information is acquired in a state in which the user is not gazing at the specific indicator, the subsequent gaze detection cannot be executed accurately. The problem is particularly noticeable in the case of a head-mounted display, in which the surroundings of the user's eyes are covered by the device and the inside cannot be seen, because an operator cannot confirm from outside whether or not the user is actually gazing at the specific indicator.
The present invention has been made in consideration of the above problems, and an object thereof is to provide a technology capable of accurately executing calibration for realizing gaze detection of a user wearing a head mounted display.
In order to solve the above problem, an aspect of the present invention is a method comprising: measuring a head rotation speed in a direction; measuring an eye rotation speed in the direction; and performing calibration of a gaze detection unit when the sum of the head rotation speed and the eye rotation speed is lower than a threshold.
According to the present invention, it is possible to provide a technology for detecting a gaze direction of a user wearing a head mounted display.
In the following, each embodiment of the video system is explained with reference to the drawings. In the following explanations, identical components are denoted by the same reference signs, and repeated explanations are omitted.
An outline of the first embodiment of the present invention will be hereinafter described.
A gaze detection device 200 detects a gaze direction of at least one of the right eye and the left eye of the user wearing the head-mounted display 100 and specifies the user's focal point, that is, the point gazed at by the user in a three-dimensional image displayed on the head-mounted display. The gaze detection device 200 also functions as a video generation device that generates a video to be displayed by the head-mounted display 100. For example, the gaze detection device 200 is a device capable of reproducing videos, such as a stationary game machine, portable game machine, PC, tablet, smartphone, phablet, video player, or TV, but the present invention is not limited thereto. The gaze detection device 200 is wirelessly or wiredly connected to the head-mounted display 100. In the example illustrated in
The head-mounted display 100 comprises a housing 150, a fitting harness 160, and headphones 170. The housing 150 encloses an image display system, such as an image display element for presenting video images to the user 300, and, not shown in the figure, a Wi-Fi (registered trademark) module, a Bluetooth (registered trademark) module, or another wireless communication module. The head-mounted display 100 is secured to the head of the user 300 with the fitting harness 160. The fitting harness 160 may be implemented with, for example, belts or elastic bands. When the user 300 secures the head-mounted display 100 with the fitting harness 160, the housing 150 is in a position where the eyes of the user 300 are covered. Thus, when the user 300 wears the head-mounted display 100, the field of view of the user 300 is covered by the housing 150.
The headphones 170 output the audio of the video reproduced by the video generating device 200. The headphones 170 do not need to be fixed to the head-mounted display 100. Even when the head-mounted display 100 is secured with the fitting harness 160, the user 300 may freely put on or remove the headphones 170.
The head-mounted display 100 comprises a video presentation unit 110, an imaging unit 120, and a communication unit 130.
The video presentation unit 110 presents a video to the user 300. The video presentation unit 110 may, for example, be implemented as a liquid crystal monitor or an organic EL (electroluminescence) display.
The imaging unit 120 captures images of the user's eye. The imaging unit 120 may, for example, be implemented as a CCD (charge-coupled device), CMOS (complementary metal oxide semiconductor) or other image sensor disposed in the housing 150.
The communication unit 130 provides a wireless or wired connection to the video generating device 200 for information transfer between the head-mounted display 100 and the video generating device 200. Specifically, the communication unit 130 transfers images captured by the imaging unit 120 to the video generating device 200, and receives video from the video generating device 200 for presentation by the video presentation unit 110. The communication unit 130 may be implemented as, for example, a Wi-Fi module, a Bluetooth (registered trademark) module or another wireless communication module.
The gaze detection device 200 comprises a communication unit 210, a gaze detection unit 220, a calibration unit 230, and a storage unit 240.
The communication unit 210 provides a wireless or wired connection to the head-mounted display 100. The communication unit 210 receives from the head-mounted display 100 images captured by the imaging unit 120, and transmits video to the head-mounted display 100. The gaze detection unit 220 detects the gaze of the user viewing an image displayed on the head-mounted display 100, and generates gaze data. The calibration unit 230 performs the calibration of the gaze detection. The storage unit 240 stores data for gaze detection and calibration.
<<Eye-Tracking with Lens Compensation>>
The eye-tracking with lens compensation may be a method comprising:
The method may further comprise:
A standard or Fresnel lens is provided between the camera and the human eye. When detecting a gaze direction of the eye, the gaze detection unit 220 detects glints and the pupil in the image of the human eye using rays from the camera to each of the glints and the pupil. In eye-tracking with lens compensation, these rays pass through the lens; therefore, the gaze detection unit 220 must compute this transfer.
The gaze detection unit 220 may compute a ray (the ray before the lens) from the camera to a glint position detected in the image, using the intrinsic and extrinsic matrices that give the 3D ray for any 2D point (glint) on the camera image. The gaze detection unit 220 may apply Snell's-law ray tracing or use a precalculated transfer matrix in order to calculate the ray after the lens. The gaze detection unit 220 uses this ray after the lens to compute eye tracking (the gaze direction).
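As a minimal sketch of the Snell's-law option (the function names, surface geometry, and refractive indices below are illustrative assumptions, not taken from the embodiment), a single refraction of a camera ray at a lens surface could be computed as follows:

```python
import numpy as np

def refract(direction, normal, n1, n2):
    """Vector form of Snell's law: refract a ray at a surface, going from
    refractive index n1 to n2.  Returns None on total internal reflection."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(n, d)
    if cos_i < 0:                       # flip the normal toward the incoming ray
        n, cos_i = -n, -cos_i
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0:
        return None                     # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Illustrative use: a camera ray toward a glint enters the rear lens surface.
ray_before_lens = np.array([0.1, 0.0, 1.0])    # from the intrinsic/extrinsic matrices
surface_normal = np.array([0.0, 0.0, -1.0])    # rear surface normal (assumed planar)
ray_after_surface = refract(ray_before_lens, surface_normal, n1=1.0, n2=1.5)
# A full lens model would repeat the refraction at the front surface as well.
```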
Lens compensation may also be done with polynomial fitting. Assume (x, y) represents a pixel on the camera image, (xp, yp) represents the x-y position on the lens, and (xd, yd, zd) represents the x-y-z direction of the ray leaving the lens. Then, for any pixel on the camera image, the gaze detection unit 220 can find the ray after it passes through the lens:
Where ai, bi, ci, di, ei, fi, gi, hi, pi, and qi are precalculated polynomial coefficients.
Note that (x, y) can be anything that can be directly derived from the pixel coordinates, such as angles in spherical coordinates. Further, (xd, yd, zd) can also have an alternative representation (e.g., spherical coordinates).
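The polynomial itself is not reproduced here; the following is therefore only a hypothetical sketch, assuming each output quantity is modelled as a low-order polynomial in (x, y) whose coefficients were fitted offline against exact ray tracing (the basis, order, and names are assumptions):

```python
import numpy as np

def poly_terms(x, y):
    # Example second-order basis in the pixel coordinates (an assumption;
    # the actual basis and order are design choices of the offline fit).
    return np.array([1.0, x, y, x * y, x ** 2, y ** 2])

def ray_after_lens(x, y, coeffs):
    """coeffs is a 5 x 6 matrix fitted offline: one row per output quantity
    (xp, yp, xd, yd, zd), one column per polynomial term."""
    t = poly_terms(x, y)
    xp, yp, xd, yd, zd = coeffs @ t
    direction = np.array([xd, yd, zd])
    direction /= np.linalg.norm(direction)
    return np.array([xp, yp]), direction       # point on the lens, ray direction

# Offline, coeffs would be fitted (e.g. by least squares) against rays obtained
# from exact Snell's-law tracing over a grid of camera pixels.
coeffs = np.zeros((5, 6))
coeffs[4, 0] = 1.0                             # placeholder so zd is non-zero
lens_point, ray_direction = ray_after_lens(320.0, 240.0, coeffs)
```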
First, the gaze detection unit 220 obtains eye images from a camera. Then, the gaze detection unit 220 finds glints and a pupil by image processing. The gaze detection unit 220 uses the intrinsic and extrinsic matrices to obtain rays from the camera to every glint.
Here, in eye-tracking with lens compensation, the gaze detection unit 220 transfers the rays through the lens. The transfer is computed using the matrices or the polynomial fitting described above.
The gaze detection unit 220 solves the inverse problem to find the cornea center and radius.
Then, the gaze detection unit 220 uses the intrinsic and extrinsic matrices to obtain a ray from the camera to the pupil.
In eye-tracking with lens compensation, the gaze detection unit 220 transfers this ray through the lens.
The gaze detection unit 220 intersects this ray with the cornea sphere.
The result of the intersection is the 3D pupil position. The resulting optical axis is the vector from the cornea center to the 3D pupil position.
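A minimal sketch of the final two steps, intersecting the lens-compensated pupil ray with the cornea sphere and forming the optical axis (the numeric values and variable names are illustrative assumptions, not taken from the embodiment):

```python
import numpy as np

def intersect_ray_sphere(origin, direction, center, radius):
    """Return the nearest intersection of a ray with a sphere, or None."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius ** 2
    disc = b ** 2 - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)            # nearest hit in front of the origin
    if t < 0:
        t = -b + np.sqrt(disc)
    return origin + t * d if t >= 0 else None

# Assumed inputs: cornea sphere solved from the glints, and the pupil ray
# after lens compensation (units in millimetres, illustrative values).
cornea_center = np.array([0.0, 0.0, 30.0])
cornea_radius = 7.8
ray_origin = np.array([0.0, 0.0, 0.0])        # point where the ray leaves the lens
ray_dir = np.array([0.01, 0.0, 1.0])

pupil_3d = intersect_ray_sphere(ray_origin, ray_dir, cornea_center, cornea_radius)
optical_axis = pupil_3d - cornea_center
optical_axis /= np.linalg.norm(optical_axis)
```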
<<Camera Optimization Through Lens Fitting>>
The camera optimization through lens fitting may be a method comprising:
The calibration unit 230 runs a numerical optimization to correct the camera position and orientation. As the optimization cost function, the calibration unit 230 fits the observed lens to the expected lens shape.
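One possible way to realize this, sketched under the assumption that the lens rim is detected as 2D points in the eye image and that the expected rim shape is known in 3D (all values, intrinsics, and names below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Expected lens shape: 3D points on the lens rim in headset coordinates (assumed).
expected_lens_3d = np.array([[15.0 * np.cos(a), 15.0 * np.sin(a), 40.0]
                             for a in np.linspace(0, 2 * np.pi, 16, endpoint=False)])
fx = fy = 800.0
cx, cy = 320.0, 240.0                      # assumed eye-camera intrinsics

def project(points, rvec, tvec):
    """Pinhole projection of 3D points for a camera pose (rvec, tvec)."""
    p = Rotation.from_rotvec(rvec).apply(points) + tvec
    return np.stack([fx * p[:, 0] / p[:, 2] + cx,
                     fy * p[:, 1] / p[:, 2] + cy], axis=1)

# The observed lens shape would be detected in the eye image; here it is
# synthesized from a slightly perturbed pose purely for illustration.
true_rvec, true_tvec = np.array([0.02, -0.01, 0.0]), np.array([1.0, -0.5, 2.0])
observed_lens_2d = project(expected_lens_3d, true_rvec, true_tvec)

def residuals(params):
    # Cost: mismatch between the expected lens shape, projected with the
    # candidate camera pose, and the lens shape observed in the image.
    return (project(expected_lens_3d, params[:3], params[3:]) -
            observed_lens_2d).ravel()

result = least_squares(residuals, x0=np.zeros(6))   # start from the nominal pose
corrected_rvec, corrected_tvec = result.x[:3], result.x[3:]
```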
<<3D Model Based Pupil, Iris and Glints Prediction>>
3D model based prediction may be a method comprising:
<<Hidden Calibration>>
The calibration process burdens the user with additional effort. With hidden calibration, the calibration is performed while the user is viewing content.
The hidden calibration may be a method comprising:
In the hidden calibration, the calibration may be performed every time a scene changes.
For example, the video content has a scene, during a specific period of time, which shows only a moving object on the screen, such as a logo, a firefly, or a bright object. While the scene is displayed, the user watches the moving object and the calibration unit can conduct the calibration process. The right side figure of
If the video content has multiple such scenes, the calibration can be conducted multiple times during the content, and the accuracy of the eye tracking gradually increases.
Then, the application sends 3D location information (3D coordinates) of the object to the eye tracking unit.
Then, the eye tracking unit uses the location information to calibrate in real time. When the eye tracking unit conducts this calibration, the application additionally sends timestamp information together with the 3D location information.
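A minimal sketch of how the eye tracking unit might pair its gaze samples with the timestamped 3D object positions received from the application (the data layout, names, and values are assumptions):

```python
import numpy as np

# Timestamped 3D positions of the moving on-screen object, as sent by the
# application (seconds, metres in the virtual space) -- illustrative values.
object_track = [(0.00, np.array([0.0, 0.0, 2.0])),
                (0.10, np.array([0.1, 0.0, 2.0])),
                (0.20, np.array([0.2, 0.1, 2.0]))]

# Gaze directions estimated by the eye tracking unit, with their timestamps.
gaze_samples = [(0.01, np.array([0.02, 0.01, 1.0])),
                (0.11, np.array([0.06, 0.01, 1.0])),
                (0.21, np.array([0.11, 0.06, 1.0]))]

def collect_calibration_pairs(object_track, gaze_samples, max_dt=0.02):
    """Pair each gaze sample with the object position closest in time.
    The pairs (measured direction, true direction) feed the calibration."""
    pairs = []
    for t_gaze, gaze_dir in gaze_samples:
        t_obj, pos = min(object_track, key=lambda op: abs(op[0] - t_gaze))
        if abs(t_obj - t_gaze) <= max_dt:
            true_dir = pos / np.linalg.norm(pos)      # eye assumed at the origin
            pairs.append((gaze_dir / np.linalg.norm(gaze_dir), true_dir))
    return pairs

calibration_pairs = collect_calibration_pairs(object_track, gaze_samples)
# calibration_pairs would then be used to update the gaze-mapping parameters.
```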
<<Foveated Camera Streaming>>
The foveated camera streaming may be a method comprising:
In the method, a resolution of the region of interest is higher than a resolution of the outer region.
In the method, the image may be a video; the step of compressing the region of interest encodes the region of interest into a first video, and the step of compressing the outer region encodes the outer region into a second video, wherein a frame rate of the first video is higher than a frame rate of the second video.
The foveated camera streaming also may be a method comprising:
The head-mounted display 100 further comprises an external camera. The external camera is fixed to the housing 150 and arranged to record video images in the direction the user's head is facing. The external camera records video images at full resolution over its entire field of view. The video system has two image streams: high resolution images for the area the user is gazing at, and low resolution images for the other area. The images, including the high resolution images and the low resolution images, are sent to a cloud server over a public communication network, either directly from the head-mounted display 100 or via the gaze detection device 200. With this technology, the video system 1 can reduce the bandwidth for sending images because it sends full resolution images only for the limited area (the gazing area) the user is looking at, and low resolution images for the other area, instead of sending full resolution images for the entire field of view the external camera can record.
Based on the received two types of image information, the cloud server creates contextual information which is used for AR (Augmented Reality) or MR (Mixed Reality) display. The cloud server aggregates information (e.g. object identification, facial recognition, video image, etc.) to create contextual information and sends the contextual information to the head-mounted display 100.
The external camera facing outward on the head-mounted display takes images of the world (S1101).
Then, the control unit splits the video images into two streams based on the eye tracking coordinates (S1102). In this step, the control unit detects the gazing point coordinates of the user based on the eye tracking coordinates and splits the video images into a region-of-interest area and the other area. The region of interest can be obtained from the video images by cropping an area of a certain size including the gazing point.
Then, the two video image streams are sent to the cloud server over a communication network (e.g., a 5G network) (S1103). In this step, images of the region-of-interest area are sent to the server as high resolution images, while images of the other area are sent to the server as low resolution images.
Then, the cloud server processes the images and adds contextual information (S1104).
Then, the images and contextual information are sent back to the head-mounted display to display AR or MR images to the user (S1105).
The external camera obtains video images, and the obtained raw video images with high resolution are inputted to the control unit. The eye tracking unit detects the gaze point (gaze coordinates) based on eye tracking and inputs the gaze coordinate information to the control unit. The control unit determines the region of interest in each image based on the gaze coordinates. For example, the region of interest can be obtained from the video images by cropping an area of a certain size including the gazing point. The image data of the region of interest is compressed at a lower compression ratio and inputted to the communication unit. The communication unit also receives sensing data, such as the headset angle, and other metadata obtained by the sensing unit. The sensing unit can be configured with a GPS or geomagnetic sensor. The image data of the region of interest is sent to the cloud server as a higher resolution image. The image data outside of the region of interest is compressed at a higher compression ratio and inputted to the communication unit. The image data outside of the region of interest is sent to the cloud server as a lower resolution image.
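A minimal sketch of the split-and-compress step on the headset side, assuming OpenCV JPEG encoding stands in for the actual codec (the names, region size, scale, and quality settings are illustrative assumptions):

```python
import numpy as np
import cv2

def split_and_compress(frame, gaze_xy, roi_size=512,
                       roi_quality=90, outer_quality=30, outer_scale=0.25):
    """Crop a fixed-size region of interest around the gaze point and
    compress it lightly; downscale and compress the full frame heavily.
    Returns the two encoded streams plus the ROI offset as metadata."""
    h, w = frame.shape[:2]
    gx, gy = gaze_xy
    x0 = int(np.clip(gx - roi_size // 2, 0, w - roi_size))
    y0 = int(np.clip(gy - roi_size // 2, 0, h - roi_size))
    roi = frame[y0:y0 + roi_size, x0:x0 + roi_size]

    ok1, roi_bytes = cv2.imencode('.jpg', roi,
                                  [cv2.IMWRITE_JPEG_QUALITY, roi_quality])
    outer = cv2.resize(frame, None, fx=outer_scale, fy=outer_scale)
    ok2, outer_bytes = cv2.imencode('.jpg', outer,
                                    [cv2.IMWRITE_JPEG_QUALITY, outer_quality])
    return roi_bytes, outer_bytes, (x0, y0)

# Illustrative use with a synthetic camera frame and gaze point.
frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)
roi_bytes, outer_bytes, roi_offset = split_and_compress(frame, gaze_xy=(960, 540))
# roi_bytes, outer_bytes, and roi_offset (plus headset angle, etc.) would be
# handed to the communication unit and sent to the cloud server.
```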
The general awareness processing unit in the cloud server receives the lower-resolution image data outside of the region of interest (together with the headset angle and the metadata) and performs image processing in order to identify objects in the image, such as their type and number.
The high-detail processing unit in the cloud server receives the higher-resolution image data of the region of interest (together with the headset angle and the metadata) and performs image processing in order to perform detailed recognition, such as facial recognition and text recognition.
The information aggregation unit receives the identification result of the general awareness processing unit and the recognition result of the high-detail processing unit. The information aggregation unit aggregates the received results in order to create a display image, and sends the display image to the head-mounted display via the communication network.
<<ET Calibration Using Optokinetic Response>>
The eye tracking calibration can be performed using the optokinetic response; i.e., the calibration method may comprise:
The optokinetic response is an eye movement that occurs in response to movement of the image on the retina. When the user is gazing at a point, the sum of the head rotation speed and the eye rotation speed is zero (0) during head rotations.
The calibration unit 230 can perform the calibration of the gaze detection device 200 when the user gazes at a stable point, which can be detected by detecting that the sum of the head rotation speed and the eye rotation speed is zero. That is, when the user rotates his/her head to the right, the user must rotate his/her eyes to the left to keep gazing at a point.
The head-mounted display 100 comprises an IMU. The IMU can measure the head rotation speed of the user 300. The gaze detection unit can measure the eye rotation speed of the user; the eye rotation speed can be represented by the speed of movement of the gaze point. The calibration unit 230 may calculate the head rotation speeds in the up-down direction and the left-right direction from the values measured by the IMU. The calibration unit 230 may also calculate the eye rotation speeds in the up-down direction and the left-right direction from the history of the gaze point. The calibration unit 230 displays a marker in a virtual space rendered on the display; the marker may be moving or stationary. The calibration unit 230 calculates the head rotation speeds in the left-right and up-down directions and the eye rotation speeds in the left-right and up-down directions, and may perform calibration when the sum of the head rotation speeds and the eye rotation speeds is lower than a predetermined threshold.
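A minimal sketch of this calibration trigger (the threshold value, units, and names are assumptions):

```python
import numpy as np

def should_calibrate(head_speed, eye_speed, threshold_deg_per_s=2.0):
    """head_speed and eye_speed are (left-right, up-down) angular speeds in
    deg/s, the former from the IMU, the latter from the gaze-point history.
    When the user fixates a point while rotating the head, the two roughly
    cancel, so the magnitude of their sum falls below the threshold."""
    total = np.asarray(head_speed) + np.asarray(eye_speed)
    return np.linalg.norm(total) < threshold_deg_per_s

# Example: the head turns right at 30 deg/s while the eyes counter-rotate left.
print(should_calibrate(head_speed=(30.0, 1.0), eye_speed=(-29.5, -0.8)))  # True
```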
<<Single Point Calibration>>
The single point calibration may be a method comprising:
In the calibration method, a direction from the cornea center to the position of the pupil may be determined as the gaze direction.
In the calibration method, it may also be possible to determine a direction from the eyeball center to the position of the pupil as the gaze direction.
The calibration method may further comprise: compensating the position of the pupil for the angle relative to the direction from the camera to the pupil image.
However, the pupil is actually positioned at P1, inside the cornea sphere, at the anterior chamber depth (ACD). The direction from the center of the eyeball (or the center of the cornea sphere) to the center of the pupil is considered to be the gaze direction of the eye. The calibration may be made using this gaze direction and the known calibration point.
The calibration unit 230 may calibrate the parameter of the gaze estimation. The calibration unit 230 may calibrate the ACD. The calibration unit 230 may calibrate the center of the cornea sphere. The calibration unit 230 may calibrate the angle of the camera.
<Refraction Model>
The compensation is adjusted assuming that the cornea refracts the ray. That is, the pupil position is compensated taking the anterior chamber depth (ACD) and the horizontal offset between the optical and visual axes into consideration.
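A minimal sketch of such a refraction model, assuming the pupil center lies approximately on a sphere of radius (cornea radius − ACD) around the cornea center and ignoring the optical-to-visual axis offset (all values and names are illustrative assumptions):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector Snell's law; d is the unit ray direction, n the unit surface
    normal pointing against d.  Assumes no total internal reflection."""
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def nearest_sphere_hit(o, d, c, r):
    """Nearest intersection of the ray o + t*d with the sphere (c, r)."""
    oc = o - c
    b = np.dot(oc, d)
    t = -b - np.sqrt(b ** 2 - (np.dot(oc, oc) - r ** 2))
    return o + t * d

# Assumed, illustrative eye geometry (millimetres).
cornea_center = np.array([0.0, 0.0, 30.0])
cornea_radius = 7.8
acd = 3.0                                  # anterior chamber depth
n_air, n_aqueous = 1.0, 1.336

camera_origin = np.zeros(3)
ray = np.array([0.02, 0.0, 1.0]); ray /= np.linalg.norm(ray)

# 1. Hit the corneal surface and refract the camera-to-pupil-image ray into the eye.
surface_pt = nearest_sphere_hit(camera_origin, ray, cornea_center, cornea_radius)
normal = (surface_pt - cornea_center) / cornea_radius   # points toward the camera
refracted = refract(ray, normal, n_air, n_aqueous)

# 2. The pupil center P1 is taken to lie on a sphere of radius (R - ACD)
#    around the cornea center; intersect the refracted ray with it.
pupil_p1 = nearest_sphere_hit(surface_pt, refracted, cornea_center,
                              cornea_radius - acd)
gaze_direction = pupil_p1 - cornea_center
gaze_direction /= np.linalg.norm(gaze_direction)
```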
<<Implicit Calibration>>
The calibration process burdens the user with additional effort. With implicit calibration, the calibration is performed while the user is viewing content.
In conventional (explicit) calibration:
In contrast, in the implicit calibration:
The implicit calibration may be a method for calibrating gaze detection comprising:
In the implicit calibration, the point to which the gaze point is adjusted may be the point with the highest probability of an edge occurring.
The implicit calibration may further comprise:
It is important to add that we need a screen image. The screen image is referred to as the visual field. The visual field may cover both the screen image in VR and the outside camera image in AR. It is also important to note that we do not require the user to look at any specific targets; the scene content can be arbitrary. Since we have two types of images, eye images and screen images, it is better to specify explicitly which image is being referred to.
The correlation of gaze points and images of the visual field can be defined as an assumption about human behavior: "In circumstances A, people are likely to look at B", where A and B are something that we can automatically extract from the visual-field image. Currently, we use the assumption "In all circumstances, people are likely to look at edges."
There are other assumptions that could potentially be used, for example: "When presented with a new image, people are likely to look at the faces of humans, animals, etc. first", or "When presented with a video of an object moving against a stationary background, people are likely to look at the moving object." We do:
The general nature of the idea is "automatically extracting the ground truth from the image of the visual field." However, this ground truth is not a single point but a probability distribution. Given a single visual-field image, we cannot predict the actual gaze point, but we can predict that the actual gaze point is located in a certain region with a certain probability.
When we take a region of interest, we basically convert the gaze-point probability distribution into a probability distribution over the eye-tracking parameters. After we accumulate probability distributions from visual-field images at different moments in time, we can calculate an average probability distribution. This average probability distribution gradually converges to a single point (a single value of the eye-tracking parameters): the more images we have, the smaller the standard deviation of this distribution becomes.
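A minimal sketch of this accumulation, assuming the eye-tracking parameter being estimated is a simple 2D offset of the gaze point and that the edge probability comes from a gradient-magnitude map (all names and the offset model are illustrative simplifications):

```python
import numpy as np

def edge_probability_map(screen_image):
    """Gradient-magnitude edge map of the visual-field image, normalised so
    that it can be read as a probability distribution of the gaze point."""
    gy, gx = np.gradient(screen_image.astype(float))
    mag = np.hypot(gx, gy)
    return mag / (mag.sum() + 1e-9)

def accumulate_offset_likelihood(frames, raw_gaze_points, max_offset=20):
    """For each candidate calibration offset (dx, dy), accumulate the log
    probability of an edge at (raw gaze + offset) over all frames.  The best
    offset is the one that makes the corrected gaze land on edges most often."""
    size = 2 * max_offset + 1
    log_likelihood = np.zeros((size, size))
    for img, (gx_pt, gy_pt) in zip(frames, raw_gaze_points):
        prob = edge_probability_map(img)
        h, w = prob.shape
        for i, dy in enumerate(range(-max_offset, max_offset + 1)):
            for j, dx in enumerate(range(-max_offset, max_offset + 1)):
                y, x = int(gy_pt + dy), int(gx_pt + dx)
                if 0 <= y < h and 0 <= x < w:
                    log_likelihood[i, j] += np.log(prob[y, x] + 1e-9)
    i, j = np.unravel_index(np.argmax(log_likelihood), log_likelihood.shape)
    return (j - max_offset, i - max_offset)      # estimated (dx, dy) correction

# Illustrative use with synthetic data.
frames = [np.random.rand(240, 320) for _ in range(5)]
raw_gaze_points = [(160.0, 120.0)] * 5
offset_dx_dy = accumulate_offset_likelihood(frames, raw_gaze_points)
```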
The general idea of predicting gaze from the image of the visual field is not new; there is a research field called "saliency prediction" that studies this topic. The hypothesis that "people are likely to be looking at object edges" also comes from saliency prediction. What is new is the way saliency prediction is integrated into eye-tracking calibration.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2020-172238 | Oct 2020 | JP | national |
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/IB2021/059329 | 10/12/2021 | WO |