OPTICAL SYSTEM PROVIDING ACCURATE EYE-TRACKING AND RELATED METHOD

Information

  • Patent Application
  • Publication Number
    20220413606
  • Date Filed
    June 27, 2022
  • Date Published
    December 29, 2022
Abstract
An optical system includes an eye-tracking module and a head-mounted display. The eye-tracking module includes a first sensor module configured to capture an eye image of a user. The head-mounted display includes a processor configured to provide a user interface based on a gaze point of the user computed based on the eye image, and a display configured to present the user interface. The user interface includes a virtual field of view (FoV) and at least one UI element for receiving a gaze command from the user. The at least one UI element is arranged within the FoV of the user and located outside a range of the virtual FoV of the user interface.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention

The present invention relates to an optical system with an accurate eye-tracking function and a related method, and more particularly, to an optical system which provides accurate eye-tracking in interactive virtual environments and a related method.


2. Description of the Prior Art

Virtual reality (VR) is an interactive computer-generated experience taking place within a simulated environment that incorporates mainly auditory and visual feedback, but also other types of sensory feedback such as haptic feedback. Augmented reality (AR) provides an interactive experience of a real-world environment in which the objects that reside in the real world are enhanced by computer-generated perceptual information. Mixed reality (MR) is the merging of real and virtual worlds to produce new environments and visualizations, where physical and digital objects co-exist and interact in real time. Most existing VR/AR/MR applications are controlled by the user's hands using joysticks or touch screens, but the burden of carrying these control devices may cause inconvenience. By incorporating eye-tracking capabilities into VR/AR/MR headsets, the user can use the eyes as an operational interface, wherein various visual elements can trigger certain responses and behaviors.


Eye tracking is a technology for acquiring eye positions and eye movement by measuring either the point of gaze or the motion of an eye relative to the head. Most information presented by a computer is provided to a user as visual information on a user interface presented on a display. However, the ability of the user to perceive visual information is limited. One such limitation is the area of active vision, which is small due to physiological reasons, e.g., the structure of the eye. Another limitation lies in the displays themselves. For instance, mobile devices in particular have small screens (a small virtual field of view) that need to present a wide range of information.


Therefore, there is a need for an optical system and a related method for providing accurate eye-tracking in interactive virtual environments.


SUMMARY OF THE INVENTION

The present invention provides an optical system which provides accurate eye-tracking and includes an eye-tracking module and a head-mounted display. The eye-tracking module includes a first sensor module configured to capture one or multiple eye images of a user. The head-mounted display includes a processor configured to control an operation of the head-mounted display and provide a user interface, and a display configured to present the user interface. One or multiple gaze points of the user are computed based on the one or multiple eye images. The user interface includes a virtual field of view and at least one UI element for receiving a gaze command associated with the one or multiple gaze points from the user. The at least one UI element is arranged within an estimated field of view of the user and located outside a range of the virtual field of view of the user interface.


The present invention further provides a method of providing accurate eye-tracking. The method includes capturing one or multiple eye images of a user, computing one or multiple gaze points of the user based on the one or multiple eye images, and providing a user interface. The user interface includes a virtual field of view and at least one UI element. The at least one UI element is arranged within an estimated field of view of the user and located outside a range of the virtual field of view of the user interface.


These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a functional diagram illustrating an optical system capable of providing accurate eye-tracking in interactive virtual environments according to an embodiment of the present invention.



FIG. 2 is a functional diagram illustrating an optical system capable of providing accurate eye-tracking in interactive virtual environments according to another embodiment of the present invention.



FIG. 3 is a flowchart illustrating the operation of an optical system for providing accurate eye-tracking in interactive virtual environments according to an embodiment of the present invention.



FIG. 4 is a diagram illustrating a user interface presented during an application of the optical system according to an embodiment of the present invention.



FIG. 5 is an enlarged diagram illustrating the user interface presented during the application of the optical system depicted in FIG. 4.



FIG. 6 is a diagram illustrating exemplary functions of the UI elements in an optical system according to an embodiment of the present invention.





DETAILED DESCRIPTION


FIG. 1 is a functional diagram illustrating an optical system 100 capable of providing accurate eye-tracking in interactive virtual environments according to an embodiment of the present invention. FIG. 2 is a functional diagram illustrating an optical system 200 capable of providing accurate eye-tracking in interactive virtual environments according to another embodiment of the present invention.


In the embodiment illustrated in FIG. 1, the optical system 100 includes a head-mounted display 10 and an eye-tracker 20. The head-mounted display 10 includes a processor 12, a display 14, a sensor module 16, an I/O device 18 and a user interface 19. The eye-tracker 20 includes a processor 22, an illumination module 24, and a sensor module 26. The processor 12 is configured to control the operation of the head-mounted display 10, and the processor 22 is configured to control the operation of the eye-tracker 20.


In the embodiment illustrated in FIG. 2, the optical system 200 may be a head-mounted display 10 which includes a processor 12, a display 14, a sensor module 16, an I/O device 18, a user interface 19 and an eye-tracker 20. The eye-tracker 20 includes a processor 22, an illumination module 24, and a sensor module 26. The processor 12 is configured to control the operation of the head-mounted display 10, and the processor 22 is configured to control the operation of the eye-tracker 20.


In another embodiment of the optical systems 100 and 200, the processor 22 may be omitted. More specifically, the head-mounted display 10 and the eye-tracker 20 may share the same processor 12, which is configured to control the operations of both the head-mounted display 10 and the eye-tracker 20.



FIG. 3 is a flowchart illustrating the operation of the optical systems 100 and 200 for providing accurate eye-tracking in interactive virtual environments according to an embodiment of the present invention. The flowchart in FIG. 3 includes the following steps:


Step 310: capture one or multiple eye images of the user.


Step 320: compute one or multiple gaze points of the user based on the one or multiple eye images of the user.


Step 330: acquire the relationship between the field of view of the display 14 and an estimated field of view of the user.


Step 340: provide a virtual field of view and at least one UI element in the user interface 19 based on the one or multiple gaze points of the user and the relationship between the field of view of the display 14 and the estimated field of view of the user.


Step 350: receive one or multiple commands from the user.


Step 360: perform a corresponding action associated with the at least one user interface (UI) element when a received user command triggers the at least one UI element; then return to step 310. The overall loop is illustrated in the sketch below.
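The steps of FIG. 3 can be summarized in the following sketch. This is an illustrative outline only, written under assumed interfaces; the object and method names (for example capture_eye_images, compute_gaze_points, poll_commands) are hypothetical placeholders rather than parts of the disclosed system.

```python
# Illustrative sketch of the eye-tracking loop of FIG. 3 (steps 310-360).
# All object and method names are hypothetical placeholders.

def run_eye_tracking_loop(eye_tracker, processor, user_interface, io_device):
    # Step 330: the relationship between the display FoV and the user's estimated
    # FoV is largely fixed for a given headset, so it may be acquired once.
    fov_relationship = processor.acquire_fov_relationship()

    while True:
        eye_images = eye_tracker.capture_eye_images()             # step 310
        gaze_points = processor.compute_gaze_points(eye_images)   # step 320

        # Step 340: lay out the virtual FoV and the UI elements based on the
        # gaze points and the FoV relationship.
        processor.update_user_interface(user_interface, gaze_points, fov_relationship)

        # Step 350: collect user commands from the I/O device and gaze commands
        # from the UI elements.
        commands = io_device.poll_commands() + user_interface.poll_gaze_commands(gaze_points)

        # Step 360: perform the action associated with a triggered UI element,
        # then return to step 310.
        for command in commands:
            element = user_interface.find_triggered_element(command)
            if element is not None:
                processor.perform_action(element)
```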


In the optical system 100, the sensor module 26 in the eye-tracker 20 includes at least one image sensor (eye sensor) which is configured to capture the one or multiple eye images of the user in step 310. The processor 22 in the eye-tracker 20 is configured to receive the one or multiple eye images captured by the sensor module 26 and compute the one or multiple gaze points of the user in step 320. In addition, the processor 22 in the eye-tracker 20 may further compute other eye-tracking related information based on the one or multiple eye images of the user, such as the confidence and the accuracy of the estimated gaze point, the eyeball location in 3D coordinates, and pupil-related information (e.g., pupil size). The algorithms for the eye-tracking operation may be implemented as, but are not limited to, a process/software/firmware executed on the processor 22 of the eye-tracker 20.
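As a rough illustration of the eye-tracking output described above, the following sketch collects the per-frame quantities into one record. The field names and units are assumptions made for this example, not the actual data format of the eye-tracker 20.

```python
# Hypothetical per-frame eye-tracking record; field names and units are assumed.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class EyeTrackingResult:
    gaze_point: Tuple[float, float]              # estimated gaze point in the user's FoV
    confidence: float                            # confidence of the estimated gaze point (0.0-1.0)
    accuracy_deg: float                          # estimated accuracy in degrees of visual angle
    eyeball_center: Tuple[float, float, float]   # eyeball location in 3D coordinates
    pupil_diameter_mm: float                     # pupil-related information, e.g. pupil size
```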


In the optical system 200, the sensor module 26 in the eye-tracker 20 includes at least one image sensor (eye sensor) which is configured to capture the one or multiple eye images of the user in step 310. The processor 12 may receive the one or multiple eye images captured by the sensor module 26 of the eye-tracker 20 and compute the one or multiple gaze points of the user in step 320. In addition, the processor 12 may further compute other eye-tracking related information based on the one or multiple eye images of the user, such as the confidence and the accuracy of the estimated gaze point, the eyeball location in 3D coordinates, and pupil-related information (e.g., pupil size). The algorithms for the eye-tracking operation may be implemented as, but are not limited to, a process/software/firmware executed on the processor 12 of the optical system 200.


In the optical systems 100 and 200, the sensor module 16 may include at least one scene sensor configured to capture the one or multiple images of the current field of view of the user, at least one audio sensor (such as a microphone) configured to receive the audio signal from the user, and/or at least one motion sensor (such as a gyro and/or an accelerometer) configured to detect the motion of the user (especially the head movement).


In the optical systems 100 and 200, the illumination module 24 may include one or multiple infrared light-emitting diodes (LEDs) for illuminating the eyes of the user in order to guarantee the necessary contrast between the iris and the pupil regardless of eye color, particularly against a very dark or bright background, thereby increasing the accuracy of the sensor module 26 in the eye-tracker 20 when registering the light reflected by the user's eyes. However, the implementation of the illumination module 24 does not limit the scope of the present invention.


In step 330, the processor 12 is configured to acquire the relationship between the field of view of the display 14 and the estimated field of view of the user. The estimated field of view of the user corresponds to the situation when the user looks straight ahead, and may be calculated based on the physical specification of the optical system 100 or 200 and the data of average periorbital anthropometric measurements of the user. The actual field of view of the user may shift from the estimated field of view with the motion of the user's eyes. The relationship between the field of view of the display 14 and the estimated field of view of the user includes at least the ratio of the size of the field of view of the display 14 to the size of the estimated field of view of the user, wherein the ratio is smaller than 1. The relationship may also include the position of the display 14 relative to the estimated field of view of the user, wherein the relative position may be estimated based on the physical specification of the head-mounted display 10 and the data of the average periorbital anthropometric measurements. The relative position may further be refined based on the eye-tracking related information computed from the one or multiple eye images of the user.
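The following sketch illustrates one way the relationship acquired in step 330 could be represented, assuming rectangular fields of view expressed in degrees; the helper name, the numbers, and the centered-offset default are assumptions for this example only.

```python
# Sketch of the FoV relationship of step 330: the size ratio (smaller than 1)
# and the position of the display relative to the estimated user FoV.
# Rectangular FoVs in degrees are an assumed simplification.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FovRelationship:
    size_ratio: float                         # size of display FoV / size of estimated user FoV (< 1)
    display_offset_deg: Tuple[float, float]   # position of the display relative to the user FoV

def estimate_fov_relationship(display_fov_deg, user_fov_deg, display_offset_deg=(0.0, 0.0)):
    """The estimated user FoV would be derived from the headset's physical
    specification and average periorbital anthropometric data; the offset may
    later be refined using the eye-tracking related information."""
    ratio = (display_fov_deg[0] * display_fov_deg[1]) / (user_fov_deg[0] * user_fov_deg[1])
    return FovRelationship(size_ratio=ratio, display_offset_deg=display_offset_deg)

# Example: a 90x90-degree display FoV within an estimated 180x120-degree user FoV.
relation = estimate_fov_relationship((90.0, 90.0), (180.0, 120.0))
assert relation.size_ratio < 1
```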


In the optical systems 100 and 200, the user interface 19 containing a virtual field of view and one or multiple UI elements UIE1-UIEN may be presented on the display 14, wherein N is a positive integer. In an embodiment, the UI elements UIE1-UIEN may be visual information presented on the display 14. In another embodiment, each of the UI elements UIE1-UIEN may be an abstract element which is invisible to the user. If a UI element is configured to interact with the user, it may further be connected with an event handler which is associated with a specific operation of the optical system 100 or 200 and is handled by the processor 12.


The UI elements UIE1-UIEN are configured to add interactivity to the user interface 19 and provide touch points for the user as they navigate their way around. Each UI element may be associated with, but not limited to, input control, navigation control, and information display. Each UI element includes a graphic element and a hit box. The graphic element determines the appearance of the UI element and may be associated with the function of the UI element. In an embodiment, the graphic element of a UI element may be a checkbox, a radio button, a dropdown list, a list box, a toggle button, a toggle, or a date/time picker for input control. In another embodiment, the graphic element of a UI element may be a breadcrumb, a slider, a pagination control, an icon, or an image carousel for navigation control. In yet another embodiment, the graphic element of a UI element may be a tooltip, a progress bar, a notification, a message box, or a modal window for information display. However, the appearance of the graphic element in each UI element does not limit the scope of the present invention.
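A minimal data-structure sketch of such a UI element, assuming a rectangular hit box and a callable event handler, is given below; the class and field names are hypothetical.

```python
# Minimal sketch of a UI element: a graphic element that determines its appearance
# and an invisible hit box connected to an event handler. Names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class HitBox:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, gaze_point: Tuple[float, float]) -> bool:
        x, y = gaze_point
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

@dataclass
class UIElement:
    graphic: Optional[str]          # e.g. "icon" or "toggle"; None for an abstract, invisible element
    hit_box: HitBox                 # virtual element, invisible to the user
    on_trigger: Callable[[], None]  # event handler for the associated operation
```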


The hit box of a UI element is a virtual element which is invisible to the user and is connected to a corresponding event handler. When a gaze command associated with the one or multiple gaze points of the user triggers the hit box of a UI element, a predetermined action associated with the UI element may be performed. The UI element is triggered by the gaze command when one or multiple trigger conditions are satisfied. Exemplary trigger conditions include, but are not limited to: the user's gaze being located within the hit box while the user also sends a response from the I/O device or the sensors (e.g., the microphone) to confirm the selection; the duration of the user's gaze located within the hit box being longer than a specific fixation duration; the user's gaze being located within the hit box while the user sends a response in the form of a voluntary eye-movement event (e.g., a voluntary saccade, a blink, etc.) to confirm the selection immediately or within a specific time interval; and the user's gaze trajectory crossing the response boundary of the hit box. However, the type of the trigger conditions associated with each UI element does not limit the scope of the present invention.
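The sketch below checks simplified versions of these trigger conditions for a UI element such as the one sketched above. The thresholds, the gaze-sample format, and the confirm_event object are assumptions made for illustration; an actual implementation could evaluate any subset of the conditions.

```python
# Illustrative evaluation of the trigger conditions described above.
# gaze_samples is assumed to be a time-ordered list of (timestamp_s, (x, y)) tuples.

def is_triggered(element, gaze_samples, fixation_duration_s=0.8, confirm_event=None,
                 confirm_window_s=0.5):
    inside = [(t, p) for t, p in gaze_samples if element.hit_box.contains(p)]

    # Condition: the gaze dwells within the hit box longer than the fixation duration.
    if inside and (inside[-1][0] - inside[0][0]) > fixation_duration_s:
        return True

    # Condition: the gaze is within the hit box and a confirmation (an I/O response or a
    # voluntary eye-movement event such as a saccade or blink) arrives within a time window.
    if inside and confirm_event is not None:
        if 0.0 <= confirm_event.timestamp_s - inside[-1][0] <= confirm_window_s:
            return True

    # Condition: the gaze trajectory crosses the response boundary of the hit box.
    for (_, p0), (_, p1) in zip(gaze_samples, gaze_samples[1:]):
        if element.hit_box.contains(p0) != element.hit_box.contains(p1):
            return True

    return False
```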


In the optical system 100 or 200, the I/O device 18 is configured to receive commands from the user. In an embodiment, the I/O device 18 may include any type of hand-held controller (such as a game pad or a game console) and/or any form of haptic device (such as a suit or a glove). The I/O device 18 is configured to detect and transfer the user's motion signal to the processor 12 of the optical system 100 or 200. In an embodiment, the processor 12 may control the operation of the optical system 100 or 200 based solely on the user commands received by the I/O device 18. In another embodiment, the processor 12 may control the operation of the optical system 100 or 200 based on both the user commands received by the I/O device 18 and the gaze commands received via the UI elements UIE1-UIEN of the user interface 19.


In step 340, the processor 12 is configured to provide the virtual field of view and at least one UI element in the user interface 19 based on the one or multiple gaze points of the user and the relationship between the field of view of the display 14 and the estimated field of view of the user.



FIG. 4 is a diagram illustrating the user interface 19 presented during an application of the optical systems 100 and 200 according to an embodiment of the present invention. FoV1 represents the field of view of a user 30. FoV2 represents the virtual field of view of the user interface 19 in the optical systems 100 and 200.



FIG. 5 is an enlarged diagram illustrating the user interface 19 presented during the application of the optical systems 100 and 200 depicted in FIG. 4. In the present invention, the UI elements UIE1-UIEN are arranged within the field of view FoV1 of the user 30 and located in different regions which are outside the range of the virtual field of view FoV2 of the user interface 19. For illustrative purposes, FIG. 5 depicts the embodiment of N=8. The UI element UIE1 is presented within the field of view FoV1 of the user 30 and located in the upper region outside the virtual field of view FoV2 of the user interface 19. The UI element UIE2 is presented within the field of view FoV1 and located in the lower region outside the virtual field of view FoV2. The UI element UIE3 is presented within the field of view FoV1 and located in the left region outside the virtual field of view FoV2. The UI element UIE4 is presented within the field of view FoV1 and located in the right region outside the virtual field of view FoV2. The UI elements UIE5-UIE8 are presented within the field of view FoV1 and located in the corner regions outside the virtual field of view FoV2. However, the number of the UI elements UIE1-UIEN included in the user interface 19 and the location of each UI element outside the virtual field of view FoV2 do not limit the scope of the present invention.
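The N=8 arrangement of FIG. 5 can be sketched as follows. The code assumes normalized FoV1 coordinates and a centered rectangular FoV2, and the exact corner assignment is an assumption; it only illustrates that every element lies inside FoV1 but outside FoV2.

```python
# Sketch of the N=8 layout of FIG. 5: UI elements inside the user's FoV (FoV1)
# but outside the virtual FoV of the user interface (FoV2). Coordinates are
# normalized to FoV1; FoV2 is assumed to be a centered rectangle.

def place_ui_elements(fov2_half_width=0.3, fov2_half_height=0.3, margin=0.1):
    cx, cy = 0.5, 0.5                                   # center of FoV1
    left, right = cx - fov2_half_width, cx + fov2_half_width
    bottom, top = cy - fov2_half_height, cy + fov2_half_height
    return {
        "UIE1": (cx, top + margin),                     # upper region outside FoV2
        "UIE2": (cx, bottom - margin),                  # lower region outside FoV2
        "UIE3": (left - margin, cy),                    # left region outside FoV2
        "UIE4": (right + margin, cy),                   # right region outside FoV2
        "UIE5": (left - margin, top + margin),          # corner regions outside FoV2
        "UIE6": (right + margin, top + margin),
        "UIE7": (left - margin, bottom - margin),
        "UIE8": (right + margin, bottom - margin),
    }
```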


As depicted in FIGS. 4 and 5, the virtual field of view FoV2 of the user interface 19 in the head-mounted display 10 only occupies a small region of the field of view FoV1 of the user. The limited capacity of the user interface 19 for information display constrains gaze-based user interactions. Since the eye movement information, more specifically the gaze points, obtained by the sensor module 26 of the eye-tracker 20 may be located anywhere within the field of view FoV1 of the user, the UI elements UIE1-UIEN are arranged outside the range of the virtual field of view FoV2 of the user interface 19 in the optical systems 100 and 200 of the present invention. In this way, the implementation of the UI elements UIE1-UIEN can be more flexible and is no longer limited by the small range of the virtual field of view FoV2. In addition, more image data may be presented on the virtual field of view FoV2 of the user interface 19.


In step 350, the optical systems 100 and 200 may receive one or multiple commands from the user. As previously stated, a user command may be inputted via the I/O device 18, or a gaze command may be inputted via the UI elements UIE1-UIEN of the user interface 19. When one or multiple predefined trigger conditions are satisfied and the hit box of a specific UI element is thus triggered by a gaze command of the user, the processor 12 is configured to perform a predefined action. The above-mentioned predefined trigger conditions may include, but are not limited to, the user's gaze being fixed within the hit box for longer than the fixation duration of the specific UI element, another button being pressed, a voice command being issued, intentional blinks by the user, or a certain gaze path/pattern being detected. The above-mentioned predefined actions may include, but are not limited to, content selection, previous, next, setting, close, back, home, show notification and lock screen.



FIG. 6 is a diagram illustrating exemplary functions of the UI elements UIE1-UIEN in the optical systems 100 and 200 according to an embodiment of the present invention. Triggering the UI element UIE1 may display a setting menu on the virtual field of view FoV2 of the user interface 19. Triggering the UI element UIE2 may return the user 30 to the home page of the user interface 19. Triggering the UI element UIE3 may slide the range of the virtual field of view FoV2 towards the left. Triggering the UI element UIE4 may slide the range of the virtual field of view FoV2 towards the right. Triggering the UI element UIE5 may return the user 30 to a previous page or screen. Triggering the UI element UIE6 may end the current application of the optical system 100 or 200. Triggering the UI element UIE7 may lock the current status of the user interface 19. Triggering the UI element UIE8 may display a notification board on the virtual field of view FoV2 of the user interface 19. However, the function of each UI element included in the user interface 19 does not limit the scope of the present invention.
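For illustration, the exemplary functions of FIG. 6 could be collected into a simple dispatch table; the handler method names below are hypothetical stand-ins for operations handled by the processor 12.

```python
# Hypothetical dispatch of the exemplary UI-element functions of FIG. 6.

UI_ELEMENT_ACTIONS = {
    "UIE1": lambda ui: ui.show_settings_menu(),        # display a setting menu on FoV2
    "UIE2": lambda ui: ui.go_home(),                    # return to the home page
    "UIE3": lambda ui: ui.slide_virtual_fov("left"),    # slide FoV2 towards the left
    "UIE4": lambda ui: ui.slide_virtual_fov("right"),   # slide FoV2 towards the right
    "UIE5": lambda ui: ui.go_back(),                    # return to a previous page or screen
    "UIE6": lambda ui: ui.close_application(),          # end the current application
    "UIE7": lambda ui: ui.lock_screen(),                # lock the current status of the UI
    "UIE8": lambda ui: ui.show_notifications(),         # display a notification board on FoV2
}

def handle_trigger(element_name, user_interface):
    action = UI_ELEMENT_ACTIONS.get(element_name)
    if action is not None:
        action(user_interface)
```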


In the present optical system, one or multiple UI elements are arranged outside the range of the virtual field of view of the user interface for providing gaze-based user interactions. The implementation of the UI elements can thus be more flexible and is no longer limited by the small range of the virtual field of view of the user interface. In addition, more image data may be presented on the virtual field of view of the user interface. Therefore, the present invention provides an optical system and a related method for providing accurate eye-tracking in interactive virtual environments.


Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.

Claims
  • 1. An optical system which provides accurate eye-tracking, comprising: an eye-tracking module, comprising: a first sensor module configured to capture one or multiple eye images of a user; and a head-mounted display, comprising: a first processor configured to control an operation of the head-mounted display and provide a user interface; and a display configured to present the user interface, wherein: the user interface includes a virtual field of view and at least one UI element for receiving a gaze command associated with the one or multiple gaze points from the user; and the at least one UI element is arranged within an estimated field of view of the user and located outside a range of the virtual field of view of the user interface.
  • 2. The optical system of claim 1, wherein: the eye-tracking module further comprises a second processor configured to: receive the one or more eye images captured by the first sensor module; and compute one or more gaze points of the user based on the one or more eye images; and the first processor is further configured to receive the one or more gaze points of the user from the second processor.
  • 3. The optical system of claim 2, wherein: the first processor is further configured to perform a corresponding action associated with the at least one UI element when the one or more gaze points of the user trigger the at least one UI element.
  • 4. The optical system of claim 1, wherein the first processor is further configured to: receive the one or more eye images captured by the first sensor module; and compute one or more gaze points of the user based on the one or more eye images.
  • 5. The optical system of claim 4, wherein: the first processor is further configured to perform a predetermined action associated with the at least one UI element when the at least one UI element is triggered by the one or more gaze points of the user.
  • 6. The optical system of claim 1, wherein the first processor is further configured to: acquire a relationship between a field of view of the display and the estimated field of view of the user; and provide the user interface based on the relationship between the field of view of the display and the estimated field of view of the user.
  • 7. The optical system of claim 6, wherein: the relationship between the field of view of the display and the estimated field of view of the user includes a ratio of a size of the field of view of the display to a size of the estimated field of view of the user.
  • 8. The optical system of claim 6, wherein: the relationship between the field of view of the display and the estimated field of view of the user includes a position of the display relative to the estimated field of view of the user.
  • 9. The optical system of claim 6, wherein: the estimated field of view of the user corresponds to a situation when the user looks straight ahead; and the estimated field of view of the user is calculated based on a physical specification of the optical system and data of an average periorbital anthropometric measurement of the user.
  • 10. The optical system of claim 1, wherein: the first sensor module comprises at least one image sensor configured to capture the one or more eye images of the user; and the head-mounted display further comprises a second sensor module which comprises: at least one scene sensor configured to capture one or more images of a current field of view of the user; and at least one motion sensor configured to detect a motion of the user.
  • 11. The optical system of claim 1, wherein: the head-mounted display further comprises at least one I/O device for receiving at least one user command; and the eye-tracking module further comprises an illumination module for illuminating an eye of the user.
  • 12. A method of providing accurate eye-tracking, comprising: capturing one or multiple eye images of a user; computing one or multiple gaze points of the user based on the one or multiple eye images of the user; and providing a user interface, wherein: the user interface includes a virtual field of view and at least one UI element; and the at least one UI element is arranged within an estimated field of view of the user and located outside a range of the virtual field of view of the user interface.
  • 13. The method of claim 12, further comprising: acquiring a relationship between a field of view of a display and the estimated field of view of the user; providing the user interface based on the relationship between the field of view of the display and the estimated field of view of the user; and presenting the user interface on the display.
  • 14. The method of claim 13, wherein: the relationship between the field of view of the display and the estimated field of view of the user includes a ratio of a size of the field of view of the display to a size of the estimated field of view of the user.
  • 15. The method of claim 13, wherein: the relationship between the field of view of the display and the estimated field of view of the user includes a position of the display relative to the estimated field of view of the user.
  • 16. The method of claim 13, wherein: the estimated field of view of the user corresponds to a situation when the user looks straight ahead; and the estimated field of view of the user is calculated based on a physical specification of the display and data of an average periorbital anthropometric measurement of the user.
  • 17. The method of claim 12, further comprising: performing a predetermined action associated with the at least one UI element when determining that the at least one UI element is triggered by the one or more gaze points of the user.
  • 18. The method of claim 17, wherein the at least one UI element is triggered by the one or more gaze points of the user when at least one of the following trigger conditions is satisfied: the one or multiple gaze points of the user are located within a hit box of the at least one UI element longer than a predetermined fixation duration; a voluntary eye movement of the user is detected within a predetermined time interval after the one or multiple gaze points of the user are located within the hit box of the at least one UI element; and a gaze trajectory consisting of the one or multiple gaze points of the user crosses a response boundary of the hit box of the at least one UI element.
  • 19. The method of claim 17, wherein the predetermined action includes content selection, previous, next, setting, close, back, home, show notification and lock screen.
  • 20. The method of claim 12, wherein the at least one UI element is located in an upper region, a lower region, a left region, a right region or a corner region outside the range of the virtual field of view of the user interface.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 63/215,984, filed on Jun. 28, 2021. The content of the application is incorporated herein by reference.

Provisional Applications (1)
Number Date Country
63215984 Jun 2021 US