The present application is a U.S. National Stage filing under 35 U.S.C. §371 of international patent cooperation treaty (PCT) application No. PCT/CN2014/088242, filed Oct. 9, 2014, and entitled “INTERACTIVE PROJECTION DISPLAY”, which claims the benefit of priority to Chinese Patent Application No. 201310470128.X, filed on Oct. 10, 2013, which applications are hereby incorporated into the present application by reference herein in their respective entireties.
The present application relates to the field of device interaction technologies, e.g., to interactive projection display.
Wearable devices, such as Google glasses, smart watches, smart gloves, and smart accessories (for example, smart rings and smart bands), are gradually accepted by the public, and these electronic smart devices bring more convenience to people's daily lives. However, because wearable devices generally need to fit the user, be compact and lightweight, have low energy consumption, and the like, most wearable devices (for example, the above-mentioned smart watches, smart gloves, and smart accessories) do not have strong display, processing, and interaction capabilities.
Data in these wearable devices can be exported to a device that has strong display and processing capabilities, such as a computer or a smart phone, so as to implement interaction between these wearable devices and a user. However, such an interaction method is inconvenient and may prevent the user from interacting with these wearable devices in real time.
In addition, because smart glasses are directly related to the eyes of the user and generally have a strong display capability, the smart glasses may also be used to enhance the display capability of other wearable devices. However, the user often needs to switch the sight line between a display interface of the smart glasses and the other wearable devices, and the focal point of the eye changes frequently, which brings a poor experience, such as dizziness, to the user.
An objective of the present application is to provide an interactive projection display method and system, wherein virtual display content, used to interact with a user, of a coordination device is projected near the coordination device, so that interaction between the user and the coordination device is more convenient.
According to a first aspect of embodiments of the present application, an interactive projection display method is provided, comprising:
According to a second aspect of the embodiments of the present application, an interactive projection display system is further provided, comprising:
According to the method and the system in the embodiments of the present application, device location information and display information that correspond to a coordination device (which particularly is a wearable device having a weak display capability or having no display capability) are obtained, a corresponding virtual display image is generated according to the device location information and the display information, and the virtual display image is then projected to a fundus of a user, so that the user sees the virtual display image presented near the coordination device and may even feel that the virtual display image is displayed by the coordination device, which greatly facilitates interaction between the user and the coordination device and improves the user experience.
According to the method and the apparatus in the embodiments of the present application, devices having different functions work in a coordinated manner, that is, the inadequate display capability of the coordination device is remedied by using the strong projection and display functions of a device near an eye; therefore, each device performs the function it is best suited for, and the user experience is improved.
A method and an apparatus in the present application are described in detail below with reference to the accompanying drawings and embodiments.
Some wearable devices cannot have a large size due to their functions and/or a requirement of being conveniently carried; as a result, the wearable devices cannot implement information display with high quality, and the problem cannot be fundamentally solved only by changing designs of the devices. Therefore, this problem needs to be solved with the help of at least one other device. As shown in
In the method in the embodiment of the present application and the following corresponding apparatus embodiments, the to-be-displayed location information is information related to a to-be-displayed location. The to-be-displayed location is a location at which the virtual display content that the user sees is presented.
In the method in the embodiment of the present application, the virtual display content corresponding to the coordination device is projected to the fundus of the user according to the to-be-displayed location, so that the virtual display content that the user sees is imaged at the to-be-displayed location, and the user substantially does not need to switch his or her gaze point back and forth between a display device and the coordination device when interacting with the coordination device by using the virtual display content displayed at the to-be-displayed location, which better conforms to a use habit of the user and improves the user experience.
In a possible implementation manner of the embodiment of the present application, the coordination device preferably is a wearable device, and more preferably is a wearable device having a weak display capability, such as a smart watch, a smart ring, a smart band, or a smart necklace. Certainly, those skilled in the art may know that, the present application may be further applied to another portable device or a fixed device that has a weak display capability, or applied to a scenario in which the coordination device has a display capability but the display capability needs to be enhanced.
In a possible implementation manner of the embodiment of the present application, in the location information obtaining step S110, information corresponding to a current to-be-displayed location may be obtained in multiple manners, for example, in one of the following several manners:
In the foregoing method 1), there are multiple methods for obtaining the current device location information of the coordination device, for example:
In a possible implementation manner of the embodiment of the present application, the current device location information of the coordination device may be received from the coordination device. That is, the coordination device obtains and sends the current device location information of the coordination device, and then, in the method embodiment, the location information is received. The coordination device, for example, may obtain a location of the coordination device by using a location sensor such as a positioning module (for example, a global positioning system or indoor positioning).
In another possible implementation manner of the embodiment of the present application, the current device location information of the coordination device may also be received from another external device. Herein, the another external device may be another portable or wearable device, for example, a mobile phone with strong processing performance; or the another external device may be a device such as a remote server (for example, a cloud server). The coordination device transfers the current device location information of the coordination device to the another external device, and then, the another external device forwards the information, to cause that, in the method in the embodiment of the present application, the current device location information of the coordination device is received indirectly. In addition, in another possible implementation manner of the embodiment of the present application, the another external device may obtain, by itself, the current device location information of the coordination device, and then, forward the information to be received in the method in the embodiment of the present application, and for example, the another external device may be an external positioning apparatus.
In a possible implementation manner of the embodiment of the present application, the detecting the current device location information of the coordination device comprises:
For example, in a possible implementation manner of the embodiment of the present application, information that the user is currently gazing at the coordination device may be obtained in a manner of interacting with the user; in this case, after the location of the current gaze point of the user is detected, the location of the gaze point is the current device location information of the coordination device.
In another possible implementation manner of the embodiment of the present application, an object corresponding to the location of the gaze point may be determined, and the current device location information of the coordination device is determined when it is determined that the object is the coordination device.
The interaction information of the user may be the current device location information of the coordination device that is manually input by the user; or it may be interaction information, detected by identifying an action or a gesture of the user, that is related to the current device location of the coordination device and is used to determine the current device location information of the coordination device.
Using a smart watch as an example of the coordination device: a relative location relationship (for example, a distance and an angle) between the smart watch and an eye of the user when the user normally looks at the smart watch and needs to interact with it is obtained according to a feature of the user, for example, a shape feature or a use habit feature (which wrist the watch is worn on, the location of the watch on the wrist, the amplitudes by which a shoulder and an elbow are lifted and bent when the user looks at the watch, and the like); the obtained data is then stored locally, and the location related information is subsequently obtained locally and directly.
Certainly, those skilled in the art may know that, another method for obtaining a location of the coordination device may also be used in the method in the embodiment of the present application.
There are multiple manners of detecting the location of the current gaze point of the user in the item b), which for example, comprise one or more of the following:
Certainly, those skilled in the art may know that, in addition to the foregoing several methods for detecting the gaze point, another method that can be used to detect the gaze point of the eye of the user can also be used in the method in the embodiment of the present application.
The detecting the location of the current gaze point of the user by using the method iii) comprises:
The optical parameter of the eye when the clearest image is acquired is obtained by analyzing and processing the image at the fundus of the eye, so as to obtain through calculation a location of a current focal point of the sight line, which provides a basis for further detecting an observing behavior of an observer based on the accurate location of the focal point.
The image presented at the “fundus” herein mainly is an image presented on a retina, which may be an image of the fundus, or an image of another object projected to the fundus, such as a light spot pattern mentioned below.
In step S112, a focal length of an optical device on the optical path between the eye and the acquisition location and/or a location of the optical device on the optical path may be adjusted, and the clearest image at the fundus can be obtained when the optical device is at a certain location or in a certain state. The adjustment may be continuous and in real time.
In a possible implementation manner of the method in the embodiment of the present application, the optical device may be a focal length adjustable lens, configured to adjust the focal length of the optical device by adjusting a refractive index and/or a shape of the optical device. Specifically: 1) The focal length is adjusted by adjusting curvature of at least one surface of the focal length adjustable lens, for example, the curvature of the focal length adjustable lens is adjusted by increasing or decreasing a liquid medium in a cavity formed by a two-layer transparent layer; and 2) the focal length is adjusted by changing the refractive index of the focal length adjustable lens, for example, the focal length adjustable lens is filled with a specific liquid crystal medium, an arrangement of the liquid crystal medium is adjusted by adjusting a voltage of an electrode corresponding to the liquid crystal medium, and therefore, the refractive index of the focal length adjustable lens is changed.
In another possible implementation manner of the method in the embodiment of the present application, the optical device may be a group of lenses, configured to adjust a focal length of the group of lenses by adjusting relative locations between the lenses in the group of lenses. Alternatively, one or more lenses in the group of lenses are the foregoing focal length adjustable lenses.
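For illustration only (generic optics, not the application's own notation): for two thin lenses with focal lengths f_1 and f_2 separated by a distance t, the combined focal length f of the group satisfies the well-known relation below, which is why adjusting the relative locations of the lenses changes the focal length of the group.

```latex
% Generic thin-lens combination; notation is illustrative, not taken from the application.
\frac{1}{f} = \frac{1}{f_1} + \frac{1}{f_2} - \frac{t}{f_1 f_2}
```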
In addition to the foregoing two manners of changing an optical path parameter of a system by using a feature of the optical device, the optical path parameter of the system may also be changed by adjusting the location of the optical device on the optical path.
In addition, in the method in the embodiment of the present application, step S113 further comprises:
The adjustment in step S112 makes it possible to acquire the clearest image; however, the clearest image still needs to be found by using step S113, and the optical parameter of the eye can then be obtained through calculation according to the clearest image and a known optical path parameter.
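For illustration only, the following sketch (Python, using the OpenCV library) shows one way steps S112 and S113 could be realized in software: sweep an adjustable focal-length setting, score each captured fundus image with a sharpness measure, and keep the clearest image together with the real-time imaging parameter at which it was acquired. The hooks set_focal_length and capture_fundus_image are hypothetical device interfaces, not part of the application.

```python
# A minimal sketch of steps S112/S113: sweep the adjustable optical device and
# keep the focal-length setting that yields the sharpest (clearest) fundus image.
# set_focal_length() and capture_fundus_image() are hypothetical device hooks.
import cv2


def sharpness(image):
    # Variance of the Laplacian is a common focus/clarity measure.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    return cv2.Laplacian(gray, cv2.CV_64F).var()


def find_clearest_image(focal_lengths, set_focal_length, capture_fundus_image):
    best = None
    for f in focal_lengths:
        set_focal_length(f)            # adjust the optical device (step S112)
        img = capture_fundus_image()   # acquire an image presented at the fundus
        score = sharpness(img)
        if best is None or score > best[0]:
            best = (score, f, img)
    # Return the clearest image and the real-time imaging parameter (focal length)
    # recorded when it was acquired, for the later optical-parameter calculation.
    return best[2], best[1]
```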
In the method in the embodiment of the present application, step S113 may further comprise:
In order not to affect normal viewing of the eye, preferably, the light spot is an eye-invisible infrared light spot. In this case, in order to reduce interference from other spectrums, a step of filtering out, from the projected light spot, light other than the eye-invisible light that can transmit through a filter may be performed.
Correspondingly, the method in the embodiment of the present application may further comprise the following steps:
It should be noted that, a special case of controlling the luminance of the projected light spot is starting or stopping projection. For example, when an observer keeps gazing at a point, the projection may be stopped periodically; or when a fundus of the observer is bright enough, the projection may be stopped, and fundus information is used to detect a distance between a focal point of a current sight line of an eye and the eye.
In addition, the luminance of the projected light spot may be further controlled according to ambient light.
Preferably, in the method in the embodiment of the present application, step S113 may further comprise:
The optical parameter of the eye that is obtained in step S1132 may comprise an optical axis direction of the eye obtained according to a feature of the eye when the clearest image is acquired. The feature of the eye herein may be obtained from the clearest image, or may be obtained in another manner. The optical axis direction of the eye corresponds to a direction of a sight line at which the eye gazes. Specifically, the optical axis direction of the eye may be obtained according to a feature of the fundus when the clearest image is obtained. Determining the optical axis direction of the eye by using the feature of the fundus has a higher accuracy.
When the light spot pattern is projected to the fundus, a size of the light spot pattern may be greater than that of a fundus visible region or may be less than that of the fundus visible region.
When an area of the light spot pattern is less than or equal to that of the fundus visible region, a classic feature point matching algorithm (for example, the scale invariant feature transform (SIFT) algorithm) may be used to determine the optical axis direction of the eye by detecting a location, of the light spot pattern on the image, relative to the fundus.
When the area of the light spot pattern is greater than or equal to that of the fundus visible region, a location, of the light spot pattern on the obtained image, relative to an original light spot pattern (obtained through image calibration) may be used to determine the optical axis direction of the eye to determine a direction of a sight line of an observer.
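As a hedged illustration of the feature-matching approach mentioned above, the sketch below uses OpenCV's SIFT implementation to estimate where the projected light spot pattern lies in the captured fundus image relative to the calibrated reference pattern; mapping the resulting offset to an optical axis direction would require a device-specific calibration that is only stubbed here. Grayscale input images and the 0.75 ratio-test threshold are assumptions.

```python
# Sketch of locating the light spot pattern in the captured fundus image with
# SIFT feature matching (OpenCV). The mapping from the measured offset to an
# optical axis direction is device-specific and is not shown.
import cv2
import numpy as np


def locate_spot_pattern(reference_pattern, fundus_image):
    # Inputs are assumed to be grayscale uint8 images.
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(reference_pattern, None)
    kp_img, des_img = sift.detectAndCompute(fundus_image, None)
    if des_ref is None or des_img is None:
        return None  # no detectable features

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_ref, des_img, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    if len(good) < 4:
        return None  # too few reliable matches to estimate a location

    # Average displacement of matched keypoints approximates the location of the
    # light spot pattern relative to the reference (calibration) pattern.
    offsets = [np.array(kp_img[m.trainIdx].pt) - np.array(kp_ref[m.queryIdx].pt)
               for m in good]
    return np.mean(offsets, axis=0)  # (dx, dy); calibration maps this to an angle
```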
In another possible implementation manner of the method in the embodiment of the present application, the optical axis direction of the eye may also be obtained according to a feature of a pupil of the eye when the clearest image is obtained. The feature of the pupil of the eye herein may be obtained from the clearest image, or may be obtained in another manner. Obtaining the optical axis direction of the eye by using the feature of the pupil of the eye is the prior art, and is not described in detail herein again.
In addition, the method in the embodiment of the present application may further comprise a step of calibrating the optical axis direction of the eye, so as to more accurately determine the optical axis direction of the eye.
In the method in the embodiment of the present application, the known imaging parameter comprises a fixed imaging parameter and a real-time imaging parameter, wherein the real-time imaging parameter is parameter information of the optical device when the clearest image is obtained, and the parameter information may be obtained in a manner of real-time recording when the clearest image is obtained.
After a current optical parameter of the eye is obtained, the location of the gaze point of the eye may be obtained with reference to the distance between the focal point of the eye and the eye, which is obtained through calculation (a specific process will be described in detail in combination with the apparatus part).
Preferably, in a possible implementation manner of the embodiment of the present application, the to-be-displayed location determining step comprises:
The preset rule herein, for example, may be that the to-be-displayed location is located above, below, at a left side of, at a right side of, surrounding the current location of the coordination device, or the like; or after ambient environment information of the coordination device is obtained by using a sensor (for example, an image acquisition sensor), the to-be-displayed location information may be determined according to the preset rule and the environment information.
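A minimal sketch of such a preset rule, assuming the device location is available as an axis-aligned region in viewer coordinates (the Region type, the gap value, and the convention that y increases downward are all illustrative assumptions):

```python
# Sketch of the to-be-displayed location determining step: place the virtual
# display region relative to the detected device location according to a preset
# rule. Region is a hypothetical axis-aligned rectangle in viewer coordinates.
from dataclasses import dataclass


@dataclass
class Region:
    x: float       # center x (y increases downward in this sketch)
    y: float       # center y
    width: float
    height: float


def to_be_displayed_location(device: Region, rule: str = "above",
                             gap: float = 10.0) -> Region:
    if rule == "above":
        return Region(device.x, device.y - device.height - gap,
                      device.width * 2, device.height * 2)
    if rule == "below":
        return Region(device.x, device.y + device.height + gap,
                      device.width * 2, device.height * 2)
    if rule == "right":
        return Region(device.x + device.width + gap, device.y,
                      device.width * 2, device.height * 2)
    # Default: a larger region surrounding and centered on the device.
    return Region(device.x, device.y, device.width * 3, device.height * 3)
```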
In the embodiment of the present application, the current display information corresponds to an operation of the user on the coordination device. For example, if the user enters a number on the coordination device by using a key, the current display information comprises information corresponding to the entered number, and entering of the number is then shown when the current display information is displayed. There are multiple methods for obtaining the current display information corresponding to the coordination device in step S120, that is, the display information obtaining step, which, for example, are one or more of the following:
In this method, the obtained display information is sent out by the coordination device.
Similar to the foregoing external device, in a possible implementation manner of the embodiment of the present application, the external device may be a portable device such as a mobile phone, a tablet computer, or a notebook computer, or may be a device such as a cloud server.
In a possible implementation manner, when the coordination device works with the embodiment of the present application in a coordinated manner, the external device may obtain the corresponding display information from the coordination device by using a communications module (which comprises a wired communications module and a wireless communications module, and preferably is the wireless communications module), and then sends out the display information.
In another possible implementation manner of the embodiment of the present application, the coordination device may only send operation related information to the external device, then, the external device generates the current display information after processing the operation related information, and then, sends the current display information to a device corresponding to the method in the embodiment of the present application. This manner, for example, may be preferably applied to a scenario in which a processing capability of the coordination device is weak, and information processing needs to be performed by the external device to generate the display information.
Similar to the manner in which the current display information is generated through processing by the external device, in this implementation manner the current display information is generated locally and directly. The method, for example, is preferably applied to a scenario in which a processing capability of the coordination device is weak and a processing capability of a device corresponding to the method in the embodiment of the present application is strong.
In order to enable the virtual display content projected in the projection step to better conform to the visual sense of the user from the user's perspective, in a possible implementation manner of the embodiment of the present application, in the content generating step, the virtual display content is generated according to the to-be-displayed location information and the display information. That is, the to-be-displayed location information is also considered when the virtual display content is generated. For example, in a possible implementation manner of the embodiment of the present application, the virtual display content comprises perspective information corresponding to the current to-be-displayed location. The perspective information, for example, may comprise depth information, perspective angle information, and the like.
In order to make the virtual display content that the user sees have a three-dimensional display effect and be more real, in a possible implementation manner of the embodiment of the present application, virtual display content separately corresponding to two eyes of the user is generated in the content generating step. That is, according to a three-dimensional display principle, virtual display content corresponding to the left eye and virtual display content corresponding to the right eye are separately generated, so that the virtual display content that the user sees has a suitable three-dimensional display effect. For example, it may give the user a feeling that, the virtual display content that the user sees is displayed by the coordination device, which brings better user experience.
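A minimal sketch of generating the two-eye content with horizontal parallax, assuming a simple pinhole projection model; the interpupillary distance, the focal length in pixels, and the wrap-around shift are illustrative simplifications, not the application's method:

```python
# Sketch of generating left-eye and right-eye virtual display content with a
# horizontal parallax derived from the depth of the to-be-displayed location.
import numpy as np


def stereo_disparity_px(depth_m: float,
                        interpupillary_distance_m: float = 0.063,
                        focal_length_px: float = 1200.0) -> float:
    # Simple pinhole model: disparity (in pixels) falls off with depth.
    return focal_length_px * interpupillary_distance_m / depth_m


def generate_stereo_pair(content: np.ndarray, depth_m: float):
    d = int(round(stereo_disparity_px(depth_m) / 2))
    # np.roll wraps around at the borders; a real renderer would pad instead.
    left = np.roll(content, d, axis=1)    # shift right for the left-eye view
    right = np.roll(content, -d, axis=1)  # shift left for the right-eye view
    return left, right
```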
In a possible implementation manner of the embodiment of the present application, the projection step S140 comprises:
In the embodiment of the present application, the projection imaging parameter is generated according to the current to-be-displayed location information, to cause the virtual display content that the user sees to be imaged on the to-be-displayed location, which facilitates interaction between the user and the coordination device.
In a possible implementation manner of the embodiment of the present application, step S142 comprises:
The imaging parameter herein comprises a focal length, an optical axis direction, and the like of the optical device. Through the adjustment, the virtual display content can be appropriately projected to the fundus of the user; for example, the virtual display content is clearly imaged at the fundus of the user by adjusting the focal length of the optical device. Moreover, from the perspective of three-dimensional display, in addition to directly generating an image for the left eye and an image for the right eye with parallax when the virtual display content is generated, the virtual display content corresponding to the two eyes of the user may be the same but projected to the two eyes with a certain deviation to achieve three-dimensional projection; in this case, for example, an optical axis parameter of the optical device may be adjusted.
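Purely as an illustrative sketch (the sign conventions, the baseline power, and the coordinate assumptions are mine, not the application's), projection imaging parameters could be generated from the to-be-displayed location roughly as follows: aim the optical axis at the location and offset the lens power by the vergence of the target depth.

```python
# Sketch of generating projection imaging parameters from the to-be-displayed
# location: an assumed focal-power offset for depth and an optical-axis angle
# aimed at the location. Sign conventions are illustrative only.
import math
from dataclasses import dataclass


@dataclass
class ImagingParameters:
    lens_power_diopters: float    # focal power of the adjustable optical device
    optical_axis_yaw_deg: float   # horizontal aiming of the projection axis
    optical_axis_pitch_deg: float # vertical aiming of the projection axis


def generate_imaging_parameters(display_x_m, display_y_m, display_depth_m,
                                baseline_power_d=20.0):
    # Aim the optical axis at the to-be-displayed location.
    yaw = math.degrees(math.atan2(display_x_m, display_depth_m))
    pitch = math.degrees(math.atan2(display_y_m, display_depth_m))
    # Offset the lens power by the vergence (1/d diopters) of the target depth,
    # so the content is perceived at roughly display_depth_m (assumed convention).
    power = baseline_power_d - 1.0 / display_depth_m
    return ImagingParameters(power, yaw, pitch)
```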
When the user sees the virtual display content, a direction of a sight line of the eye may change, and it is required that the virtual display content be well projected to the fundus of the user when the directions of the sight line of the eye of the user are different. Therefore, in a possible implementation manner of the embodiment of the present application, the projection step S140 further comprises:
In a possible implementation manner of the embodiment of the present application, a curved optical device such as a curved beam splitter may be needed to implement the function in step S143. However, the virtual display content is generally deformed after passing through the curved optical device. Therefore, in a possible implementation manner of the embodiment of the present application, the projection step S140 further comprises:
In a possible implementation manner of the embodiment of the present application, in order to make a size of the virtual display content that the user sees more suitable, the projection step further comprises:
In a possible implementation manner of the embodiment of the present application, the control may be implemented by adjusting the projection imaging parameter in step S142, or, in another possible implementation manner, the size of the virtual display content that is to be projected may be controlled before the virtual display content is projected. The control may be performed automatically, or may be performed according to interaction with the user and based on an intention of the user; for example, the intention may be identified by using a gesture of the user, or the size may be adjusted directly by using a key on a device corresponding to the method in the embodiment of the present application.
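A trivial sketch of such size control, assuming a gesture-derived scale factor and arbitrary clamping bounds (both assumptions for illustration only):

```python
# Sketch of controlling the size of the to-be-projected virtual display content,
# e.g., from a pinch-gesture scale factor or repeated key presses.
def adjust_content_size(current_scale: float, gesture_scale: float,
                        min_scale: float = 0.5, max_scale: float = 3.0) -> float:
    new_scale = current_scale * gesture_scale
    return max(min_scale, min(max_scale, new_scale))  # keep within sensible bounds
```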
Preferably, in a possible implementation manner of the embodiment of the present application, in order to reduce energy consumption and avoid interactive display that is not needed by the user, coordinated interaction with the coordination device may be triggered by using a specific trigger action, and when the coordinated interaction is not triggered, interaction with the coordination device is not performed; therefore, the method further comprises:
According to the method in the embodiment of the present application, virtual display content corresponding to a coordination device is projected to a fundus of a user, so that the user sees the virtual display content near the coordination device, which facilitates interaction between the user and the coordination device, and improves the user experience.
As shown in
The system in the embodiment of the present application projects the virtual display content corresponding to the coordination device to the fundus of the user according to a to-be-displayed location, so that the virtual display content that the user sees is imaged at the to-be-displayed location, and the sight line of the user substantially does not need to switch back and forth between a display device and the coordination device when the user interacts with the coordination device by using the virtual display content displayed at the to-be-displayed location, which better conforms to a use habit of the user and improves the user experience.
In a possible implementation manner of the embodiment of the present application, the coordination device preferably is a wearable device, and more preferably is a wearable device having a weak display capability, such as a smart watch, a smart ring, a smart band, or a smart necklace. Certainly, those skilled in the art may know that, the present application may be further applied to another portable device or a fixed device that has a weak display capability, or applied to a scenario in which the coordination device has a display capability but the display capability needs to be enhanced.
In a possible implementation manner of the embodiment of the present application, the interactive projection display system may be a wearable device used near an eye, which may be separately implemented, or may be implemented on an existing device, for example, may be an apparatus near the eye, such as smart glasses (comprising: frame glasses, contact lenses, safety goggles, and the like). For a user that already has an eye problem such as a refractive error and needs to wear glasses for correction of the refractive error, or the like, the system in the present application may be directly implemented on the glasses for correction of the refractive error, and does not bring extra burden to the user. Certainly, in another possible implementation manner of the embodiment of the present application, the interactive projection display system may be implemented on, for example, helmet eyepieces or another optical device that is used in coordination with an eye of the user.
There may be multiple forms of the location information obtaining apparatus 310 in the embodiment of the present application, which, for example, comprise at least one of the following multiple forms:
For example, a gesture and a location of a hand of the user are obtained by using a detection sensor, to obtain a current to-be-displayed location set by the hand of the user (for example, the user draws a block by hand, and a region corresponding to the block is the current to-be-displayed location needed by the user), or the to-be-displayed location information may be input manually by the user (for example, data such as a distance between the to-be-displayed location and an eye of the user, an angle, and a size of the to-be-displayed location is input).
In the foregoing method 1), there are multiple structures of the device location obtaining module, which, for example, comprise at least one of the following several structures:
In a possible implementation manner of the embodiment of the present application, the communications submodule may receive the current device location information of the coordination device from the coordination device. That is, the coordination device obtains and sends the current device location information of the coordination device, and the communications submodule then receives the location information. The coordination device, for example, may obtain a location of the coordination device by using a location sensor such as a positioning module (for example, a global positioning system or indoor positioning).
In another possible implementation manner of the embodiment of the present application, the communications submodule may also receive the current device location information of the coordination device from another external device. Herein, the another external device may be another portable or wearable device, for example, a mobile phone with strong processing performance; or the another external device may be a device such as a remote server (for example, a cloud server). The coordination device transfers the current device location information of the coordination device to the another external device, then, the another external device forwards the information, and then, the communications submodule in the embodiment of the present application receives the current device location information of the coordination device. In addition, in another possible implementation manner of the embodiment of the present application, the another external device may obtain, by itself, the current device location information of the coordination device, and then, forward the information to be received by the communications submodule in the embodiment of the present application, and for example, the another external device may be an external positioning apparatus.
Preferably, in a possible implementation manner of the embodiment of the present application, the detection submodule comprises:
For example, in a possible implementation manner of the embodiment of the present application, the system in the present application may obtain, in a manner of interacting with the user, information that the user is currently gazing at the coordination device; in this case, after the location of the current gaze point of the user is detected, the location of the gaze point is the current device location information of the coordination device.
In another possible implementation manner of the embodiment of the present application, the system in the present application may determine an object corresponding to the location of the gaze point, and determine the current device location information of the coordination device when determining that the object is the coordination device.
In another possible implementation manner of the embodiment of the present application, the detection submodule comprises:
The interaction detection unit, for example, may be an input signal detection device; the interaction information of the user may be the current device location information of the coordination device that is manually input by the user; the device location determining unit, for example, may be an input signal analysis device, configured to obtain the current device location information of the coordination device according to the input.
The interaction detection unit, for example, may also be an image acquisition apparatus, for example, a camera, and the device location determining unit, for example, may be an image processing apparatus, configured to analyze an image acquired by the image acquisition apparatus, to determine the current device location information of the coordination device. The embodiment of the present application can detect interaction information related to a current location of the coordination device by identifying an action or a gesture of the user, and determine the current device location information of the coordination device according to the interaction information.
Using a smart watch as an example of the coordination device: a relative location relationship (for example, a distance and an angle) between the smart watch and an eye of the user when the user normally looks at the smart watch and needs to interact with it is obtained according to a feature of the user, for example, a shape feature or a use habit feature (which wrist the watch is worn on, the location of the watch on the wrist, the amplitudes by which a shoulder and an elbow are lifted and bent when the user looks at the watch, and the like); the obtained data is then stored locally, and the location related information is subsequently obtained locally and directly.
Certainly, those skilled in the art may know that, another apparatus that can obtain a location of the coordination device may also be used in the system in the embodiment of the present application.
A manner of detecting, by the gaze point detection unit in the item b), a location of a focal point of a sight line of an observer may be any one or more in i) to iii) mentioned in the method embodiment shown in
Certainly, those skilled in the art may know that, in addition to the foregoing several gaze point detection units, another apparatus that can be used to detect the gaze point of the eye of the user can also be used in the system in the embodiment of the present application.
Preferably, in a possible implementation manner of the embodiment of the present application, the to-be-displayed location determining module is configured to determine the current to-be-displayed location information according to the current device location information of the coordination device and a preset rule. The preset rule herein, for example, may be that the to-be-displayed location is located above, below, at a left side of, at a right side of, surrounding the current location of the coordination device, or the like; or after ambient environment information of the coordination device is obtained by using a sensor (for example, an image acquisition sensor), the to-be-displayed location information may be determined according to the preset rule and the environment information.
In the embodiment of the present application, the current display information corresponds to an operation of the user on the coordination device. For example, if the user enters a number on the coordination device by using a key, the current display information comprises information corresponding to the entered number, and entering of the number is then shown when the current display information is displayed. There may be multiple forms of the display information obtaining apparatus 320, which, for example, are one or more of the following:
For the three specific implementation manners of the display information obtaining apparatus 320, refer to corresponding descriptions in step S120 in the method embodiment, which are not described in detail herein again. In addition, functions of the communications modules in the foregoing three implementation manners and the communications submodule recorded in the location information obtaining apparatus 310 may be implemented by a same device.
In order to enable the virtual display content projected by the projection apparatus 340 to better conform to the visual sense of the user from the user's perspective, in a possible implementation manner of the embodiment of the present application, the content generating apparatus 330 generates the virtual display content according to the to-be-displayed location information and the display information. The to-be-displayed location information is also considered when the virtual display content is generated. For example, in a possible implementation manner of the embodiment of the present application, the content generating apparatus 330 generates virtual display content that comprises perspective information corresponding to the current to-be-displayed location. The perspective information, for example, may comprise depth information, perspective angle information, and the like.
In order to make the virtual display content that the user sees have a three-dimensional display effect and be more real, in a possible implementation manner of the embodiment of the present application, the content generating apparatus 330 generates virtual display content separately corresponding to two eyes of the user. For a specific implementation method, refer to corresponding descriptions in the method embodiment, which is not described in detail herein again.
In a possible implementation manner of the embodiment of the present application, the projection apparatus 340 comprises:
In another possible implementation manner of the embodiment of the present application, the imaging adjustment module comprises:
In another possible implementation manner of the embodiment of the present application, the imaging adjustment module comprises:
In a possible implementation manner of the embodiment of the present application, the projection apparatus comprises:
In a possible implementation manner of the embodiment of the present application, the projection apparatus comprises:
The structure of the projection apparatus is further described in the implementation manners shown in the following
In a possible implementation manner of the embodiment of the present application, the system further comprises:
In a possible implementation manner of the embodiment of the present application, the gaze point detection unit comprises:
The following further describes, by using implementation manners corresponding to
As shown in
The gaze point detection apparatus 500 analyzes and processes the image at the fundus of the eye, to obtain the optical parameter of the eye when the image acquisition device obtains the clearest image, and can obtain through calculation a location of a current gaze point of the eye, which provides a basis for further implementing a self-adaptive operation of the eye.
The image presented at the “fundus” herein mainly is an image presented on a retina, which may be an image of the fundus, or an image of another object projected to the fundus. The eye herein may be a human eye or may be an eye of another animal.
As shown in
In a possible implementation manner of the embodiment of the present application, the adjustable imaging device 520 comprises: an adjustable lens device 521, located on the optical path between the eye and the image acquisition device 510, a focal length of the adjustable lens device being adjustable and/or a location of the adjustable lens device on the optical path being adjustable. By using the adjustable lens device 521, a system equivalent focal length between the eye and the image acquisition device 510 is adjustable, and the image acquisition device 510 obtains the clearest image at the fundus at a certain location of the adjustable lens device 521 or in a certain state through adjustment of the adjustable lens device 521. In this implementation manner, the adjustable lens device 521 performs the adjustment continuously and in real time in the detection process.
Preferably, in a possible implementation manner of the embodiment of the present application, the adjustable lens device 521 is a focal length adjustable lens, configured to adjust a focal length of the focal length adjustable lens by adjusting a refractive index and/or a shape of the focal length adjustable lens. Specifically: 1) The focal length is adjusted by adjusting curvature of at least one surface of the focal length adjustable lens, for example, the curvature of the focal length adjustable lens is adjusted by increasing or decreasing a liquid medium in a cavity formed by a two-layer transparent layer; and 2) the focal length is adjusted by changing the refractive index of the focal length adjustable lens, for example, the focal length adjustable lens is filled with a specific liquid crystal medium, an arrangement of the liquid crystal medium is adjusted by adjusting a voltage of an electrode corresponding to the liquid crystal medium, and therefore, the refractive index of the focal length adjustable lens is changed.
In another possible implementation manner of the embodiment of the present application, the adjustable lens device 521 may comprise a group of lenses formed by multiple lenses, configured to adjust a focal length of the group of lenses by adjusting relative locations between the lenses in the group of lenses. The group of lenses may also comprise a lens whose imaging parameter such as a focal length is adjustable.
In addition to the foregoing two manners of changing an optical path parameter of a system by using a feature of the adjustable lens device 521, the optical path parameter of the system may also be changed by using the location of the adjustable lens device 521 on the optical path.
Preferably, in a possible implementation manner of the embodiment of the present application, in order not to affect the viewing experience of the user for an observed object, and in order to allow the system to be portably applied to a wearable device, the adjustable imaging device 520 further comprises: a beam splitting unit 522, configured to form an optical transfer path between the eye and the observed object, and an optical transfer path between the eye and the image acquisition device 510. In this way, the optical path can be folded, a size of the system is reduced, and at the same time, other visual experience of the user is affected as little as possible.
Preferably, in this implementation manner, the beam splitting unit comprises: a first beam splitting unit, located between the eye and the observed object, and configured to transmit light from the observed object to the eye, and transfer light from the eye to the image acquisition device.
The first beam splitting unit may be a beam splitter, a beam splitting optical waveguide (comprising an optical fiber), or another suitable splitting device.
In a possible implementation manner of the embodiment of the present application, the image processing device 530 in the system comprises an optical path calibration module, configured to calibrate an optical path of the system, for example, perform alignment and calibration on an optical axis of the optical path, to ensure the precision of measurement.
In a possible implementation manner of the embodiment of the present application, the image processing device 530 comprises:
In this implementation manner, the adjustable imaging device 520 causes the image acquisition device 510 to obtain the clearest image. However, the clearest image needs to be found by using the image analysis module 531. In this case, the optical parameter of the eye can be obtained through calculation according to the clearest image and the optical path parameter that is known to the system. Herein, the optical parameter of the eye may comprise an optical axis direction of the eye.
In a possible implementation manner of the embodiment of the present application, preferably, the system further comprises: a projection device 540, configured to project a light spot to the fundus. In a possible implementation manner, a micro projector may be used to implement the function of the projection device.
The projected light spot herein may have no specific pattern and is only used to illuminate the fundus.
In an exemplary implementation manner of the embodiment of the present application, the projected light spot comprises a pattern with abundant features. The abundant features of the pattern may facilitate detection and improve the detection precision.
In order not to affect normal viewing of the eye, preferably, the light spot is an eye-invisible infrared light spot.
In this case, in order to reduce interference from other spectrums:
Preferably, in a possible implementation manner of the embodiment of the present application, the image processing device 530 further comprises:
For example, the projection control module 534 may adjust the luminance self-adaptively according to a feature of the image obtained by the image acquisition device 510. Herein, the feature of the image comprises a contrast of the feature of the image, a texture feature, and the like.
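As a hedged sketch of such self-adaptive luminance control, the following adjusts an assumed normalized luminance level according to a simple global contrast measure of the captured fundus image; the target contrast and step size are illustrative values only:

```python
# Sketch of the projection control module's self-adaptive luminance adjustment:
# raise the light-spot luminance when the captured fundus image has low
# contrast, and lower it otherwise.
import numpy as np


def adjust_spot_luminance(fundus_image: np.ndarray, current_level: float,
                          target_contrast: float = 40.0,
                          step: float = 0.05) -> float:
    gray = fundus_image if fundus_image.ndim == 2 else fundus_image.mean(axis=2)
    contrast = float(gray.std())  # a simple global contrast measure
    if contrast < target_contrast:
        current_level = min(1.0, current_level + step)  # brighten the light spot
    else:
        current_level = max(0.0, current_level - step)  # dim it to save energy
    return current_level
```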
Herein, a special case of controlling the luminance of the light spot projected by the projection device is turning on or turning off the projection device. For example, when the user keeps gazing at a point, the projection device may be turned off periodically; or when the fundus of the user is bright enough, a light source may be turned off, and a distance between a gaze point of a current sight line of an eye and the eye may be detected by only using fundus information.
In addition, the projection control module 534 may further control, according to ambient light, the luminance of the light spot projected by the projection device.
Preferably, in a possible implementation manner of the embodiment of the present application, the image processing device 530 further comprises: an image calibration module 533, configured to calibrate a fundus image, to obtain at least one reference image corresponding to the image presented at the fundus.
The image analysis module 531 performs comparative calculation on the image obtained by the image acquisition device 510 and the reference image, to obtain the clearest image. Herein, the clearest image may be an obtained image that is the least different from the reference image. In this implementation manner, a difference between a currently obtained image and the reference image is calculated by using an existing image processing algorithm, for example, a classic automatic phase difference focusing algorithm.
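A minimal sketch of this comparison, assuming the difference measure is a plain mean absolute difference (the description only requires some existing image processing algorithm, such as an automatic phase difference focusing algorithm, so this metric is an illustrative substitute):

```python
# Sketch of the image analysis module's comparison against the calibrated
# reference image: the "clearest" captured image is taken as the one that is
# least different from the reference image.
import numpy as np


def pick_clearest(captured_images, reference_image):
    ref = reference_image.astype(np.float32)

    def difference(img):
        return float(np.mean(np.abs(img.astype(np.float32) - ref)))

    return min(captured_images, key=difference)
```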
Preferably, in a possible implementation manner of the embodiment of the present application, the parameter calculation module 532 comprises:
The feature of the eye herein may be obtained from the clearest image, or may be obtained in another manner. The optical axis direction of the eye indicates a direction of a sight line at which the eye gazes.
Preferably, in a possible implementation manner of the embodiment of the present application, the unit 5321 for determining an optical axis direction of an eye comprises: a first determining subunit, configured to obtain the optical axis direction of the eye according to a feature of the fundus when the clearest image is obtained. Compared with obtaining the optical axis direction of the eye by using features of a pupil and an eyeball surface, determining the optical axis direction of the eye by using the feature of the fundus has a higher accuracy.
When the light spot pattern is projected to the fundus, a size of the light spot pattern may be greater than that of a fundus visible region or may be less than that of the fundus visible region.
When an area of the light spot pattern is less than or equal to that of the fundus visible region, a classic feature point matching algorithm (for example, the SIFT algorithm) may be used to determine the optical axis direction of the eye by detecting a location, of the light spot pattern on the image, relative to the fundus.
When the area of the light spot pattern is greater than or equal to that of the fundus visible region, a location, of the light spot pattern on the obtained image, relative to an original light spot pattern (obtained by using the image calibration module) may be used to determine the optical axis direction of the eye to determine a direction of a sight line of the user.
In another possible implementation manner of the embodiment of the present application, the unit 5321 for determining an optical axis direction of an eye comprises: a second determining subunit, configured to obtain the optical axis direction of the eye according to a feature of a pupil of the eye when the clearest image is obtained. The feature of the pupil of the eye herein may be obtained from the clearest image, or may be obtained in another manner. Obtaining the optical axis direction of the eye by using the feature of the pupil of the eye is the prior art, and is not described in detail herein again.
Preferably, in a possible implementation manner of the embodiment of the present application, the image processing device 530 further comprises: a module 535 for calibrating an optical axis direction of an eye, configured to calibrate the optical axis direction of the eye, so as to more accurately determine the foregoing optical axis direction of the eye.
In this implementation manner, the imaging parameter that is known to the system comprises a fixed imaging parameter and a real-time imaging parameter, wherein the real-time imaging parameter is parameter information of the adjustable lens device when the clearest image is acquired, and the parameter information may be obtained in a manner of real-time recording when the clearest image is acquired.
The distance between the gaze point of the eye and the eye is obtained through calculation below after a current optical parameter of the eye is obtained, specifically:
As shown in formula (3), a distance d_o between the currently observed object 5010 (the gaze point of the eye) and the eye equivalent lens 5030 may be obtained from (1) and (2).
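Formulas (1) to (3) themselves appear in the accompanying drawings; purely for illustration, a standard Gaussian thin-lens relation consistent with this description, with d_o the distance to the observed object, d_e the equivalent distance from the eye equivalent lens 5030 to the retina, and f_e the equivalent focal length of the eye (symbols assumed here, not taken from the drawings), would read:

```latex
% Illustrative reconstruction only; the actual formulas (1)-(3) are in the drawings.
\frac{1}{d_o} + \frac{1}{d_e} = \frac{1}{f_e}
\qquad\Longrightarrow\qquad
d_o = \frac{f_e\, d_e}{d_e - f_e}
```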
Because the optical axis direction of the eye is available from the previous records, and the foregoing distance between the observed object 5010 and the eye is obtained through calculation, the location of the gaze point of the eye can be easily obtained, which provides a basis for subsequent further interaction related to the eye.
In this implementation manner, the image processing device is not shown in
Generally, the fundus is not bright enough, and therefore, it is better to illuminate the fundus; in this implementation manner, the fundus is illuminated by using a light source 640. In order not to affect the user experience, the light source 640 herein preferably is an eye-invisible light source, preferably a near-infrared light source, which does not much affect the eye A and to which the camera 610 is relatively sensitive.
In this implementation manner, the light source 640 is located at an outer side of a spectacle frame at a right side, and therefore, a second beam splitter 650 and the first beam splitter 620 are required to jointly transfer, to the fundus, light emitted by the light source 640. In this implementation manner, the second beam splitter 650 is located in front of an incident surface of the camera 610, and therefore, the second beam splitter 650 also needs to transmit light traveling from the fundus toward the camera 610.
As can be seen, in this implementation manner, in order to improve the user experience and the acquisition clarity of the camera 610, the first beam splitter 620 may preferably have the features of high reflectivity to infrared rays and high transmissivity to visible light. For example, an infrared reflective film may be disposed at one side, of the first beam splitter 620, facing the eye A, to achieve the features described above.
As can be seen from
In another implementation manner of the embodiment of the present application, the apparatus 600 for detecting a gaze point of an eye may be located at one side, of a lens of the glasses 400, close to the eye A; in this case, an optical feature parameter of the lens needs to be obtained in advance, and an influence factor of the lens needs to be considered when the distance of the gaze point is calculated.
In this embodiment, the light emitted by the light source 640 is reflected by the second beam splitter 650, projected by the focal length adjustable lens 630, and reflected by the first beam splitter 620, and then passes through the lens of the glasses 400 to enter the eye of the user, and finally arrives at a retina of the fundus. The camera 610 shoots an image at the fundus by using a pupil of the eye A and through an optical path formed by the first beam splitter 620, the focal length adjustable lens 630, and the second beam splitter 650.
In a possible implementation manner, other parts of the interactive projection display system in the embodiment of the present application are also implemented on the glasses 400, and both the gaze point detection unit and the projection apparatus may comprise: a device having a projection function (such as the projection module in the foregoing projection apparatus, and the projection device of the gaze point detection apparatus), an imaging device with an imaging parameter being adjustable (such as the imaging adjustment module in the foregoing projection apparatus, and the adjustable imaging device in the gaze point detection apparatus), and the like; in a possible implementation manner of the embodiment of the present application, functions of the gaze point detection apparatus and the projection apparatus are implemented by a same device.
As shown in
In a possible implementation manner of the embodiment of the present application, in addition to functioning as imaging adjustment modules of the projection apparatus, the first beam splitter 620, the second beam splitter 650, and the focal length adjustable lens 630 may further function as adjustable imaging devices of the gaze point detection apparatus. Herein, in a possible implementation manner, a focal length of the focal length adjustable lens 630 may be adjusted according to regions, wherein different regions separately correspond to the gaze point detection apparatus and the projection apparatus, and the focal lengths may also be different. Alternatively, the focal length of the focal length adjustable lens 630 is adjusted as a whole, but a front end of a light sensitive unit (such as a CCD) of the micro camera 610 of the gaze point detection apparatus is further provided with another optical device, configured to implement assisted adjustment of the imaging parameter of the gaze point detection apparatus. In addition, in another possible implementation manner, it may be configured that an optical length from a light emitting surface of the light source 640 (that is, a projection location of the virtual display content) to the eye is the same as an optical length from the eye to the micro camera 610, so that when the focal length adjustable lens 630 is adjusted until the micro camera 610 receives the clearest fundus image, the virtual display content projected by the light source 640 is exactly imaged clearly at the fundus.
As can be seen from the above, in the embodiment of the present application, functions of the apparatus for detecting a gaze point of an eye and the projection apparatus in the interactive projection display system may be implemented by using a same device, to cause the whole system to have a simple structure and a small size, and be carried conveniently.
The curved beam splitter 750 is used herein to transfer an image presented at the fundus to an image acquisition device, separately corresponding to the locations of the pupil when the optical axis directions of the eye are different. In this way, the camera can capture a mixed, superimposed image formed from various angles of the eyeball. However, only the part of the fundus seen through the pupil is clearly imaged on the camera, while other parts are out of focus and are not imaged clearly; therefore, the imaging of that part of the fundus is not severely interfered with, and a feature of that part of the fundus can still be detected. Therefore, compared with the implementation manner shown in
In a possible implementation manner of the embodiment of the present application, other parts of the interactive projection display system are also implemented on the glasses 400. In this implementation manner, the gaze point detection apparatus and the projection apparatus may also be multiplexed. Similar to the embodiment shown in
In this case, the second beam splitter 750 is further configured to perform optical path transferring, separately corresponding to locations of a pupil when optical axis directions of an eye are different, between the projection module and the fundus. After passing through the curved second beam splitter 750, the virtual display content projected by the projection device 740 is deformed; therefore, in this implementation manner, the projection apparatus comprises:
As shown in
The structure of the second projection apparatus is similar to the structure, combined with the function of the gaze point detection apparatus, that is recorded in the embodiment shown in
A structure of the first projection apparatus is similar to a structure of the second projection apparatus 820 (the imaging parameter generating module of the projection apparatus is not shown in
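As a minimal, purely illustrative sketch of the reverse-deformation processing mentioned above, the virtual display content could be pre-warped with an inverse distortion map before projection, so that the deformation introduced by the curved second beam splitter 750 is cancelled by the time the content reaches the fundus. The maps `map_x` and `map_y`, the calibration that produces them, and the use of OpenCV's `remap` are assumptions for illustration and are not defined by this embodiment.

```python
import cv2
import numpy as np

def prewarp_content(content: np.ndarray,
                    map_x: np.ndarray,
                    map_y: np.ndarray) -> np.ndarray:
    """Pre-warp the virtual display content with a precomputed inverse distortion map.

    For every output pixel (y, x), map_x[y, x] and map_y[y, x] give the source
    coordinate in the undistorted content; the maps are assumed to come from a
    prior calibration of the curved beam splitter's deformation.
    """
    return cv2.remap(
        content,
        map_x.astype(np.float32),
        map_y.astype(np.float32),
        interpolation=cv2.INTER_LINEAR,
        borderMode=cv2.BORDER_CONSTANT,
        borderValue=0,
    )
```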
By using the method in the embodiment of the present application, content corresponding to the coordination device is presented at the to-be-displayed location in a three-dimensional manner, which brings a better visual effect to the user.
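As general optics background rather than a relation recited in the embodiment, presenting the content so that it is perceived at a depth D near the coordination device amounts to applying a horizontal disparity between the images projected to the two eyes, where b is the interpupillary distance and f an equivalent focal length expressed in the same units as the disparity:

$$
\delta = \frac{f\,b}{D}.
$$

A larger disparity places the perceived content closer to the user, and a disparity approaching zero places it at optical infinity; the symbols above are illustrative only.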
In the implementation manner shown in
When the coordination device 920 interacts with the interactive projection display system 910 in the embodiment of the present application, the to-be-displayed location is determined by using the foregoing method and the method recorded in the system embodiment. The system 910 then obtains display information corresponding to the coordination device 920, generates corresponding virtual display content, and projects the virtual display content to a fundus of a user, so that the user sees the virtual display content 930 displayed in
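For orientation only, the interaction flow just described can be summarized as the following hedged sketch; the `system` object and every method name on it are placeholders standing in for the corresponding functional units of the system 910, not interfaces defined by the embodiment.

```python
def interactive_display_cycle(system, coordination_device, user_eye):
    """One display cycle of the interactive projection display system 910 (illustrative only)."""
    # 1. Obtain the device location information and derive the to-be-displayed location.
    device_location = system.get_device_location(coordination_device)
    target_location = system.determine_display_location(device_location, user_eye)

    # 2. Obtain the display information corresponding to the coordination device.
    display_info = system.get_display_info(coordination_device)

    # 3. Generate the corresponding virtual display content.
    content = system.render_virtual_content(display_info, target_location)

    # 4. Project the content to the user's fundus so that it appears near the device.
    system.project_to_fundus(content, target_location, user_eye)
```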
The processor 1010, the communications interface 1020, and the memory 1030 complete mutual communication by using the communications bus 1040.
The communications interface 1020 is configured to communicate with a network element such as a client.
The processor 1010 is configured to execute a program 1032, and may specifically perform the related steps in the foregoing method embodiment.
Specifically, the program 1032 may comprise program code, wherein the program code comprises computer operation instructions.
The processor 1010 may be a central processing unit (CPU) or an application specific integrated circuit (ASIC), or may be configured as one or more integrated circuits for implementing the embodiments of the present application.
The memory 1030 is configured to store the program 1032. The memory 1030 may comprise a high-speed random access memory (RAM), and may further comprise a non-volatile memory, for example, at least one disk memory. The program 1032 may specifically comprise:
For the specific implementation of each unit in the program 1032, reference may be made to the corresponding units in the embodiments shown in
The system in the embodiment of the present application projects virtual display content corresponding to a coordination device to a fundus of a user, to cause that the user sees the virtual display content near the coordination device, which facilitates interaction between the user and the coordination device, and improves the user experience.
Those of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and method steps can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. Those skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the present application.
When the functions are implemented in a form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium, and comprises several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or a part of the steps of the methods described in the embodiments of the present application. The storage medium comprises any medium that can store program code, such as a USB flash drive, a mobile hard disk, a read-only memory (ROM), a RAM, a magnetic disk, or an optical disc.
The foregoing implementation manners are merely intended to describe the present application rather than to limit the present application, and those of ordinary skill in the related technical field can make various changes and variations without departing from the spirit and scope of the present application. Therefore, all equivalent technical solutions shall fall within the scope of the present application, and the patent protection scope of the present application shall be subject to the claims.
Number | Date | Country | Kind
---|---|---|---
201310470128 | Oct 2013 | CN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2014/088242 | 10/9/2014 | WO | 00

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2015/051751 | 4/16/2015 | WO | A
Number | Date | Country
---|---|---
20160259406 A1 | Sep 2016 | US