Example embodiments of the present invention relate generally to electronic displays and, more particularly, to a method, apparatus, and computer program product for providing focus correction of displayed information based on a focus distance of a user.
Device manufacturers are continually challenged to provide compelling services and applications to consumers. One area of development has been providing more immersive experiences through augmented reality and electronic displays (e.g., near-eye displays, head-worn displays, etc.). For example, in augmented reality, virtual graphics (i.e., visual representations of information) are overlaid on the physical world and presented to users on a display. These augmented reality user interfaces are then presented to users over a variety of displays, from the aforementioned head-worn display (e.g., glasses) to hand-held displays (e.g., a mobile phone or device). In some cases, the overlay of representations of information over the physical world can create potential visual miscues (e.g., focus mismatches). These visual miscues can create a poor user experience by causing, for instance, eye fatigue. Accordingly, device manufacturers face significant technical challenges in reducing or eliminating the visual miscues or their impact on the user.
A method, apparatus, and computer program product are therefore provided for performing focus correction of displayed information. In an embodiment, the method, apparatus, and computer program product determine at least one focal point setting for optical components (e.g., lenses) of a display that are capable of providing dynamic focusing. In an embodiment, the at least one focal point setting is determined based on a determined focus distance of a user (e.g., a distance associated with where the user is looking or where the user's attention is focused in the field of view provided on the display). In this way, visual representations of data, when presented on a display whose dynamic focus optical components are configured according to the at least one focal point setting, can match the focus distance of the user. Accordingly, the various example embodiments of the present invention can reduce potential visual miscues and user eye fatigue, thereby improving the user experience associated with various displays.
According to an embodiment, a method comprises determining a focus distance of a user. The method also comprises determining at least one focal point setting for one or more dynamic focus optical components of a display based on the focus distance. The method further comprises causing a configuring of the one or more dynamic focus optical components based on the at least one focal point setting to present a representation of data on the display. In an embodiment of the method, the focus distance may be determined based on gaze tracking information.
The method may also determine a depth for presenting the representation on the display and another depth for viewing information through the display. The method may also determine a focus mismatch based on the depth and the another depth. The method may also determine the at least one focal point setting to cause a correction of the focus mismatch. In this embodiment, the display includes a first dynamic focus optical component and a second dynamic focus optical component. The method may also determine a deviation of a perceived depth of the representation, information, or a combination thereof resulting from a first one of the at least one focal point setting configured on the first dynamic focus optical component. The method may also determine a second one of the at least one focal point setting based on the deviation. The method may also cause a configuring of the second dynamic focus optical component based on the second one of the at least one focal point setting to cause the correction of the focus mismatch.
The method may also determine at least one vergence setting for the one or more dynamic focus optical components based on the focus distance. In this embodiment, the at least one vergence setting includes a tilt setting for the one or more dynamic focus optical components. The method may also determine a depth, a geometry, or a combination thereof of information viewed through the display based on depth sensing information. The method may also determine the focus distance, a subject of interest, or a combination thereof based on the depth, the geometry, or a combination thereof.
In an embodiment, the display is a see-through display, a first one of the one or more dynamic focus optical components is positioned between a viewing location and the see-through display, and a second one of the one or more dynamic focus optical components is positioned between the see-through display and information viewed through the see-through display.
According to another embodiment, an apparatus comprises at least one processor, and at least one memory including computer program code for one or more computer programs, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least determine a focus distance of a user. The at least one memory and the computer program code are also configured, with the at least one processor, to cause the apparatus to determine at least one focal point setting for one or more dynamic focus optical components of a display based on the focus distance. The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine a change in the focus distance and cause a configuring of the one or more dynamic focus optical components based on the at least one focal point setting to present a representation of data on the display. In an embodiment, the at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine the focus distance based on gaze tracking information.
The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine a depth for presenting the representation on the display. The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine another depth for viewing information through the display. The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine a focus mismatch based on the depth and the another depth. The at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to determine the at least one focal point setting to cause a correction of the focus mismatch.
In this embodiment, the display includes a first dynamic focus optical component and a second dynamic focus optical component. The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine a deviation of a perceived depth of the representation, information, or a combination thereof resulting from a first one of the at least one focal point setting configured on the first dynamic focus optical component. The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine a second one of the at least one focal point setting based on the deviation. The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to cause a configuring of the second dynamic focus optical component based on the second one of the at least one focal point setting to cause the correction of the focus mismatch.
The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine at least one vergence setting for the one or more dynamic focus optical components based on the focus distance. In this embodiment, the at least one vergence setting includes a tilt setting for the one or more dynamic focus optical components. The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine a depth, a geometry, or a combination thereof of information viewed through the display based on depth sensing information. The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine the focus distance, a subject of interest, or a combination thereof based on the depth, the geometry, or a combination thereof. The at least one memory and the computer program code may also be configured, with the at least one processor, to cause the apparatus to determine the representation based on the focus distance, the at least one focal point setting, or a combination thereof.
In an embodiment, the display is a see-through display, a first one of the one or more dynamic focus optical components is positioned between a viewing location and the see-through display, and a second one of the one or more dynamic focus optical components is positioned between the see-through display and information viewed through the see-through display.
According to another embodiment, a computer program product comprises at least one non-transitory computer-readable storage medium having computer-readable program instructions stored therein, the computer-readable program instructions comprising program instructions configured to determine a focus distance of a user. The computer-readable program instructions also include program instructions configured to determine at least one focal point setting for one or more dynamic focus optical components of a display based on the focus distance. The computer-readable program instructions also include program instructions configured to cause a configuring of the one or more dynamic focus optical components based on the at least one focal point setting to present a representation of data on the display. In an embodiment, the computer-readable program instructions also may include program instructions configured to determine the focus distance based on gaze tracking information.
The computer-readable program instructions also may include program instructions configured to determine a depth for presenting the representation on the display. The computer-readable program instructions also may include program instructions configured to determine another depth for viewing information through the display. The computer-readable program instructions also may include program instructions configured to determine a focus mismatch based on the depth and the another depth. The computer-readable program instructions also may include program instructions configured to determine the at least one focal point setting to cause a correction of the focus mismatch.
In this embodiment, the display includes a first dynamic focus optical component and a second dynamic focus optical component. The computer-readable program instructions also may include program instructions configured to determine a deviation of a perceived depth of the representation, information, or a combination thereof resulting from a first one of the at least one focal point setting configured on the first dynamic focus optical component. The computer-readable program instructions also may include program instructions configured to determine a second one of the at least one focal point setting based on the deviation. The computer-readable program instructions also may include program instructions configured to cause a configuring of the second dynamic focus optical component based on the second one of the at least one focal point setting to cause the correction of the focus mismatch.
According to yet another embodiment, an apparatus comprises means for determining a focus distance of a user. The apparatus also comprises means for determining at least one focal point setting for one or more dynamic focus optical components of a display based on the focus distance. The apparatus further comprises means for causing a configuring of the one or more dynamic focus optical components based on the at least one focal point setting to present a representation of data on the display. In an embodiment, the apparatus may also comprise means for determining the focus distance based on gaze tracking information. The apparatus may also comprise means for determining a depth for presenting the representation on the display. The apparatus may also comprise means for determining another depth for viewing information through the display. The apparatus may also comprise means for determining a focus mismatch based on the depth and the another depth. The apparatus may also comprise means for determining the at least one focal point setting to cause a correction of the focus mismatch.
In this embodiment, the display includes a first dynamic focus optical component and a second dynamic focus optical component. The apparatus may also comprise means for determining a deviation of a perceived depth of the representation, information, or a combination thereof resulting from a first one of the at least one focal point setting configured on the first dynamic focus optical component. The apparatus may also comprise means for determining a second one of the at least one focal point setting based on the deviation. The apparatus may also comprise means for causing a configuring of the second dynamic focus optical component based on the second one of the at least one focal point setting to cause the correction of the focus mismatch.
Still other aspects, features, and advantages of the invention are readily apparent from the following detailed description, simply by illustrating a number of particular embodiments and implementations, including the best mode contemplated for carrying out the invention. The invention is also capable of other and different embodiments, and its several details can be modified in various obvious respects, all without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
The embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings:
Examples of a method, apparatus, and computer program product for providing focus correction of displayed information are disclosed. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It is apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without these specific details or with an equivalent arrangement. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments of the invention.
Embodiments of a see-through display include, for instance, the glasses depicted
Accordingly, one conflict or visual miscue is the vergence-accommodation mismatch (e.g., a focus mismatch), in which the eye accommodates or focuses to a depth different from the depth expected for accommodation. This mismatch can cause fatigue or discomfort in the eye. In a fixed-focus system, this problem is compounded because the eye will generally try to accommodate at a fixed focus, regardless of other depth cues.
To address at least these challenges, the various embodiments of the method, the apparatus, and the computer program product described herein introduce the capability to determine how representations 107 are presented in the display 101 based on a focus distance of the user. In at least one example embodiment, the representations 107 are presented so that they correspond to the focus distance of the user. By way of example, the focus distance represents the distance from the user's eye 113 to the point on which the user is focusing or accommodating. The various embodiments of the present invention enable determination of how representations are to be presented in the display 101 based on optical techniques, non-optical techniques, or a combination thereof. By way of example, the representations are determined so that visual miscues or conflicts can be reduced or eliminated through the optical and non-optical techniques.
In at least one example embodiment, optical techniques are based on determining a focus distance of a user, determining focal point settings based on the focus distance, and then configuring one or more dynamic focus optical elements with the determined focal point settings. In at least one example embodiment, the focus distance is determined based on gaze tracking information. By way of example, a gaze tracker can measure where the visual axis of each eye is pointing. The gaze tracker can then calculate an intersection point of the visual axes to determine a convergence distance of the eyes. In at least one example embodiment of the gaze tracker, the convergence distance is then used as the focus distance or focus point of each eye. It is contemplated that other means, including non-optical means, can be used to determine the focus distance of the eye.
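By way of a non-limiting illustration, the convergence-distance computation described above can be sketched as finding the point of closest approach between the two visual axes. The Python sketch below assumes the gaze tracker reports a 3D origin and unit direction per eye; the function and variable names are hypothetical, not part of any actual gaze-tracker API.

```python
import numpy as np

def convergence_distance(p_left, d_left, p_right, d_right):
    """Estimate the convergence distance of two gaze rays.

    p_left/p_right: 3D eye positions; d_left/d_right: gaze directions
    reported by a gaze tracker. Because measured visual axes rarely
    intersect exactly, the midpoint of the segment of closest approach
    between the two rays is used as the convergence point.
    """
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a = np.dot(d_left, d_left)    # = 1 after normalization
    b = np.dot(d_left, d_right)
    c = np.dot(d_right, d_right)  # = 1 after normalization
    d = np.dot(d_left, w0)
    e = np.dot(d_right, w0)
    denom = a * c - b * b         # ~0 when the axes are parallel (gaze at infinity)
    if abs(denom) < 1e-9:
        return float('inf')
    s = (b * e - c * d) / denom   # parameter along the left ray
    t = (a * e - b * d) / denom   # parameter along the right ray
    closest_left = p_left + s * d_left
    closest_right = p_right + t * d_right
    midpoint = 0.5 * (closest_left + closest_right)
    eyes_center = 0.5 * (p_left + p_right)
    return float(np.linalg.norm(midpoint - eyes_center))

# Example: eyes 64 mm apart, both converging on a point 1 m ahead.
left_eye = np.array([-0.032, 0.0, 0.0])
right_eye = np.array([0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
print(convergence_distance(left_eye, target - left_eye,
                           right_eye, target - right_eye))  # ~1.0
```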
In addition or alternatively, the focus distance can be determined through user interface interaction by a user (e.g., selecting a specific point in the user's field of view of the display with an input device to indicate the focus distance). At least one example embodiment of the present invention uses gaze tracking to determine the focus of the user and displays the representations 107 of information on each lens of a near eye display so that the representations 107 properly correspond to the focus distance of the user. For example, if the user is focusing on a virtual object that should be rendered at a distance of 4 feet, gaze tracking can be used to detect the user's focus on this distance, and the focal point settings of the optics of the display are changed dynamically to result in a focus of 4 feet. In at least one example embodiment, as the focus distance of the user changes, the focal point settings of the dynamic focus optical components of the display can also be dynamically changed to focus the optics to the distance of the object under the user's gaze or attention.
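A minimal sketch of mapping a focus distance to a focal point setting, assuming an idealized tunable lens commanded directly in diopters (a real display would fold in its own fixed optics and calibration; the names and range limits are illustrative):

```python
def focal_point_setting(focus_distance_m, min_diopters=0.0, max_diopters=3.0):
    """Map a focus distance in meters to a lens power in diopters.

    Assumes the required optical power is simply the reciprocal of the
    focus distance, clamped to the tunable range of the component.
    """
    if focus_distance_m <= 0:
        raise ValueError("focus distance must be positive")
    diopters = 1.0 / focus_distance_m
    return max(min_diopters, min(max_diopters, diopters))

# The 4-foot example above: 4 ft ~ 1.22 m, so roughly 0.82 diopters.
print(focal_point_setting(4 * 0.3048))  # ~0.82
```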
As shown in
In at least one example embodiment, the display may be a non-see-through display that presents representations 107 of data without overlaying the representations 107 on a see-through view to the physical world or other information. In this example, the display would be opaque and employ a dynamic focus optical element in front of the display to alter the focal point settings or focus for viewing representations 107 on the display. The descriptions of the configuration and numbers of dynamic focus optical elements, lightguides, displays, and the like are provided as examples and are not intended to be limiting. It is contemplated that any number of the components described in the various embodiments can be combined or used in any combination.
As noted above, in at least one example embodiment, non-optical techniques can be used in addition to or in place of the optical techniques described above to determine how the representations 107 of data can be presented to reduce or avoid visual miscues or conflicts. For example, a display (e.g., the display 101, the display 119, or the display 125) can determine or generate representations 107 to create a sense of depth and focus based on (1) the focus distance of a user, (2) whether the representation 107 is a subject of interest to the user, or (3) a combination thereof. In at least one example embodiment, the display 101 determines the focus distance of the user and then determines the representations 107 to present based on the focus distance. The display 101 can, for instance, render representations 107 of data out of focus when they are not subject of the gaze or focus of the user and should be fuzzy. In at least one example embodiment, in addition to blurring or defocusing a representation, other rendering characteristics (e.g., shadow, vergence, color, etc.) can be varied based on the focus distance.
In at least one example embodiment, the various embodiments of the method, apparatus, and computer program product of the present invention can be enhanced with depth sensing information. For example, the display 101 may include a forward facing depth sensing camera or other similar technology to detect the depth and geometry of physical objects in the view of the user. In this case, the display 101 can detect the distance of a given physical object in focus and make sure that any representations 107 of data associated with the given physical object are located at the proper focal distance and that the focus is adjusted accordingly.
The processes described herein for determining representations of displayed information based on focus distance may be advantageously implemented via software, hardware, firmware, or a combination of software and/or firmware and/or hardware. For example, the processes described herein may be advantageously implemented via one or more processors, a Digital Signal Processing (DSP) chip, an Application Specific Integrated Circuit (ASIC), Field Programmable Gate Arrays (FPGAs), etc. Such exemplary hardware for performing the described functions is detailed below.
A bus 210 includes one or more parallel conductors of information so that information is transferred quickly among devices coupled to the bus 210. One or more processors 202 for processing information are coupled with the bus 210.
A processor (or multiple processors) 202 performs a set of operations on information as specified by computer program code related to determining representations of displayed information based on focus distance. The computer program code is a set of instructions or statements providing instructions for the operation of the processor and/or the computer system to perform specified functions. The code, for example, may be written in a computer programming language that is compiled into a native instruction set of the processor. The code may also be written directly using the native instruction set (e.g., machine language). The set of operations includes bringing information in from the bus 210 and placing information on the bus 210. The set of operations also typically includes comparing two or more units of information, shifting positions of units of information, and combining two or more units of information, such as by addition or multiplication or logical operations like OR, exclusive OR (XOR), and AND. Each operation of the set of operations that can be performed by the processor is represented to the processor by information called instructions, such as an operation code of one or more digits. A sequence of operations to be executed by the processor 202, such as a sequence of operation codes, constitutes processor instructions, also called computer system instructions or, simply, computer instructions. Processors may be implemented as mechanical, electrical, magnetic, optical, chemical or quantum components, among others, alone or in combination.
Apparatus 200 also includes a memory 204 coupled to bus 210. The memory 204, such as a random access memory (RAM) or any other dynamic storage device, stores information including processor instructions for determining representations of displayed information based on focus distance. Dynamic memory allows information stored therein to be changed by the apparatus 200. RAM allows a unit of information stored at a location called a memory address to be stored and retrieved independently of information at neighboring addresses. The memory 204 is also used by the processor 202 to store temporary values during execution of processor instructions. The apparatus 200 also includes a read only memory (ROM) 206 or any other static storage device coupled to the bus 210 for storing static information, including instructions, that is not changed by the apparatus 200. Some memory is composed of volatile storage that loses the information stored thereon when power is lost. Also coupled to bus 210 is a non-volatile (persistent) storage device 208, such as a magnetic disk, optical disk or flash card, for storing information, including instructions, that persists even when the apparatus 200 is turned off or otherwise loses power.
Information, including instructions for determining representations of displayed information based on focus distance, is provided to the bus 210 for use by the processor from an external input device 212, such as a keyboard containing alphanumeric keys operated by a human user, or a camera/sensor 294. A camera/sensor 294 detects conditions in its vicinity (e.g., depth information) and transforms those detections into physical expression compatible with the measurable phenomenon used to represent information in apparatus 200. Examples of sensors 294 include, for instance, location sensors (e.g., GPS location receivers), position sensors (e.g., compass, gyroscope, accelerometer), environmental sensors (e.g., depth sensors, barometer, temperature sensor, light sensor, microphone), gaze tracking sensors, and the like.
Other external devices coupled to bus 210, used primarily for interacting with humans, include a display device 214, such as a near eye display, head worn display, cathode ray tube (CRT), a liquid crystal display (LCD), a light emitting diode (LED) display, an organic LED (OLED) display, a plasma screen, or a printer for presenting text or images, and a pointing device 216, such as a mouse, a trackball, cursor direction keys, or a motion sensor, for controlling a position of a small cursor image presented on the display 214 and issuing commands associated with graphical elements presented on the display 214. In at least one example embodiment, the commands include, for instance, indicating a focus distance, a subject of interest, and the like. In at least one example embodiment, for example, in embodiments in which the apparatus 200 performs all functions automatically without human input, one or more of external input device 212, display device 214 and pointing device 216 is omitted.
In the illustrated embodiment, special purpose hardware, such as an application specific integrated circuit (ASIC) 220, is coupled to bus 210. The special purpose hardware is configured to perform operations not performed by processor 202 quickly enough for special purposes. Examples of ASICs include graphics accelerator cards for generating images for display 214, cryptographic boards for encrypting and decrypting messages sent over a network, speech recognition, and interfaces to special external devices, such as robotic arms and medical scanning equipment that repeatedly perform some complex sequence of operations that are more efficiently implemented in hardware.
Apparatus 200 also includes one or more instances of a communications interface 270 coupled to bus 210. Communication interface 270 provides a one-way or two-way communication coupling to a variety of external devices that operate with their own processors, such as external displays. In general, the coupling is with a network link 278 that is connected to a local network 280 to which a variety of external devices with their own processors are connected. For example, communications interface 270 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN, such as Ethernet. Wireless links may also be implemented. For wireless links, the communications interface 270 sends or receives or both sends and receives electrical, acoustic or electromagnetic signals, including infrared and optical signals, that carry information streams, such as digital data. For example, in wireless handheld devices, such as mobile telephones like cell phones, the communications interface 270 includes a radio band electromagnetic transmitter and receiver called a radio transceiver. In at least one example embodiment, the communications interface 270 enables connection to the local network 280, Internet service provider 284, and/or the Internet 290 for determining representations of displayed information based on focus distance.
The term “computer-readable medium” as used herein refers to any medium that participates in providing information to processor 202, including instructions for execution. Such a medium may take many forms, including, but not limited to computer-readable storage medium (e.g., non-volatile media, volatile media), and transmission media. Non-transitory media, such as non-volatile media, include, for example, optical or magnetic disks, such as storage device 208. Volatile media include, for example, dynamic memory 204. Transmission media include, for example, twisted pair cables, coaxial cables, copper wire, fiber optic cables, and carrier waves that travel through space without wires or cables, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves. Signals include man-made transient variations in amplitude, frequency, phase, polarization or other physical properties transmitted through the transmission media. Forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, CDRW, DVD, any other optical medium, punch cards, paper tape, optical mark sheets, any other physical medium with patterns of holes or other optically recognizable indicia, a RAM, a PROM, an EPROM, a FLASH-EPROM, an EEPROM, a flash memory, any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read. The term computer-readable storage medium is used herein to refer to any computer-readable medium except transmission media.
Logic encoded in one or more tangible media includes one or both of processor instructions on a computer-readable storage media and special purpose hardware, such as ASIC 220.
Network link 278 typically provides information communication using transmission media through one or more networks to other devices that use or process the information. For example, network link 278 may provide a connection through local network 280 to a host computer 282 or to equipment 284 operated by an Internet Service Provider (ISP). ISP equipment 284 in turn provides data communication services through the public, world-wide packet-switching communication network of networks referred to as the Internet 290.
A computer called a server host 292 connected to the Internet hosts a process that provides a service in response to information received over the Internet. For example, server host 292 hosts a process that provides information for presentation at display 214. It is contemplated that the components of apparatus 200 can be deployed in various configurations within other devices or components.
At least one embodiment of the present invention is related to the use of apparatus 200 for implementing some or all of the techniques described herein. According to at least one example embodiment of the invention, those techniques are performed by apparatus 200 in response to processor 202 executing one or more sequences of one or more processor instructions contained in memory 204. Such instructions, also called computer instructions, software and program code, may be read into memory 204 from another computer-readable medium such as storage device 208 or network link 278. Execution of the sequences of instructions contained in memory 204 causes processor 202 to perform one or more of the method steps described herein. In alternative embodiments, hardware, such as ASIC 220, may be used in place of or in combination with software to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware and software, unless otherwise explicitly stated herein.
The signals transmitted over network link 278 and other networks through communications interface 270 carry information to and from apparatus 200. Apparatus 200 can send and receive information, including program code, through the networks 280, 290, among others, through network link 278 and communications interface 270. In an example using the Internet 290, a server host 292 transmits program code for a particular application, requested by a message sent from apparatus 200, through Internet 290, ISP equipment 284, local network 280, and communications interface 270. The received code may be executed by processor 202 as it is received, or may be stored in memory 204 or in storage device 208 or any other non-volatile storage for later execution, or both. In this manner, apparatus 200 may obtain application program code in the form of signals on a carrier wave.
Various forms of computer readable media may be involved in carrying one or more sequence of instructions or data or both to processor 202 for execution. For example, instructions and data may initially be carried on a magnetic disk of a remote computer such as host 282. The remote computer loads the instructions and data into its dynamic memory and sends the instructions and data over a telephone line using a modem. A modem local to the apparatus 200 receives the instructions and data on the telephone line and uses an infrared transmitter to convert the instructions and data to a signal on an infrared carrier wave serving as the network link 278. An infrared detector serving as the communications interface 270 receives the instructions and data carried in the infrared signal and places information representing the instructions and data onto bus 210. Bus 210 carries the information to memory 204 from which processor 202 retrieves and executes the instructions using some of the data sent with the instructions. The instructions and data received in memory 204 may optionally be stored on storage device 208, either before or after execution by the processor 202.
As noted previously, potential visual miscues and conflicts (e.g., focus mismatches) and/or their impact on a user can be reduced or eliminated by optical and/or non-optical techniques. The method, apparatus, and computer program product for performing the operations of the process 300 relate to non-optical techniques for manipulating or determining the displayed representations 107 of data on the display 101. In operation 301, the apparatus 200 performs and includes means (e.g., a processor 202, camera/sensors 294, input device 212, pointing device 216, etc.) for determining a focus distance of a user. By way of example, the focus distance represents the distance to a point in a display's (e.g., displays 101, 119, 125, and/or 214) field of view that is the subject of the user's attention.
In at least one example embodiment, the point in the field of view and the focus distance are determined using gaze tracking information. Accordingly, the apparatus 200 may be configured with means (e.g., camera/sensors 294) to determine the point of attention by tracking the gaze of the user and to determine the focus distance based on the gaze tracking information. In at least one example embodiment, the apparatus 200 is configured with means (e.g., processor 202, memory 204, camera/sensors 294) to maintain a depth buffer of information, data and/or objects (e.g., both physical and virtual) present in at least one scene within a field of view of a display 101. For example, the apparatus 200 may include means such as a forward facing depth sensing camera to create the depth buffer. The gaze tracking information can then, for instance, be matched against the depth buffer to determine the focus distance.
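A minimal sketch of matching gaze tracking information against such a depth buffer, assuming the buffer is a per-pixel array of distances in meters and the gaze tracker reports a display-pixel coordinate; the layout and names are assumptions for illustration, and a small median window is used to tolerate tracker jitter:

```python
import numpy as np

def focus_distance_from_gaze(depth_buffer, gaze_xy, window=5):
    """Look up the focus distance at a gaze point.

    depth_buffer: 2D array of z-values (meters) per display pixel,
    e.g., built from a forward facing depth camera plus rendered
    virtual objects. gaze_xy: (x, y) pixel reported by the gaze
    tracker. The median over a small window tolerates jitter.
    """
    h, w = depth_buffer.shape
    x, y = gaze_xy
    half = window // 2
    patch = depth_buffer[max(0, y - half):min(h, y + half + 1),
                         max(0, x - half):min(w, x + half + 1)]
    return float(np.median(patch))

# Example: a 480x640 scene at 10 m with a virtual object at 1.2 m.
scene = np.full((480, 640), 10.0)
scene[200:280, 300:380] = 1.2
print(focus_distance_from_gaze(scene, (340, 240)))  # 1.2
```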
In at least one example embodiment, the apparatus 200 may be configured with means (e.g., processor 202, input device 212, pointing device 216, camera/sensors 294) to determine the point in the display's field of view that is of interest to the user based on user interaction, input, and/or sensed contextual information. For example, in addition to or instead of the gaze tracking information, the apparatus 200 may determine what point in the field of view is selected (e.g., via input device 212, pointing device 216) by the user. In another example, the apparatus 200 may process sensed contextual information (e.g., accelerometer data, compass data, gyroscope data, etc.) to determine a direction or mode of movement for indicating a point of attention. This point can then be compared against the depth buffer to determine a focus distance.
After determining the focus distance of the user, the apparatus 200 may perform and be configured with means (e.g., processor 202) for determining a representation of data that is to be presented in the display 101 based on the focus distance (operation 303). In at least one example embodiment, determining the representation includes, for instance, determining the visual characteristics of the representation that reduce or eliminate potential visual miscues or conflicts (e.g., focus mismatches) that may contribute to eye fatigue and/or a poor user experience when viewing the display 101.
In at least one example embodiment, the apparatus 200 may be configured to determine the representation based on other parameters in addition to or as an alternative to the focus distance. For example, the apparatus 200 may be configured with means (e.g., processor 202) to determine the representation based on a representational distance associated with the data. The representational distance is, for instance, the distance in the field of view or scene where the representation 107 should be presented. For example, in an example where the representation 107 augments a real world object viewable in the display 101, the representational distance might correspond to the distance of the object. Based on this representational distance, the apparatus 200 may be configured with means (e.g., processor 202) to apply various rendering characteristics that are a function (e.g., linear or non-linear) of the representational distance.
In at least one example embodiment, the display 101 may be configured with means (e.g., dynamic focus optical components 121a and 121b) to optically adjust focus or focal point settings. In these embodiments, the apparatus 200 may be configured with means (e.g., processor 202) to determine the representations 107 based, at least in part, on the focal point settings of the dynamic focus optical components. For example, if a blurring effect is already created by the optical focal point settings, the representations need not include as much, if any, blurring effect when compared to displays 101 without dynamic focus optical components. In other cases, the representations 107 may be determined with additional effects to add or enhance, for instance, depth or focus effects on the display 101.
In at least one example embodiment, the apparatus 200 may be configured with means (e.g., processor 202) to determine a difference of the representational distance from the focus distance. In other words, the visual appearance of the representation 107 may depend on how far (e.g., in either the foreground or the background) the representational distance is from the determined focus distance. In this way, the apparatus 200 may be configured with means (e.g., processor 202) to determine a degree of at least one rendering characteristic to apply to the representation 107 based on the representational distance from the focus distance. For example, the rendering characteristics may include blurring, shadowing, vergence (e.g., for binocular displays), and the like. Representations 107 that are farther away from the focus distance may be rendered with more blur, or left/right images for a binocular display may be rendered with vergence settings appropriate for the distance. It is contemplated that any type of rendering characteristic (e.g., color, saturation, size, etc.) may be varied based on the representational distance.
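By way of a non-limiting illustration, one plausible way to grade such a rendering characteristic is to work in diopters (reciprocal meters), since perceived defocus grows with the dioptric rather than metric offset from the focus distance; the linear mapping and tuning values below are assumptions, not a prescribed function:

```python
def blur_radius_px(representational_distance_m, focus_distance_m,
                   px_per_diopter=6.0, max_radius_px=24.0):
    """Blur radius for a representation, graded by dioptric offset.

    Defocus is computed as the difference in diopters (1/m) between
    the representational distance and the user's focus distance, so
    equal metric offsets blur more strongly near the viewer.
    px_per_diopter and max_radius_px are illustrative tuning values.
    """
    offset_d = abs(1.0 / representational_distance_m - 1.0 / focus_distance_m)
    return min(max_radius_px, px_per_diopter * offset_d)

print(blur_radius_px(0.5, 2.0))  # near object while focused at 2 m: 9.0 px
print(blur_radius_px(4.0, 2.0))  # far object while focused at 2 m: 1.5 px
```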
After determining the representation 107, the apparatus 200 may perform and be configured with means (e.g., processor 202, display 214) to cause a presentation of the representation 107 on a display (operation 305). Although various embodiments of the method, apparatus, and computer program product described herein are discussed with respect to a binocular head-worn see-through display, it is contemplated that the various embodiments are applicable to presenting representation 107 on any type of display where visual miscues can occur. For example, other displays include non-see-through displays (e.g., as discussed above), monocular displays where only one eye may suffer from accommodation mismatches, and the like. In addition, the various embodiments may apply to displays of completely virtual information (e.g., with no live view).
As shown in operation 307, the apparatus 200 can perform and be configured with means (e.g., processor 202, camera/sensors 294) to determine a change in the focus distance and then to cause an updating of the representation based on the change. In at least one example embodiment, the apparatus 200 may monitor the focus distance for change in substantially real-time, continuously, periodically, according to a schedule, on demand, etc. In this way, as a user changes his/her gaze or focus, the apparatus 200 can dynamically adjust the representations 107 to match the new focus distance.
As shown in operation 401, the apparatus 200 may perform and be configured with means (e.g., processor 202, camera/sensors 294) to determine a subject of interest within a user's field of view on a display 101 (e.g., what information or object presented in the display 101 is of interest to the user). Similar to determining the focus distance, gaze tracking or user interactions/inputs may be used to determine the subject of interest. In at least one example embodiment, the apparatus 200 may be configured with means (e.g., processor 202, camera/sensors 294) to determine the subject of interest based on whether the user is looking at a representation 107. In at least one example embodiment, where multiple representations 107, information, or objects are perceived at approximately the same focus distance, the apparatus 200 may further determine which item in the focal plane has the user's interest (e.g., depending on the accuracy of the gaze tracking or user interaction information).
In operation 403, the apparatus 200 may perform and be configured with means (e.g., processor 202) to determine the representation based on the subject of interest. For example, when the user looks at a representation 107, the representation 107 may have one appearance (e.g., bright and in focus). In a scenario where the user looks away from the representation 107 to another item in the same focal plane, the representation may have another appearance (e.g., dark and in focus). In a scenario where the user looks away from the representation 107 to another item in a different focal plane or distance, the representation may have yet another appearance (e.g., dark and out of focus).
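The three appearance cases just described can be sketched as a simple decision; the dioptric tolerance used to decide whether two distances share a focal plane, and the state labels, are assumed tuning values for illustration:

```python
def representation_appearance(rep_distance_m, focus_distance_m,
                              is_subject_of_interest, same_plane_tol_d=0.25):
    """Pick an appearance for a representation per the cases above.

    same_plane_tol_d: dioptric tolerance within which two distances
    are treated as lying in the same focal plane (assumed value).
    """
    same_plane = abs(1.0 / rep_distance_m -
                     1.0 / focus_distance_m) <= same_plane_tol_d
    if is_subject_of_interest:
        return "bright, in focus"
    if same_plane:
        return "dark, in focus"
    return "dark, out of focus"

print(representation_appearance(2.0, 2.0, True))   # bright, in focus
print(representation_appearance(2.0, 2.1, False))  # dark, in focus
print(representation_appearance(0.5, 2.0, False))  # dark, out of focus
```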
In this example, the apparatus 200 has determined the focus distance of the user as focus distance 501 corresponding to the object 103. As described with respect to
As illustrated in
As noted previously, potential visual miscues and conflicts (e.g., focus mismatches) and/or their potential impacts on the user can be reduced or eliminated by optical and/or non-optical techniques. The method, apparatus, and computer program product for performing the operations of the process 600 relate to optical techniques for determining focal point settings for dynamic focus optical components 121 of a display 101 to reduce or eliminate visual miscues or conflicts. Operation 601 is analogous to the focus distance determination operations described with respect to operation 301 of
In at least one example embodiment, the point in the field of view and the focus distance are determined using gaze tracking information. Accordingly, the apparatus 200 may be configured with means (e.g., camera/sensors 294) to determine the point of attention by tracking the gaze of the user and to determine the focus distance based on the gaze tracking information. In at least one example embodiment, the apparatus 200 is configured with means (e.g., processor 202, memory 204, camera/sensors 294) to maintain a depth buffer of information, data and/or objects (e.g., both physical and virtual) present in at least one scene within a field of view of a display 101. For example, the apparatus 200 may include means such as a forward facing depth sensing camera to create the depth buffer. The depth sensing camera or other similar sensors are, for instance, means for determining a depth, a geometry or a combination thereof of the representations 107 and the information, objects, etc. viewed through display 101. For example, the depth buffer can store z-axis values for pixels or points identified in the field of view of the display 101.
The depth and geometry information can be stored in the depth buffer or otherwise associated with the depth buffer. In this way, the gaze tracking information, for instance, can be matched against the depth buffer to determine the focus distance. In at least one example embodiment, the apparatus can be configured with means (e.g., processor 202, memory 204, storage device 208) to store the depth buffer locally at the apparatus 200. In addition or alternatively, the apparatus 200 may be configured to include means (e.g., communication interface 270) to store the depth buffer and related information remotely in, for instance, the server 292, host 282, etc.
In at least one example embodiment, the apparatus 200 may be configured with means (e.g., processor 202, input device 212, pointing device 216, camera/sensors 294) to determine the point in the display's field of view that is of interest to the user based on user interaction, input, and/or sensed contextual information. For example, in addition to or instead of the gaze tracking information, the apparatus 200 may determine what point in the field of view is selected (e.g., via input device 212, pointing device 216) by the user. In another example, the apparatus 200 may process sensed contextual information (e.g., accelerometer data, compass data, gyroscope data, etc.) to determine a direction or mode of movement for indicating a point of attention. This point can then be compared against the depth buffer to determine a focus distance.
In operation 603, the apparatus 200 may perform and be configured with means (e.g., processor 202) for determining at least one focal point setting for one or more dynamic focus optical components 121 of the display 101 based on the focus distance. In at least one example embodiment, the parameters associated with the at least one focal point setting may depend on the type of dynamic focusing system employed by the display 101. As described with respect to
In at least one example embodiment, the apparatus 200 may be configured with means (e.g., processor 202, camera/sensors 294) to determine the at least one focal point setting based on a focus mismatch between representations 107 of data presented on the display 101 and information viewed through the display 101. By way of example, the apparatus 200 determines a depth for presenting a representation 107 on the display 101 and another depth for viewing information through the display. Based on these two depths, the apparatus 200 can determine whether there is a potential focus mismatch or other visual miscue and then determine the at least one focal point setting to cause a correction of the focus mismatch.
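A minimal sketch of this mismatch test, comparing the two depths in diopters against an assumed comfort tolerance (the names and threshold are illustrative):

```python
def focus_mismatch_d(representation_depth_m, see_through_depth_m):
    """Signed focus mismatch, in diopters, between the depth at which
    a representation is presented and the depth of the information
    viewed through the display at the same point of attention."""
    return 1.0 / representation_depth_m - 1.0 / see_through_depth_m

# Example: an overlay presented at 0.5 m annotating a 2 m object.
mismatch = focus_mismatch_d(0.5, 2.0)
needs_correction = abs(mismatch) > 0.1  # assumed comfort tolerance in diopters
print(mismatch, needs_correction)       # 1.5 True
```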
In at least one example embodiment, wherein the display 101 includes at least two dynamic focus optical components 121, the apparatus 200 may be configured with means (e.g., processor 202, camera/sensors 294) to determine a focus mismatch by determining a deviation of the perceived depth of the representation, the information viewed through the display, or a combination thereof resulting from a first set of focal point settings configured on one of the dynamic focus optical components 121. The apparatus 200 can then determine another set of focal point settings for the other dynamic focus optical component 121 based on the deviation. For instance, the second or other set of focal point settings can be applied to the second or other dynamic focus optical elements to correct any deviations or miscues between representations 107 presented in the display 101 and information viewed through the display. Additional discussion of the process of focus correction using optical components is provided below with respect to
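Under a thin-lens approximation, one plausible arrangement of the two components (an assumption for illustration, not the only correction contemplated by the text) is for the eye-side component to shift the displayed image to the focus distance, and for the world-side component to take the opposite power so the see-through view is restored:

```python
def two_lens_settings(rep_image_distance_m, target_focus_distance_m):
    """Thin-lens sketch of the two-component correction described above.

    inner: eye-side power (diopters) that moves the displayed image
           from its native distance to the target focus distance.
    outer: world-side power chosen to cancel the deviation the inner
           lens imposes on light arriving from the real scene.
    """
    inner = 1.0 / rep_image_distance_m - 1.0 / target_focus_distance_m
    outer = -inner  # cancel the inner lens along the see-through path
    return inner, outer

# Native image at 2 m, user focused at 0.5 m: a -1.5 D eye-side lens
# pulls the image in; a +1.5 D world-side lens restores the real view.
print(two_lens_settings(2.0, 0.5))  # (-1.5, 1.5)
```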
In at least one example embodiment, in addition to optical focus adjustments, the apparatus may be configured with means (e.g., processor 202) for determining at least one vergence setting for the one or more dynamic focus optical components based on the focus distance. In at least one example embodiment, vergence refers to the process of rotating the eyes about a vertical axis to provide for binocular vision. For example, objects closer to the eyes typically require greater inward rotation of the eyes, whereas for objects that are farther out towards infinity, the eyes are more parallel. Accordingly, the apparatus 200 may determine how to physically configure the dynamic focus optical components 121 to approximate the appropriate level of vergence for a given focus distance. In at least one example embodiment, the at least one vergence setting includes a tilt setting for the one or more dynamic focus optical elements. An illustration of the tilt vergence setting for binocular optical components is provided with respect to
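A sketch of such a tilt setting from simple trigonometry, assuming symmetric inward rotation of two components separated by an interpupillary distance (IPD); the default IPD and function names are assumptions:

```python
import math

def vergence_tilt_deg(focus_distance_m, ipd_m=0.064):
    """Approximate symmetric inward tilt, per eye, for a focus distance.

    At infinity the visual axes are parallel (0 degrees); nearer focus
    distances require more inward rotation. ipd_m is an assumed
    interpupillary distance.
    """
    if math.isinf(focus_distance_m):
        return 0.0
    return math.degrees(math.atan((ipd_m / 2.0) / focus_distance_m))

print(vergence_tilt_deg(0.5))   # ~3.7 degrees per eye at half a meter
print(vergence_tilt_deg(10.0))  # ~0.2 degrees per eye at 10 m
```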
In at least one example embodiment, the apparatus 200 can be configured with means (e.g., processor 202, camera/sensors 294) to combine use of both optical and non-optical techniques for determining focus or other visual miscue correction. Accordingly, in operation 605, the apparatus 200 may perform and be configured with means (e.g., processor 202) to determine a representation 107 based, at least in part, on the focal point settings of the dynamic focus optical components. For example, if a blurring effect is already created by the optical focal point settings, the representations need not include as much, if any, blurring effect when compared to displays 101 without dynamic focus optical components. In other cases, the representations 107 may be determined with additional effects to add or enhance, for instance, depth or focus effects on the display 101 with a given focal point setting.
As shown in operation 607, the apparatus 200 can perform and be configured with means (e.g., processor 202, camera/sensors 294) to determine a change in the focus distance and then to cause an updating of the at least one focal point setting for the dynamic focus optical components 121 based on the change. In at least one example embodiment, the apparatus 200 may monitor the focus distance for change in substantially real-time, continuously, periodically, according to a schedule, on demand, etc. In this way, as a user changes his/her gaze or focus, the apparatus 200 can dynamically adjust the focus of the optical components to match the new focus distance.
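The monitoring described above might be sketched as a polling loop with a dioptric hysteresis threshold so that small gaze jitters do not trigger constant refocusing; the callables, threshold, and interval are assumptions for illustration, and the loop runs until the surrounding application stops it:

```python
import time

def track_focus(read_focus_distance_m, apply_focal_setting,
                threshold_d=0.1, interval_s=0.02):
    """Poll the focus distance and refocus the optics when it changes.

    read_focus_distance_m: callable returning the current focus
    distance in meters (e.g., gaze tracking matched against a depth
    buffer). apply_focal_setting: callable commanding the dynamic
    focus optical components, taken here in diopters. threshold_d
    provides hysteresis in diopters.
    """
    current_d = None
    while True:
        target_d = 1.0 / read_focus_distance_m()
        if current_d is None or abs(target_d - current_d) > threshold_d:
            apply_focal_setting(target_d)  # refocus only on a real change
            current_d = target_d
        time.sleep(interval_s)
```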
However, in the case of a see-through display 101, the perceived depth of the image of the object 103 viewed through the display is also brought closer, therefore maintaining a potential focus mismatch. In the embodiment of
In at least one example embodiment, when the dual lens system of
In at least one example embodiment, the chip set or chip 800 includes a communication mechanism such as a bus 801 for passing information among the components of the chip set 800. A processor 803 has connectivity to the bus 801 to execute instructions and process information stored in, for example, a memory 805. The processor 803 may include one or more processing cores with each core configured to perform independently. A multi-core processor enables multiprocessing within a single physical package. Examples of a multi-core processor include two, four, eight, or greater numbers of processing cores. Alternatively or in addition, the processor 803 may include one or more microprocessors configured in tandem via the bus 801 to enable independent execution of instructions, pipelining, and multithreading. The processor 803 may also be accompanied with one or more specialized components to perform certain processing functions and tasks such as one or more digital signal processors (DSP) 807, or one or more application-specific integrated circuits (ASIC) 809. A DSP 807 typically is configured to process real-world signals (e.g., sound) in real time independently of the processor 803. Similarly, an ASIC 809 can be configured to perform specialized functions not easily performed by a more general purpose processor. Other specialized components to aid in performing the inventive functions described herein may include one or more field programmable gate arrays (FPGA), one or more controllers, or one or more other special-purpose computer chips.
In at least one example embodiment, the chip set or chip 800 includes merely one or more processors and some software and/or firmware supporting and/or relating to and/or for the one or more processors.
The processor 803 and accompanying components have connectivity to the memory 805 via the bus 801. The memory 805 includes both dynamic memory (e.g., RAM, magnetic disk, writable optical disk, etc.) and static memory (e.g., ROM, CD-ROM, etc.) for storing executable instructions that when executed perform the inventive steps described herein to determine representations of displayed information based on focus distance. The memory 805 also stores the data associated with or generated by the execution of the inventive steps.
Pertinent internal components of the telephone include a Main Control Unit (MCU) 903, a Digital Signal Processor (DSP) 905, and a receiver/transmitter unit including a microphone gain control unit and a speaker gain control unit. A main display unit 907 provides a display to the user in support of various applications and mobile terminal functions that perform or support the steps of determining representations of displayed information based on focus distance. The display 907 includes display circuitry configured to display at least a portion of a user interface of the mobile terminal (e.g., mobile telephone). Additionally, the display 907 and display circuitry are configured to facilitate user control of at least some functions of the mobile terminal. An audio function circuitry 909 includes a microphone 911 and microphone amplifier that amplifies the speech signal output from the microphone 911. The amplified speech signal output from the microphone 911 is fed to a coder/decoder (CODEC) 913.
A radio section 915 amplifies power and converts frequency in order to communicate with a base station, which is included in a mobile communication system, via antenna 917. The power amplifier (PA) 919 and the transmitter/modulation circuitry are operationally responsive to the MCU 903, with an output from the PA 919 coupled to the duplexer 921 or circulator or antenna switch, as known in the art. The PA 919 also couples to a battery interface and power control unit 920.
In use, a user of mobile terminal 901 speaks into the microphone 911 and his or her voice along with any detected background noise is converted into an analog voltage. The analog voltage is then converted into a digital signal through the Analog to Digital Converter (ADC) 923. The control unit 903 routes the digital signal into the DSP 905 for processing therein, such as speech encoding, channel encoding, encrypting, and interleaving. In at least one example embodiment, the processed voice signals are encoded, by units not separately shown, using a cellular transmission protocol such as enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (WiFi), satellite, and the like, or any combination thereof.
The encoded signals are then routed to an equalizer 925 for compensation of any frequency-dependent impairments that occur during transmission through the air such as phase and amplitude distortion. After equalizing the bit stream, the modulator 927 combines the signal with an RF signal generated in the RF interface 929. The modulator 927 generates a sine wave by way of frequency or phase modulation. In order to prepare the signal for transmission, an up-converter 931 combines the sine wave output from the modulator 927 with another sine wave generated by a synthesizer 933 to achieve the desired frequency of transmission. The signal is then sent through a PA 919 to increase the signal to an appropriate power level. In practical systems, the PA 919 acts as a variable gain amplifier whose gain is controlled by the DSP 905 from information received from a network base station. The signal is then filtered within the duplexer 921 and optionally sent to an antenna coupler 935 to match impedances to provide maximum power transfer. Finally, the signal is transmitted via antenna 917 to a local base station. An automatic gain control (AGC) can be supplied to control the gain of the final stages of the receiver. The signals may be forwarded from there to a remote telephone which may be another cellular telephone, any other mobile phone or a land-line connected to a Public Switched Telephone Network (PSTN), or other telephony networks.
Voice signals transmitted to the mobile terminal 901 are received via antenna 917 and immediately amplified by a low noise amplifier (LNA) 937. A down-converter 939 lowers the carrier frequency while the demodulator 941 strips away the RF leaving only a digital bit stream. The signal then goes through the equalizer 925 and is processed by the DSP 905. A Digital to Analog Converter (DAC) 943 converts the signal and the resulting output is transmitted to the user through the speaker 945, all under control of a Main Control Unit (MCU) 903 which can be implemented as a Central Processing Unit (CPU).
The MCU 903 receives various signals including input signals from the keyboard 947. The keyboard 947 and/or the MCU 903 in combination with other user input components (e.g., the microphone 911) comprise a user interface circuitry for managing user input. The MCU 903 runs user interface software to facilitate user control of at least some functions of the mobile terminal 901 to determine representations of displayed information based on focus distance. The MCU 903 also delivers a display command and a switch command to the display 907 and to the speech output switching controller, respectively. Further, the MCU 903 exchanges information with the DSP 905 and can access an optionally incorporated SIM card 949 and a memory 951. In addition, the MCU 903 executes various control functions required of the terminal. The DSP 905 may, depending upon the implementation, perform any of a variety of conventional digital processing functions on the voice signals. Additionally, DSP 905 determines the background noise level of the local environment from the signals detected by microphone 911 and sets the gain of microphone 911 to a level selected to compensate for the natural tendency of the user of the mobile terminal 901.
The CODEC 913 includes the ADC 923 and DAC 943. The memory 951 stores various data including call incoming tone data and is capable of storing other data including music data received via, e.g., the global Internet. The software module could reside in RAM memory, flash memory, registers, or any other form of writable storage medium known in the art. The memory device 951 may be, but is not limited to, a single memory, CD, DVD, ROM, RAM, EEPROM, optical storage, magnetic disk storage, flash memory storage, or any other non-volatile storage medium capable of storing digital data.
An optionally incorporated SIM card 949 carries, for instance, important information, such as the cellular phone number, the carrier supplying service, subscription details, and security information. The SIM card 949 serves primarily to identify the mobile terminal 901 on a radio network. The card 949 also contains a memory for storing a personal telephone number registry, text messages, and user specific mobile terminal settings.
Further, one or more camera sensors 1053 may be incorporated onto the mobile station 1001 wherein the one or more camera sensors may be placed at one or more locations on the mobile station. Generally, the camera sensors may be utilized to capture, record, and cause to store one or more still and/or moving images (e.g., videos, movies, etc.) which also may comprise audio recordings.
While the invention has been described in connection with a number of embodiments and implementations, the invention is not so limited but covers various obvious modifications and equivalent arrangements, which fall within the purview of the appended claims. Although features of the invention are expressed in certain combinations among the claims, it is contemplated that these features can be arranged in any combination and order.