The present invention relates to a portable imaging apparatus for assisting a user with reduced vision.
People with central vision loss (CVL) often retain one or more regions of residual vision. In the case of CVL, this region of remaining vision is peripheral to the fovea, which is normally the high-detail, high-spatial-acuity region of the central macula. Peripheral vision is good for detecting moving objects and objects that are relatively dim. However, its lower spatial resolution means that an individual with only peripheral vision struggles to differentiate individual visual features. In practice this makes reading (in the periphery) a particular challenge, as adjacent letters in a word interfere with each other. Faces are also difficult to see clearly because the features (e.g. eyes, nose, mouth) become blurred.
It is known to provide a headset to assist a user who suffers from a vision defect. Such headsets can be physically large and bulky, or may have a limited field of view, which makes them difficult and uncomfortable to use.
Some headsets may present the user with images which are at a different scale to the surrounding environment. This causes difficulties with navigation due to mismatches in optic flow of the visual scene.
It is an aim of the present invention to address at least one disadvantage associated with the prior art.
An aspect provides a head mountable imaging apparatus for assisting a user with reduced vision comprising:
An advantage of at least one example is an imaging apparatus which is physically compact. For example, providing a circular or an elliptical display device can reduce the bulk of the optics required in front of the display device. This can improve user comfort (e.g. a smaller and/or lighter apparatus) and can allow the imaging apparatus to be worn for longer periods. An advantage of at least one example is an imaging apparatus with a wide field of view which matches the natural range of eye movements. An advantage of at least one example is an imaging apparatus with reduced distortion.
Optionally, the imaging apparatus comprises a first tubular element which surrounds the first display device and the first lens, with the first lens located at an eye-facing end of the first tubular element. The first tubular element may provide a light tight shield. The first lens may be supported by the first tubular element.
The tubular element can provide a light tight shield for blocking stray light in order to keep the contrast of the displays as high as possible. High contrast is important for partially-sighted people due to a common degradation in contrast sensitivity. As the first lens is supported by the first tubular element, the frame or housing of the imaging apparatus can have an open region to the side of the first lens, as the frame or housing does not have to provide support for the lens in this region. This allows a user to view the surrounding environment to the side of the display with their peripheral vision. As described above, users with central vision loss (CVL) often retain peripheral vision. The imaging apparatus also has a second tubular element, with the same features, for a second display and second lens.
Optionally, the imaging apparatus comprises an open region adjacent to the first lens such that a user is able to view a combination of an image on the first display device and surrounding environment outside the imaging apparatus.
The open region has an advantage of keeping the periphery clear for general spatial awareness, object location and obstacle avoidance. It has an advantage of reducing the feeling of isolation from the external world that is normally associated with a shielded headset type of imaging apparatus. The open region has an advantage of reducing nausea: motion sickness is strongly associated with peripheral vision, and keeping the periphery open means the user sees real-world motion there with zero latency. The open region has an advantage of improving airflow, preventing uncomfortable heat and moisture.
Optionally, the first display device is an opaque display device which does not allow a user to see through the display device. The user can only view what is displayed on the first display device (and the second display device) and the surrounding environment to the side of the first display device (and the second display device). This contrasts with headsets intended for Augmented Reality or Mixed Reality, where a user can see the real world through a display device, and views a combination of an image on the display device and the real world visible through the display device.
Optionally, the first camera has a first image sensor, and wherein the processor is configured to obtain the output from a selected region of the first image sensor which is a subset of an overall area of the first image sensor.
Optionally, the first image sensor has a rectangular shape.
Optionally, the image sensor has an x-axis and a y-axis and wherein the processor is configured to vary the position of the selected region in at least one of the x-axis and the y-axis.
Optionally, the processor is configured to vary a size of the selected region.
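By way of illustration only, the following Python sketch shows one way such a selected region might be represented and clamped to the overall sensor area. The names (CropRegion, select_region) and the use of a numpy frame are hypothetical assumptions, not part of this disclosure.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class CropRegion:
    """Hypothetical helper: a selected sub-region of the image sensor."""
    x: int       # left edge, in sensor pixels
    y: int       # top edge, in sensor pixels
    width: int   # size of the selected region
    height: int


def select_region(frame: np.ndarray, region: CropRegion) -> np.ndarray:
    """Return the output of the selected region of the sensor.

    The position is clamped so that varying x/y (panning) or varying
    the region size never reads outside the sensor's overall area.
    """
    sensor_h, sensor_w = frame.shape[:2]
    x = max(0, min(region.x, sensor_w - region.width))
    y = max(0, min(region.y, sensor_h - region.height))
    return frame[y:y + region.height, x:x + region.width]
```

Varying x and y moves the selected region along the sensor's axes; varying width and height resizes it, as in the optional features above.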
Optionally, the first camera is positioned on an outer, forward-facing, side of the imaging apparatus in front of the first display device.
Optionally, the first camera is aligned with a central axis of the first display device.
Optionally, the first camera is substantially aligned with an optical axis of the user's first eye.
Optionally, the first lens is a Fresnel lens, an aspheric lens or a plano convex lens.
Optionally, a distance between the first display and the first lens is less than a diameter or height of the first display. An example range of values of the distance between the first display and the first lens is between one half and two thirds of the diameter or height of the first display. Other values are possible.
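As a worked illustration of this range, using the 35 mm example display diameter given later in the description:

```latex
% Illustrative arithmetic only: spacing d between display and lens,
% for a display of diameter D = 35 mm.
\[
  \tfrac{1}{2}D \;\le\; d \;\le\; \tfrac{2}{3}D
  \quad\Longrightarrow\quad
  17.5\,\mathrm{mm} \;\le\; d \;\le\; 23.3\,\mathrm{mm}.
\]
```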
Optionally, the imaging apparatus is configured to display an image having a first angular field of view of the scene on the first display device, and the imaging apparatus is configured to provide to the user an angular field of view of the first display device which is the same as the first angular field of view. That is, the imaging apparatus is configured to provide to the user an angular field of view of the image displayed on the first display device which is the same as the first angular field of view.
Optionally, the imaging apparatus comprises a second display device which is circular or elliptical. The second display device may have any of the features described for the first display device.
The imaging apparatus may have a single camera, or may have multiple cameras, such as a camera dedicated to providing a display for each eye. The camera, or cameras, may be positioned on-axis (i.e. aligned with a central axis of the circular or elliptical display device) or positioned off-axis.
Optionally, a second camera is positioned on an outer, forward-facing, side of the imaging apparatus in front of the second display device.
Optionally, the second camera is aligned with a central axis of the second display device.
Optionally, the second camera is substantially aligned with an optical axis of the user's second eye.
An advantage of at least one example is an imaging apparatus with a unity gain factor between the first angular field of view (of the scene on the first display device) and the angular field of view of the first display device provided to the user. This has an advantage of reduced distortion. It can also allow a more seamless transition between what the user sees via the display device and what the user sees in the surrounding environment.
The imaging apparatus can be implemented as a pair of glasses with a frame and arms which locate over a user's ears. Other possible implementations include goggles or a visor with a restraint such as an elasticated strap or a band to fit around the user's head.
The functionality described here can be implemented in hardware, software executed by a processing apparatus, or by a combination of hardware and software. The processing apparatus can comprise a computer, a processor, a state machine, a logic array or any other suitable processing apparatus. The processing apparatus can be a general-purpose processor which executes software to cause the general-purpose processor to perform the required tasks, or the processing apparatus can be dedicated to perform the required functions. Another aspect of the invention provides machine-readable instructions (software) which, when executed by a processor, perform any of the described methods. The machine-readable instructions may be stored on an electronic memory device, hard disk, optical disk or other machine-readable storage medium. The machine-readable medium can be a non-transitory machine-readable medium. The term “non-transitory machine-readable medium” comprises all machine-readable media except for a transitory, propagating signal. The machine-readable instructions can be downloaded to the storage medium via a network connection.
Within the scope of this application it is envisaged that the various aspects, embodiments, examples and alternatives, and in particular the individual features thereof, set out in the preceding paragraphs, in the claims and/or in the following description and drawings, may be taken independently or in any combination. For example features described in connection with one embodiment are applicable to all embodiments, unless such features are incompatible.
For the avoidance of doubt, it is to be understood that features described with respect to one aspect of the invention may be included within any other aspect of the invention, alone or in appropriate combination with one or more other features.
One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying figures in which:
The imaging apparatus 5 provides each eye of the user with an image representing a view of the surrounding environment in front of the apparatus. In particular, the imaging apparatus 5 provides each eye with an image representing the view of the surrounding environment that the user would normally experience with that eye. A display 20, 30 is provided in front of each of the user's eyes. A first display 20 is provided in front of the user's left eye. A second display 30 is provided in front of the user's right eye. Each display 20, 30 is supported by the frame/housing 10. The position of the displays 20, 30 is best seen in the accompanying figures.
Each of the displays 20, 30 has a round shape, such as a circular shape or an oval/elliptical shape. In an example, each display is a circular OLED display with a diameter of 35 mm. Other dimensions are possible. The displays 20, 30 may be the type of round display used in smart watches, or any other suitable display.
Each lens 27, 37 is round and advantageously is slightly larger than the display. For example, a 40 mm diameter lens may be used with a 35 mm diameter display. Each display 20, 30 is placed at, or near, the focal plane of the lens. This allows the full area of the display to be viewed and, when the lens is placed close to the eye, results in an image that appears focused far away.
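This behaviour follows from standard thin-lens optics; the following is an illustrative sketch, not a statement of the claimed design. A display placed at the focal plane of a lens of focal length f emerges as collimated light, so the eye focuses as if the image were far away, and a display of diameter D subtends an apparent angle of:

```latex
\[
  \theta_{\mathrm{display}} \;=\; 2\arctan\!\left(\frac{D}{2f}\right).
\]
```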
A first tubular element 24 surrounds the display 20 and the lens 27, maintaining the lens 27 at a fixed distance from the display 20. The lens 27 is supported by the first tubular element 24; no other part of the frame 10 or housing is required to support the first lens 27. The display 20 (or the PCB 26 on which the display 20 is mounted) is located at the outward-facing end of the tubular element 24. The Fresnel lens 27 is located at, or close to, the eye-facing end of the tubular element 24. A region of empty space separates the display 20 and the lens 27. The first tubular element 24 may also provide a light tight shield between the lens 27 and the display 20. That is, the only optical path to/from the display is via the lens 27. This prevents stray light from reaching the display 20, which can improve readability of the display, especially under bright conditions, while avoiding the need to fully isolate the user from the surrounding environment in the manner of a conventional shielded headset. High contrast is important for partially-sighted people due to a degradation in contrast sensitivity. A second tubular element 34 provides the same functions for the right eye display 30 and lens 37.
The imaging apparatus 5 comprises an open region 16 adjacent to the first lens 27. The open region 16 is to the left hand side of the first lens 27. Similarly, the imaging apparatus 5 comprises an open region 17 adjacent to the second lens 37. The open region 17 is to the right hand side of the second lens 37. Instead of shielding the user from the surrounding environment, the user can view a combination of an image on the first display device (via the lens 27) and the surrounding environment outside the imaging apparatus. Similarly, the user can view a combination of an image on the second display device (via the lens 37) and the surrounding environment outside the imaging apparatus. The open region has an advantage of keeping the periphery clear for general spatial awareness, object location and obstacle avoidance. It has an advantage of reducing the feeling of isolation from the external world that is normally associated with a shielded headset type of imaging apparatus, and can reduce nausea. The open region has an advantage of improving airflow, preventing uncomfortable heat and moisture.
A prescription lens 28, 38 may also be provided. The prescription lens may compensate for short-sightedness (myopia), far-sightedness (hyperopia) and/or some other condition. In
One possible location for the processing unit 40 is the bridge region 11 of the frame/housing 10. Another possible location for the processing unit 40 is in one, or both, of the arms 12, 13. The imaging apparatus 5 may comprise a local power source, such as at least one battery housed in one, or both, of the arms 12, 13.
Advantageously the display 20, camera 25 and Fresnel lens 27 are all aligned, and are aligned with a main optical axis 21 of the user's eye 2. The Fresnel lens 27 is positioned within a field of view (FOV) of the user's eye. The lens 27 allows the user's eye to form a focused image of the display 20.
An aim of the imaging apparatus 5 is to appear, to the user, as if there is nothing but an empty glasses frame in front of their eye. To achieve this, the FOV 22 of the lens 27 and display 20, as seen by the user's eye, is matched to the FOV 25D of the scene displayed on the display 20. This gives a system magnification of 1× (unity) in terms of angular field of view. The relationship between the FOV 22 and the FOV 25D is shown in the accompanying figures.
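One way to express this 1× condition, under the thin-lens sketch above and on the assumption that the image read from a cropped sensor region of width w (behind a camera lens of focal length f_cam) fills a display of diameter D viewed through a lens of focal length f_lens, is:

```latex
\[
  2\arctan\!\left(\frac{D}{2 f_{\mathrm{lens}}}\right)
  \;=\;
  2\arctan\!\left(\frac{w}{2 f_{\mathrm{cam}}}\right)
  \quad\Longrightarrow\quad
  w \;=\; D\,\frac{f_{\mathrm{cam}}}{f_{\mathrm{lens}}}.
\]
```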
In conventional Virtual Reality (VR)/Augmented Reality (AR) imaging there is often a mismatch between the viewing angle of the cameras and the viewing angle of their displays. This causes further difficulties with navigation due to mismatches in the optic flow of the visual scene. Close objects move faster in the visual field than distant objects. When the image on the display is zoomed in, and appears at a greater size than real life, the increased optic flow makes everything appear closer than it is.
The user typically experiences a discontinuity between their view of the display 20, 30 and their view past the edge of the display 20, 30 due to distortions in the image. The discontinuity in the optic flow may induce nausea and make navigation around the world challenging. It may also make it difficult to perform tasks requiring hand-eye coordination.
In the imaging apparatus 5, the effects of this optical discontinuity are reduced. A user experiences a system magnification of unity by matching the camera focal length and sensor size to the display size and lens focal length. Fine adjustments to the system magnification are made digitally. This ensures that peripheral vision past the edge of the display and the image on the display are continuous. The user is then able to use peripheral vision with no mismatch in position, scale or flow of objects as they pass the boundary from peripheral vision to the display. The discontinuity at the boundary of the apparatus may be similar to that experienced at the frame of a conventional pair of glasses.
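A minimal Python sketch of this matching, assuming the thin-lens relationship given above; the function names, the pixel-pitch parameterisation and the gain-measurement step are illustrative assumptions, not taken from this disclosure.

```python
def unity_crop_width_px(display_diameter_mm: float,
                        lens_focal_mm: float,
                        cam_focal_mm: float,
                        pixel_pitch_um: float) -> int:
    """Sensor crop width (in pixels) giving ~1x angular magnification.

    Follows w = D * f_cam / f_lens (see the condition above), converted
    from millimetres to pixels using the sensor's pixel pitch.
    """
    width_mm = display_diameter_mm * cam_focal_mm / lens_focal_mm
    return round(width_mm * 1000.0 / pixel_pitch_um)


def fine_adjust(crop_width_px: int, measured_gain: float) -> int:
    """Digital fine adjustment of the system magnification.

    If the displayed image is measured to appear e.g. 3% larger than
    life (gain 1.03), widening the crop by the same factor brings the
    system magnification back to unity.
    """
    return round(crop_width_px * measured_gain)
```

With purely illustrative values (a 35 mm display, a 20 mm viewing lens, a 4 mm camera lens and a 1.4 µm pixel pitch), the crop would be 35 × 4 / 20 = 7 mm wide, or 5000 sensor pixels.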
The user's field of view FOV 22 of the lens 27, and the display 20 beyond the lens 27, is determined by factors such as the size of the lens 27, the size of the display 20 and the distance 50 between the lens 27 and the eye 2. As explained above, the eye 2 has a wider overall FOV than FOV 22. The extent of the wider FOV of the eye is shown by the dashed lines 6, 7. When the eye is located in one of the extreme positions 2B, 2C the user's gaze is directed approximately one third of the way across the display 20. The full display 20 will still be visible within the user's peripheral vision. The world beyond the edge of the lens 27 will also be visible in the user's peripheral vision, assuming the glasses frame does not obstruct this. This is true even when the gaze is directed straight ahead. Rotation of the eye 2 effectively translates the pupil, and the edge of the lens 27 effectively acts as a window through which the display 20 is viewed. This means that as the eye rotates the view of the display 20 will appear to be cropped differently; if the eye is rotated to the left, the display will be cropped on the left hand edge. By configuring the lens 27 with a larger diameter than the diameter of the display 20, this effect should be minimised or negligible.
As explained above, the scene displayed on the display 20 has a FOV 25D.
The circular display 20 displays an image which is selected from a region of the image sensor 25A. Stated another way, the image sensor 25A is cropped to provide the image for display.
A position of the cropped region of the image sensor 25A may be selected by the processing unit 40. For example, the cropped region used for output to the display 20 may be moved in the x-axis and/or y-axis.
The size of the cropped region may be varied by the processing unit 40. Size may be varied by a digital zoom operation, i.e. a digital-domain manipulation of the mapping between the pixels of the image sensor 25A and the pixels of the display 20. A digital zoom-in function is shown in the accompanying figures.
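A minimal sketch of such a digital zoom, assuming the unity-magnification crop has already been selected and using OpenCV's resize for the pixel-mapping step; the display resolution and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

DISPLAY_PX = 454  # illustrative resolution for a round smartwatch-class panel


def digital_zoom(unity_crop: np.ndarray, zoom: float) -> np.ndarray:
    """Digitally zoom by re-mapping sensor pixels to display pixels.

    zoom = 1.0 shows the unity-magnification crop unchanged in scale;
    zoom = 2.0 maps half as much of the scene onto the same display
    area, i.e. a 2x zoom-in.
    """
    h, w = unity_crop.shape[:2]
    cw, ch = int(w / zoom), int(h / zoom)
    x0, y0 = (w - cw) // 2, (h - ch) // 2
    crop = unity_crop[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (DISPLAY_PX, DISPLAY_PX),
                      interpolation=cv2.INTER_LINEAR)
```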
Position and/or size of the displayed region may be selected by manual control. For example, a user can manually enlarge (zoom in) or shrink (zoom out) the image based on their own needs and visual experience. A user interface to control the zoom function may be provided on the imaging apparatus 5 (e.g. buttons on the arms 12, 13).
In another situation, the zoom level may be preconfigured for the wearer by a qualified technician or clinician based on factors such as: the shape of the user's face; the distance from the eye to the lens 27 when the imaging apparatus is worn by the user.
The camera lens 25B is a wide angle lens. This type of lens inevitably has non-ideal optical properties.
As described above, the imaging apparatus can have a single camera, such as a single camera which is centrally-mounted on the front of the frame 10 or housing. The single camera has a FOV which is sufficient to provide images to each display. For example, to provide a FOV to each eye of 60 degrees, the single camera may have a FOV of 80 degrees. An output of the single camera is processed to provide an image to the left eye display 20 and to the right eye display 30. The images displayed by each display 20, 30 can have the same unity gain factor described above. That is, the left eye display 20 is configured to display an image having a first angular field of view of the scene in front of the left eye on the left eye display device, and the imaging apparatus is configured to provide to the user an angular field of view of the left eye display device which is the same as the first angular field of view. Similarly, the right eye display 30 is configured to display an image having a first angular field of view of the scene in front of the right eye on the right eye display device, and the imaging apparatus is configured to provide to the user an angular field of view of the right eye display device which is the same as the first angular field of view. This gives continuity between the displayed image and the real world, and continuity between the displayed image and the surrounding environment visible through the open regions 16, 17 to the side of the lenses 27, 37.
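A sketch of how a single wide-FOV frame might be divided into the two per-eye images, under a simple pinhole-camera assumption; the 80°/60° figures come from the example above, while centring both crops (rather than offsetting each onto its eye's optical axis) is a simplifying assumption, not taken from this disclosure.

```python
import math

import numpy as np


def eye_views(frame: np.ndarray,
              cam_fov_deg: float = 80.0,
              eye_fov_deg: float = 60.0) -> tuple[np.ndarray, np.ndarray]:
    """Split one wide camera frame into left-eye and right-eye images.

    Under a pinhole model, the crop width for a 60 degree view out of
    an 80 degree frame follows the ratio of tangents, not of angles.
    """
    h, w = frame.shape[:2]
    focal_px = (w / 2) / math.tan(math.radians(cam_fov_deg) / 2)
    crop_w = int(2 * focal_px * math.tan(math.radians(eye_fov_deg) / 2))
    x0 = (w - crop_w) // 2
    crop = frame[:, x0:x0 + crop_w]
    return crop.copy(), crop.copy()  # left view, right view
```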
Another aspect of the disclosure may be understood with reference to the following numbered clauses.
Throughout the description and claims of this specification, the words “comprise” and “contain”, and variations of those words, for example “comprising” and “comprises”, mean “including but not limited to”, and are not intended to (and do not) exclude other moieties, additives, components, integers or steps.
Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
Features, integers or characteristics described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith.
Number | Date | Country | Kind
1902163.3 | Feb 2019 | GB | national
1902164.1 | Feb 2019 | GB | national

Filing Document | Filing Date | Country | Kind
PCT/GB2020/050354 | 2/14/2020 | WO | 00