The present disclosure relates to a device for taking an image of an object and displaying the image on an associated screen; more specifically, the disclosure relates to a vision assistive device with an extended depth of field.
As many as 10 million Americans are blind or visually impaired. Over the next 30 years, this number is expected to double. Thus, there is an ever-growing need to provide members of the blind/low-vision community with the tools necessary to accomplish daily tasks. These tasks may include reading documents and inspecting or manipulating objects. Although these tasks may be routine for sighted individuals, they present unique difficulties for blind or low vision users.
The background art contains numerous examples of devices for assisting those with visual difficulties. One such example is U.S. Pat. No. 6,198,547 to Matsuda, which discloses an apparatus for reading a document and extracting an image. The apparatus is capable of reproducing the obverse side of a paper such that a copy taken is free of the adverse effect of characters printed on the reverse side of the paper and seen therethrough.
Another example is disclosed by U.S. Pat. No. 6,570,583 to Kung. Kung discloses a handheld device with a display that can zoom in or out according to a signal from a control device. The control device can also be used to make changes to font and icon sizes.
U.S. Pat. No. 7,899,310 to Hsieh is an example of a document snapshot device. The device includes a baseboard, a camera, and a foldable supporting device. The camera is designed to rotate along a plane to assist in taking a snapshot of a document.
One problem with the devices of the background art is that they require complex setup procedures before they can be used. Moreover, operating current vision assistance devices can be problematic for a visually impaired person due to the heavy reliance on digital controls, such as keypads, keyboards, mice, and touch screens. Known devices also suffer from a limited depth of field. As such, magnified objects tend to have regions that are blurry or out of focus. There is a further need for vision assistive devices that can be used in multiple configurations. The vision assistive device of the present disclosure is designed to overcome these and other shortcomings of the background art.
One of the advantages realized by the present device is that it achieves an extended depth of field while using only a single camera.
By providing an extended depth of field the present device helps eliminate peripheral areas that may be out of focus or blurry and does so without the need for multiple image sensors.
A further benefit is realized by allowing the entire surface of an object to be imaged by a single, static image sensor.
Another advantage is attained by permitting a single camera to sequentially change its area of focus and then combine the resulting images to arrive at a single integrated, focused image.
An improvement over known devices is realized by providing a single image sensor that can change its area of focus via digital processing techniques, thereby eliminating the need to pivot or otherwise move the image sensor.
Still yet another advantage is realized by providing a sensor with a focus motor to rotate the sensor and image multiple areas of focus.
Another benefit over known devices is attained by providing a device that can be employed in a standing configuration upon a desktop or a reclined configuration upon a user's lap.
Another advantage is attained by providing a fin-shaped structure upon the rear face of the device, with the fin-shaped structure furthering the stability of the device in various orientations.
These and other objects are achieved by providing a vision assistive device for use by blind or low vision users. The device includes an imaging unit that is cantilevered from the main body of the device, thereby permitting the unit to view objects positioned beneath the device. The device further includes a forwardly facing screen for displaying an enlarged view of the imaged object to the user. The imaging unit can take multiple views of the object, with different areas of focus. The areas of focus can be changed digitally or via the use of a focus motor. These various images can then be combined into a single, focused, composite image. The device further includes a rearwardly positioned fin that facilitates multiple orientations of the device. In the first orientation, the device is vertically positioned upon a desktop. In the second orientation, the device is reclined and placed in the user's lap with the fin positioned between the user's legs.
The foregoing has outlined rather broadly the more pertinent and important features of the present invention in order that the detailed description of the invention that follows may be better understood so that the present contribution to the art can be more fully appreciated. Additional features of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and the specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims.
The presently disclosed embodiments will be further explained with reference to the attached drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments.
Similar reference characters refer to similar parts throughout the several views of the drawings.
The present disclosure relates to a vision assistive device for use by blind or low vision users. The device includes an imaging unit for viewing objects positioned beneath the device. The device further includes a forwardly facing screen for displaying an enlarged view of the imaged object to the user. The imaging unit is configured to take multiple views of the object, each with a different area of focus. This can be accomplished by digitally changing the imaging sensor's area of focus or by pivoting the sensor via a focus motor. In either event, a single sensor takes multiple images that are combined into a single, integrated, focused, composite image. Combining images with differing areas of focus helps eliminate any blurry regions in the composite image. The device further includes a rearwardly positioned fin that facilitates positioning the device in multiple orientations. In a first orientation, the fin stabilizes the device in a vertically oriented position. In a second orientation, the device is reclined and placed in the user's lap with the fin positioned between the user's legs.
Device 20 includes a main housing 22 with a base region 24, a top region 26, an intermediate region 28, and front and back faces (32 and 34). As noted in
With reference to
The imaging unit 56 of the device is described next. A single imaging unit 56 is preferably formed within a bottom surface 44 of the overhang so as to point downwardly towards the object 74 to be imaged. Two or more lights 58 such as LEDs may be positioned within an inset and adjacent to the imaging unit 56 to provide proper illumination for the object. As noted in
If only a single area of focus is utilized, the peripheral regions of the resulting image may be blurry or out of focus. In accordance with the invention, the focal length is varied and multiple areas of focus are utilized. This can be accomplished by adjusting the distance between lens 66 and image sensor 62 while at the same time employing digital processing techniques to change the area of focus. Such techniques permit the area of focus to be changed digitally via a single, static image sensor 62. In the preferred embodiment, three different areas of focus are employed as noted by 76(a), 76(b), and 76(c) in
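The fusion of several images, each sharp in a different region, into one all-in-focus composite is commonly known as focus stacking. The following is a minimal illustrative sketch of that general technique, not the patented implementation: it scores per-pixel sharpness with a Laplacian response and, for each pixel, keeps the value from the sharpest source image. All function names here are hypothetical.

```python
import numpy as np

def laplacian_abs(img):
    # 4-neighbour Laplacian magnitude as a simple per-pixel sharpness measure
    return np.abs(
        np.roll(img, 1, 0) + np.roll(img, -1, 0)
        + np.roll(img, 1, 1) + np.roll(img, -1, 1)
        - 4.0 * img
    )

def box_blur(img, r=2):
    # local averaging so the sharpness map is less sensitive to pixel noise
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, 0), dx, 1)
    return out / (2 * r + 1) ** 2

def focus_stack(images):
    """Fuse grayscale images of the same scene, each focused on a
    different region, by taking the sharpest source image per pixel."""
    stack = np.stack([img.astype(np.float64) for img in images])
    sharpness = np.stack([box_blur(laplacian_abs(im)) for im in stack])
    best = np.argmax(sharpness, axis=0)  # index of sharpest image per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

In practice a production implementation would also align the exposures and blend across seams, but the per-pixel sharpness-selection step above is the core idea behind combining images taken at the three focus areas into one composite.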
In an alternative embodiment, the focal length can be altered by way of the focus motor 72 associated with the image sensor 62. Specifically, the image sensor 62 can be pivoted about two intersecting and perpendicular X and Y axes via focus motor 72. This allows the image sensor to physically change its area of focus instead of relying upon digital processing techniques. Movement of the image sensor 62 can be controlled manually via the user or automatically on the basis of a pre-established imaging program.
In the example depicted in
However, in accordance with the disclosure, this undesirable result is avoided by selectively changing the focal length “F” and the area of focus. For example as noted in
In the embodiment depicted in
Next as illustrated in
The present disclosure includes that contained in the appended claims, as well as that of the foregoing description. Although this invention has been described in its preferred form with a certain degree of particularity, it is understood that the present disclosure of the preferred form has been made only by way of example and that numerous changes in the details of construction and the combination and arrangement of parts may be resorted to without departing from the spirit and scope of the invention.
Number | Name | Date | Kind |
---|---|---|---|
D211414 | Hockenberry | Jun 1968 | S |
D254868 | Hoadley | Apr 1980 | S |
D270277 | Studer | Aug 1983 | S |
4888195 | Huhn et al. | Dec 1989 | A |
4928170 | Soloveychik et al. | May 1990 | A |
5633674 | Trulaske et al. | May 1997 | A |
D430588 | Goldberg et al. | Sep 2000 | S |
6115482 | Sears et al. | Sep 2000 | A |
6198547 | Matsuda | Mar 2001 | B1 |
6570583 | Kung et al. | May 2003 | B1 |
6731326 | Bettinardi | May 2004 | B1 |
6791600 | Chan | Sep 2004 | B1 |
6965862 | Schuller | Nov 2005 | B2 |
7626634 | Ohki et al. | Dec 2009 | B2 |
D623214 | Onoda | Sep 2010 | S |
7805307 | Levin et al. | Sep 2010 | B2 |
7899310 | Hsieh et al. | Mar 2011 | B2 |
8113841 | Rojas et al. | Feb 2012 | B2 |
8194154 | Yoon et al. | Jun 2012 | B2 |
8284999 | Kurzweil et al. | Oct 2012 | B2 |
8681268 | Reznik et al. | Mar 2014 | B2 |
9449531 | Reznik et al. | Sep 2016 | B2 |
20030043114 | Silfverberg et al. | Mar 2003 | A1 |
20040100575 | Malzbender | May 2004 | A1 |
20040165100 | Motta | Aug 2004 | A1 |
20060257138 | Fromm | Nov 2006 | A1 |
20070188626 | Squilla et al. | Aug 2007 | A1 |
20070253703 | Tsai et al. | Nov 2007 | A1 |
20080151056 | Ahamefula | Jun 2008 | A1 |
20080186287 | Saila | Aug 2008 | A1 |
20080260210 | Kobeli et al. | Oct 2008 | A1 |
20090225164 | Renkis | Sep 2009 | A1 |
20090244301 | Border et al. | Oct 2009 | A1 |
20100201801 | Maruyama et al. | Aug 2010 | A1 |
20100215354 | Ohnishi | Aug 2010 | A1 |
20110128825 | Tanaka | Jun 2011 | A1 |
20110194155 | Kasuga | Aug 2011 | A1 |
20120001999 | Schirdewahn et al. | Jan 2012 | A1 |
20120154543 | Kasuga | Jun 2012 | A1 |
20130187774 | Muecke et al. | Jul 2013 | A1 |
20130215322 | Haler | Aug 2013 | A1 |
20130314593 | Reznik | Nov 2013 | A1 |
20150177382 | Vogel | Jun 2015 | A1 |
20170069228 | Reznik et al. | Mar 2017 | A1 |
Number | Date | Country |
---|---|---|
2065871 | May 2010 | EP |
2005318023 | Nov 2005 | JP |
2013177380 | Nov 2013 | WO |
Entry |
---|
Liang, Jian, Doermann, David, Huiping, Li; “Camera-based analysis of text and documents: a survey”; International Journal of Document Analysis and Recognition (IJDAR), Springer, Berlin, DE, vol. 7, No. 2-3, Jul. 1, 2005 (Jul. 1, 2005), pp. 84-104, XP019352711, ISSN: 1433-2825, DOI: 10.1007/S10032-004-0138-Z. |
Number | Date | Country | |
---|---|---|---|
20200258421 A1 | Aug 2020 | US |