This disclosure relates to stabilization of detectors used for imaging, and contextual processing of measurement data.
When visual information is acquired using a detector mounted to a prosthetic device worn by a patient, the information can include artifacts that reflect involuntary movement of the patient rather than intentional positioning of the detector. These artifacts can limit the usefulness of the acquired visual information. Methods for processing acquired images using various software algorithms in an attempt to reduce such artifacts have been described. As the amount of visual information acquired increases, the hardware requirements for implementing such algorithms to produce relatively rapid results also increase.
The methods and systems disclosed herein can be used to compensate for motion of a detector while the detector acquires information. While motion of a detector can occur in a variety of circumstances, an important application of the methods and systems disclosed herein involves the use of detectors to provide visual information to persons with reduced visual acuity. In such circumstances, involuntary motion of the detector (as occurs, for example, when a person is wearing the detector while walking or moving their head) can lead to aberrations such as shakiness and blurring of the visual information. As a result, a person receiving the blurred visual information, or other information derived from the visual information, may have difficulty interpreting the information.
The methods and systems disclosed herein can compensate for involuntary motion of a detector. One or more sensors coupled to the detector detect and transmit information about linear and/or angular motion of the detector to an electronic processor. The processor analyzes the motion of the detector, separating the motion into one or more components. Each component is assigned to a particular class by the processor. As used herein, a “class” corresponds to a certain category, type, source, or origin of motion. For example, certain classes of motion correspond to involuntary movement of the detector and are designated for compensation. Other classes of motion are recognized by the electronic processor as corresponding to voluntary movement of the detector (e.g., the movement of the detector that occurs when the wearer shifts his or her gaze), and are not designated for compensation. The electronic processor can then generate control signals that drive actuators to compensate for the components of the motion designated for compensation. The actuators can be coupled to one or more components of the system including, for example, the detector (e.g., the exterior housing of the detector), one or more sensing elements within the detector (e.g., CCD arrays and/or other sensor elements such as diodes), and one or more optical elements through which light from the scene being imaged passes (e.g., one or more mirrors, lenses, prisms, optical plates, and/or other such elements). Actuators can also be coupled to more than one component in the system, including any of the foregoing types of components.
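The compensation cycle described in the preceding paragraph can be sketched as follows. The sketch is illustrative only: the function interfaces, the class label "involuntary", and the example threshold are hypothetical placeholders and do not correspond to any particular embodiment.

```python
# Illustrative sketch of one compensation cycle: measure detector motion,
# separate it into components, classify each component, and counteract
# only the components designated for compensation. All interfaces here
# are hypothetical placeholders.

def compensation_cycle(read_sensors, decompose, classify, drive_actuators):
    """One pass of the motion-compensation loop described in the text."""
    motion = read_sensors()                  # linear and/or angular motion
    components = decompose(motion)           # e.g. by frequency or amplitude
    to_compensate = [c for c in components
                     if classify(c) == "involuntary"]
    drive_actuators(to_compensate)           # adjust detector position
    return to_compensate

# Example with stub interfaces: small-amplitude motion is treated as
# involuntary (an assumed, illustrative rule).
comps = compensation_cycle(
    read_sensors=lambda: [("pitch", 0.05), ("yaw", 0.6)],
    decompose=lambda m: m,
    classify=lambda c: "involuntary" if abs(c[1]) < 0.1 else "voluntary",
    drive_actuators=lambda cs: None,
)
```

In this sketch only the small pitch component is returned for compensation; the large yaw component is treated as voluntary and passed through uncompensated.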
In general, in a first aspect, the disclosure features an image stabilization system that includes a detector configured to detect images, an actuator coupled to the detector, a sensor coupled to the detector and configured to detect motion of the detector, and an electronic processor in communication with the sensor and the actuator, where the electronic processor is configured to: (a) receive information about motion of the detector from the sensor; (b) determine components of the motion of the detector, and associate a class with each of the determined components; (c) identify components to be compensated from among the determined components based on the associated classes; and (d) generate a control signal that causes the actuator to adjust a position of at least a portion of the detector to compensate for the identified components.
Embodiments of the system can include any one or more of the following features.
The sensor can include at least one of an accelerometer and a gyroscope. The detector can include a camera. The actuator can include at least one of a mechanical actuator and a piezoelectric actuator.
The motion of the detector can include at least one of motion along a linear direction and angular motion about an axis. The actuator can be configured to adjust the position by at least one of translating and rotating the at least a portion of the detector.
The system can include a support structure to which the detector, actuator, sensor, and electronic processor are attached. The support structure can be eyeglass frames or a hat or cap.
The system can include a receiver in communication with the detector and configured to: (a) receive information from the detector, where the information is derived from one or more images detected by the detector; and (b) transmit a representation of the received information to a human. The receiver can include a visual implant positioned in an eye of the human.
The system can include at least one additional sensor coupled to the detector, where each sensor is configured to detect linear motion along any of three orthogonal axes or angular motion about any one of the three orthogonal axes, and where each sensor detects a different motion of the detector.
The system can be worn by a human, and one of the associated classes can include involuntary motion of the detector by the human. Another one of the classes can include voluntary motion of the detector by the human.
Embodiments of the system can also include any of the other features disclosed herein, in any combination, as appropriate.
In another aspect, the disclosure features a method for image stabilization that includes obtaining image information using a detector, detecting motion of the detector while the image information is obtained, determining components of the motion of the detector and associating a class with each of the determined components, identifying components to be compensated from among the determined components based on the associated classes, and adjusting a position of at least a portion of the detector to compensate for the identified components while the image information is obtained.
Embodiments of the method can include any one or more of the following features, as appropriate.
Obtaining image information can include detecting one or more images.
Detecting motion of the detector can include at least one of detecting a linear displacement of the detector along a direction and detecting an angular displacement of the detector about an axis. Detecting motion of the detector can include at least one of detecting linear displacements of the detector along at least two orthogonal coordinate directions, and detecting angular displacements of the detector about at least two orthogonal axes.
One of the classes can include involuntary motion of the detector by a wearer of the detector. One of the classes can include voluntary motion of the detector by the wearer.
Adjusting the position of at least a portion of the detector to compensate for the identified components can include directing an actuator coupled to the detector to: (a) linearly displace the detector along a direction opposite to a linear displacement corresponding to at least one of the identified components; or (b) angularly displace the detector about an axis in a direction opposite to an angular displacement about the same axis corresponding to at least one of the identified components; or (c) both (a) and (b).
Determining components of the motion of the detector can include detecting a magnitude of a displacement of the detector relative to a reference position, and identifying components of the motion based upon the magnitude of the displacement. The method can include associating a class with at least some of the determined components based upon the magnitude of the displacement.
Determining components of the motion of the detector can include detecting a magnitude of a displacement of the detector relative to a reference position, determining one or more frequencies associated with the displacement of the detector, and identifying components of the motion based upon the one or more frequencies. The method can include associating a class with at least some of the determined components based upon the determined frequencies.
The method can include transmitting the image information from the detector to a receiver. The detector can be worn by a human, and the receiver can be a visual implant positioned in an eye of the human.
The method can include determining a velocity of the detector, and halting transmission of the image information to the receiver when a magnitude of the velocity exceeds a threshold value.
Embodiments of the method can also include any of the other features disclosed herein, in any combination, as appropriate.
In a further aspect, the disclosure features a method for transmitting image information that includes detecting motion of a detector while the detector measures image information, decomposing the detected motion into a plurality of components to identify a portion of the motion to be compensated based on at least one of a magnitude, a frequency, and a velocity of the motion, and while the detector measures image information: (a) moving the detector to offset the identified portion of the motion; and (b) transmitting information derived from the image information to a receiver.
Embodiments of the method can include any one or more of the features disclosed herein, in any combination, as appropriate.
In another aspect, the disclosure features a method for image correction that includes obtaining an image using a detector, detecting motion of the detector while the image is obtained, transmitting information about the motion of the detector to an electronic processor, and using the electronic processor to correct the image based on the information about the motion of the detector.
Embodiments of the method can include any one or more of the following features.
Correcting the image can include reducing artifacts in the image that arise from the motion of the detector.
The method can include: determining components of the motion of the detector and associating a class with each of the determined components; identifying components to be compensated from among the determined components based on the associated classes; and correcting the image by compensating for the effects of the identified components in the image.
Embodiments of the method can also include any of the other features disclosed herein, in any combination, as appropriate.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of the disclosed embodiments, suitable methods and materials are described below. All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entirety. In case of conflict, the present specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and not intended to be limiting.
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features and advantages of the invention will be apparent from the description, drawings, and claims.
Like reference symbols in the various drawings indicate like elements.
Introduction
Compensating for the motion of a detector during data acquisition is a problem that is common to a variety of different applications. Visual information such as still or video images is particularly prone to aberrations such as shakiness and blurring that result from motion of the detector as the information is obtained. Attempts to compensate for detector motion during the acquisition of visual information have generally involved increasingly sophisticated software methods for analyzing and numerically processing images after they are acquired. However, as the information density in the acquired images increases, the computing hardware requirements for implementing complex image processing algorithms so that they execute within a reasonable time also must increase. Moreover, more advanced hardware tends to be larger, to consume more power, and to be unsuitable for implementation in portable applications. Thus, alternatives to known image processing algorithms for image stabilization are needed.
Human visual impairment is a significant handicap that dramatically alters the quality of life for an affected person. In recent years, efforts toward enhancing the visual acuity of persons with visual impairment have focused on methods for artificially acquiring and conveying visual information in a manner that can be interpreted by such persons. This disclosure features systems for providing such visual information, and other types of information derived from visual information, that can be housed in a variety of wearable prosthetics such as eyeglass frames. The systems typically feature one or more detectors configured to acquire visual information and to transmit that information to the wearer of the prosthetic device.
The methods and systems disclosed herein provide for compensation of visual information by directly compensating for the motion of the detector as the visual information is acquired. In this way, the visual information is already compensated when it is acquired, and no post-acquisition processing is necessary. As a result, the computing hardware requirements and power consumption of the systems disclosed herein are, in general, significantly reduced compared to systems that rely on software-based post-processing of visual information. However, the present systems can also be combined with known image stabilization software systems. For example, the sensors disclosed herein can be used to estimate the motion of one or more detectors, rather than using only software-based image processing algorithms to perform this estimation, as is common in conventional motion compensation systems. Typically, estimation of detector motion using image processing algorithms creates a feedback “bottleneck,” limiting the rate at which information can be provided to the wearer of such systems. The systems and methods disclosed herein allow this processing bottleneck to be largely circumvented, making real-time or near-real-time motion compensation feasible.
Systems for Motion Compensation During Active Detection
To implement motion compensation of a detector, however, it can be important to determine the context in which the motion occurs. By determining the proper context of the motion, the systems and methods disclosed herein can selectively compensate for certain types of motion, and not compensate for other types of motion. Further, the systems and methods disclosed herein can deliver visual information to a person that is appropriate to the context in which the information will be used. As an example, consider the case of a visually impaired person wearing a visual detector and undergoing locomotion (e.g., walking).
For a person undergoing locomotion to navigate reliably based on visual information obtained by prosthetic device 20, a detector in prosthetic device 20 that measures the visual information should remain stably fixed upon a particular scene of interest. However, as discussed above, visual information measured by prosthetic device 20 will typically include artifacts that result from involuntary motion of the person's head 10 during acquisition of the information. Such artifacts can make the visual information difficult to interpret for the person receiving it. A similar effect referred to as oscillopsia has been observed in sighted persons who lose their ability to stabilize their gaze in space due to the loss of inner ear balance and vestibular function. To avoid such problems, involuntary components of the motion of the detector in prosthetic device 20 (which are the result of involuntary components of the motion of the person's head 10) are compensated by the systems and methods disclosed herein, so that the visual information provided to the person wearing the prosthetic device reflects a “steady gaze” of the detector on a scene of interest.
On the other hand, consider the case where a person at rest slowly turns his or her head (thereby moving the detector located in prosthetic device 20) to look at an object of interest. In this example, compensating for the motion of the detector is counterproductive, because the movement of the detector is intended by the wearer—it is not involuntary. Compensating for such motion frustrates the intent of the wearer, which is to change the orientation of the detector and thereby alter the nature of the visual information that is acquired.
To provide visual information that is appropriate to a person in a wide variety of circumstances, the systems and methods disclosed herein implement context-dependent compensation of acquired information. That is, the systems and methods measure the motion of one or more detectors used to acquire visual information, analyze the components of the motion of the detectors to determine the context of each component of the motion, and then compensate only the components of motion that are appropriate for compensation. Whether or not it is appropriate to compensate a particular component of motion depends on the context under which that component of motion arises. For example, involuntary movements of a detector are typically compensated, whereas voluntary (e.g., on the part of the wearer of the prosthetic device) movements of the detector are not compensated.
In this way, the systems and methods disclosed herein bear some similarities to the natural methods by which the brain perceives information. When a person is walking, for example, the brain ensures that involuntary head bobbing and translation are compensated by rapid countervailing movements of the eyes, so that the person's gaze remains fixed on a particular scene. A similar type of compensation occurs when a person nods his or her head to indicate “yes” or “no”—the person's gaze generally remains fixed on a particular scene through compensating movements of the eyes which are initiated by the brain. In contrast, when a person rapidly turns his or her head sideways (e.g., to follow a moving object), the brain generally does not compensate the acquired visual information for the motion, and the visual information that is perceived by the person typically is blurred as a result.
Detector 210 can generally include one or more detectors configured to acquire visual information. Suitable detectors include CCD cameras, CMOS-based detectors, analog imaging devices, diode arrays, and other detectors capable of obtaining visual information and, more generally, measuring optical signals.
System 200 can include one or more sensors 220. Although
A variety of different types of sensors can be used to detect motion of detector 210. In some embodiments, for example, sensors 220 include accelerometers configured to detect linear motion of detector 210. In certain embodiments, sensors 220 include gyroscopes configured to detect angular motion of detector 210. Other sensors that can be used to detect motion of detector 210 include: magnetometers, which measure angular yaw motion of detector 210 by sensing magnetic field changes; and global positioning system (GPS) detectors. In some embodiments, sensors that include multiple different types of detectors can be used to detect motion of detector 210. For example, commercially available inertial measurement units (IMUs) can be used to detect motion of detector 210. One commercially available IMU that can be used in the systems and methods disclosed herein is the Six Degrees of Freedom Inertial Sensor ADIS16385 (available from Analog Devices Inc., Norwood, Mass.) which includes three accelerometers and three gyroscopes in a single integrated package, and can be used to detect roll, pitch, and yaw angular motions of detector 210, and also acceleration in each of three orthogonal coordinate directions. Other commercially available IMUs that can be used in the systems and methods disclosed herein include the MPU-6000 Motion Processor, available from InvenSense (Sunnyvale, Calif.), and the Ten Degrees of Freedom Inertial Sensor ADIS16407 (available from Analog Devices Inc.).
Actuators 230 are coupled to detector 210 and configured to adjust the position of detector 210 by receiving control signals from electronic processor 240 via communication line 257. Although system 200 in
In some embodiments, system 200 includes receiver 260. Receiver 260 is generally configured to receive acquired visual information from detector 210, either via a communication line linking detector 210 and receiver 260, or through another communications interface (such as a wireless communications interface). Receiver 260 can also be configured to receive information from external sources such as other computing devices; in some embodiments, detector 210 provides visual information to an external device, which then processes the visual information and relays a portion of the processed information to receiver 260.
Receiver 260 can take a variety of different forms. In some embodiments, for example, receiver 260 can be a visual implant that receives visual information and transforms the visual information into electronic signals that can be interpreted as images by the implant wearer. In certain embodiments, receiver 260 can be a device that receives visual or other information and transforms the information into other types of signals such as sounds, speech, vibrations, and/or other non-image visual cues (e.g., flashing lights and other visual signals that convey information or warnings to the wearer). Examples of useful receivers are disclosed, for example, in U.S. Pat. No. 5,935,155, and at the internet address seeingwithsound.com, the entire contents of which are incorporated herein by reference. Certain receivers can also include tactile vibrators such as the C2 tactor (available from EAI, Casselberry, Fla.).
Context-Based Compensation Methods and Systems
Steps 330, 340, and 350 in flow chart 300 provide for context-based analysis of the motion of detector 210. In step 330, electronic processor 240 analyzes the motion of detector 210 to identify various components of the detector motion. In some embodiments, the identification of different components of the detector's motion is analogous to identifying the causes or the context underlying the detector's motion.
Components of the detector's motion can be identified by analyzing the specific patterns of detector motion detected by sensors 220. As an example, when detector 210 is worn by a person, high-velocity and small-amplitude motions of detector 210 are likely to correspond to involuntary motion as occurs, for example, when the person undergoes locomotion. In contrast, larger-amplitude low-velocity motions of detector 210 are likely to correspond to voluntary motion as occurs, for example, when the person intentionally turns his or her head to shift gaze. On the basis of these principles, electronic processor 240 can analyze the detected motion of detector 210 to identify the various components of the detector's motion in step 330. Then, in step 340, the electronic processor can assign the various motion components to classes. For a system configured to acquire visual information, the classes typically correspond to different contexts or causes of the detector motion.
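The amplitude- and velocity-based heuristic described above can be expressed compactly. In the sketch below, the numerical thresholds are assumed values chosen purely for illustration; an actual embodiment would select thresholds appropriate to its sensors and wearer.

```python
# Illustrative classification heuristic: small-amplitude, high-velocity
# motion is treated as involuntary (e.g., head bob during locomotion),
# while large-amplitude, low-velocity motion is treated as voluntary
# (e.g., an intentional gaze shift). Threshold values are assumptions.

AMPLITUDE_THRESHOLD_RAD = 0.1    # assumed boundary, illustrative only
VELOCITY_THRESHOLD_RAD_S = 1.0   # assumed boundary, illustrative only

def classify_motion(amplitude_rad, velocity_rad_s):
    """Assign a motion component to a class based on its signature."""
    if (amplitude_rad < AMPLITUDE_THRESHOLD_RAD
            and velocity_rad_s > VELOCITY_THRESHOLD_RAD_S):
        return "involuntary"
    if (amplitude_rad >= AMPLITUDE_THRESHOLD_RAD
            and velocity_rad_s <= VELOCITY_THRESHOLD_RAD_S):
        return "voluntary"
    return "indeterminate"   # mixed signatures require further analysis
```

A small, fast oscillation (e.g., 0.05 rad at 2.5 rad/s) classifies as involuntary under these assumed thresholds, while a large, slow rotation (e.g., 0.6 rad at 0.3 rad/s) classifies as voluntary.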
In general, electronic processor 240 can be configured to identify many different components of motion and to assign the identified components to many different classes or contexts. Three exemplary classes will be discussed below. The first such class or context is motion of a detector due to locomotion by a person wearing or carrying the detector on his or her head. When a person undergoes locomotion, the person's head translates up and down (e.g., parallel to the z-coordinate direction in
In individuals with normal visual acuity, the brain eliminates the blurring effects of locomotion through the linear vestibulo-ocular reflex (lVOR) and the angular vestibulo-ocular reflex (aVOR) to compensate for linear and angular head motion, respectively. In general, the relative contributions of the lVOR and aVOR depend upon the distance of the objects that are viewed. The aVOR is typically more significant for objects in the far field (e.g., more than about 1 meter away), while the lVOR is typically more significant for objects in the near field (e.g., closer than 1 meter).
System 200 is typically configured to acquire information in a far field imaging configuration; as such, compensation for linear motion of detector 210 during locomotion usually provides only a marginal benefit to the wearer. However, compensation for pitch and yaw angular motion of detector 210 (which is analogous to the brain's activation of the aVOR reflex) can significantly reduce blurring of the visual information that is acquired by detector 210.
In step 330 of flow chart 300, electronic processor 240 analyzes the detector motion to identify components of the detector motion. To identify components of the detector motion, electronic processor 240 can, in general, analyze either or both of the measured motions in
Electronic processor 240 can analyze the yaw angular motion of detector 210 shown in
In step 340, electronic processor 240 assigns the identified components of the detector's motion to classes. In some embodiments, step 340 can be performed essentially contemporaneously with step 330. For example, referring again to
Electronic processor 240 can perform a similar assignment based on the yaw angular motion shown in
A second exemplary class or context of motion corresponds to a subject wearing detector 210 on his or her head and shaking the head from side-to-side to indicate disagreement.
Returning to flow chart 300, in step 330, electronic processor 240 analyzes
A third exemplary class or context of motion corresponds to a gaze shift (e.g., an intentional rotation of the head) by a subject wearing detector 210 on his or her head.
Returning to flow chart 300, the motion in
The preceding discussion has provided examples of three different classes or contexts of motion of detector 210 that can be identified by electronic processor 240. In general, however, electronic processor 240 can be configured to identify any number of classes of motion. Moreover, electronic processor 240 can identify and distinguish motion components and assign the components to different classes when multiple different classes of motion give rise to the overall motion of detector 210. For example, in circumstances where the wearer of a head-mounted detector 210 is undergoing locomotion and shaking his or her head from side-to-side at the same time, the measured pitch angular motion of the detector will correspond approximately to a superposition of the motions shown in
As disclosed previously, methods such as Fourier analysis, frequency measurement, and direct comparison to reference information (e.g., by performing a correlation analysis) can be used to identify motion components. These methods can be used when one or more components contribute to a particular measured motion. Other methods can also be used to identify components, and some methods may be more effective at identifying components as the number of components contributing to the measured motion of the detector increases. For example, more sophisticated methods of identifying components such as wavelet analysis, eigenvector decomposition, and principal components analysis can be used by electronic processor 240 to analyze the detector motion.
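As one concrete illustration of the frequency-measurement option named above, a discrete Fourier transform can locate the dominant frequency in a measured motion trace. The sketch below uses a direct DFT for clarity; the sample rate and the synthetic two-component yaw trace are assumed values for illustration only.

```python
import math

# Illustrative frequency analysis of a motion trace: find the strongest
# DFT bin and report its frequency. The synthetic trace below mixes a
# large, slow 0.5 Hz turn with a small 2 Hz tremor (assumed values).

def dft_magnitude(samples, k):
    """Magnitude of the k-th DFT bin of a real-valued motion trace."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return math.hypot(re, im)

def dominant_frequency_hz(samples, sample_rate_hz):
    """Frequency (Hz) of the strongest nonzero DFT bin."""
    n = len(samples)
    best_k = max(range(1, n // 2), key=lambda k: dft_magnitude(samples, k))
    return best_k * sample_rate_hz / n

rate = 50.0  # assumed sensor sample rate, Hz
trace = [0.3 * math.sin(2 * math.pi * 0.5 * i / rate)
         + 0.05 * math.sin(2 * math.pi * 2.0 * i / rate)
         for i in range(100)]
```

For this synthetic trace the dominant frequency is 0.5 Hz (the slow, large-amplitude turn), while the 2 Hz component appears as a secondary peak; an electronic processor could assign each peak to a class based on its frequency and amplitude.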
Returning to flow chart 300, after the motion components have been identified and assigned to classes, electronic processor 240 determines which components of the motion will be compensated based on the classes in step 350. That is, the compensation that is subsequently applied to detector 210 is dependent upon the contextual analysis of the motion of the detector in steps 330 and 340. The determination as to which of the identified components will be compensated is facilitated by stored information in system 200 that describes which classes of motion are suitable for compensation and which are not. This type of information corresponds, in essence, to a set of rules that attempts to mimic the response of a person with ordinary visual acuity.
For example, when a person with ordinary visual acuity undergoes locomotion, the person's brain activates its aVOR to counteract pitch and yaw angular motions of the head, steadying the person's gaze. Thus, in some embodiments, system 200 includes stored information instructing electronic processor 240 to compensate identified components of motion of detector 210 that correspond to the class “locomotion.” In so doing, system 200 mimics the visual response of a person with ordinary visual acuity.
When a person with ordinary visual acuity shakes his or her head from side-to-side, the person's brain typically activates its aVOR to counteract the yaw angular motion of the head. However, when the same person shifts his or her gaze from side-to-side, the brain does not typically activate the aVOR (or activates it to a lesser degree). As a result, the vision of even a person with ordinary visual acuity will be blurred somewhat as that person shifts gaze. In similar fashion, in certain embodiments, system 200 includes stored information instructing electronic processor 240 to compensate identified components of motion of detector 210 that are assigned to the class “head shaking,” but not to compensate components that are assigned to the class “gaze shift.” In this way, system 200 further mimics the visual response of a person with normal visual acuity.
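The stored rules described in the two preceding paragraphs can be as simple as a table mapping each motion class to a compensation decision. The class names in the sketch below mirror the three exemplary classes in the text; the table itself is an illustrative assumption, not a required data structure.

```python
# Illustrative rule table mapping motion classes to compensation
# decisions, mimicking the response of a person with ordinary visual
# acuity as described in the text.

COMPENSATION_RULES = {
    "locomotion": True,    # mimic the aVOR: steady the gaze while walking
    "head shaking": True,  # counteract yaw during side-to-side head shakes
    "gaze shift": False,   # intentional reorientation: leave uncompensated
}

def components_to_compensate(classified_components):
    """Filter (component, class) pairs down to those whose class is
    designated for compensation by the rule table."""
    return [comp for comp, cls in classified_components
            if COMPENSATION_RULES.get(cls, False)]
```

For example, given a pitch component assigned to "locomotion" and a yaw component assigned to "gaze shift", only the pitch component is passed on for compensation.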
After the components of motion to be compensated have been identified in step 350, electronic processor 240 adjusts the position of detector 210 in step 360 to compensate for the identified components of motion. To perform the position adjustment, electronic processor 240 generates control signals for actuators 230 and transmits the control signals to the actuators via communication line 257. The control signals direct actuators 230 to adjust the position of detector 210 so that some or all of the detector motion (e.g., the components of the motion identified in step 350) is counteracted.
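A minimal sketch of the control-signal generation in step 360 follows: each command drives the actuator opposite to the measured component so that the motions cancel. The unity gain and the actuator travel limit are assumed values for illustration; a practical controller would use gains and limits matched to the actuator hardware.

```python
# Illustrative control-signal generation: command an equal and opposite
# displacement for each identified component, clamped to an assumed
# actuator travel limit. Gain and limit values are placeholders.

def control_signal(measured_displacement, gain=1.0, limit=0.1):
    """Command a displacement that counteracts the measured motion."""
    cmd = -gain * measured_displacement
    return max(-limit, min(limit, cmd))   # respect actuator travel range

def apply_compensation(identified_components):
    """Map each identified component (axis -> displacement) to the
    actuator command that counteracts it."""
    return {axis: control_signal(d) for axis, d in identified_components.items()}
```

With these assumed values, a measured pitch displacement of 0.02 rad yields a command of -0.02 rad, while a displacement beyond the assumed travel limit is clamped.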
Similarly, the “head shaking” class of motion components was identified for compensation in step 350. Accordingly, the signals shown in
In some embodiments, when certain types of motion such as a gaze shift occur and are identified by electronic processor 240 as disclosed herein, system 200 can temporarily halt transmission of visual information from detector 210 to the wearer. For example, in certain embodiments, electronic processor 240 can be configured to determine the speed at which the detector moves during a gaze shift from the position of detector 210 as a function of time. Electronic processor 240 can instruct detector 210 to halt transmission of visual information if the speed of the detector exceeds a certain threshold value because, at greater speeds, the visual information may be so distorted that it would only confuse the wearer. Transmission of visual information can be resumed when electronic processor 240 determines that the speed at which the detector moves is reduced below the threshold (e.g., when the gaze shift is completed).
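The transmission-gating behavior described above can be sketched by estimating detector speed from successive positions and comparing it to a threshold. The threshold value and the two-sample speed estimate are illustrative assumptions.

```python
# Illustrative transmission gating: estimate angular speed from two
# successive detector positions and transmit only while the speed is
# at or below an assumed threshold.

SPEED_THRESHOLD_RAD_S = 2.0   # assumed threshold, illustrative only

def detector_speed(prev_angle, curr_angle, dt):
    """Angular speed estimated from two successive positions (rad/s)."""
    return abs(curr_angle - prev_angle) / dt

def should_transmit(prev_angle, curr_angle, dt,
                    threshold=SPEED_THRESHOLD_RAD_S):
    """True while the detector moves slowly enough to transmit."""
    return detector_speed(prev_angle, curr_angle, dt) <= threshold
```

Under these assumptions, a slow drift (e.g., 1 rad/s) permits transmission, while a rapid gaze shift (e.g., 5 rad/s) suspends it until the motion slows.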
After adjustments to the position of detector 210 have been completed in step 360, the system determines in step 370 whether acquisition of information by detector 210 is complete. If detector 210 is still acquiring information, control returns to step 310, where motion of the detector is again detected, and a new compensation cycle begins. If acquisition of information is complete, the process terminates at step 380.
System 200 can be incorporated into a wide variety of housings for purposes of allowing a person with reduced visual acuity to carry the system. In so doing, system 200 enhances the mobility and quality of life of afflicted persons. In some embodiments, system 200 can be enclosed within a housing to form a wearable prosthetic device that functions as disclosed herein. In one such implementation, for example, system 200 is incorporated into eyeglass frames to form a prosthetic device that bears a strong resemblance to an ordinary pair of glasses.
Many other configurations and types of prosthetic devices can also be implemented. While prosthetics configured for head mounting have certain desirable features (e.g., the person wearing the prosthetic can effect a gaze shift by turning his or her head), prosthetic devices can also be worn on other parts of the body and/or carried. When the prosthetic devices include a housing, then as above, some or all of the components can be enclosed within the housing, embedded within the housing, or attached to a surface of the housing.
The disclosure herein describes the use of sensors to detect motion of a detector as visual information is acquired by the detector, and then making adjustments to the position of the detector to compensate for its motion. However, the systems disclosed herein can also be used to compensate for detector motion in other ways. For example, in some embodiments, the sensors can be used to detect motion of the detector, and information about the detector's motion can be transmitted to an electronic processor that is configured to process images obtained by the detector. The processor can analyze the images and apply correction algorithms that reduce or eliminate artifacts in the images due to detector motion, using as input the information about detector motion acquired by the sensors. This mode of operation implements a “directed” approach to image processing rather than an iterative approach, in which algorithms are applied repeatedly until, for example, one or more threshold conditions related to the “quality” of the processed images are achieved. Instead, in a directed approach, the image processing algorithms can estimate the nature of the correction needed, increasing the speed at which corrected images can be produced. Image processing algorithms that can use motion information measured by the sensors disclosed herein to compensate for artifacts in images are disclosed, for example, in the following references, the contents of each of which are incorporated herein by reference in their entirety: J. Chang et al., “Digital image translational and rotational motion stabilization using optical flow technique,” IEEE Transactions on Consumer Electronics 48(1): 108-115 (2002); T. Chen, “Video Stabilization Using a Block-Based Parametric Motion Model,” Stanford University Dept. of Electrical Engineering Technical Report (2000), available from internet address twiki.cis.rit.edu/twiki/pub/Main/HongqinZhang/chen_report.pdf; A. 
Censi et al., “Image Stabilization by Features Tracking,” IEEE International Conference on Image Analysis and Processing (1999), pp. 665-667; and K. Uomori et al., “Automatic image stabilizing system by full-digital signal processing,” IEEE Transactions on Consumer Electronics 36(3): 510-519 (1990).
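A directed correction can be sketched as follows: the sensor-measured translation is applied directly to undo the motion, with no iterative estimation from the image content. The integer-pixel shift below is a deliberate simplification of the algorithms in the cited references, and the names are assumptions.

```python
def directed_translation_correction(image, dx, dy, fill=0):
    """Undo a measured detector translation of (dx, dy) pixels by
    resampling the image with the opposite shift -- a 'directed'
    correction that uses the sensor measurement as input rather than
    iteratively estimating the motion from the image itself.

    image: list of rows (lists of pixel values); integer shifts only."""
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx  # source pixel that lands at (y, x)
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = image[sy][sx]
    return out
```

Because the correction is computed in a single pass from the measured motion, it avoids the repeated estimate-and-test cycles of a purely image-based stabilizer.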
The preceding discussion has focused, for purposes of clarity of exposition, primarily on using detector 210 to acquire and provide visual information to persons with reduced visual acuity. However, the systems and methods disclosed herein can also be used in many other applications. In general, a wide variety of automated vision systems that are susceptible to undesirable motion of detectors can be compensated as disclosed herein. By detecting motion of the detectors, analyzing the motion to identify different motion components, assigning the components to classes, and then selectively compensating the different components according to class, the effects of undesired motions can be reduced or eliminated while leaving undisturbed the ordinary functions of such systems (e.g., the ability of such systems to intentionally reposition the detectors to control the field of view). Exemplary vision systems to which the methods and systems disclosed herein can be applied include robotic vision systems, manufacturing and assembly line inspection systems, and automated vehicle guidance systems.
In some embodiments, the systems disclosed herein can be integrated into a head covering, such as a hat or cap, or a helmet worn by a soldier, a police officer, an athlete at a sporting event, or another person for whom head protection is important.
Images (including video signals) captured by detector 210 can be transmitted to a remote receiver 260 (e.g., using a wireless communication interface). The methods disclosed herein can be used to correct for motion of the head-mounted detector so that receiver 260 receives a stabilized video signal for display on a variety of devices including computer screens and televisions. Monitoring of the stabilized video signals allows persons at remote locations to visualize the environment of the person wearing the head covering, permitting persons at remote locations to more accurately perceive situations as they arise and to issue instructions to the wearer, for example.
The systems and methods disclosed herein—in particular, methods for context-dependent analysis and compensation of detectors—can also be applied to measurements of non-visual information. For example, in some embodiments, detector 210 can be configured to measure audio information such as sounds and/or speech (e.g., detector 210 can be a microphone). Because audio detectors are generally relatively insensitive to orientation, system 200 may not include sensors 220 or actuators 230. Nonetheless, audio information measured by detector 210 can be transmitted to electronic processor 240. Electronic processor 240 can process the audio information before transmitting it to a wearer or carrier of system 200 (e.g., via an implanted receiver in the wearer/carrier's ear). To process the audio information, electronic processor 240 can be configured to identify different components of the audio information, and to adjust the processing method or output based on the identified components. For example, electronic processor 240 can process the audio information differently if the identified components of the information indicate that the audio information features spoken words recorded in a quiet room, music recorded in a symphony hall, or a mixture of speech and ambient noise in a room with many voices.
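Context-dependent selection of a processing method can be sketched as follows; the feature names, thresholds, context classes, and processing labels are all illustrative assumptions, not part of this disclosure.

```python
def classify_audio_context(features):
    """Assign measured audio features to a context class.
    Feature names and thresholds are illustrative assumptions."""
    if features["num_voices"] > 1 and features["noise_level"] > 0.5:
        return "crowded_room"
    if features["is_music"]:
        return "concert_hall"
    return "quiet_speech"

# Context-dependent choice of processing method (labels are placeholders).
PROCESSING = {
    "quiet_speech": "light noise gate",
    "concert_hall": "wide-band, preserve dynamics",
    "crowded_room": "beamform toward dominant voice, suppress babble",
}

def select_processing(features):
    """Choose a processing method based on the identified components,
    mirroring the context-dependent behavior described in the text."""
    return PROCESSING[classify_audio_context(features)]
```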
In some embodiments, the audio information can be used to control the configuration of the system. The audio information can be processed to identify components, and when certain components are identified, the system is configured (or re-configured) appropriately. For example, when processing of the audio information reveals that an emergency siren is close by on the left-hand side of the wearer of system 200, the orientations of one or more detectors configured to detect visual information can be adjusted so that more visual information is acquired from the direction of the emergency siren. Continued monitoring of the intensity and direction of the emergency siren component of the audio information can be used to allow the visual information detectors to “follow” the siren as it moves, and to discontinue following the siren when it is far away. In the manner described above, system 200 can be configured to make context-sensitive decisions with regard to the processing of measured audio information.
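The “follow the siren” behavior can be sketched as a simple steering update; the intensity threshold, step size, and names below are illustrative assumptions.

```python
def update_detector_orientation(current_deg, siren_bearing_deg, intensity,
                                follow_threshold=0.2, step_deg=10.0):
    """Steer a visual detector toward an identified audio component
    (e.g., an emergency siren) while its intensity remains above a
    threshold, and stop following when the siren is far away.
    Angles in degrees; parameters are illustrative assumptions."""
    if intensity < follow_threshold:
        return current_deg  # siren has faded: discontinue following
    error = siren_bearing_deg - current_deg
    # Turn toward the siren by at most step_deg per update.
    step = max(-step_deg, min(step_deg, error))
    return current_deg + step
```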
As another example, the systems and methods disclosed herein can be used in sensory substitution devices. Many individuals experience imbalances due to deficiencies in one or more senses. The systems and methods disclosed herein can be used to measure different types of information, to analyze the measured information by identifying different components of the information, and then to process the measured information in a manner that is contextually-dependent on the identified components. The processed information is then conveyed to the individual to assist in compensating for sensory imbalance. Processing of the measured information can depend, for example, on whether the individual is walking, standing, or sitting.
Still further, the systems and methods disclosed herein can be used in sensory augmentation devices (e.g., devices that measure information that is beyond the capability of unaided humans to detect). Such devices can be used to measure, for example, wavelengths of light in the ultraviolet and/or infrared regions of the electromagnetic spectrum, and sounds in frequency bands that are beyond the ordinary range of human hearing. By analyzing this type of information, the systems and methods disclosed herein can associate the information with particular contexts, and tailor processing and delivery of the information to humans accordingly. Context-dependent processing can include, for example, filtering the information to extract portions of interest (e.g., selecting a particular spectral region) within a set of circumstances defined by the context, and discarding irrelevant information.
In some embodiments, the spectral region of interest (e.g., the portion of the spectrum in which subsequent measurements are made) can be changed according to the determined context associated with the initially measured information. For example, if the initially measured information is analyzed and determined to correspond to a context in which high frequency signals are present (e.g., the wearer is engaged in an activity such as whale watching), subsequent measurements of audio information could be made in this high frequency region of the spectrum. The high frequency measured information can then be shifted to a lower frequency region of the spectrum (e.g., within the detection range of the human ear) and presented to the wearer. As another example, if the initially measured information is analyzed and determined to indicate that a fire is (or may be) present in the vicinity of the wearer, the system can subsequently measure information about objects in the infrared spectral region. The measured infrared information can be translated into image information in the visible region of the spectrum (so it can be directly observed by the wearer). Alternatively, or in addition, it can be converted into other types of information (such as audio information and/or warnings) which are presented to the wearer. In this manner, the system permits the wearer to perceive information that the wearer would not otherwise detect, augmenting his or her sensory capabilities.
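The frequency-shifting step can be sketched as follows. A fixed downward shift of spectral components is one simple choice; the disclosure does not specify the shifting method, and the 20 kHz audible limit and names below are assumptions.

```python
AUDIBLE_MAX_HZ = 20_000.0  # approximate upper limit of human hearing

def shift_to_audible(component_freqs_hz, shift_hz):
    """Shift high-frequency spectral components down by a fixed offset
    so they fall within the range of human hearing, discarding any
    components that remain outside it after the shift."""
    shifted = [f - shift_hz for f in component_freqs_hz]
    return [f for f in shifted if 0.0 < f <= AUDIBLE_MAX_HZ]
```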
A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other embodiments are within the scope of the following claims.
This application is a national stage application under 35 U.S.C. §371 of PCT Patent Application No. PCT/US2012/063577, filed on Nov. 5, 2012 and published as WO 2013/067513, which claims priority under 35 U.S.C. §119 to U.S. Provisional Application Nos. 61/555,908 and 61/555,930, each filed on Nov. 4, 2011, the entire contents of each of which is incorporated herein by reference.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2012/063577 | 11/5/2012 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2013/067513 | 5/10/2013 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
3383682 | Stephens, Jr | May 1968 | A |
4992998 | Woodward | Feb 1991 | A |
5159927 | Schmid | Nov 1992 | A |
5411540 | Edell et al. | May 1995 | A |
5476494 | Edell et al. | Dec 1995 | A |
5521957 | Hansen | May 1996 | A |
5554187 | Rizzo, III | Sep 1996 | A |
5575813 | Edell et al. | Nov 1996 | A |
5597381 | Rizzo, III | Jan 1997 | A |
5636038 | Lynt et al. | Jun 1997 | A |
5642431 | Poggio et al. | Jun 1997 | A |
5777715 | Kruegle et al. | Jul 1998 | A |
5800530 | Rizzo, III | Sep 1998 | A |
5835616 | Lobo et al. | Nov 1998 | A |
5850470 | Kung et al. | Dec 1998 | A |
5875018 | Lamprecht | Feb 1999 | A |
5935155 | Humayun et al. | Aug 1999 | A |
6104671 | Hoyt et al. | Aug 2000 | A |
6115482 | Sears et al. | Sep 2000 | A |
6120538 | Rizzo, III et al. | Sep 2000 | A |
6161091 | Akamine et al. | Dec 2000 | A |
6230057 | Chow et al. | May 2001 | B1 |
6324429 | Shire et al. | Nov 2001 | B1 |
6368349 | Wyatt et al. | Apr 2002 | B1 |
6389317 | Chow et al. | May 2002 | B1 |
6411327 | Kweon et al. | Jun 2002 | B1 |
6446041 | Reynar et al. | Sep 2002 | B1 |
6470264 | Bide | Oct 2002 | B2 |
6549122 | Depta | Apr 2003 | B2 |
6671618 | Hoisko | Dec 2003 | B2 |
6774788 | Balfe | Aug 2004 | B1 |
6920358 | Greenberg et al. | Jul 2005 | B2 |
6948937 | Tretiakoff et al. | Sep 2005 | B2 |
6976998 | Rizzo et al. | Dec 2005 | B2 |
7295872 | Kelly et al. | Nov 2007 | B2 |
7307575 | Zemany | Dec 2007 | B2 |
7308315 | Ohta et al. | Dec 2007 | B2 |
7565139 | Neven, Sr. et al. | Jul 2009 | B2 |
7598976 | Sofer et al. | Oct 2009 | B2 |
7627142 | Kurzweil et al. | Dec 2009 | B2 |
7642949 | Pergande et al. | Jan 2010 | B2 |
7659915 | Kurzweil et al. | Feb 2010 | B2 |
7782251 | Bishop et al. | Aug 2010 | B2 |
7788032 | Moloney | Aug 2010 | B2 |
7805307 | Levin et al. | Sep 2010 | B2 |
7817855 | Yuille et al. | Oct 2010 | B2 |
7898468 | Samaniego et al. | Mar 2011 | B2 |
7925354 | Greenberg et al. | Apr 2011 | B2 |
7965196 | Liebermann | Jun 2011 | B2 |
7967439 | Shelhamer et al. | Jun 2011 | B2 |
7983920 | Sinclair, II | Jul 2011 | B2 |
7991576 | Roumeliotis | Aug 2011 | B2 |
8009928 | Manmatha et al. | Aug 2011 | B1 |
8014604 | Tzadok et al. | Sep 2011 | B2 |
8015132 | Xu | Sep 2011 | B2 |
8018580 | Luo et al. | Sep 2011 | B2 |
8019428 | Greenberg et al. | Sep 2011 | B2 |
8021045 | Foos et al. | Sep 2011 | B2 |
8036895 | Kurzweil et al. | Oct 2011 | B2 |
8049680 | Spruck et al. | Nov 2011 | B2 |
8068644 | Tkacik | Nov 2011 | B2 |
8113841 | Rojas et al. | Feb 2012 | B2 |
8115831 | Rodriquez et al. | Feb 2012 | B2 |
8130262 | Behm et al. | Mar 2012 | B2 |
8135217 | Goktekin et al. | Mar 2012 | B2 |
8135227 | Lewis et al. | Mar 2012 | B2 |
8139894 | Nestares | Mar 2012 | B2 |
8150107 | Kurzweil et al. | Apr 2012 | B2 |
8154771 | Albrecht et al. | Apr 2012 | B2 |
8160880 | Albrecht et al. | Apr 2012 | B2 |
8174931 | Vartanian et al. | May 2012 | B2 |
8175802 | Forstall et al. | May 2012 | B2 |
8185398 | Anderson et al. | May 2012 | B2 |
8186581 | Kurzweil et al. | May 2012 | B2 |
8204684 | Forstall et al. | Jun 2012 | B2 |
8208729 | Foss | Jun 2012 | B2 |
8210848 | Beck et al. | Jul 2012 | B1 |
8218020 | Tenchio et al. | Jul 2012 | B2 |
8218873 | Boncyk et al. | Jul 2012 | B2 |
8218874 | Boncyk et al. | Jul 2012 | B2 |
8224078 | Boncyk et al. | Jul 2012 | B2 |
8224079 | Boncyk et al. | Jul 2012 | B2 |
8233671 | Anderson et al. | Jul 2012 | B2 |
8234277 | Thong et al. | Jul 2012 | B2 |
8239032 | Dewhurst | Aug 2012 | B2 |
20010056342 | Piehn et al. | Dec 2001 | A1 |
20020111655 | Scribner | Aug 2002 | A1 |
20020111739 | Jandrell | Aug 2002 | A1 |
20020148607 | Pabst | Oct 2002 | A1 |
20030179133 | Pepin et al. | Sep 2003 | A1 |
20040107010 | King | Jun 2004 | A1 |
20050251223 | Eckmiller | Nov 2005 | A1 |
20060050933 | Adam et al. | Mar 2006 | A1 |
20060129308 | Kates | Jun 2006 | A1 |
20070025512 | Gertsenshteyn et al. | Feb 2007 | A1 |
20070211947 | Tkacik | Sep 2007 | A1 |
20070272738 | Berkun | Nov 2007 | A1 |
20070273708 | Andreasson et al. | Nov 2007 | A1 |
20070279497 | Wada | Dec 2007 | A1 |
20080037727 | Sivertsen et al. | Feb 2008 | A1 |
20080077196 | Greenberg et al. | Mar 2008 | A1 |
20080120029 | Zelek et al. | May 2008 | A1 |
20080136923 | Inbar et al. | Jun 2008 | A1 |
20080154336 | McClure et al. | Jun 2008 | A1 |
20080154337 | McClure et al. | Jun 2008 | A1 |
20080187104 | Sung et al. | Aug 2008 | A1 |
20080275527 | Greenberg et al. | Nov 2008 | A1 |
20080288067 | Flood | Nov 2008 | A1 |
20090002500 | Kawai | Jan 2009 | A1 |
20090186321 | Rojas et al. | Jul 2009 | A1 |
20090306741 | Hogle et al. | Dec 2009 | A1 |
20090312817 | Hogle et al. | Dec 2009 | A1 |
20100002204 | Jung et al. | Jan 2010 | A1 |
20100013612 | Zachman | Jan 2010 | A1 |
20100177179 | Behm et al. | Jul 2010 | A1 |
20100201793 | Kurzweil et al. | Aug 2010 | A1 |
20110013896 | Kawahara | Jan 2011 | A1 |
20110034176 | Lord et al. | Feb 2011 | A1 |
20110043644 | Munger et al. | Feb 2011 | A1 |
20110050546 | Swartz, Jr. et al. | Mar 2011 | A1 |
20110091098 | Yuille et al. | Apr 2011 | A1 |
20110092249 | Evanitsky | Apr 2011 | A1 |
20110143816 | Fischer et al. | Jun 2011 | A1 |
20110181745 | Nagatsuma et al. | Jul 2011 | A1 |
20110213664 | Osterhout et al. | Sep 2011 | A1 |
20110214082 | Osterhout et al. | Sep 2011 | A1 |
20110216179 | Dialameh et al. | Sep 2011 | A1 |
20110221656 | Haddick et al. | Sep 2011 | A1 |
20110221657 | Haddick et al. | Sep 2011 | A1 |
20110221658 | Haddick et al. | Sep 2011 | A1 |
20110221659 | King, III et al. | Sep 2011 | A1 |
20110221668 | Haddick et al. | Sep 2011 | A1 |
20110221669 | Shams et al. | Sep 2011 | A1 |
20110221670 | King, III et al. | Sep 2011 | A1 |
20110221671 | King, III et al. | Sep 2011 | A1 |
20110221672 | Osterhout et al. | Sep 2011 | A1 |
20110221793 | King, III et al. | Sep 2011 | A1 |
20110221896 | Haddick et al. | Sep 2011 | A1 |
20110221897 | Haddick et al. | Sep 2011 | A1 |
20110222735 | Imai et al. | Sep 2011 | A1 |
20110222745 | Osterhout et al. | Sep 2011 | A1 |
20110224967 | Van Schaik | Sep 2011 | A1 |
20110225536 | Shams et al. | Sep 2011 | A1 |
20110229023 | Jones et al. | Sep 2011 | A1 |
20110231757 | Haddick et al. | Sep 2011 | A1 |
20110267490 | Göktekin et al. | Nov 2011 | A1 |
20110279222 | LeGree | Nov 2011 | A1 |
20110292204 | Boncyk et al. | Dec 2011 | A1 |
20110295742 | Boncyk et al. | Dec 2011 | A1 |
20110298723 | Fleizach et al. | Dec 2011 | A1 |
20110298939 | Melikian | Dec 2011 | A1 |
20120001932 | Burnett et al. | Jan 2012 | A1 |
20120002872 | Boncyk et al. | Jan 2012 | A1 |
20120028577 | Rodriguez et al. | Feb 2012 | A1 |
20120029920 | Kurzweil et al. | Feb 2012 | A1 |
20120044338 | Lee et al. | Feb 2012 | A1 |
20120046947 | Fleizach | Feb 2012 | A1 |
20120053826 | Slamka | Mar 2012 | A1 |
20120054796 | Gagnon et al. | Mar 2012 | A1 |
20120062357 | Slamka | Mar 2012 | A1 |
20120062445 | Haddick et al. | Mar 2012 | A1 |
20120075168 | Osterhout et al. | Mar 2012 | A1 |
20120080523 | D'Urso et al. | Apr 2012 | A1 |
20120081282 | Chin | Apr 2012 | A1 |
20120092460 | Mahoney | Apr 2012 | A1 |
20120098764 | Asad et al. | Apr 2012 | A1 |
20120113019 | Anderson | May 2012 | A1 |
20120119978 | Border et al. | May 2012 | A1 |
20120120103 | Border et al. | May 2012 | A1 |
20120143495 | Dantu | Jun 2012 | A1 |
20120147163 | Kaminsky | Jun 2012 | A1 |
20120154144 | Betts et al. | Jun 2012 | A1 |
20120154561 | Chari | Jun 2012 | A1 |
20120163667 | Boncyk et al. | Jun 2012 | A1 |
20120163722 | Boncyk et al. | Jun 2012 | A1 |
20120179468 | Nestares | Jul 2012 | A1 |
20120183941 | Steinmetz | Jul 2012 | A1 |
20120194418 | Osterhout et al. | Aug 2012 | A1 |
20120194419 | Osterhout et al. | Aug 2012 | A1 |
20120194420 | Osterhout et al. | Aug 2012 | A1 |
20120194549 | Osterhout et al. | Aug 2012 | A1 |
20120194550 | Osterhout et al. | Aug 2012 | A1 |
20120194551 | Osterhout et al. | Aug 2012 | A1 |
20120194552 | Osterhout et al. | Aug 2012 | A1 |
20120194553 | Osterhout et al. | Aug 2012 | A1 |
20120195467 | Boncyk et al. | Aug 2012 | A1 |
20120195468 | Boncyk et al. | Aug 2012 | A1 |
20120200488 | Osterhout et al. | Aug 2012 | A1 |
20120200499 | Osterhout et al. | Aug 2012 | A1 |
20120200595 | Lewis et al. | Aug 2012 | A1 |
20120200601 | Osterhout et al. | Aug 2012 | A1 |
20120200724 | Dua et al. | Aug 2012 | A1 |
20120206322 | Osterhout et al. | Aug 2012 | A1 |
20120206323 | Osterhout et al. | Aug 2012 | A1 |
20120206334 | Osterhout et al. | Aug 2012 | A1 |
20120206335 | Osterhout et al. | Aug 2012 | A1 |
20120206485 | Osterhout et al. | Aug 2012 | A1 |
20120212398 | Border et al. | Aug 2012 | A1 |
20120212399 | Border et al. | Aug 2012 | A1 |
20120212400 | Border et al. | Aug 2012 | A1 |
20120212406 | Osterhout et al. | Aug 2012 | A1 |
20120212414 | Osterhout et al. | Aug 2012 | A1 |
20120212484 | Haddick et al. | Aug 2012 | A1 |
20120212499 | Haddick et al. | Aug 2012 | A1 |
20120218172 | Border et al. | Aug 2012 | A1 |
20120218301 | Miller | Aug 2012 | A1 |
20120227812 | Quinn et al. | Sep 2012 | A1 |
20120227813 | Meek et al. | Sep 2012 | A1 |
20120227820 | Poster | Sep 2012 | A1 |
20120235883 | Border et al. | Sep 2012 | A1 |
20120235884 | Miller et al. | Sep 2012 | A1 |
20120235885 | Miller et al. | Sep 2012 | A1 |
20120235886 | Border et al. | Sep 2012 | A1 |
20120235887 | Border et al. | Sep 2012 | A1 |
20120235900 | Border et al. | Sep 2012 | A1 |
20120236030 | Border et al. | Sep 2012 | A1 |
20120236031 | Haddick et al. | Sep 2012 | A1 |
20120242678 | Border et al. | Sep 2012 | A1 |
20120242697 | Border et al. | Sep 2012 | A1 |
20120242698 | Haddick et al. | Sep 2012 | A1 |
20120249797 | Haddick et al. | Oct 2012 | A1 |
20140303687 | Wall, III | Oct 2014 | A1 |
20150002808 | Rizzo, III | Jan 2015 | A1 |
Number | Date | Country |
---|---|---|
101076841 | Nov 2007 | CN |
200984304 | Dec 2007 | CN |
101791259 | Aug 2010 | CN |
102008059175 | May 2010 | DE |
102009059820 | Jun 2011 | DE |
2065871 | Jun 2009 | EP |
2096614 | Sep 2009 | EP |
2371339 | Oct 2011 | EP |
2395495 | Dec 2011 | EP |
2002-219142 | Aug 2002 | JP |
10-2003-0015936 | Feb 2003 | KR |
10-2006-0071507 | Jun 2006 | KR |
WO 9717043 | May 1997 | WO |
WO 9718523 | May 1997 | WO |
WO 9832044 | Jul 1998 | WO |
WO 9836793 | Aug 1998 | WO |
WO 9836795 | Aug 1998 | WO |
WO 9836796 | Aug 1998 | WO |
WO 9837691 | Aug 1998 | WO |
WO 9855833 | Dec 1998 | WO |
WO 0103635 | Jan 2001 | WO |
WO 02089053 | Nov 2002 | WO |
WO 03078929 | Sep 2003 | WO |
WO 03107039 | Dec 2003 | WO |
WO 2006083508 | Aug 2006 | WO |
WO 2006085310 | Aug 2006 | WO |
WO 2007063360 | Jun 2007 | WO |
WO 2007095621 | Aug 2007 | WO |
WO 2007138378 | Dec 2007 | WO |
WO 2008020362 | Feb 2008 | WO |
WO 2008052166 | May 2008 | WO |
WO 2008109781 | Sep 2008 | WO |
WO 2008116288 | Oct 2008 | WO |
WO 2009154438 | Dec 2009 | WO |
WO 2010145013 | Jan 2010 | WO |
WO 2010142689 | Dec 2010 | WO |
WO 2011017653 | Feb 2011 | WO |
WO 2013067539 | May 2013 | WO |
Entry |
---|
Chinese Office Action in Chinese Application No. 201280066158.8, dated Oct. 8, 2015, 28 pages (with English Translation). |
International Search Report and Written Opinion in International Application No. PCT/US2012/063577, mailed Feb. 20, 2013, 12 pages. |
International Preliminary Report on Patentability in International Application No. PCT/US2012/063577, mailed May 15, 2014, 8 pages. |
International Search Report and Written Opinion in International Application No. PCT/US2012/063619, mailed Mar. 29, 2013, 14 pages. |
International Preliminary Report on Patentability in International Application No. PCT/US2012/063619, mailed May 15, 2014, 9 pages. |
Adjouadi, “A man-machine vision interface for sensing the environment,” J. Rehabilitation Res. 29(2): 57-76 (1992). |
Censi et al., “Image Stabilization by Features Tracking,” IEEE International Conference on Image Analysis and Processing, 1999, pp. 665-667. |
Chang et al., “Digital image translational and rotational motion stabilization using optical flow technique,” IEEE Transactions on Consumer Electronics 48(1): 108-115 (2002). |
Chen et al., “Video Stabilization Using a Block-Based Parametric Motion Model,” Stanford University Department of Electrical Engineering Technical Report, 2000, 32 pages. |
Crawford, “Living Without a Balancing Mechanism,” Brit. J. Ophthal. 48: 357-360 (1964). |
Drahansky et al., “Accelerometer Based Digital Video Stabilization for General Security Surveillance Systems,” Int. J. of Security and Its Applications, 4(1): 1-10 (2010). |
Grossman et al., “Frequency and velocity of rotational head perturbations during locomotion,” Exp. Brain Res. 70: 470-476 (1988). |
Grossman et al., “Performance of the Human Vestibuloocular Reflex During Locomotion,” J. Neurophys. 62(1): 264-272 (1989). |
Grossman and Leigh, “Instability of Gaze during Locomotion in Patients with Deficient Vestibular Function,” Ann. Neurol. 27(5): 528-532 (1990). |
Hirasaki et al., “Effects of walking velocity on vertical head and body movements during locomotion,” Exp. Brain Research 127: 117-130 (1999). |
Horn et al., “Time to contact relative to a planar surface,” IEEE Intelligent Vehicles Symposium 1-3: 45-51 (2007). |
Horn et al., “Hierarchical framework for direct gradient-based time-to-contact estimation,” IEEE Intelligent Vehicles Symposium 1-2: 1394-1400 (2009). |
Itti et al., “Computational modelling of visual attention,” Nat. Rev. Neurosci. 2(3): 194-203 (2001). |
Karacs et al., “Bionic Eyeglass: an Audio Guide for Visually Impaired,” IEEE Biomedical Circuits and Systems Conference, 2006, pp. 190-193. |
Quattoni et al., “Recognizing Indoor Scenes,” IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 413-420. |
Roska et al., “System aspects of a bionic eyeglass,” Proceedings of the IEEE International Symposium on Circuits and Systems, 2006, pp. 164-167. |
Sachs et al., “Image Stabilization Technology Overview,” 2011, downloaded from internet address http://www.invensense.com/mems/gyro/documents/whitepapers/ImageStabilizationWhitepaper_051606.pdf, 18 pages. |
Turk et al., “Eigenfaces for Recognition,” J. Cognitive Neuroscience 3(1): 71-86 (1991). |
Unknown, “Augmented Reality for the Totally Blind,” 2011, downloaded from internet address http://www.seeingwithsound.com, 1 page. |
Unknown, “LiDAR Glasses for Blind People,” 2011, downloaded from internet address http://blog.lidarnews.com/lidar-glasses-for-blind-people, 1 page. |
Uomori et al., “Automatic image stabilizing system by full-digital signal processing,” IEEE Transactions on Consumer Electronics 36(3): 510-519 (1990). |
Viola et al., “Rapid Object Detection using a Boosted Cascade of Simple Features,” IEEE Conference on Computer Vision and Pattern Recognition, 2001, pp. 511-518. |
Wagner et al., “Color Processing in Wearable Bionic Eyeglass,” Proceedings of the 10th IEEE International Workshop on Cellular Neural Networks and Their Applications, 2006, pp. 1-6. |
Xiao et al., “SUN Database: Large Scale Scene Recognition from Abbey to Zoo,” IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 3485-3492. |
Zijlstra et al., “Assessment of spatio-temporal gait parameters from trunk accelerations during human walking,” Gait & Posture 18: 1-10 (2003). |
Zöllner et al., “NAVI—Navigational Aids for the Visually Impaired,” 2011, downloaded from Internet address http://hci.uni-konstanz.de/blog/2011/03/15/navi/?lang=en, 2 pages. |
Number | Date | Country | |
---|---|---|---|
20140303687 A1 | Oct 2014 | US |
Number | Date | Country | |
---|---|---|---|
61555908 | Nov 2011 | US | |
61555930 | Nov 2011 | US |