1. Field of the Invention
This invention relates generally to otoscopes for imaging the interior of human or animal ears.
2. Description of the Related Art
Imaging the inside of the human or animal ear is a common task for doctors. Typically, a doctor uses an otoscope to look inside the ear of the patient. Such an exam is a common procedure when diagnosing ear infections. Most doctors use a manual otoscope, which is simply a magnifier combined with an illuminator. The image that the doctor sees exists only in the doctor's memory, so comparing images viewed at different times is difficult and subjective.
There exist digital otoscopes that have a digital camera embedded in the otoscope or at the end of a fiber-optic cable that guides the light from the instrument head to an external module. The digital data are then viewed on an external display. Such digital otoscopes are marketed as solutions for telemedicine applications. Cameras currently used in digital otoscopes consist of conventional imaging optics and sensors. With the rapid development of mobile platforms for smart healthcare applications, attachments for cell phones are being developed that allow imaging of the inside of an ear using a smartphone for illumination, image capture, and display.
The features that doctors analyze when trying to diagnose ear inflammation (“otitis media”) include bulging of the ear drum, translucency, and yellowness of tissue. However, these features are difficult to analyze from flat two-dimensional images taken by conventional cameras. Conventional otoscopes do not explicitly obtain three-dimensional (i.e., depth) information or wavelength-dependent characteristics. They are limited to images of a single focal plane inside the ear canal. Moreover, objects such as wax or hair often obstruct the view of the tympanic membrane (TM) or other objects of interest and must be removed before a picture of the TM can be taken, requiring extra procedures before an otoscope can be used.
Therefore, there exists a need for improved data acquisition that allows more reliable extraction of three-dimensional and color features.
The present invention overcomes the limitations of the prior art by providing a plenoptic otoscope. A plenoptic otoscope can be designed to provide good quality data for feature extraction for otitis diagnosis. In one implementation, a plenoptic sensor and an optional filter module are combined with a conventional digital otoscope to create a plenoptic otoscope. With these additions, three-dimensional (3D) shapes, translucency and/or color information can be captured.
In one embodiment, a plenoptic otoscope includes a primary imaging system and a plenoptic sensor. The primary imaging system is characterized by a pupil plane, and includes an otoscope objective and relay optics, which cooperate to form an image of the inside of an ear at an intermediate image plane. The plenoptic sensor includes a microimaging array positioned at the intermediate image plane and a sensor array positioned at a conjugate of the pupil plane.
In one implementation, a plenoptic otoscope further includes a filter module positioned at a pupil plane conjugate (i.e., at the pupil plane or one of its conjugates). In one approach, the filter module is located in a detachable tip, and is positioned at an entrance pupil of the primary imaging system when the detachable tip is attached to the otoscope. In this way, different filter modules can be included in detachable tips, and the filter modules can be switched in and out of the plenoptic otoscope by switching detachable tips.
In another implementation, a plenoptic otoscope is operable in a depth imaging mode. In the depth imaging mode, a plenoptic image (also referred to as plenoptic data) captured by the sensor array is processed to provide a three-dimensional depth image of an inside of an ear. Alternately or additionally, a plenoptic otoscope is operable in a spectral imaging mode. In the spectral imaging mode, plenoptic data captured by the sensor array is processed to provide two or more different spectral images of an inside of an ear. Disparity or depth maps can also be determined. The plenoptic otoscope may be switchable between the depth imaging mode and the spectral imaging mode.
Another aspect relates to the use of the data captured by the plenoptic otoscope to assist in making a medical diagnosis. For example, the plenoptic data can be processed to produce enhanced imagery of the ear interior. Data based on the enhanced imagery can then be used to assist a person in making a medical diagnosis. This diagnostic data could be the enhanced imagery itself or it could involve further processing of the enhanced imagery. Alternately, the diagnosis can be made automatically by a computer system, for example by a classifier trained on prior data.
Enhanced imagery of the tympanic membrane is a good example. A plenoptic otoscope can simultaneously capture depth and spectral information about the tympanic membrane. A depth map of the tympanic membrane provides information about its shape, such as whether it is bulging or retracting and its estimated curvature. Spectral information can include an amber or yellow image, which is especially useful for diagnosing conditions of the tympanic membrane. Many diagnoses are based on shape, color and/or translucency, all of which can be captured simultaneously by a plenoptic otoscope.
Plenoptic data also includes multiple views of the same scene. This allows the user to refocus to different depths in the image and to view the scene from different viewpoints. For example, the effect of occluding objects may be reduced by taking advantage of the multiple views. This could be accomplished by refocusing. Alternately, it could be accomplished by segmenting the light field (multiple views) into depth layers.
Examples of diagnostic data that are not images but are derived from enhanced imagery include classification of the tympanic membrane as bulging, retracting or neutral, estimated curvature of the tympanic membrane, estimated color of the tympanic membrane, and features and feature vectors reflecting any of the foregoing.
Other aspects of the invention include methods, devices, systems, and applications related to the approaches described above and their variants.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The invention has other advantages and features which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
The figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
The figures and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed. To facilitate understanding, identical reference numerals have been used where possible, to designate identical elements that are common to the figures.
A plenoptic otoscope design can overcome the poor data quality of current otoscopes for feature extraction for otitis diagnosis. In one implementation, a plenoptic sensor and an optional filter module are added to a conventional digital otoscope. With these additions, three-dimensional (3D) shapes, translucency, and/or detailed color information can be captured. This data, and possibly other data captured by a plenoptic otoscope, can be used to aid in medical diagnosis of a patient's ear.
As can be seen from
The plenoptic otoscope head can be mounted on top of a handle that houses an illumination source (e.g., portable system) or can be connected to an illumination source (e.g., wall-mounted system). Such an illumination source may be an LED light source, a standard white illumination source, etc. The illumination source may have polarization characteristics as well. For example, it may emit unpolarized, partially polarized, or completely polarized (e.g., TE, TM) light.
In one embodiment, the plenoptic image contains depth data. A computing module (not shown in
Another possible operational mode of the plenoptic otoscope is a spectral imaging mode. In the spectral imaging mode, the plenoptic image captured by the sensor array 350 contains spectral information and may be processed to provide two or more different spectral images of the object 310. In one embodiment, spectral imaging can be enabled by placing a filter module at a pupil plane conjugate of the plenoptic otoscope, as shown in
In one implementation of
In one embodiment, the plenoptic otoscope is switchable between the depth imaging mode and the spectral imaging mode. In one approach, a clear filter is used for the depth imaging mode and one or more different spectral filters are used for the spectral imaging mode. To switch between the two modes, the filter module 410 could include one section that is clear and another section that contains the spectral filters. The filter module could be translated relative to the primary imaging system, so that the appropriate section is illuminated. An example of this type of filter module is shown in
In
In
This particular filter module has RGB filters for color imaging, plus a yellow filter, since a yellowish or amber color of tissue is a diagnostic indicator; it is shown only as an example. In one embodiment, the filter module may include a plurality of different spectral filters. Filters having different colors and/or layouts may also be used in the filter module. For example, see U.S. patent application Ser. No. 13/040,809, filed on May 4, 2011, which is hereby incorporated by reference in its entirety.
Spectral imaging is useful to help distinguish different ear conditions. Some of the ear conditions are shown in
As shown in
In many conventional otoscopes, the magnification of the primary imaging system is set such that the entire tympanic membrane (TM) can be imaged onto the sensor array 350 (as seen in
The average diameter for the TM of an adult is h=7 mm. Here we define optical system specifications for the example of a ⅓″ sensor array with width W=4.6 mm and height H=3.7 mm. For this sensor array, the magnification for the primary imaging system is given by M=3.7 mm/7 mm=0.53. Such a magnification is typical for a conventional otoscope. In contrast, a microscope typically has a much larger magnification (>20), and a consumer camera imaging people or natural scenes typically has a much smaller magnification.
The total magnification of the primary imaging system is M=M1*M2, where M1 is the magnification of the first lens group, and M2 is the magnification of the second lens group. For illustration purposes, assume M2=1. In other approaches, M2 can be any suitable number other than 1. In the example where M2=1, M1=M. The working F-number, Nw, of the first lens group with magnification M is defined as Nw=(1+M)N, where N is the F-number of the primary imaging system (i.e., N=f/D1, where D1 is the diameter of the entrance pupil of the primary imaging system, and f is the effective focal length of the primary imaging system.). In one embodiment, the primary imaging system of the plenoptic otoscope is faster than F/8.
The working distance, z1, for the otoscope is the distance between the object and the first lens group. For imaging a TM, a typical working distance is 27-30 mm. The bones behind the TM are located up to approximately 15 mm behind it. As a result, the working distance may vary, for example, from 27 mm up to 45 mm. For illustration purposes, assume the working distance z1=30 mm. The entrance pupil is located in the narrow tip of the otoscope close to the first lens group, and is generally smaller than the tip of the otoscope. The tip of an otoscope has a typical diameter of 4-5 mm in order to fit into an ear canal. Assume the entrance pupil has a diameter of 2 mm. Then the effective focal length of the first lens group is f=N*D1=10.4 mm. The second lens group relays the image of the first lens group onto an intermediate image plane, where the microlens array 340 is positioned. The sensor array 350 is positioned at a distance z3′ behind the microlens array 340 to capture the plenoptic image.
In one embodiment, the object is located near the hyperfocal distance of the first lens group. The hyperfocal distance is the distance beyond which all objects can be brought into acceptable focus. Mathematically, the hyperfocal distance may be expressed as p = f²/(N·c) + f, where f is the effective focal length, N is the F-number, and c is the circle of confusion diameter limit. In one implementation, the numerical aperture of a microlens matches the image-side numerical aperture of the primary imaging system. That means the working F-number of the primary imaging system matches the F-number of the microlens. Furthermore, the distance z3′ is chosen to be equal to the focal length of the microlens. In this configuration, the depth of field is bounded only in one direction, and therefore may be particularly suitable for imaging distant objects.
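By way of a non-limiting illustration, the first-order quantities of this example can be checked with a short script. The thin-lens relation used below to derive the focal length from the magnification and working distance, and the 0.019 mm circle of confusion (the value quoted in the configurations described later), are assumptions made only for this sketch.

```python
# First-order sketch of the primary imaging system (thin-lens approximations;
# values taken from the worked example above, variable names are illustrative).
h_tm = 7.0       # average adult TM diameter (mm)
H_sensor = 3.7   # height of the 1/3" sensor array (mm)
z1 = 30.0        # working distance (mm)
D1 = 2.0         # entrance pupil diameter (mm)
c = 0.019        # circle of confusion diameter limit (mm)

M = H_sensor / h_tm              # primary magnification, ~0.53
f = M * z1 / (1.0 + M)           # thin-lens focal length, ~10.4 mm
N = f / D1                       # F-number of the primary imaging system
Nw = (1.0 + M) * N               # working F-number, ~F/8
p = f**2 / (N * c) + f           # hyperfocal distance (mm)

print(f"M={M:.2f}  f={f:.1f} mm  N=F/{N:.1f}  Nw=F/{Nw:.1f}  hyperfocal={p:.0f} mm")
```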
In one embodiment, the object is placed at a distance z1 away from the entrance pupil of the first lens group. The distance z2 between the exit pupil of the first lens group and the relay plane is determined by the lens equation as: z2=1/(1/f1−1/z1), where f1 is the effective focal length of the first lens group.
The relationship between the first lens group and the second lens group is given by D1exit/D1′=z2/z1′, where D1exit is the diameter of the exit pupil of the first lens group, D1′ is the diameter of the entrance pupil of the second lens group, and z1′ is the distance between the relay plane and the entrance pupil of the second lens group.
The distance z2′ between the exit pupil of the second lens group and the intermediate image plane is determined by the lens equation as: z2′=1/(1/f1′−1/z1′), where f1′ is the effective focal length of the second lens group.
The distance z3′ between the microlens array and the sensor array is chosen such that z3′ = z2′·Mmicrolens. Here Mmicrolens = D2/D1′exit is the magnification of the microlens sub-system, where D2 is the diameter of the microlens (as shown in
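The chain of relations in the preceding paragraphs can be illustrated with a brief sketch. The first-lens-group values come from the example above; the second-lens-group and microlens values below are hypothetical placeholders chosen only to show how the quantities propagate, not design values from this disclosure.

```python
# Sketch of the relay-side first-order relations described above (thin-lens forms;
# the second-lens-group and microlens values are hypothetical placeholders).
def image_distance(f, z_obj):
    # Lens equation: image distance for focal length f and object distance z_obj.
    return 1.0 / (1.0 / f - 1.0 / z_obj)

f1, z1 = 10.4, 30.0            # first lens group (from the example above)
z2 = image_distance(f1, z1)    # distance from the exit pupil to the relay plane

f1p, z1p = 10.0, 20.0          # second lens group: hypothetical values giving M2 of about 1
z2p = image_distance(f1p, z1p)

D2, D1p_exit = 0.065, 4.0      # microlens and exit pupil diameters (hypothetical, mm)
M_microlens = D2 / D1p_exit    # magnification of the microlens sub-system
z3p = z2p * M_microlens        # microlens-array-to-sensor spacing

print(f"z2 = {z2:.1f} mm, z2' = {z2p:.1f} mm, z3' = {z3p:.2f} mm")
```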
In one embodiment, the filter module 410 is inserted at the aperture of the second lens group, as depicted in
Switching between depth imaging mode and spectral imaging mode may be accompanied by a change in the depth of field for the primary imaging system (in addition to changing filters). One way to change the depth of field is by adjusting the aperture size. For example, a larger aperture results in a shorter depth of field, which may benefit depth imaging due to the finer depth resolution. On the other hand, a smaller aperture results in a longer depth of field, which may be unsuitable for depth imaging but appropriate for spectral imaging.
In one embodiment, switching between depth and spectral imaging includes opening and closing the diaphragm/iris/shutter at the aperture plane of the second lens group. Two example configurations are given below. In the first configuration, with the effective focal length f=10 mm and a circle of confusion diameter of 0.019 mm, the aperture is wide open to enable a small F-number (e.g., F/5) and a small depth of field (<2 mm). This configuration is suitable for depth imaging or perhaps for combined depth+spectral imaging. In the second configuration, with the effective focal length f=10 mm and a circle of confusion diameter of 0.019 mm, the aperture is stopped down to enable a large F-number (e.g., F/16) and a large depth of field (>3.5 mm). This configuration may be suitable for spectral imaging only.
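The two configurations can be sanity-checked with standard thin-lens depth-of-field relations. The sketch below is only illustrative; it uses the stated f = 10 mm and c = 0.019 mm and assumes a focus distance of 30 mm, taken from the working-distance example above.

```python
# Rough depth-of-field check for the two example configurations, using standard
# thin-lens near/far focus limits.
def depth_of_field(f, N, c, s):
    H = f**2 / (N * c) + f                  # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)    # near limit of acceptable focus
    far = s * (H - f) / (H - s)             # far limit of acceptable focus
    return near, far, far - near

f, c, s = 10.0, 0.019, 30.0
for N in (5.0, 16.0):
    near, far, dof = depth_of_field(f, N, c, s)
    print(f"F/{N:g}: sharp from {near:.1f} mm to {far:.1f} mm, DOF = {dof:.1f} mm")
# F/5  -> DOF of roughly 1.1 mm (< 2 mm, suited to depth imaging)
# F/16 -> DOF of roughly 3.7 mm (> 3.5 mm, suited to spectral imaging)
```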
Switching between depth imaging mode and spectral imaging mode may also be accompanied by a change in focus for the primary imaging system. This may be done via a focusing mechanism. Such a focusing mechanism (e.g., a focusing ring) may move lenses in the primary imaging system and/or move the plenoptic sensor, so that objects at various distances can be focused onto the microlens array plane (i.e., the intermediate image plane). In one approach, the focusing mechanism is adjusted such that a region between 4-5 mm in front of the TM and up to 15 mm behind the TM can be imaged in focus onto the microlens array plane. This may enable different combinations of spectral and/or depth imaging at different regions of interest. For example, it may be desirable to have both depth and spectral imaging for a region near the TM (e.g., to fully distinguish the different ear conditions), while spectral imaging may be enough for other regions. By adjusting the focus, it is possible to select which portion of the ear canal should “receive more attention.” For instance, one can adjust the focus with a fine step size (i.e., a fine depth resolution) near the TM to increase the 3D depth information for that region of interest, and adjust the focus with a coarse step size for other regions of the ear canal.
In one embodiment, the plenoptic otoscope is in the spectral imaging mode when the primary imaging system has a depth of field >5 mm. This is useful, for example, for imaging both the TM and the bones behind the TM in focus onto the microlens array plane. Conversely, the plenoptic otoscope is in the depth imaging mode when the primary imaging system has a depth of field <5 mm. In this mode, depth estimation of the TM is possible, for example, by focusing on the bones behind the TM and/or the narrow part of the ear canal in front of the TM. Illustratively, the first lens group may have a working distance up to 45 mm (about 15 mm behind the TM).
In a plenoptic otoscope, it is also possible to include a viewfinder so that the examiner can view an image through the otoscope at the time of image capture. A beam splitter or a single lens reflex can be used to split the optical path and direct the image to the plenoptic sensor and to the viewfinder. For example, either a single lens reflex or a beam splitter may be inserted at the relay plane between the first lens group and the second lens group of an otoscope (as shown in
In other embodiments, a plenoptic otoscope system may include a set of detachable tips. Each detachable tip includes a different filter module. Each filter module may be used for a different purpose. For example, one filter module may be used for spectral imaging, while another filter module may be used for depth imaging. These detachable tips can be exchanged with one another, and are also referred to as interchangeable tips. When a detachable tip is attached to the otoscope, the filter module included in that detachable tip is positioned at the entrance pupil of the primary imaging system.
The plenoptic otoscopes described can be designed and manufactured as original plenoptic instruments. Alternately, existing otoscopes can be modified to become plenoptic. In one embodiment, an after-market plenoptic conversion kit may be used to convert a conventional digital otoscope to a plenoptic digital otoscope. The conversion kit includes a plenoptic sensor with a microimaging array and a sensor array. The digital otoscope is equipped with a conventional sensor. During the conversion, the plenoptic sensor replaces the conventional sensor, such that the microimaging array (e.g., a microlens array or a pinhole array) is positioned at an image plane of the digital otoscope. For example, the microimaging array may be positioned at the plane where the conventional sensor was previously located.
Data acquired with a plenoptic otoscope can contain volumetric data of the ear canal and can also provide spectral measurements as well as polarization states of objects. Enhanced imagery such as a depth map, disparity map, spectral images, polarization images and images showing the translucency of objects can be computed from the plenoptic data. In addition, higher-level processing, such as estimating deformation of the shape of the TM, focusing on a selected object, and segmenting the objects in the ear canal with respect to depth, can also be performed. Results may be displayed to the user.
Consider first depth estimation. Given the design of a plenoptic otoscope, the data obtained with that system during a single data acquisition step can be processed to provide enhanced imagery with depth measurements of the TM as well as the ear canal at sub-millimeter resolution. These measurements can be used to aid the assessment of a medical condition.
An example of a method to estimate depth imagery from a lightfield is described in U.S. patent application Ser. No. 14/064,090, “Processing of light fields by transforming to scale and depth space,” which is incorporated by reference in its entirety herein. It is a multi-scale depth estimation approach, which analyzes the 4D lightfield data and can find a dense disparity map with sub-pixel precision. The method is based on extrema localization in light field scale and depth spaces, which are constructed by convolving a two-dimensional spatio-angular slice of a given 4D lightfield with a kernel designed to represent the structure of lightfields and to provide a simple way of estimating disparity values for the imaged object. Disparity values can then be converted to depth using a mapping based on system modeling.
Other methods for depth estimation can also be used. For example, multi-view stereo depth estimation algorithms can be applied. These include algorithms that pose depth estimation as an energy minimization problem, where the energy includes a data fidelity term and a depth map smoothness term. The energy function can then be optimized using methods such as graph cuts, belief propagation, total variation, semi-global matching, etc. Sometimes image segmentation can be used in combination with depth estimation, in order to improve the final depth accuracy. Another approach that can be applied exploits the structure of the lightfield to obtain dense depth maps. An example of such a method is an algorithm that computes the structure tensor of the light field slices and uses that as a data fidelity term, while using total variation as a smoothness term. Computationally efficient methods based on normalized cross-correlation can also be applied to obtain a coarse depth map, as illustrated in the sketch below.
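By way of illustration only, the following sketch estimates a coarse integer disparity map from two rectified views using windowed normalized cross-correlation. It is a generic implementation of the last-mentioned class of methods, not the method of any of the incorporated applications; the window size and the horizontal-baseline assumption are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ncc_coarse_disparity(left, right, max_disp, win=9):
    """Coarse integer disparity map from two rectified views via windowed NCC."""
    mu_l = uniform_filter(left, win)
    var_l = uniform_filter(left**2, win) - mu_l**2
    best_score = np.full(left.shape, -np.inf)
    best_disp = np.zeros(left.shape, dtype=int)
    for d in range(max_disp + 1):
        shifted = np.roll(right, d, axis=1)               # test candidate disparity d
        mu_r = uniform_filter(shifted, win)
        var_r = uniform_filter(shifted**2, win) - mu_r**2
        cov = uniform_filter(left * shifted, win) - mu_l * mu_r
        ncc = cov / np.sqrt(np.maximum(var_l * var_r, 1e-12))
        better = ncc > best_score                         # keep the best-scoring disparity per pixel
        best_disp[better], best_score[better] = d, ncc[better]
    return best_disp
```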
Yet another approach is described in U.S. patent application Ser. No. 14/312,586, “Disparity estimation for multiview imaging systems,” which is incorporated by reference in its entirety herein. This estimates a depth/disparity map using multiple multiview images and taking advantage of the relationship between disparities for images taken from different viewpoints.
Depth estimation techniques are used to obtain 920 enhanced imagery (in this case, depth/disparity maps) of the ear canal and/or objects inside it from lightfields obtained with a plenoptic otoscope. Moreover, depth map information can also be used to extract other diagnostic data, such as relevant three-dimensional shape information about the ear canal, ear drum or other objects in the ear canal, in order to help in three-dimensional visualization and diagnosis of medical conditions of the ear (e.g., variants of otitis media). The depth measurements preferably are taken with respect to the front of the camera and are available for different spatial locations in the scene. They can also be calculated for objects in the field of view of the camera (e.g., the eardrum or the malleus).
Depth map processing 930 includes different methods for extraction of relevant three-dimensional diagnostic data of the ear canal, ear drum and/or other objects in the ear canal. For example, the curvature of the ear drum can be estimated from the depth map data by fitting one-dimensional or two-dimensional polynomials to the depth map values. Using the curvature estimate, we can classify the shape of the eardrum as bulging, neutral or retracting (convex, planar or concave). Moreover, we can evaluate the amount of bulging or retracting of the ear drum. This can be used to aid in medical diagnosis. For example, see the "Position" row of Table 1 above.
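A minimal sketch of such a curvature-based classification is given below, assuming a depth map of the eardrum region is already available. The quadratic fit, the sign convention (depth increasing away from the otoscope), and the flatness tolerance are illustrative assumptions, not prescribed parameters.

```python
import numpy as np

def classify_tm_shape(depth_map, flat_tol=0.05):
    """Fit z = a*x^2 + b*y^2 + c*xy + d*x + e*y + g to the eardrum depth map
    and classify its shape from the curvature of the fit."""
    h, w = depth_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y, z = xx.ravel(), yy.ravel(), depth_map.ravel()
    A = np.column_stack([x**2, y**2, x*y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    curv = coeffs[0] + coeffs[1]   # proportional to the Laplacian of the fitted surface
    if abs(curv) < flat_tol:
        return "neutral"
    # Assumes depth increases away from the otoscope, so a depth minimum at the
    # center (positive curvature) corresponds to a membrane bulging toward the viewer.
    return "bulging" if curv > 0 else "retracting"
```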
In
The depth measurements and classification results can be used to assist a human in making a diagnosis. Alternately, they may be used, possibly in combination with other data, to make an automated diagnosis.
Returning to
Plenoptic data can also be used to compute a two-dimensional rendering of a selected focal plane, or a three-dimensional volumetric rendering of the ear canal, taking advantage of the multiple viewpoints. With such visualization, the medical professional can switch between different views of, for example, hair or wax in the ear canal and the ear drum, or between views of the ear drum from different viewpoints.
The problem of “seeing through” occlusions in the ear canal can be addressed by refocusing the lightfield data at a selected focal plane, and then rendering the image with a large synthetic aperture. The focal plane can be selected by the user in various ways. For example:
A disparity map is calculated 1330 from the different views. In this example, not all the views are used to calculate the disparity map. Rather, the disparity map is calculated from selected views. The views can be selected by an algorithm or by the user.
n̂p(x,y) = arg max {corr(I1, . . . , IN)}   (1)
where n̂p(x,y) is the estimated disparity at pixel (x,y), I1 . . . IN are the translated images, and corr is a correlation computation operator. The correlation can be calculated either globally or locally using a sliding window. Different types of correlation computations can be used, such as sum of absolute differences, normalized cross correlation, multiplied eigenvalues of the covariance matrix, phase correlation, etc. Further description is given in U.S. patent application Ser. No. 14/312,586, “Disparity estimation for multiview imaging systems,” which is incorporated by reference in its entirety herein.
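A simplified multi-view version of Eq. (1) is sketched below: for each candidate disparity, the views are translated according to their baselines and scored against a reference view, and the best-scoring candidate is kept per pixel. The horizontal-only baselines, the windowed absolute-difference score (one of the measures listed above), and the parameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, uniform_filter

def disparity_plane_sweep(views, baselines, candidates, win=7):
    """Per-pixel disparity over a set of candidate values, scored against views[0].
    views: grayscale 2-D arrays; baselines: horizontal offsets relative to views[0]."""
    ref = views[0]
    best_cost = np.full(ref.shape, np.inf)
    best_disp = np.zeros(ref.shape)
    for d in candidates:
        cost = np.zeros(ref.shape)
        for v, b in zip(views[1:], baselines[1:]):
            # translate each view by its baseline times the candidate disparity
            shifted = nd_shift(v, (0.0, d * b), order=1, mode="nearest")
            cost += uniform_filter(np.abs(shifted - ref), win)
        better = cost < best_cost                 # keep the lowest-cost candidate per pixel
        best_disp[better], best_cost[better] = d, cost[better]
    return best_disp
```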
The disparity value assigned to the highest number of pixels in the image is determined. In this example, that disparity value corresponds to a depth plane that is chosen 1340 as the reference plane. A histogram 1440 of the number of pixels at different disparities is shown in
In this example, a synthetic aperture image 1460 is created 1360 by averaging shifted (i.e., disparity-corrected) multiview images. Different numbers of views can be used in the averaging process to render the output image with different synthetic apertures. In this example, the synthesized image Is is computed as a weighted average of the views, according to:
IS=(ΣwiVi′)/(Σwi) (2)
Here Vi′ is the ith view after shifting to account for disparity. The image shift could be done in the spatial domain or in the frequency domain. The weight wi is a weighting factor, for example to compensate for non-uniformity such as vignetting. The summations are over the views used to construct the synthesized image.
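The following sketch renders such a synthetic aperture image by shifting each view to a reference disparity and taking the weighted average of Eq. (2). The reference disparity could, for instance, be taken as the most frequent value in the disparity map, in line with the histogram-based selection described above; the horizontal-only baselines and uniform default weights are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def synthetic_aperture(views, baselines, d_ref, weights=None):
    """Weighted average of disparity-corrected views, as in Eq. (2)."""
    if weights is None:
        weights = np.ones(len(views))     # uniform weights by default
    acc = np.zeros_like(views[0], dtype=float)
    for v, b, w in zip(views, baselines, weights):
        # shift each view so that objects at the reference disparity d_ref are aligned
        acc += w * nd_shift(v, (0.0, d_ref * b), order=1, mode="nearest")
    return acc / np.sum(weights)
```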
Spectral responses of tissue or the TM can be measured by using narrow- or wide-bandpass spectral filters in the plenoptic otoscope. With such spectral measurements, a characterization of the properties of the TM, such as translucency or coloration, can be obtained in conjunction with depth measurements. Spectral measurements can be obtained for selected locations in the scene, e.g., on the TM. When a near-infrared (NIR) filter is chosen, the longer wavelengths penetrate deeper into the tissue, making it possible, for example, to characterize objects behind semi-translucent structures (e.g., behind the TM).
Spectral measurements of the ear canal can be obtained by inserting a spectral filter array into the lightfield otoscope. Examples are described above with respect to
Referring again to Table 1 above, the three conditions of the ear shown in Table 1 can be distinguished from one another based on one or more of the following features: color, position (e.g., 3D shape), and translucency. In order to make a correct diagnosis of the ear condition, a plenoptic otoscope can be used to capture accurate information about color, three-dimensional shape and/or translucency of an inside of an ear (e.g., a tympanic membrane in an ear canal). These spectral measurements, individually or together with depth, polarization, translucency and/or bulging estimates, might be input to a machine learning algorithm to classify different medical conditions. The trained machine may be used to aid or automate diagnosis.
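A minimal sketch of this classification step is shown below, assuming feature vectors (for example, an estimated TM curvature, a yellowness measure, and a translucency score) have already been extracted from the plenoptic data. The feature choices, condition labels, and the random-forest classifier are illustrative assumptions; any suitable classifier trained on prior data could be used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors [curvature, yellowness, translucency]; the values and the
# condition labels are made up purely to illustrate the interface.
X_train = np.array([[ 0.8, 0.7, 0.1],
                    [-0.4, 0.3, 0.4],
                    [ 0.0, 0.1, 0.9]])
y_train = np.array(["condition_1", "condition_2", "condition_3"])  # e.g., the conditions of Table 1

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.predict([[0.7, 0.6, 0.2]]))   # classify the features extracted from a new exam
```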
Although the detailed description contains many specifics, these should not be construed as limiting the scope of the invention but merely as illustrating different examples and aspects of the invention. It should be appreciated that the scope of the invention includes other embodiments not discussed in detail above. Various other modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus of the present invention disclosed herein without departing from the spirit and scope of the invention as defined in the appended claims. Therefore, the scope of the invention should be determined by the appended claims and their legal equivalents.
This application is a continuation-in-part of U.S. patent application Ser. No. 13/896,924, “Plenoptic Otoscope,” filed May 17, 2013; which claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Ser. No. 61/754,327, titled “Plenoptic Otoscope,” filed Jan. 18, 2013. This application also claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Ser. No. 61/946,267, “Use of Lightfield Otoscope Data for Aiding Medical Diagnosis,” filed Feb. 28, 2014. The subject matter of all of the foregoing are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
6110106 | MacKinnon et al. | Aug 2000 | A |
7058441 | Shahar et al. | Jun 2006 | B2 |
7399275 | Goldfain et al. | Jul 2008 | B2 |
7399375 | Leiser et al. | Jul 2008 | B2 |
7433042 | Cavanaugh et al. | Oct 2008 | B1 |
7448753 | Chinnock | Nov 2008 | B1 |
7544163 | MacKinnon et al. | Jun 2009 | B2 |
7723662 | Levoy et al. | May 2010 | B2 |
7901351 | Prescott | Mar 2011 | B2 |
7936392 | Ng et al. | May 2011 | B2 |
7995214 | Forster et al. | Aug 2011 | B2 |
8066634 | Andreassen et al. | Nov 2011 | B2 |
8100826 | MacKinnon et al. | Jan 2012 | B2 |
8107086 | Hart | Jan 2012 | B2 |
8143565 | Berkner et al. | Mar 2012 | B2 |
8617061 | Magalhaes Mendes et al. | Dec 2013 | B2 |
8824779 | Smyth | Sep 2014 | B1 |
8845526 | Hart | Sep 2014 | B2 |
8944596 | Wood et al. | Feb 2015 | B2 |
8949078 | Berkner et al. | Feb 2015 | B2 |
9001326 | Goldfain | Apr 2015 | B2 |
9565993 | Berkner et al. | Feb 2017 | B2 |
20050228231 | MacKinnon et al. | Oct 2005 | A1 |
20080259274 | Chinnock | Oct 2008 | A1 |
20100004513 | MacKinnon et al. | Jan 2010 | A1 |
20110026037 | Forster et al. | Feb 2011 | A1 |
20110152621 | Magalhaes Mendes et al. | Jun 2011 | A1 |
20120065473 | Andreassen et al. | Mar 2012 | A1 |
20120182438 | Berkner et al. | Jul 2012 | A1 |
20120226480 | Berkner et al. | Sep 2012 | A1 |
20120320340 | Coleman, III | Dec 2012 | A1 |
20120327426 | Hart et al. | Dec 2012 | A1 |
20120327427 | Hart et al. | Dec 2012 | A1 |
20130002426 | Hart et al. | Jan 2013 | A1 |
20130002824 | Hart et al. | Jan 2013 | A1 |
20130003078 | Hart et al. | Jan 2013 | A1 |
20130027516 | Hart et al. | Jan 2013 | A1 |
20130128223 | Wood et al. | May 2013 | A1 |
20130237754 | Berglund et al. | Sep 2013 | A1 |
20130289353 | Seth et al. | Oct 2013 | A1 |
20140012141 | Kim et al. | Jan 2014 | A1 |
20140192255 | Shroff et al. | Jul 2014 | A1 |
20140316238 | Berkner et al. | Oct 2014 | A1 |
20140350379 | Verdooner | Nov 2014 | A1 |
20150005640 | Davis et al. | Jan 2015 | A1 |
20150005644 | Rhoads | Jan 2015 | A1 |
20150116526 | Meng et al. | Apr 2015 | A1 |
20150117756 | Tosic et al. | Apr 2015 | A1 |
20150126810 | Wood et al. | May 2015 | A1 |
Number | Date | Country |
---|---|---|
2000-126116 | May 2000 | JP |
2002-034916 | Feb 2002 | JP |
2007-004471 | Jan 2007 | JP |
2007-500541 | Jan 2007 | JP |
2009-244429 | Oct 2009 | JP |
2014-138858 | Jul 2014 | JP |
2014-530697 | Nov 2014 | JP |
WO 2012058641 | May 2012 | WO |
WO 2012066741 | May 2012 | WO |
WO 2013138081 | Sep 2013 | WO |
WO 2014021994 | Feb 2014 | WO |
Entry |
---|
Bedard, N. et al., “Light Field Otoscope,” Imaging and Applied Optics 2014, OSA Technical Digest (online), Optical Society of America, 2014, Paper IM3C.6, 4 pages. |
Bedard, N. et al., “In Vivo Middle Ear Imaging with a Light Field Otoscope,” Optics in the Life Sciences, OSA Technical Digest (online), Optical Society of America, Paper BW3A.3, 3 pages. |
Berkner, K. et al., “Measuring Color and Shape Characteristics of Objects from Light Fields,” Imaging and Applied Optics, 2015, 3 pages. |
Cho, N.H. et al., “Optical Coherence Tomography for the Diagnosis of Human Otitis Media,” Proc. SPIE, 2013; 5 pages, vol. 8879, 88790N. |
Hernandez-Montes, M.S. et al., “Optoelectronic Holographic Otoscope for Measurement of Nanodisplacements in Tympanic Membranes,” Journal of Biomedical Optics, Proceedings of the XIth International Congress and Exposition, Society for Experimental Mechanics Inc., Jun. 2-5, 2008, 7 pages, vol. 14, No. 3. |
Kim, C. et al., “Scene Reconstruction from High Spatio-Angular Resolution Light Fields,” Transactions on Graphics (TOG), 2013, 11 pages, vol. 32, No. 4. |
Kubota A. et al., “View Interpolation using Defocused Multi-View Images,” Proceedings of APSIPA Annual Summit and Conference, Asia-Pacific Signal and Information Processing Association, 2009 Annual Summit and Conference, Sapporo, Japan, Oct. 4, 2009, pp. 358-362. |
Kuruvilla, A. et al., “Otitis Media Vocabulary and Grammar,” CMU, ICIP, 2012, 4 pages. |
Levoy, M. “Light Field and Computational Imaging,” IEEE Computer Magazine, 2006, pp. 46-55, vol. 39. |
Levoy, M. et al., “Light Field Microscopy,” Proc. SIGGRAPH, ACM Transactions on Graphics, 2006, pp. 1-11, vol. 25, No. 3. |
Ng, R. et al., “Light Field Photography with a Hand-Held Plenoptic Camera,” Stanford Tech Report, 2005, pp. 1-11. |
Sundberg, M., et al., “Fibre Optic Array for Curvature Assessment—Application in Otitis Media,” Medical & Biological Engineering & Computing, Mar. 2004, pp. 245-252, vol. 42, No. 2. |
Sundberg, M., et al., “Diffuse Reflectance Spectroscopy of the Human Tympanic Membrane in Otitis Media,” Physiological Measurement, 2004, pp. 1473-93, vol. 25, No. 6. |
Tao, M. et al., “Depth from Combining Defocus and Correspondence Using Light-Field Cameras,” in Proceedings of the International Conference on Computer Vision, 2013, 8 pages. |
Wanner, S. et al., “Globally Consistent Depth Labeling of 4D Light Fields,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2012, pp. 41-48. |
Yang, T. et al., “High Performance Imaging Through Occlusion via Energy Minimization-Based Optimal Camera Selection,” International Journal of Advanced Robotic Systems, 2013, pp. 19, vol. 10. |
U.S. Appl. No. 14/312,586, Lingfei Meng et al., filed Jun. 23, 2014, [Copy not enclosed]. |
United States Office Action, U.S. Appl. No. 13/896,924, dated Jul. 8, 2015, 15 pages. |
Japanese Office Action, Japanese Application No. 2016-211050, dated Sep. 26, 2017, 5 pages (with concise explanation of relevance). |
Japanese Office Action, Japanese Application No. 2014-006668, dated Sep. 26, 2017, 4 pages (with concise explanation of relevance). |
Number | Date | Country | |
---|---|---|---|
20140316238 A1 | Oct 2014 | US |
Number | Date | Country | |
---|---|---|---|
61754327 | Jan 2013 | US | |
61946267 | Feb 2014 | US |
Relation | Number | Date | Country
---|---|---|---
Parent | 13896924 | May 2013 | US
Child | 14318578 | | US