Method and apparatus for a software enabled high resolution ultrasound imaging device

Information

  • Patent Grant
  • Patent Number
    12,229,922
  • Date Filed
    Sunday, April 17, 2022
  • Date Issued
    Tuesday, February 18, 2025
Abstract
Various computational methods and techniques are presented to increase the lateral and axial resolution of an ultrasound imager, allowing a medical practitioner to use the imager in real time to obtain a 3D map of a portion of a body with a resolution comparable to that of an MRI machine.
Description
TECHNICAL FIELD

Embodiments of this invention relate to devices and methods for enhancing images captured by an ultrasound imaging device. In particular, the resolution of a low resolution image of an object captured by an ultrasound imaging device is improved via at least one higher resolution image taken of at least a portion of the same or a similar object.


BACKGROUND

MRI and ultrasound imagers are two of the main medical imaging devices used to see inside a living organism non-intrusively. Current MRI systems are huge, bulky, slow and expensive. Additionally, they cannot be used for real time applications by medical practitioners. A physical therapist, for example, needs to know how tissues, ligaments or muscles respond to various exercises or stretches, or how much they have healed or improved. Medical practitioners also often need to assess the impact of extra force when, for example, they stretch a muscle. They also need to know the nature of an obstacle when stretching leads to pain or is not possible beyond a limit as a result of an injury. Is there a piece of broken bone or a detached ligament? Currently, physical therapists cannot see inside an organ or a body part that they interact with. Instead, they try to infer, sense, or gauge the body's reaction in order to decide on the best course of action. If they could see the inside before and after an exercise, or in real time, they could make better and faster decisions, which could lead to shorter recovery times for patients and lower costs for health care providers. In other words, it would be hugely beneficial to both patients and care providers to have real time, or even near real time, high resolution images of an area of interest. Ultrasound imagers are much smaller and less expensive than MRI machines, and they can provide real time images. However, the images from ultrasound imagers are noisy and have much lower resolution (less detail) than MRI images.


SUMMARY

MRI machines can generate high resolution images of a portion of a living organism, but they are big, bulky and expensive. Additionally, it can take days to get the results back. Ultrasound imagers are compact and low cost and can generate images in real time. An object of this invention is to improve the resolution of ultrasound imagers using computational techniques so that they can be used more often by medical practitioners. This will reduce healthcare costs and make it possible for more medical facilities to have accurate imaging devices.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows exemplary steps of an imaging method, according to certain aspects of the present disclosure.



FIG. 2 shows a schematic block diagram of an imaging apparatus, according to certain aspects of the present disclosure.





DETAILS OF THE SOLUTIONS

To improve the resolution and lower the noise of ultrasound imagers, several imaging systems and configurations are presented in this disclosure. Each solution has its own advantages and may be the best choice depending on the circumstances.


In the first imaging system, high resolution images of a volume of interest or a portion of a body are captured using an MRI machine. These images are high resolution 2D slices of the volume or body part, and they are fused together to create a high resolution 3D map of the volume of interest. The MRI images and the constructed 3D map are stored in a database, and at least one slice of the 3D map is used as a training image, or high resolution reference image, to increase the resolution of a corresponding ultrasound image.
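For illustration only, the fusing of registered 2D MRI slices into a 3D map and the extraction of a reference slice could be sketched as follows. This is a minimal sketch, not part of the claimed invention; the function names, the assumption that slices are already registered and equally spaced, and the `spacing_mm` parameter are all illustrative assumptions.

```python
import numpy as np

def build_3d_map(slices, spacing_mm=1.0):
    """Stack registered 2D MRI slices (all of the same shape) into a 3D
    volume; spacing_mm is the assumed inter-slice distance."""
    volume = np.stack(slices, axis=0)  # shape: (n_slices, H, W)
    return volume, spacing_mm

def extract_reference_slice(volume, index, axis=0):
    """Pull one 2D slice out of the 3D map to serve as the high resolution
    training/reference image for a matching ultrasound frame."""
    return np.take(volume, index, axis=axis)
```

In practice the slices would come from the MRI database described above, and the extracted slice would be paired with the ultrasound image it is meant to enhance.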


When an ultrasound image is captured, the relative location of the imager's head with respect to the body part is used to generate a corresponding 2D slice from the 3D map of the body part. A pattern recognition algorithm can be used to find the slice of the 3D map that best matches the ultrasound image. At this stage, there are two images: a low resolution image from the ultrasound imager and a high resolution image generated from the 3D map. Next, a bandwidth extrapolation technique is used to enhance the ultrasound image using the high resolution image in an iterative fashion. The ultrasound imager is equipped with a 9-axis accelerometer in order to relate various ultrasound images to each other. These ultrasound images can also be combined to create a new 3D map of the body part or volume of interest. The advantage is that this latter 3D map can be created very quickly. It is also possible to feed the MRI images to a computer that controls the ultrasound imager. A medical practitioner selects a slice of interest from the 3D MRI map and requests a corresponding slice from the ultrasound imager. The starting point (location) of the ultrasound imager is decided by the medical practitioner, and the ultrasound imager uses the 9-axis accelerometer to fine tune its position before capturing an image and displaying it on a screen.
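One way the iterative bandwidth extrapolation step could be realized is a Gerchberg-Papoulis-style loop that keeps the measured low-frequency band of the ultrasound image while initializing from, and constraining toward, the high resolution reference. The sketch below assumes the two images are registered and the same size; the circular `band_radius` passband and the non-negativity constraint are illustrative assumptions, not the claimed procedure.

```python
import numpy as np

def gerchberg_enhance(low_res, reference, band_radius, iters=20):
    """Iteratively extrapolate the spectrum of a low resolution image,
    re-imposing its measured low-frequency band each iteration."""
    F_low = np.fft.fftshift(np.fft.fft2(low_res))
    h, w = low_res.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= band_radius ** 2
    est = reference.astype(float).copy()   # start from the high-res reference
    for _ in range(iters):
        F = np.fft.fftshift(np.fft.fft2(est))
        F[mask] = F_low[mask]              # keep the measured low-frequency band
        est = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
        est = np.clip(est, 0, None)        # spatial-domain constraint (non-negativity)
    return est
```

The loop alternates between enforcing the measured data in the frequency domain and plausibility constraints in the spatial domain, the same alternation described in the Gerchberg and Papoulis papers cited below.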


The second approach is to use multiple ultrasound imagers, similar to the multi-camera imaging approach disclosed in U.S. patent application Ser. No. 15/400,399. The depth of view of an ultrasound imager is inversely proportional to the frequency of the ultrasound waves used by the machine. An ultrasound imager with a higher frequency penetrates less deeply into the body, but it creates an image whose axial resolution is higher than that of an imager with a lower frequency. Lateral resolution is a function of the beam angle: a wider beam angle (larger field of view) results in a lower lateral resolution than a narrower beam angle (smaller field of view). As disclosed in U.S. patent application Ser. No. 15/400,399, one can fuse (combine) a low resolution image and a higher resolution image that at least partially overlap to generate a higher resolution version of the low resolution image. In doing so, a high resolution image of a slice of an organ or part of a body is obtained. By combining many such slices, a high resolution 3D map of the volume is obtained without the need for an MRI machine.
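A minimal sketch of the overlap fusion step might look as follows. It assumes both images have already been registered and resampled onto a common grid, and it blends the high resolution patch into the matching region of the wide-field image; the function name, the rectangular overlap, and the `alpha` blend weight are illustrative assumptions rather than the method of Ser. No. 15/400,399.

```python
import numpy as np

def fuse_overlap(wide_low, narrow_high, top, left, alpha=0.7):
    """Blend a registered high resolution patch (narrow field of view) into
    the matching region of a lower resolution wide-field image."""
    fused = wide_low.astype(float).copy()
    h, w = narrow_high.shape
    region = fused[top:top + h, left:left + w]
    fused[top:top + h, left:left + w] = alpha * narrow_high + (1 - alpha) * region
    return fused
```

A real implementation would also feather the patch borders and weight the blend by local image quality, details omitted here for brevity.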


Another method to improve the resolution of an ultrasound imager is to use publicly available images as references. A medical practitioner interacts with a computer program to select a part of a body from a menu and also to select a region of interest from a map provided via the menu on a display. Once a part of a body is selected, the program accesses the general description and features of the part or parts. The user can additionally input which areas within the body part might have been damaged. These suspected damaged areas are enhanced once the image resolution of the surrounding healthy areas is improved. With these inputs, the program has an estimate of how an image may look and where the soft and hard tissues might be. The program uses the reflectivity and the shapes of various parts within the area of interest as well. This in effect allows creating a smart ultrasound imager that uses prior knowledge before creating an image. Presently, an ultrasound imager only creates an output image, and it is the task of the practitioner to use her own knowledge to evaluate it. By applying rules, restrictions and other known information about an area or volume of interest, a high resolution image of at least one cross section of the volume of interest is created, and portions of the image that do not follow the general rules or lack the expected properties are marked. This makes the task of a medical practitioner much easier while providing images with a higher resolution and quality than a standard ultrasound imager. This method is in effect an application of machine learning and artificial intelligence to ultrasound imaging.
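One simple way the marking of regions that lack the expected properties could be realized is a block-wise comparison against a reference image of healthy anatomy: blocks whose statistics deviate strongly from the reference are flagged for the practitioner. This is an illustrative sketch only; the block size, the mean-intensity statistic, and the threshold in reference standard deviations are all hypothetical choices, not the disclosed rule system.

```python
import numpy as np

def mark_unexpected_regions(image, reference, block=8, thresh=3.0):
    """Split both images into blocks; flag blocks whose mean intensity
    deviates from the reference by more than `thresh` reference std-devs."""
    H, W = image.shape
    flags = np.zeros((H // block, W // block), dtype=bool)
    for i in range(H // block):
        for j in range(W // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            ref_blk = reference[sl]
            dev = abs(image[sl].mean() - ref_blk.mean())
            flags[i, j] = dev > thresh * (ref_blk.std() + 1e-9)
    return flags
```

The returned boolean grid could then drive an overlay that marks suspect regions on the displayed image.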


For certain areas of an ultrasound image, a training image or a property of an organ or a tissue is used or taken into account to remove the noise or increase the resolution of the ultrasound image. A training image may be an image from an MRI machine or from a public database. Rules are extracted from a database of similar images using artificial intelligence: an artificial intelligence program looks for common features in a volume, or in a plane, slice or cross-section of the volume. For example, when a torn ligament is studied, high resolution images of the surrounding bones and other tissues can be used to let the imager know how the healthy area surrounding the ligament looks. The image of the healthy area is then used to enhance the image of the area in question. As a result, the image of the torn ligament can be significantly enhanced and its resolution can be increased.


Another method to obtain higher resolution images from an ultrasound imager is to use a technique similar to multi-frame sub-pixel super-resolution. In this method, the lateral resolution of an ultrasound imager is improved by shifting the imaging head in small increments and taking an image at each step. An actuator is used to move the imaging head, and an accelerometer is used to measure and control the shifts accurately. The location of the imager with respect to the body part is monitored and recorded for each image frame, and several images are taken in succession with lateral shifts with respect to each other. The captured images are mapped onto a higher resolution grid, and image registration is used to properly align the images and combine them to create a higher resolution image.
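The mapping of laterally shifted frames onto a higher resolution grid can be sketched with a classic shift-and-add scheme. The sketch below assumes the sub-pixel shifts are already known (here, from the accelerometer) and accumulates each frame at its shifted position on the fine grid; nearest-neighbor placement and the handling of unfilled grid cells are simplifying assumptions, and interpolation of any remaining holes is omitted.

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """frames: list of HxW low-res images; shifts: per-frame (dy, dx) in
    low-res pixels (sub-pixel); scale: integer upsampling factor."""
    H, W = frames[0].shape
    acc = np.zeros((H * scale, W * scale))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, shifts):
        # place each low-res sample at its shifted position on the fine grid
        ys = (np.arange(H)[:, None] + dy) * scale
        xs = (np.arange(W)[None, :] + dx) * scale
        yi = np.clip(np.round(ys).astype(int), 0, H * scale - 1)
        xi = np.clip(np.round(xs).astype(int), 0, W * scale - 1)
        idx = (np.broadcast_to(yi, (H, W)), np.broadcast_to(xi, (H, W)))
        np.add.at(acc, idx, img)
        np.add.at(cnt, idx, 1)
    cnt[cnt == 0] = 1          # unfilled cells stay zero (no interpolation here)
    return acc / cnt
```

With four frames shifted by half a pixel in each direction and `scale=2`, every cell of the fine grid receives exactly one sample, which is the idealized case the sub-pixel technique exploits.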


The resolution of an ultrasound imager is determined by its frequency: higher frequencies result in higher resolution images but penetrate less deeply into tissue. We suggest combining two ultrasound images, captured via a low frequency and a high frequency ultrasound imager, to create a high resolution ultrasound image with sufficient penetration depth through a tissue. Only one imager is active at a time; the two imagers take pictures alternately and provide the captured images to a processor. The processor uses those images to generate at least one high resolution image. In fact, by using a range of ultrasound frequencies, we can obtain a number of images with different resolutions and depths. Such multi-resolution images, when combined properly with a bandwidth extrapolation technique, could result in a single high resolution image with sufficient depth and lateral and axial resolution.
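A simple way the two-frequency combination could be sketched is a depth-weighted blend: shallow rows, where the high frequency image is reliable, favor that image, while deeper rows fall back to the low frequency image. The linear weight and the `penetration_rows` cutoff are illustrative assumptions standing in for the frequency-dependent penetration depth.

```python
import numpy as np

def blend_by_depth(high_freq_img, low_freq_img, penetration_rows):
    """Depth-wise blend of two registered B-mode images: rows above
    `penetration_rows` (shallow tissue) favor the high frequency image,
    deeper rows favor the low frequency one."""
    H, W = low_freq_img.shape
    # linear weight: 1 at the surface, 0 at and beyond the penetration depth
    w = np.clip((penetration_rows - np.arange(H)) / penetration_rows, 0, 1)[:, None]
    return w * high_freq_img + (1 - w) * low_freq_img
```

A full implementation would instead combine the images in the frequency domain, as the bandwidth extrapolation discussion above suggests, rather than with a fixed spatial ramp.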


In some applications, to shorten the time required for image capturing, a deformable imaging head is used that easily matches the contour of a portion of a body. The imaging head includes many imaging blocks. Depending on the resultant contour, at least one imaging block is fired, and the remaining blocks are used to record the reflected and scattered ultrasound waves sent by the at least one firing block. The information from all the capturing blocks is used to create a single image of a cross-section or slice of a portion of a body.

Claims
  • 1. An imaging method for capturing an image of a slice of a volume, the method comprising: acquiring, via an ultrasound imager, a first image corresponding to a first slice of the volume, the first image having a first resolution; receiving, from a memory unit, a training image corresponding to a second slice of the volume, the training image having a second resolution that is higher than the first resolution, at least a portion of the second slice of the volume corresponding to at least a portion of the first slice of the volume; and executing an image resolution enhancement procedure to increase the image resolution of a subset of the first image based at least in part on the training image.
  • 2. The imaging method of claim 1, wherein the increasing step comprises: recognizing patterns in the first image and the training image; and learning from the recognized patterns to increase the image resolution of the subset of the first image.
  • 3. The imaging method of claim 1, wherein the first slice of the volume is a subset of the second slice of the volume.
  • 4. The imaging method of claim 1, wherein the image resolution is increased in the increasing step via an iterative image resolution enhancement procedure.
  • 5. The imaging method of claim 1, further comprising the step of generating a 3D map of the volume using the first image.
  • 6. The imaging method of claim 1, further comprising the step of capturing the position and orientation of the ultrasound imager via a 9-axis accelerometer when the first image is captured.
  • 7. The imaging method of claim 1, wherein the image resolution is also increased in the increasing step based on the position and orientation of the ultrasound imager.
  • 8. An imaging apparatus comprising: an ultrasound imaging unit for capturing a first image corresponding to a first slice of a volume, the first image having a first resolution; a memory unit for storing a training image corresponding to a second slice of the volume, the training image having a second resolution higher than the first resolution, the training image being captured with an MRI machine and prior to capturing the first image, at least a portion of the second slice of the volume corresponding to at least a portion of the first slice of the volume; and a processor, in communication with the ultrasound imaging unit and the memory unit, the processor configured to: execute an image resolution enhancement procedure to increase the image resolution of a subset of the first image based at least in part on the training image.
  • 9. The imaging apparatus of claim 8, wherein the image resolution enhancement procedure comprises: recognizing patterns in the first image and the training image; and learning from the recognized patterns to increase the image resolution of the subset of the first image.
  • 10. The imaging apparatus of claim 8, wherein the first slice of the volume is a subset of the second slice of the volume.
  • 11. The imaging apparatus of claim 8, wherein the image resolution enhancement procedure is an iterative image resolution enhancement procedure.
  • 12. The imaging apparatus of claim 8, wherein the processor is further configured to generate a 3D map of the volume using the first image.
  • 13. The imaging apparatus of claim 8, wherein the processor is further configured to capture the position and orientation of the ultrasound imager via a 9-axis accelerometer when the first image is captured.
  • 14. The imaging apparatus of claim 8, wherein the image resolution is also increased based on the position and orientation of the ultrasound imager.
  • 15. An imaging apparatus comprising: an ultrasound imaging unit for capturing a first image corresponding to a first slice of a first volume of a first human body, the first image having a first resolution; a memory unit for storing a training image corresponding to a second slice of a second volume of a second human body, the training image having a second resolution higher than the first resolution, the training image being captured with an MRI machine and prior to capturing the first image, at least a portion of the first volume of the first human body corresponding to at least a portion of the second volume of the second human body; and a processor, in communication with the ultrasound imaging unit and the memory unit, the processor configured to: execute an image resolution enhancement procedure to increase the image resolution of a subset of the first image based at least in part on the training image.
  • 16. The imaging apparatus of claim 15, wherein the first and the second human bodies are the same.
  • 17. The imaging apparatus of claim 15, wherein the processor is further configured to generate a 3D map of the first volume using the at least one first image.
  • 18. The imaging apparatus of claim 15, wherein the processor is further configured to capture the position and orientation of the ultrasound imager via a 9-axis accelerometer when the first image is captured.
  • 19. The imaging apparatus of claim 15, wherein the image resolution is also increased based on the position and orientation of the ultrasound imager.
  • 20. The imaging apparatus of claim 15, further comprising capturing a second image via the ultrasound imager corresponding to a second slice of the first volume of the first human body, the ultrasound imager configured to operate at multiple frequencies and the first and the second images are taken at two different frequencies.
  • 21. The imaging method of claim 1, wherein the volume is a portion of a human body.
  • 22. The imaging method of claim 1, wherein the training image is acquired via an imager other than an ultrasound imager.
CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application claims the benefit of U.S. patent application Ser. No. 16/507,910, filed Jul. 10, 2019, and entitled “METHOD AND APPARATUS FOR A SOFTWARE ENABLED HIGH RESOLUTION ULTRASOUND IMAGING DEVICE,” which claims the benefit of U.S. Provisional Application No. 62/696,346, filed Jul. 11, 2018, and entitled “METHOD AND APPARATUS FOR A SOFTWARE ENABLED HIGH RESOLUTION ULTRASOUND IMAGING DEVICE,” all of which are incorporated by reference herein in their entirety.

US Referenced Citations (98)
Number Name Date Kind
4028725 Lewis Jun 1977 A
4907296 Blecha Mar 1990 A
5262871 Wilder Nov 1993 A
5856811 Shih Jan 1999 A
5859921 Suzuki Jan 1999 A
6163336 Richards Dec 2000 A
6198485 Mack Mar 2001 B1
6307526 Mann Oct 2001 B1
6434280 Peleg Aug 2002 B1
6486799 Still Nov 2002 B1
6661495 Popovich Dec 2003 B1
6766067 Freeman Jul 2004 B2
6850629 Jeon Feb 2005 B2
7023464 Harada Apr 2006 B1
7331671 Hammond Feb 2008 B2
7391887 Durnell Jun 2008 B2
7492926 Kang Feb 2009 B2
7697024 Currivan Apr 2010 B2
7715658 Cho May 2010 B2
7894666 Mitarai Feb 2011 B2
8014632 Matsumoto Sep 2011 B2
8139089 Doyle Mar 2012 B2
8159519 Kurtz Apr 2012 B2
8305899 Luo Nov 2012 B2
8432492 Deigmoeller Apr 2013 B2
8872910 Vaziri Oct 2014 B1
9230140 Ackley Jan 2016 B1
9438491 Van Broeck Sep 2016 B1
9438819 Van Broeck Sep 2016 B2
9618746 Browne Apr 2017 B2
9674490 Koravadi Jun 2017 B2
9779311 Lee Jun 2017 B2
9727790 Vaziri Aug 2017 B1
9858676 Bostick Jan 2018 B2
9864372 Chen Jan 2018 B2
10039445 Torch Aug 2018 B1
10064552 Vaziri Sep 2018 B1
10708514 Haltmaier Jul 2020 B2
11189017 Baqai Nov 2021 B1
11287262 Dooley Mar 2022 B2
20030122930 Schofield Jul 2003 A1
20040212882 Liang Oct 2004 A1
20040218834 Bishop Nov 2004 A1
20060033992 Solomon Feb 2006 A1
20070041663 Cho Feb 2007 A1
20070115349 Currivan May 2007 A1
20080010060 Asano Jan 2008 A1
20080030592 Border Feb 2008 A1
20080036875 Jones Feb 2008 A1
20080198324 Fuziak Aug 2008 A1
20080291295 Kato Nov 2008 A1
20080297589 Kurtz Dec 2008 A1
20090189974 Deering Jul 2009 A1
20100053555 Enriquez Mar 2010 A1
20100103276 Border Apr 2010 A1
20100128135 Filipovich May 2010 A1
20100157078 Atanassov Jun 2010 A1
20100157079 Atanassov Jun 2010 A1
20100208207 Connell, II Aug 2010 A1
20100240988 Varga Sep 2010 A1
20100254630 Ali Oct 2010 A1
20100277619 Scarff Nov 2010 A1
20100289941 Ito Nov 2010 A1
20100290668 Friedman Nov 2010 A1
20100290685 Wein Nov 2010 A1
20110263946 El Kaliouby Oct 2011 A1
20110279666 Stromborn Nov 2011 A1
20120257005 Browne Oct 2012 A1
20130106911 Salsman May 2013 A1
20130121525 Chen May 2013 A1
20130242057 Hong Sep 2013 A1
20140146153 Birnkrant May 2014 A1
20140267890 Lelescu Sep 2014 A1
20140313335 Koravadi Oct 2014 A1
20150009550 Misago Jan 2015 A1
20150209002 De Beni Jul 2015 A1
20160012280 Ito Jan 2016 A1
20160179093 Prokorov Jun 2016 A1
20160225192 Jones Aug 2016 A1
20170007351 Yu Jan 2017 A1
20170019599 Muramatsu Jan 2017 A1
20170099479 Browd Apr 2017 A1
20170142312 Dal Mutto May 2017 A1
20170181802 Sachs Jun 2017 A1
20170225336 Deyle Aug 2017 A1
20170322410 Watson Nov 2017 A1
20170360578 Shin Dec 2017 A1
20180012413 Jones Jan 2018 A1
20180188892 Levac Jul 2018 A1
20180330473 Foi Nov 2018 A1
20190175214 Wood Jun 2019 A1
20190254754 Johnson Aug 2019 A1
20190272336 Ciecko Sep 2019 A1
20200041261 Bernstein Feb 2020 A1
20200077033 Chan Mar 2020 A1
20200117025 Sauer Apr 2020 A1
20200330179 Ton Oct 2020 A1
20210067764 Shau Mar 2021 A1
Non-Patent Literature Citations (48)
Entry
Brattain et al., Machine learning for medical ultrasound: status, methods, and future opportunities. Abdom Radiol (NY). Apr. 2018;43(4):786-799. doi: 10.1007/s00261-018-1517-0. PMID: 29492605; PMCID: PMC5886811.
F. Coly et al. “Ultrasound Orientation Sensor”, BS thesis, Worcester Polytechnic Institute. Apr. 25, 2019.
“A High Speed Eye Tracking System with Robust Pupil Center Estimation Algorithm”, Proceedings of the 29th Annual International Conference of the IEEE EMBS, Lyon, France, pp. 3331-3334, Aug. 2007.
“A Novel Method of Video-Based Pupil Tracking”, Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics, San Antonio, Tex., USA, pp. 1255-1262, Oct. 2009.
Athanasios Papoulis; A New Algorithm in Spectral Analysis and Band-Limited Extrapolation; IEEE Transactions on Circuits and Systems, Sep. 1975; vol. CAS-22, No. 9; pp. 735-742.
Barbara Zitova et al.; Image Registration Methods: a Survey; Department of Image Processing; Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic; Image and Vision Computing; pp. 977-1000.
B. K. Gunturk, “Super-resolution imaging”, in Computational Photography Methods and Applications, by R. Lukac, CRC Press, 2010 [Abstract Provided].
Cheng et al. Developing a Real-Time Identify-and-Locate System for the Blind. Workshop on Computer Vision Applications for the Visually Impaired, James Coughlan and Roberto Manduchi, Oct. 2008, Marseille, France.
Danny Keren et al.; Image Sequence Enhancement Using Sub-pixel Displacements; Department of computer science; The Hebrew University of Jerusalem; 1988 IEEE; pp. 742-746.
D. Li, D. Winfield and D. Parkhurst, “Starburst: A Hybrid algorithm for video based eye tracking combining feature-based and model-based approaches”, Iowa State University, Ames, Iowa.
Edward R. Dowski, Jr. et al.; Extended Depth of Field Through Wave-Front Coding; Apr. 10, 1995; Optical Society of America; vol. 34, No. 11; Applied Optics pp. 1859-1866.
Eran Gur and Zeev Zalevsky; Single-Image Digital Super-Resolution a Revised Gerchberg-Papoulis Algorithm; AENG International Journal of Computer Science; Nov. 17, 2007; pp. 1-5.
Extrema.m, http://www.mathworks.com/matlabcentral/fileexchange/12275-extrema-m-extrema2-m, Sep. 14, 2006.
Eyelink User Manual, SR Research Ltd., Copyright 2005-2008, 134 pages.
Eyelink Data Viewer User's Manual, SR Research Ltd., Copyright 2002-2008, 149 pages.
Fritz Gleyo, Microsoft May Make Life-Sized Cortana In Person For HoloLens, (Dec. 14, 2015).
Guestrin et al. “General Theory of Remote Gaze Estimation Using the Pupil Center and Corneal Reflections”, IEEE Trans. Biomedical Eng., vol. 53, No. 6, pp. 1124-1133, Jun. 2006.
Jessi Hempel, Project HoloLens: Our Exclusive Hands-On With Microsoft's Holographic Goggles, (Jan. 21, 2015).
John Bardsley et al.; Blind Iterative Restoration of Images With Spatially-Varying Blur, 9 pages.
J. Goodman, Introduction to Fourier Optics, rd edition, 160-165, McGraw-Hill, 1988.
Jordan Novet, Microsoft could build a life-sized Cortana for HoloLens, https://www.technologyrecord.com/Article/introducing-microsoft-hololens-development-edition-48296.
Kenneth Kubala et al.; Reducing Complexity in Computational Imaging Systems; CDM Optics, Inc.; Sep. 8, 2003; vol. 11, No. 18; Optics Express; pp. 2102-2108.
Lees et al., Ultrasound Imaging in Three and Four Dimensions, Seminars in Ultrasound, CT, and MRI, vol. 22, No. 1 (February), 2001: pp. 85-105 (Year: 2001).
Lindsay James, Introducing Microsoft HoloLens Development Edition, https://blogs.windows.com/devices/2015/04/30/build-2015-a-closer-look-at-the-microsoft-hololens-hardware/.
Lisa Gottesfeld Brown; A Survey of Image Registration Techniques; Department of Computer Science; Columbia University; Jan. 12, 1992; pp. 1-60.
Malcolm et al. Combining topdown processes to guide eye movements during real-world scene search. Journal of Vision, 10(2):4, p. 1-11 (2010).
Maria E. Angelopoulou et al.; FPGA-based Real-Time Super-Resolution on an Adaptive Image Sensor; Department of Electrical and Electronic Engineering, Imperial College London; 9 pages.
Maria E. Angelopoulou et al.; Robust Real-Time Super-Resolution on FPGA and an Application to Video Enhancement; Imperial College London; ACM Journal Name; Sep. 2008; vol. V, No. N; pp. 1-27.
Moreno et al. Classification of visual and linguistic tasks using eye-movement features; Journal of Vision (2014) 14(3):11, 1-18.
Oliver Bowen et al.; Real-Time Image Super Resolution Using an FPGA; Department of Electrical and Electronic Engineering; Imperial College London; 2008 IEEE; pp. 89-94.
Patrick Vandewalle et al.; A Frequency Domain Approach to Registration of Aliased Images with Application to Super-resolution; Ecole Polytechnique Federale de Lausanne, School of Computer and Communication Sciences; Department of Electrical Engineering and Computer Sciences, University of California; EURASIP Journal on Applied Signal Processing; vol. 2006, Article ID 71459, pp. 1-14.
P. C. Hansen, J. G. Nagy, D. P. O'Leary, Deblurring Images: matrices, Spectra and Filtering, SIAM (2006) [Abstract Provided].
P. Milanfar, Super-Resolution Imaging, CRC Press (2011) [Abstract Provided].
Pravin Bhat et al.; Using Photographs to Enhance Videos of a Static Scene; University of Washington; Microsoft Research; Adobe Systems; University of California; The Eurographics Association 2007; pp. 1-12.
R. W. Gerchberg, “Super-resolution through error energy reduction”, Optica Acta, vol. 21, No. 9, pp. 709-720,(1974).
S. Chaudhuri, Super-Resolution Imaging, Kluwer Academic Publishers (2001) [Abstract Provided].
Sang-Hyuck Lee et al.; Breaking Diffraction Limit of a Small F-Number Compact Camera Using Wavefront Coding; Center for Information Storage Device; Department of Mechanical Engineering, Yonsei University, Shinchondong, Sudaemungu, Seoul 120-749, Korea; Sep. 1, 2008; vol. 16, No. 18; pp. 13569-13578.
Suk Hwan Lim and Amnon Silverstein; Estimation and Removal of Motion Blur by Capturing Two Images With Different Exposures; HP Laboratories and NVidia Corp.; HPL-2008-170; Oct. 21, 2008; 8 pages.
S.-W Jung and S.-J. Ko, “Image deblurring using multi-exposed images” in Computational Photography 65 Methods and Applications, by R. Lukac, CRC Press, 2010. [Abstract Provided].
Todd Holmdahl, Build 2015: A closer look at the Microsoft HoloLens hardware, https://blogs.windows.com/devices/2015/04/30/build-2015-a-closer-look-at-the-microsoft-hololens-hardware/.
Tod R. Lauer; Deconvolution With a Spatially-Variant PSF; National Optical Astronomy Observatory; Tucson, AZ; arXiv:astro-ph/0208247v1; Aug. 12, 2002; 7 pages.
V. Barmore, Iterative-Interpolation Super-Resolution Image Reconstruction, Springer (2009) [Abstract Provided].
W. Thomas Cathey et al.; New Paradigm for Imaging Systems; Optical Society of America; Applied Optics; Oct. 10, 2002; vol. 41, No. 29; pp. 6080-6092.
William T. Freeman et al.; Example-Based Super-Resolution; Mitsubishi Electric Research Labs; March/April 2002; IEEE Computer Graphics and Applications; pp. 56-65.
Sawhney, H. et al.; “Hybrid Stereo Camera: An IBR Approach for Synthesis of Very High Resolution Stereoscopic Image Sequences”; ACM SIGGRAPH, pp. 451-460; 2001 (10 pages).
Zitnick, L. et al.; “Stereo for Image-Based Rendering Using Image Over-Segmentation”; International Journal of Computer Visions; 2006 (32 pages).
A Zandifar, R. Duraiswami, L.S. Davis, A video-based framework for the analysis of presentations/posters, 2003 (Year: 2003) 10 pages.
Z. Zalevsky, D. Mendlovic, Optical Superresolution, 2004 (Year: 2004) 261 pages.
Provisional Applications (1)
Number Date Country
62696346 Jul 2018 US
Continuations (1)
Number Date Country
Parent 16507910 Jul 2019 US
Child 17722347 US