Camera array for a mediated-reality system

Information

  • Patent Grant
  • Patent Number
    12,126,916
  • Date Filed
    Monday, August 30, 2021
  • Date Issued
    Tuesday, October 22, 2024
Abstract
A camera array for a mediated-reality system includes a plurality of hexagonal cells arranged in a honeycomb pattern in which a pair of inner cells include respective edges adjacent to each other and a pair of outer cells are separated from each other by the inner cells. A plurality of cameras are mounted within each of the plurality of hexagonal cells. The plurality of cameras include at least one camera of a first type and at least one camera of a second type. The camera of the first type may have a longer focal length than the camera of the second type.
Description
BACKGROUND
Technical Field

The disclosed embodiments relate generally to a camera array, and more specifically, to a camera array for generating a virtual perspective of a scene for a mediated-reality viewer.


Description of the Related Art

In a mediated reality system, an image processing system adds, subtracts, or modifies visual information representing an environment. For surgical applications, a mediated reality system may enable a surgeon to view a surgical site from a desired perspective together with contextual information that assists the surgeon in more efficiently and precisely performing surgical tasks.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram illustrating an example embodiment of an imaging system.



FIG. 2 is an example of a surgical environment employing the imaging system for mediated-reality assisted surgery.



FIG. 3 is a simplified cross-sectional view of an example embodiment of a camera array.



FIG. 4 is a detailed bottom view of an example embodiment of a camera array.



FIG. 5 is a top perspective view of an example embodiment of a camera array.





DETAILED DESCRIPTION

The figures and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed.


Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the disclosed system (or method) for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.


OVERVIEW

A camera array includes a plurality of hexagonal cells arranged in a honeycomb pattern in which a pair of inner cells include respective edges adjacent to each other and a pair of outer cells are separated from each other by the inner cells. A plurality of cameras is mounted within each of the plurality of hexagonal cells. The plurality of cameras includes at least one camera of a first type and at least one camera of a second type. For example, the camera of the first type may have a longer focal length than the camera of the second type. The plurality of cameras within each of the plurality of hexagonal cells are arranged in a triangular grid approximately equidistant from neighboring cameras. In an embodiment, at least one camera of the second type within each of the plurality of hexagonal cells is at a position further from or equidistant from a center point of the camera array relative to cameras of the first type.


Mediated-Reality System



FIG. 1 illustrates an example embodiment of a mediated-reality system 100. The mediated-reality system 100 comprises an image processing device 110, a camera array 120, a display device 140, and an input controller 150. In alternative embodiments, the mediated-reality system 100 may comprise additional or different components.


The camera array 120 comprises a plurality of cameras 122 (e.g., a camera 122-1, a camera 122-2, . . . , a camera 122-N) that each capture respective images of a scene 130. The cameras 122 may be physically arranged in a particular configuration as described in further detail below such that their physical locations and orientations relative to each other are fixed. For example, the cameras 122 may be structurally secured by a mounting structure to mount the cameras 122 at predefined fixed locations and orientations. The cameras 122 of the camera array 120 may be positioned such that neighboring cameras share overlapping views of the scene 130. The cameras 122 in the camera array 120 may furthermore be synchronized to capture images of the scene 130 substantially simultaneously (e.g., within a threshold temporal error). The camera array 120 may furthermore comprise one or more projectors 124 that project a structured light pattern onto the scene 130. The camera array 120 may furthermore comprise one or more depth sensors 126 that perform depth estimation of a surface of the scene 130.
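
By way of illustration only (not part of the original disclosure), the sketch below checks the synchronization constraint described above: all capture timestamps for one frame must fall within a threshold temporal error. The timestamp units and the 1 ms default threshold are assumptions.

```python
def captures_are_synchronized(timestamps_s, threshold_s=0.001):
    """Return True if all per-camera capture timestamps lie within the allowed temporal error.

    timestamps_s: capture times in seconds for one frame across the array (assumed units).
    threshold_s: maximum allowed spread; the 1 ms default is illustrative only.
    """
    return (max(timestamps_s) - min(timestamps_s)) <= threshold_s
```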


The image processing device 110 receives images captured by the camera array 120 and processes the images to synthesize an output image corresponding to a virtual camera perspective. Here, the output image corresponds to an approximation of an image of the scene 130 that would be captured by a camera placed at an arbitrary position and orientation corresponding to the virtual camera perspective. The image processing device 110 synthesizes the output image from a subset (e.g., two or more) of the cameras 122 in the camera array 120, but does not necessarily utilize images from all of the cameras 122. For example, for a given virtual camera perspective, the image processing device 110 may select a stereoscopic pair of images from two cameras 122 that are positioned and oriented to most closely match the virtual camera perspective.
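
As one illustrative, non-authoritative sketch of the camera-selection step described above, the function below scores each physical camera against the virtual camera perspective using its position and viewing direction and returns the two best matches. The scoring function and the orientation weight are assumptions, not the disclosed algorithm.

```python
import numpy as np

def select_stereo_pair(camera_positions, camera_directions,
                       virtual_position, virtual_direction,
                       orientation_weight=0.5):
    """Pick the two cameras whose poses most closely match the virtual camera perspective.

    camera_positions: (N, 3) camera centers; camera_directions: (N, 3) unit viewing directions.
    Returns indices of the two lowest-scoring (best-matching) cameras.
    """
    positions = np.asarray(camera_positions, dtype=float)
    directions = np.asarray(camera_directions, dtype=float)
    target_dir = np.asarray(virtual_direction, dtype=float)
    dist = np.linalg.norm(positions - np.asarray(virtual_position, dtype=float), axis=1)
    angle = np.arccos(np.clip(directions @ target_dir, -1.0, 1.0))
    score = dist + orientation_weight * angle  # illustrative weighting of position vs. orientation
    return np.argsort(score)[:2]
```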


The image processing device 110 may furthermore perform a depth estimation for each surface point of the scene 130. In an embodiment, the image processing device 110 detects the structured light projected onto the scene 130 by the projector 124 to estimate depth information of the scene. Alternatively, or in addition, the image processing device 110 includes dedicated depth sensors 126 that provide depth information to the image processing device 110. In yet other embodiments, the image processing device 110 may estimate depth only from multi-view image data without necessarily utilizing any projector 124 or depth sensors 126. The depth information may be combined with the images from the cameras 122 to synthesize the output image as a three-dimensional rendering of the scene as viewed from the virtual camera perspective.
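
To illustrate how depth information can be combined with a captured image to render the scene from the virtual perspective, the following minimal pinhole-camera sketch back-projects a depth map into 3D points and projects them into the virtual camera. The intrinsics and extrinsics conventions are assumptions; occlusion handling and blending across cameras are omitted.

```python
import numpy as np

def reproject_depth_to_virtual_view(depth, K_src, K_virt, R, t):
    """Map each source pixel into the virtual image using its estimated depth.

    depth: (H, W) per-pixel depth in the source camera frame.
    K_src, K_virt: 3x3 pinhole intrinsics of the source and virtual cameras.
    R, t: rotation (3x3) and translation (3,) from source to virtual camera coordinates.
    Returns (H, W, 2) pixel coordinates in the virtual image.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW homogeneous pixels
    rays = np.linalg.inv(K_src) @ pixels            # unit-depth rays in the source camera frame
    points_src = rays * depth.reshape(1, -1)        # 3D surface points in the source frame
    points_virt = R @ points_src + t.reshape(3, 1)  # same points in the virtual camera frame
    proj = K_virt @ points_virt
    return (proj[:2] / proj[2:3]).T.reshape(h, w, 2)
```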


In an embodiment, functions attributed to the image processing device 110 may be practically implemented by two or more physical devices. For example, in an embodiment, a synchronization controller controls images displayed by the projector 124 and sends synchronization signals to the cameras 122 to ensure synchronization between the cameras 122 and the projector 124 to enable fast, multi-frame, multi-camera structured light scans. Additionally, this synchronization controller may operate as a parameter server that stores hardware specific configurations such as parameters of the structured light scan, camera settings, and camera calibration data specific to the camera configuration of the camera array 120. The synchronization controller may be implemented in a separate physical device from a display controller that controls the display device 140, or the devices may be integrated together.
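
As a loose illustration of the parameter-server role described above, the record below groups structured-light, camera-setting, and calibration data in one place. The field names and defaults are assumptions; they are not the disclosed data format.

```python
from dataclasses import dataclass, field

@dataclass
class CameraArrayParameters:
    """Hypothetical parameter-server record for the camera array configuration."""
    structured_light_pattern: str = "gray_code"      # assumed pattern identifier
    exposure_ms: float = 8.0                         # assumed camera setting
    gain_db: float = 0.0                             # assumed camera setting
    calibration: dict = field(default_factory=dict)  # per-camera intrinsics/extrinsics keyed by camera id
```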


The virtual camera perspective may be controlled by an input controller 150 that provides a control input corresponding to the location and orientation of the virtual camera perspective. The output image corresponding to the virtual camera perspective is outputted to the display device 140 and displayed by the display device 140. The image processing device 110 may beneficially process received inputs from the input controller 150 and process the captured images from the camera array 120 to generate output images corresponding to the virtual perspective in substantially real-time as perceived by a viewer of the display device 140 (e.g., at least as fast as the frame rate of the camera array 120).
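
A trivial sketch of the real-time constraint stated above; treating "substantially real-time" as one camera frame period is an assumption for illustration.

```python
def meets_realtime_budget(processing_time_s: float, camera_fps: float) -> bool:
    """Check that per-frame processing fits within one camera frame period."""
    return processing_time_s <= 1.0 / camera_fps
```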


The image processing device 110 may comprise a processor and a non-transitory computer-readable storage medium that stores instructions that, when executed by the processor, carry out the functions attributed to the image processing device 110 as described herein.


The display device 140 may comprise, for example, a head-mounted display device or other display device for displaying the output images received from the image processing device 110. In an embodiment, the input controller 150 and the display device 140 are integrated into a head-mounted display device and the input controller 150 comprises a motion sensor that detects position and orientation of the head-mounted display device. The virtual perspective can then be derived to correspond to the position and orientation of the head-mounted display device such that the virtual perspective corresponds to a perspective that would be seen by a viewer wearing the head-mounted display device. Thus, in this embodiment, the head-mounted display device can provide a real-time rendering of the scene as it would be seen by an observer without the head-mounted display. Alternatively, the input controller 150 may comprise a user-controlled control device (e.g., a mouse, pointing device, handheld controller, gesture recognition controller, etc.) that enables a viewer to manually control the virtual perspective displayed by the display device.
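
To illustrate deriving the virtual perspective from a head-mounted display pose as described above, the sketch below converts a headset position and orientation into world-to-camera extrinsics for the virtual camera. The matrix convention is an assumption.

```python
import numpy as np

def virtual_perspective_from_hmd(hmd_position, hmd_rotation):
    """Build a 4x4 world-to-camera matrix for the virtual camera from the headset pose.

    hmd_position: (3,) headset position in the scene frame.
    hmd_rotation: (3, 3) rotation giving the headset orientation in the scene frame.
    """
    R = np.asarray(hmd_rotation, dtype=float)
    t = np.asarray(hmd_position, dtype=float)
    extrinsics = np.eye(4)
    extrinsics[:3, :3] = R.T          # world-to-camera rotation
    extrinsics[:3, 3] = -R.T @ t      # world-to-camera translation
    return extrinsics
```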



FIG. 2 illustrates an example embodiment of the mediated-reality system 100 for a surgical application. Here, an embodiment of the camera array 120 is positioned over the scene 130 (in this case, a surgical site) and can be positioned via a swing arm 202 attached to a workstation 204. The swing arm 202 may be manually moved or may be robotically controlled in response to the input controller 150. The display device 140 in this example is embodied as a virtual reality headset. The workstation 204 may include a computer to control various functions of the camera array 120 and the display device 140, and may furthermore include a secondary display that can display a user interface for performing various configuration functions, or may mirror the display on the display device 140. The image processing device 110 and the input controller 150 may each be integrated in the workstation 204, the display device 140, or a combination thereof.



FIG. 3 illustrates a bottom plan view of an example embodiment of a camera array 120. The camera array 120 includes a plurality of cells 202 (e.g., four cells) each comprising one or more cameras 122. In an embodiment, the cells 202 each have a hexagonal cross-section and are positioned in a honeycomb pattern. Particularly, two inner cells 202-A, 202-B are each positioned adjacent to other cells 202 along three adjacent edges, while two outer cells 202-C, 202-D are each positioned adjacent to other cells 202 along only two adjacent edges. The inner cells 202-A, 202-B are positioned to have respective edges adjacent to each other and may share a side wall, while the outer cells 202-C, 202-D are separated from each other (are not in direct contact). Here, the outer cells 202-C, 202-D may each have a respective pair of edges that are adjacent to respective edges of the inner cells 202-A, 202-B. Another feature of the illustrated cell arrangement is that the outer cells 202-C, 202-D each include four edges that form part of the outer perimeter of the camera array 120 and the inner cells 202-A, 202-B each include three edges that form part of the outer perimeter of the camera array 120.
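
For illustration only, the sketch below lays out the four-cell honeycomb described above using axial hex-grid coordinates: the two inner cells share an edge, and each outer cell touches both inner cells but not the other outer cell. The specific coordinate choice and the flat-to-flat cell width parameter are assumptions.

```python
import numpy as np

# Axial hex-grid coordinates for the four cells (inner 202-A/202-B share an edge;
# outer 202-C/202-D each touch both inner cells but not each other). Illustrative only.
CELL_AXIAL_COORDS = {
    "202-A": (0, 0),   # inner
    "202-B": (1, 0),   # inner, adjacent to 202-A
    "202-C": (0, 1),   # outer, adjacent to 202-A and 202-B
    "202-D": (1, -1),  # outer, adjacent to 202-A and 202-B, not to 202-C
}

def cell_center(axial, cell_width=1.0):
    """Center of a pointy-top hexagonal cell with the given flat-to-flat width."""
    q, r = axial
    x = cell_width * (q + r / 2.0)
    y = cell_width * (np.sqrt(3.0) / 2.0) * r
    return np.array([x, y])
```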


The hexagonal shape of the cells 202 provides several benefits. First, the hexagonal shape enables the array 120 to be expanded to include additional cells 202 in a modular fashion. For example, while the example camera array 120 includes four cells 202, other embodiments of the camera array 120 could include, for example, eight or more cells 202 by positioning additional cells 202 adjacent to the outer edges of the cells 202 in a honeycomb pattern. By utilizing a repeatable pattern, camera arrays 120 of arbitrary size and number of cameras 122 can be manufactured using the same cells 202. Furthermore, the repeatable pattern can ensure that spacing of the cameras 122 is predictable, which enables the image processing device 110 to process images from different sizes of camera arrays 120 with different numbers of cameras 122 without significant modification to the image processing algorithms.


In an embodiment, the walls of the cells 202 are constructed of a rigid material such as metal or a hard plastic. The cell structure provides strong structural support for holding the cameras 122 in their respective positions without significant movement due to flexing or vibrations of the array structure.


In an embodiment, each cell 202 comprises a set of three cameras 122 arranged in a triangle pattern with all cameras 122 oriented to focus on a single point. In an embodiment, each camera 122 is approximately equidistant from each of its neighboring cameras 122 within the cell 202 and approximately equidistant from neighboring cameras 122 in adjacent cells 202. This camera spacing results in a triangular grid, where each set of three neighboring cameras 122 is arranged in a triangle of approximately equal dimensions. This spacing simplifies the processing performed by the image processing device 110 when synthesizing the output image corresponding to the virtual camera perspective. The triangular grid furthermore allows for a dense packing of cameras 122 within a limited area. Furthermore, the triangular grid enables the target volume to be captured with a uniform sampling rate to give smooth transitions between camera pixel weights and low variance in generated image quality based on the location of the virtual perspective.
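
The placement rule described above can be sketched as three cameras at the vertices of an equilateral triangle centered in a cell, with the side length equal to the desired camera spacing. The triangle orientation used here is arbitrary (an assumption); in practice it would be chosen so that cameras in neighboring cells are also approximately equidistant, and each camera would additionally be aimed at the common focal point.

```python
import numpy as np

def camera_positions_in_cell(cell_center_xy, spacing):
    """Three camera positions forming an equilateral triangle of side `spacing` about a cell center."""
    radius = spacing / np.sqrt(3.0)                 # circumradius of an equilateral triangle
    angles = np.deg2rad([90.0, 210.0, 330.0])       # arbitrary orientation, vertices 120 degrees apart
    offsets = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.asarray(cell_center_xy, dtype=float) + offsets
```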


In an embodiment, each cell 202 comprises cameras 122 of at least two different types. For example, in an embodiment, each cell 202 includes two cameras 122-A of a first type (e.g., type A) and one camera 122-B of a second type (e.g., type B). In an embodiment, the type A cameras 122-A and the type B cameras 122-B have different focal lengths. For example, the type B cameras 122-B may have a shorter focal length than the type A cameras 122-A. In a particular example, the type A cameras 122-A have 50 mm lenses while the type B cameras 122-B have 35 mm lenses. In an embodiment, the type B cameras 122-B are generally positioned in their respective cells 202 in the camera position furthest from a center point of the array 120.
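
A minimal sketch of the type-assignment rule above, assuming a three-camera cell: the in-cell position furthest from the array center receives the shorter-focal-length (type B) camera, with ties broken arbitrarily, as discussed below. The function name and interface are illustrative, not part of the disclosure.

```python
import numpy as np

def assign_camera_types(camera_positions, array_center):
    """Label the in-cell camera position furthest from the array center as type B, the rest type A."""
    positions = np.asarray(camera_positions, dtype=float)
    distances = np.linalg.norm(positions - np.asarray(array_center, dtype=float), axis=1)
    types = ["A"] * len(positions)
    types[int(np.argmax(distances))] = "B"   # ties broken arbitrarily by argmax
    return types
```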


The type B cameras 122-B have a larger field-of-view and provide more overlap of the scene 130 than the type A cameras 122-A. The images captured from these cameras 122-B are useful to enable geometry reconstruction and enlargement of the viewable volume. The type A cameras 122-A conversely have a smaller field-of-view and provide more angular resolution to enable capture of smaller details than the type B cameras 122-B. In an embodiment, the type A cameras occupy positions in the center of the camera array 120 so that when points of interest in the scene 130 (e.g., a surgical target) are placed directly below the camera array 120, the captured images will benefit from the increased detail captured by the type A cameras 122-A relative to the type B cameras 122-B. Furthermore, by positioning the type B cameras 122-B along the exterior of the array 120, a wide baseline between the type B cameras 122-B is achieved, which provides the benefit of enabling accurate stereoscopic geometry reconstruction. For example, in the cells 202-A, 202-C, 202-D, the type B camera 122-B is at the camera position furthest from the center of the array 120. In the case of a cell 202-B having two cameras equidistant from the center point, one of the camera positions may be arbitrarily selected for the type B camera 122-B. In an alternative embodiment, the type B cameras 122-B may occupy the other camera position equidistant from the center of the array 120.
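
The field-of-view difference between the two lens types follows from the pinhole relation FOV = 2*atan(sensor width / (2*focal length)). The sketch below evaluates it; the 36 mm sensor width is an assumed value for illustration, not a disclosed specification.

```python
import numpy as np

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view of a pinhole camera, in degrees."""
    return float(np.degrees(2.0 * np.arctan(sensor_width_mm / (2.0 * focal_length_mm))))

# Under these assumptions, the 35 mm (type B) lens sees a wider field than the 50 mm (type A) lens:
# horizontal_fov_deg(35.0) ~= 54.4 degrees, horizontal_fov_deg(50.0) ~= 39.6 degrees.
```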


In an embodiment, the camera array 120 further includes a projector 124 that can project structured light onto the scene 130. The projector 124 may be positioned near a center line of the camera array 120 in order to provide desired coverage of the scene 130. The projector 124 may provide illumination and project textures and other patterns (e.g., to simulate a laser pointer or apply false or enhanced coloring to certain regions of the scene 130). In an embodiment, the camera array 120 may also include depth sensors 126 adjacent to the projector 124 for use in depth estimation and object tracking.



FIG. 4 illustrates a more detailed bottom plan view of an embodiment of a camera array 120. In this view, the orientation of the cameras can be seen as pointing towards a centrally located focal point. Furthermore, in this embodiment, the type A cameras 122-A are 50 mm focal length cameras and the type B cameras 122-B are 35 mm focal length cameras. As further illustrated in this view, an embodiment of the camera array 120 may include one or more cooling fans 402 to provide cooling to the camera array 120. For example, in one embodiment, a pair of fans 402 may be positioned in the outer cells 202-C, 202-D of the camera array 120. In an alternative embodiment, the camera array 120 may incorporate off-board cooling via tubing that carries cool air to the camera array 120 and/or warm air away from the camera array 120. This embodiment may be desirable to comply with restrictions on airflow around a patient in an operating room setting.



FIG. 5 illustrates a perspective view of the camera array 120. In this view, a top cover 504 is illustrated to cover the hexagonal cells 202 and provide structural support to the camera array 120. Additionally, the top cover 504 may include a mounting plate 506 for coupling to a swing arm 202 as illustrated in FIG. 2. The top cover 504 may further include mounting surfaces on the outer cells 202-C, 202-D for mounting the fans 402.


Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for the disclosed embodiments as disclosed from the principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and system disclosed herein without departing from the scope of the described embodiments.

Claims
  • 1. A mediated-reality system, comprising: a camera array to capture a plurality of images of a scene, the camera array including: a plurality of cameras mounted within each of a plurality of cells and arranged in a same pattern within each of the plurality of cells, the plurality of cameras including at least one camera of a first focal length and at least one camera of a second focal length different than the first focal length, wherein the first focal length is longer than the second focal length, and at least one camera of the second focal length within each of the plurality of cells is at a position further from or equidistant from a center point of the camera array relative to cameras of the first focal length; an image processing device to synthesize a virtual image corresponding to a virtual perspective of the scene based on at least two of the plurality of images; and a display device to display the virtual image.
  • 2. The mediated-reality system of claim 1, wherein the first focal length is 50 mm and the second focal length is 35 mm.
  • 3. The mediated-reality system of claim 1, further comprising: a projector configured to project structured light onto a portion of the scene that is within a field of view of the camera array.
  • 4. The mediated-reality system of claim 1, further comprising: a depth sensor configured to sense a distance to a surface within the scene, the surface within a field of view of the camera array.
  • 5. The mediated-reality system of claim 1, wherein the scene is a surgical site.
  • 6. The mediated-reality system of claim 1, wherein the display device is part of a head-mounted display (HMD) that is configured to present the virtual image based in part on a position and orientation of the HMD.
  • 7. The mediated-reality system of claim 1, further comprising: a swing arm configured to position the camera array to capture the plurality of images of the scene.
  • 8. A method comprising: capturing, via a camera array, a plurality of images of a scene, the camera array including a plurality of cameras mounted within each of a plurality of cells and arranged in a same pattern within each of the plurality of cells, the plurality of cameras including at least one camera of a first focal length and at least one camera of a second focal length different than the first focal length, wherein the first focal length is longer than the second focal length, and at least one camera of the second focal length within each of the plurality of cells is at a position further from or equidistant from a center point of the camera array relative to cameras of the first focal length; synthesizing a virtual image corresponding to a virtual perspective based on at least two of the plurality of images of the scene; and displaying the virtual image.
  • 9. The method of claim 8, wherein the first focal length is 50 mm and the second focal length is 35 mm.
  • 10. The method of claim 8, further comprising: projecting, via a projector, structured light onto a portion of the scene that is within a field of view of the camera array.
  • 11. The method of claim 8, further comprising: sensing, via a depth sensor, a distance to a surface within the scene, the surface within a field of view of the camera array.
  • 12. The method of claim 8, wherein the scene is a surgical site.
  • 13. The method of claim 8, wherein displaying the virtual image comprises: presenting, via a head-mounted display (HMD), the virtual image based in part on a position and orientation of the HMD.
  • 14. The method of claim 8, further comprising: receiving an instruction to position a swing arm coupled to the camera array; and positioning, via the swing arm, the camera array to capture the plurality of images of the scene.
  • 15. A non-transitory computer readable medium configured to store program code instructions, when executed by a processor of a mediated-reality system, cause the mediated-reality system to perform steps comprising: capturing, via a camera array, a plurality of images of a scene, the camera array including a plurality of cameras mounted within each of a plurality of cells and arranged in a same pattern within each of the plurality of cells, the plurality of cameras including at least one camera of a first focal length and at least one camera of a second focal length different than the first focal length, wherein the first focal length is longer than the second focal length, and at least one camera of the second focal length within each of the plurality of cells is at a position further from or equidistant from a center point of the camera array relative to cameras of the first focal length; synthesizing a virtual image corresponding to a virtual perspective based on at least two of the plurality of images of the scene; and displaying the virtual image.
  • 16. The computer readable medium of claim 15, wherein the first focal length is 50 mm and the second focal length is 35 mm.
  • 17. The computer readable medium of claim 15, wherein the program code instructions, when executed by the processor of a mediated-reality system, further cause the mediated-reality system to perform steps comprising: projecting, via a projector, structured light onto a portion of the scene that is within a field of view of the camera array.
  • 18. The computer readable medium of claim 15, wherein the program code instructions, when executed by the processor of a mediated-reality system, further cause the mediated-reality system to perform steps comprising: sensing, via a depth sensor, a distance to a surface within the scene, the surface within a field of view of the camera array.
  • 19. The computer readable medium of claim 15, wherein displaying the virtual image comprises: presenting, via a head-mounted display (HMD), the virtual image based in part on a position and orientation of the HMD.
  • 20. The computer readable medium of claim 15, wherein the program code instructions, when executed by the processor of a mediated-reality system, further cause the mediated-reality system to perform steps comprising: receiving an instruction to position a swing arm coupled to the camera array; and positioning, via the swing arm, the camera array to capture the plurality of images of the scene.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 16/808,194, filed Mar. 3, 2020, which is a continuation of U.S. application Ser. No. 16/582,855, filed Sep. 25, 2019, now U.S. Pat. No. 10,623,660, which application claims the benefit of U.S. Provisional Application No. 62/737,791 filed on Sep. 27, 2018, all of which are incorporated by reference herein.

US Referenced Citations (213)
Number Name Date Kind
4383170 Takagi et al. May 1983 A
4694185 Weiss Sep 1987 A
5334991 Wells et al. Aug 1994 A
5757423 Tanaka et al. May 1998 A
5876325 Mizuno et al. Mar 1999 A
5905525 Ishibashi et al. May 1999 A
5999840 Grimson et al. Dec 1999 A
6483535 Tamburrino et al. Nov 2002 B1
6491702 Heilbrun et al. Dec 2002 B2
6577342 Wester Jun 2003 B1
6675040 Cosman Jan 2004 B1
6985765 Morita et al. Jan 2006 B2
8010177 Csavoy et al. Aug 2011 B2
8041089 Drumm et al. Oct 2011 B2
8179604 Prada et al. May 2012 B1
8295909 Goldbach Oct 2012 B2
8384912 Charny et al. Feb 2013 B2
8548563 Simon et al. Oct 2013 B2
8657809 Schoepp Feb 2014 B2
8885177 Ben-yishai et al. Nov 2014 B2
8914472 Lee et al. Dec 2014 B1
8933935 Yang et al. Jan 2015 B2
9119670 Yang et al. Sep 2015 B2
9220570 Kim et al. Dec 2015 B2
9237338 Maguire Jan 2016 B1
9323325 Perez et al. Apr 2016 B2
9462164 Venkataraman et al. Oct 2016 B2
9497380 Jannard et al. Nov 2016 B1
9503709 Shi et al. Nov 2016 B2
9513113 Yang et al. Dec 2016 B2
9618621 Barak et al. Apr 2017 B2
9916691 Takano et al. Mar 2018 B2
9918066 Schneider et al. Mar 2018 B2
9967475 Schneider et al. May 2018 B2
10074177 Piron et al. Sep 2018 B2
10089737 Krieger et al. Oct 2018 B2
10165981 Schoepp Jan 2019 B2
10166078 Sela et al. Jan 2019 B2
10166079 Mclachlin et al. Jan 2019 B2
10194131 Casas Jan 2019 B2
10244991 Shademan et al. Apr 2019 B2
10345582 Schneider et al. Jul 2019 B2
10353219 Hannaford et al. Jul 2019 B1
10390887 Bischoff et al. Aug 2019 B2
10398514 Ryan et al. Sep 2019 B2
10424118 Hannemann et al. Sep 2019 B2
10426345 Shekhar et al. Oct 2019 B2
10426554 Siewerdsen et al. Oct 2019 B2
10433916 Schneider et al. Oct 2019 B2
10455218 Venkataraman et al. Oct 2019 B2
10546423 Jones et al. Jan 2020 B2
10575906 Wu Mar 2020 B2
10650573 Youngquist et al. May 2020 B2
10653495 Gregerson et al. May 2020 B2
10657664 Yu May 2020 B2
10664903 Haitani et al. May 2020 B1
10667868 Malackowski Jun 2020 B2
10682188 Leung et al. Jun 2020 B2
10792110 Leung et al. Oct 2020 B2
10799315 Leung et al. Oct 2020 B2
10799316 Sela et al. Oct 2020 B2
10810799 Tepper et al. Oct 2020 B2
10828114 Abhari et al. Nov 2020 B2
10832408 Srimohanarajah et al. Nov 2020 B2
10918444 Stopp et al. Feb 2021 B2
10925465 Tully et al. Feb 2021 B2
10949986 Colmenares et al. Mar 2021 B1
10973581 Mariampillai et al. Apr 2021 B2
11179218 Calef et al. Nov 2021 B2
11295460 Aghdasi et al. Apr 2022 B1
11354810 Colmenares et al. Jun 2022 B2
11612307 Smith et al. Mar 2023 B2
20010048732 Wilson et al. Dec 2001 A1
20020065461 Cosman May 2002 A1
20020075201 Sauer et al. Jun 2002 A1
20020077533 Bieger et al. Jun 2002 A1
20020082498 Wendt et al. Jun 2002 A1
20020113756 Tuceryan et al. Aug 2002 A1
20030209096 Pandey et al. Nov 2003 A1
20030210812 Khamene et al. Nov 2003 A1
20030227470 Genc et al. Dec 2003 A1
20030227542 Zhang et al. Dec 2003 A1
20040070823 Radna et al. Apr 2004 A1
20040169673 Crampe et al. Sep 2004 A1
20050046700 Bracke Mar 2005 A1
20050070789 Aferzon Mar 2005 A1
20050090730 Cortinovis et al. Apr 2005 A1
20050203380 Sauer et al. Sep 2005 A1
20050206583 Lemelson et al. Sep 2005 A1
20060203959 Spartiotis et al. Sep 2006 A1
20070046776 Yamaguchi et al. Mar 2007 A1
20070121423 Rioux May 2007 A1
20070236514 Agusanto et al. Oct 2007 A1
20080004533 Jansen et al. Jan 2008 A1
20090033588 Kajita et al. Feb 2009 A1
20090085833 Otsuki Apr 2009 A1
20090303321 Olson et al. Dec 2009 A1
20100045783 State et al. Feb 2010 A1
20100076306 Daigneault et al. Mar 2010 A1
20100099981 Fishel Apr 2010 A1
20100295924 Miyatani et al. Nov 2010 A1
20100329358 Zhang et al. Dec 2010 A1
20110015518 Schmidt et al. Jan 2011 A1
20110098553 Robbins et al. Apr 2011 A1
20110115886 Nguyen et al. May 2011 A1
20120050562 Perwass et al. Mar 2012 A1
20120068913 Bar-zeev et al. Mar 2012 A1
20120218301 Miller Aug 2012 A1
20130002827 Lee et al. Jan 2013 A1
20130050432 Perez et al. Feb 2013 A1
20130058591 Nishiyama et al. Mar 2013 A1
20130076863 Rappel et al. Mar 2013 A1
20130084970 Geisner et al. Apr 2013 A1
20130088489 Schmeitz et al. Apr 2013 A1
20130135180 Mcculloch et al. May 2013 A1
20130135515 Abolfadl et al. May 2013 A1
20130141419 Mount et al. Jun 2013 A1
20130222369 Huston et al. Aug 2013 A1
20130265485 Kang et al. Oct 2013 A1
20130274596 Azizian et al. Oct 2013 A1
20130307855 Lamb et al. Nov 2013 A1
20130335600 Gustavsson Dec 2013 A1
20140005485 Tesar et al. Jan 2014 A1
20140031668 Mobasser et al. Jan 2014 A1
20140092281 Nisenzon et al. Apr 2014 A1
20140192187 Atwell et al. Jul 2014 A1
20140232831 Shi Aug 2014 A1
20140375772 Gabara Dec 2014 A1
20150055929 Van Hoff et al. Feb 2015 A1
20150173846 Schneider et al. Jun 2015 A1
20150201176 Graziosi et al. Jul 2015 A1
20150244903 Adams Aug 2015 A1
20150348580 van Hoff Dec 2015 A1
20160073080 Wagner et al. Mar 2016 A1
20160080734 Aguirre-valencia Mar 2016 A1
20160091705 Ben Ezra et al. Mar 2016 A1
20160191815 Annau Jun 2016 A1
20160191887 Casas Jun 2016 A1
20160217760 Chu et al. Jul 2016 A1
20160225192 Jones et al. Aug 2016 A1
20160253809 Cole et al. Sep 2016 A1
20160307372 Pitts et al. Oct 2016 A1
20160317035 Hendriks et al. Nov 2016 A1
20160352982 Weaver et al. Dec 2016 A1
20170007334 Crawford et al. Jan 2017 A1
20170068081 Hirayama Mar 2017 A1
20170085855 Roberts et al. Mar 2017 A1
20170099479 Browd et al. Apr 2017 A1
20170109931 Knorr et al. Apr 2017 A1
20170167702 Mariampillai et al. Jun 2017 A1
20170186183 Armstrong et al. Jun 2017 A1
20170188011 Panescu et al. Jun 2017 A1
20170202626 Kula et al. Jul 2017 A1
20170237971 Pitts Aug 2017 A1
20170296293 Mak et al. Oct 2017 A1
20170318235 Schneider et al. Nov 2017 A1
20170359565 Ito Dec 2017 A1
20180012413 Jones et al. Jan 2018 A1
20180018827 Stafford et al. Jan 2018 A1
20180070009 Baek et al. Mar 2018 A1
20180078316 Schaewe et al. Mar 2018 A1
20180082482 Motta et al. Mar 2018 A1
20180091796 Nelson et al. Mar 2018 A1
20180097867 Pang et al. Apr 2018 A1
20180239948 Rutschman et al. Aug 2018 A1
20180263706 Averbuch Sep 2018 A1
20180263707 Sela et al. Sep 2018 A1
20180263710 Sakaguchi et al. Sep 2018 A1
20180293744 Yu Oct 2018 A1
20180302572 Barnes Oct 2018 A1
20190038362 Nash et al. Feb 2019 A1
20190058870 Rowell et al. Feb 2019 A1
20190080519 Osman Mar 2019 A1
20190094545 Lo et al. Mar 2019 A1
20190158799 Gao et al. May 2019 A1
20190158813 Rowell et al. May 2019 A1
20190183584 Schneider et al. Jun 2019 A1
20190209080 Gullotti et al. Jul 2019 A1
20190235210 Nakai et al. Aug 2019 A1
20190260930 Van Hoff et al. Aug 2019 A1
20190282307 Azizian et al. Sep 2019 A1
20190289284 Smith Sep 2019 A1
20190290366 Pettersson et al. Sep 2019 A1
20190328465 Li et al. Oct 2019 A1
20190336222 Schneider et al. Nov 2019 A1
20190350658 Yang et al. Nov 2019 A1
20200005521 Youngquist et al. Jan 2020 A1
20200059640 Browd et al. Feb 2020 A1
20200084430 Kalarn et al. Mar 2020 A1
20200105065 Youngquist et al. Apr 2020 A1
20200154049 Steuart May 2020 A1
20200170718 Peine Jun 2020 A1
20200197100 Leung et al. Jun 2020 A1
20200197102 Shekhar et al. Jun 2020 A1
20200242755 Schneider et al. Jul 2020 A1
20200296354 Bickerstaff et al. Sep 2020 A1
20200297427 Cameron et al. Sep 2020 A1
20200342673 Lohr et al. Oct 2020 A1
20200352651 Junio et al. Nov 2020 A1
20200405433 Sela et al. Dec 2020 A1
20210037232 Lin et al. Feb 2021 A1
20210038340 Itkowitz et al. Feb 2021 A1
20210045618 Stricko et al. Feb 2021 A1
20210045813 Wickham et al. Feb 2021 A1
20210077195 Saeidi et al. Mar 2021 A1
20210145517 Pierrepont et al. May 2021 A1
20210186355 Ben-yishai et al. Jun 2021 A1
20210192763 Liu et al. Jun 2021 A1
20210196385 Shelton et al. Jul 2021 A1
20210382559 Segev et al. Dec 2021 A1
20220012954 Buharin Jan 2022 A1
20220020160 Buharin Jan 2022 A1
20220174261 Hornstein et al. Jun 2022 A1
Foreign Referenced Citations (41)
Number Date Country
1672626 Sep 2005 CN
101742347 Jun 2010 CN
104918572 Sep 2015 CN
204854653 Dec 2015 CN
1027627 Aug 2000 EP
1504713 Jul 2008 EP
2139419 Jan 2010 EP
2372999 Oct 2011 EP
3077956 Apr 2017 EP
1924197 Oct 2017 EP
3197382 Jun 2018 EP
2852326 Dec 2018 EP
3102141 Aug 2019 EP
3076892 Oct 2019 EP
2903551 Nov 2021 EP
3824621 Apr 2022 EP
262619 Apr 2020 IL
2007528631 Oct 2007 JP
2011248723 Dec 2011 JP
2015524202 Aug 2015 JP
2001005161 Jan 2001 WO
2003002011 Jan 2003 WO
2005081547 Sep 2005 WO
2007115825 Oct 2007 WO
2008130354 Oct 2008 WO
2008130355 Oct 2008 WO
2010067267 Jun 2010 WO
2013082387 Jun 2013 WO
2013180748 Dec 2013 WO
2014037953 Mar 2014 WO
2015084462 Jun 2015 WO
2015151447 Oct 2015 WO
2015179446 Nov 2015 WO
2016044934 Mar 2016 WO
2017042171 Mar 2017 WO
2018097831 May 2018 WO
2020018931 Jan 2020 WO
2020069403 Apr 2020 WO
2020163316 Aug 2020 WO
2021003401 Jan 2021 WO
2021231337 Nov 2021 WO
Non-Patent Literature Citations (28)
Entry
US 9,492,073 B2, 11/2016, Tesar et al. (withdrawn)
European Patent Office, Extended European Search Report and Opinion, EP Patent Application No. 19864255.5, Jun. 14, 2022, eight pages.
PCT International Search Report and Written Opinion, PCT Application No. PCT/US19/53300, Dec. 19, 2019, 15 pages.
United States Office Action, U.S. Appl. No. 16/808,194, May 13, 2021, eight pages.
OpenCV 4.1.1, Open Source Computer Vision, Jul. 26, 2019, http://opencv.org/ [retrieved Nov. 13, 2019] 2 pages.
Point Closest to a Set Four of Lines in 3D, Posting in Mathematics Stack Exchange, May 2, 2011, https://math.stackexchange.com/questions/36398/point-closest-t-a-set-four-of-lines-in-3d/55286#55286 [retrieved Aug. 15, 2019], 3 pages.
Road to VR, <http://www.roadlovr.com/wp-content/uploads/2016/01/htc-vive-pre-system.jpg> [retrieved Nov. 13, 2019].
Eade, Ethan, "Lie Groups for 2D and 3D Transformations," 2013, updated May 20, 2017, www.ethaneade.com [retrieved Nov. 13, 2019] 25 pages.
Extended European Search Report mailed Dec. 8, 2017 in European Patent Application No. 15795790.3, 10 pages.
Extended European Search Report mailed May 29, 2020 in European Patent Application No. 16922208.0, 11 pages.
Geng, Jason, “Structured-light 3D surface imaging: a tutorial,” Advances in Optics and Photonics 3:125-160, Jun. 2011.
Gortler et al. “The Lumigraph,” Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM 1996), pp. 43-54.
Herakleous et al. "3DUnderworld—SLC: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition," arXiv preprint arXiv: 1406.6595v1 (2014), Jun. 26, 2014, 28 pages.
International Search Report and Written Opinion mailed Aug. 18, 2015 in corresponding International Application No. PCT/US2015/031637, 11 pages.
International Search Report and Written Opinion mailed Nov. 18, 2021 in corresponding International Application No. PCT/US2021/031653, 2 pages.
International Search Report and Written Opinion received in Application No. PCT/US21/31653, dated Jun. 30, 2021, 17 pages.
Kang et al. “Stereoscopic augmented reality for laparoscopic surgery,” Surgical Endoscopy, 2014 28(7):2227-2235, 2014.
Levoy et al. “Light Field Rendering,” Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (ACM 1996), pp. 31-42.
Levoy et al. "Light Field Microscopy," ACM Transactions on Graphics 25(3), Proceedings of Siggraph 2006.
Luke et al. “Near Real-Time Estimation of Super-Resolved Depth and All-in-Focus Images from a Plenoptic Camera Using Graphics Processing Units,” International Journal of Digital Multimedia Broadcasting, 2010, 1-12, Jan. 2010.
Mezzana et al. “Augmented Reality in Ocuplastic Surgery: First iPhone Application,” Plastic and Reconstructive Surgery, Mar. 2011, pp. 57e-58e.
Multiscale gigapixel photography, D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, S. D. Feller, Nature 486, 386-389 (Jun. 21, 2012) doi:10.1038/nature11150.
Ng et al. “Light Field Photography with a Hand-held Plenoptic Camera,” Stanford Tech Report CTSR 2005.
Suenaga et al. “Real-time in situ three-dimensional integral videography and surgical navigation using augmented reality: a pilot study,” International Journal of Oral Science, 2013, 5:98-102.
Tremblay et al. “Ultrathin cameras using annular folded optics,” Applied Optics, Feb. 1, 2007, 46(4):463-471.
U.S. Appl. No. 16/457,780, titled “Synthesizing an Image From a Virtual Perspective Using Pixels From a Physical Imager Array Weighted Based on Depth Error Sensitivity,” filed Jun. 28, 2019.
U.S. Appl. No. 17/140,885, titled “Methods and Systems for Registering Preoperative Image Data to Intraoperative Image Data of a Scene, Such as a Surgical Scene,” filed Jan. 4, 2021.
User1551, “Point closest to a set four of lines in 3D,” posting in Mathematics Stack Exchange, Apr. 25, 2016, <http:math.stackexchange.com/users/1551/user1551> [retrieved Aug. 15, 2019] 3 pages.
Related Publications (1)
Number Date Country
20210392275 A1 Dec 2021 US
Provisional Applications (1)
Number Date Country
62737791 Sep 2018 US
Continuations (2)
Number Date Country
Parent 16808194 Mar 2020 US
Child 17461588 US
Parent 16582855 Sep 2019 US
Child 16808194 US