Endoscope small imaging system

Information

  • Patent Grant
  • Patent Number
    9,510,739
  • Date Filed
    Friday, July 12, 2013
  • Date Issued
    Tuesday, December 6, 2016
  • Inventors
  • Original Assignees
  • Examiners
    • Tran; Thai
    • Braniff; Christopher T
  • Agents
    • D. Kligler I.P. Services Ltd.
Abstract
An endoscope camera, including a cylindrical enclosure having an enclosure diameter, and an imaging array mounted within the enclosure so that a plane face of the imaging array is parallel to the enclosure diameter. The camera includes a right-angle transparent prism having a rectangular entrance face, an exit face, and an hypotenuse configured to reflect radiation from the entrance face to the exit face. The entrance face has a first edge longer than a second edge, and the prism is mounted within the enclosure so that the first edge is parallel to the enclosure diameter and so that the exit face mates with the plane face of the imaging array. The camera further includes optics, configured to receive incoming radiation from an object, which are mounted so as to transmit the incoming radiation to the imaging array via the entrance and exit faces of the prism.
Description
FIELD OF THE INVENTION

The present invention relates generally to imaging, and specifically to imaging using an endoscope having a small external diameter.


BACKGROUND OF THE INVENTION

U.S. Pat. No. 8,179,428, to Minami et al., whose disclosure is incorporated herein by reference, describes an imaging apparatus for an electronic endoscope which uses a “bare chip” of a CCD (charge coupled device) together with a circuit board having approximately the same thickness as the bare chip.


U.S. Pat. No. 6,659,940, to Adler, whose disclosure is incorporated herein by reference, describes an endoscope having restricted dimensions. The endoscope has an image “gatherer,” an image distorter, and an image sensor shaped to fit within the restricted dimensions.


U.S. Pat. No. 4,684,222, to Borelli et al., whose disclosure is incorporated herein by reference, describes a method for producing small lenses which may be formed to be anamorphic.


Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.


SUMMARY OF THE INVENTION

An embodiment of the present invention provides an endoscope camera, including:


a cylindrical enclosure having an enclosure diameter;


an imaging array mounted within the enclosure so that a plane face of the imaging array is parallel to the enclosure diameter;


a right-angle transparent prism having a rectangular entrance face, an exit face, and an hypotenuse configured to reflect radiation from the entrance face to the exit face, the entrance face having a first edge longer than a second edge, the prism being mounted within the enclosure so that the first edge is parallel to the enclosure diameter and so that the exit face mates with the plane face of the imaging array; and


optics, configured to receive incoming radiation from an object, mounted so as to transmit the incoming radiation to the imaging array via the entrance and exit faces of the prism.


In a disclosed embodiment the optics include gradient-index (GRIN) optics.


Typically, the optics have a circular cross-section.


In a further disclosed embodiment the imaging array is rectangular having sides equal to the first edge and the second edge.


Typically, the optics focus the incoming radiation to have a first magnification and a second magnification orthogonal to and different from the first magnification. An optics-ratio of the first magnification to the second magnification may be responsive to a prism-ratio of the first edge to the second edge. Alternatively or additionally, a ratio of the first magnification to the second magnification may be responsive to an aspect ratio of an object imaged by the camera.


In a yet further disclosed embodiment the optics introduce a distortion into an image, of an object, acquired by the imaging array so as to produce a distorted image thereon, and the camera includes a processor which applies an un-distortion factor to the distorted image so as to produce an undistorted image of the object. Typically, the distortion includes an optical distortion, and the processor is configured to apply the un-distortion factor as a numerical factor.


In an alternative embodiment the right-angle transparent prism includes an isosceles prism.


In a further alternative embodiment the imaging array is mounted so that an axis of the cylindrical enclosure is parallel to the plane face of the imaging array.


In a yet further alternative embodiment the imaging array is square having a side equal to the first edge.


There is further provided, according to an embodiment of the present invention, a method for forming an endoscope camera, including:


providing a cylindrical enclosure having an enclosure diameter;


mounting an imaging array within the enclosure so that a plane face of the imaging array is parallel to the enclosure diameter;


mounting a right-angle transparent prism within the enclosure, the prism having a rectangular entrance face, an exit face, and an hypotenuse configured to reflect radiation from the entrance face to the exit face, the entrance face having a first edge longer than a second edge, the prism being mounted within the enclosure so that the first edge is parallel to the enclosure diameter and so that the exit face mates with the plane face of the imaging array;


configuring optics to receive incoming radiation from an object; and


mounting the optics so as to transmit the incoming radiation to the imaging array via the entrance and exit faces of the prism.


The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic illustration of an endoscopic imaging system, according to an embodiment of the present invention;



FIG. 2A is a schematic perspective illustration of a camera of the imaging system, according to an embodiment of the present invention;



FIG. 2B and FIG. 2C are schematic sectional views of the camera, according to an embodiment of the present invention;



FIG. 2D is a schematic perspective view of an element of the camera, according to an embodiment of the present invention;



FIG. 2E is a schematic top view of elements of the camera, according to an embodiment of the present invention;



FIG. 3A is a schematic sectional view of an alternative camera, and FIG. 3B is a schematic top view of elements of the alternative camera, according to an embodiment of the present invention;



FIG. 4 is a schematic conceptual representation of the operation of optics of the camera of FIGS. 2A-2E, according to an embodiment of the present invention; and



FIG. 5 is a flowchart describing steps in operation of the imaging system, according to an embodiment of the present invention.





DETAILED DESCRIPTION OF EMBODIMENTS
Overview

Endoscopes used in surgery preferably have small dimensions. Especially for minimally invasive surgery, the smaller the dimensions, such as the diameter, of the endoscope, the less the trauma on patients undergoing the surgery. A system which enables a reduction in diameter of the endoscope, without a concomitant reduction in efficiency of operation of the endoscope, would be advantageous.


An embodiment of the present invention provides an endoscope camera of extremely small dimensions. The camera may be incorporated into a cylindrical enclosure which is typically part of a tube configured to traverse a lumen of a patient during surgery. In embodiments of the present invention, the enclosure diameter may be of the order of 1 mm.


The camera comprises an imaging array which is mounted within the enclosure so that a plane face of the array is parallel to an enclosure diameter. Typically, the array is also mounted so that the plane face is parallel to an axis of the enclosure.


The camera also comprises a right-angle transparent prism having a rectangular entrance face, an exit face, and an hypotenuse configured to reflect radiation from the entrance face to the exit face. Typically the prism is isosceles having congruent entrance and exit faces. A first edge of the entrance face is longer than a second edge. The prism is mounted within the enclosure so that the first edge is parallel to the enclosure diameter and so that the exit face of the prism mates with the plane face of the imaging array; typically the prism is mounted to the array using optical cement.


The camera further comprises optics, which are typically mounted to mate with the entrance face of the prism. The optics receive incoming radiation from an object to be imaged by the camera, and the incoming radiation transmits through the prism entrance face, reflects from the hypotenuse of the prism, then transmits through the exit face to the imaging array.


The optics are typically anamorphic optics, having different magnifications in orthogonal directions. The different magnifications are selected so that the image of an object having a predetermined aspect ratio, such as a “standard” aspect ratio of 4:3, completely fills the exit face of the prism. (Unless the aspect ratio of the exit face is the same as the aspect ratio of the object, completely filling the exit face requires the magnifications to differ.)


The anamorphic optics consequently optically distort the image formed on the array. The camera typically comprises circuitry, coupled to the array, which receives the image from the array in the form of a distorted frame, or set, of pixel values. The circuitry may be configured to apply an “un-distortion” numerical factor to the distorted frame, so as to generate an undistorted frame of pixels. The undistorted frame of pixels may be used to display an undistorted image of the object, i.e., the displayed image has the same aspect ratio as the object aspect ratio.


The combination of a right-angle prism having faces with unequal edges, mounted onto an imaging array, enables implementation of endoscope cameras with millimeter dimensions.


DETAILED DESCRIPTION

Reference is now made to FIG. 1, which is a schematic illustration of an endoscopic imaging system 10, according to an embodiment of the present invention. System 10 may be used in an invasive medical procedure, typically a minimally invasive procedure, on a body cavity 12 of a human patient in order to image part or all of the body cavity. By way of example, in the present description the body cavity is assumed to be the bladder of a patient, and body cavity 12 is also referred to herein as bladder 12. However, it will be understood that system 10 may be used to image substantially any human body cavity, such as the gastrointestinal organs, the bronchi, or the chest, or a non-human cavity.


System 10 comprises an imaging apparatus 14 which enables delivery of an endoscope 16 to bladder 12. Apparatus 14 is typically in the form of a tube which is able to traverse a lumen of a patient's body, so that apparatus 14 is also referred to herein as tube 14. Endoscope 16 is controlled by an endoscope module 18 having a processor 20 communicating with a memory 22. Apparatus 14 is connected at its proximal end 26 to a handle 28 which enables an operator, herein assumed to be a physician, of system 10 to insert the apparatus into the bladder as well as to manipulate the endoscope so as to acquire images of the bladder. In some embodiments of the present invention, rather than manual manipulation of endoscope 16 using handle 28, the endoscope is manipulated automatically, such as by scanning, so as to acquire its images.


The operator is able to provide input to module 18 via controls 30, which typically comprise at least one of a keyboard, a pointing device, or a touch screen. Alternatively or additionally, at least some of controls 30 may be incorporated into handle 28. For simplicity, controls 30 are herein assumed to comprise a mouse, so that the controls are also referred to herein as mouse 30.


The processor uses software, typically stored in memory 22, to control system 10. Results of the actions performed by processor 20 may be presented on a screen 32 to the operator of system 10, the screen typically displaying an image of bladder 12 that is generated by system 10. The image displayed on screen 32 is assumed to be rectangular, and to have a display aspect ratio (DAR) of s:1, where DAR is the ratio of the image width to the image height. Typically, although not necessarily, the DAR of the image corresponds to the physical dimensions of screen 32, and the image DAR may be one of the standard ratios known in the art, such as 4:3. A difference between the DAR of the image and the dimensions of the screen may be accommodated by incorporating black “bands” on the screen, as is done in projecting high definition images with an aspect ratio of 16:9 onto a screen with width:height dimensions 4:3. As is explained in more detail below, embodiments of the present invention are able to present undistorted images of an object viewed by system 10 on screen 32 for substantially any desired value of DAR.
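The black-band accommodation described above is a simple scaling calculation. The sketch below, which assumes (as in the example) an image whose aspect ratio is wider than the screen's, so that the bands are horizontal, computes the displayed image size and band height for a 16:9 image on a 768×576 (4:3) screen:

```python
def letterbox(screen_w, screen_h, image_ar):
    """Fit an image of aspect ratio image_ar (width/height) to the full
    screen width, returning (image_w, image_h, band_height), where
    band_height is the height of each black band above and below."""
    image_w = screen_w
    image_h = round(screen_w / image_ar)
    band = (screen_h - image_h) // 2
    return image_w, image_h, band

# A 16:9 image on a 768 x 576 (4:3) screen: 768 x 432 with 72-pixel bands.
print(letterbox(768, 576, 16 / 9))  # (768, 432, 72)
```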


By way of example, in the following description, except where otherwise indicated, DAR of screen 32 is assumed to be 4:3, and the image formed on screen 32 is assumed to be in a format of 768 pixels wide×576 pixels high.


The software for operating system 10 may be downloaded to processor 20 in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.


To operate system 10, the physician inserts tube 14 through a urethra 34 until a distal end 36 of the tube enters the bladder. Distal end 36 of tube 14 comprises a camera 38. The structure and operation of camera 38 are described below with reference to FIGS. 2A-2E.



FIG. 2A is a schematic perspective view of camera 38, FIGS. 2B and 2C are schematic sectional views of the camera; FIG. 2D is a schematic perspective view of an element of the camera, and FIG. 2E is a schematic top view of elements of the camera, according to an embodiment of the present invention. Camera 38 comprises a cylindrical enclosure 40, having an internal enclosure diameter 42, the cylindrical enclosure being terminated at its distal end by an approximately plane surface 44. Typically, cylindrical enclosure 40 is integral with tube 14. For clarity in the description of camera 38, cylindrical enclosure 40 is assumed to define a set of xyz orthogonal axes, with the z axis corresponding to a symmetry axis 48 of the cylindrical enclosure, and the x axis in the plane of the paper in FIG. 2A. FIG. 2B is a schematic sectional view of camera 38 in an xy plane, the view being drawn with plane surface 44 removed.


Camera 38 comprises four generally similar light channels 46, which traverse tube 14 and which, at the distal end of the tube, are approximately parallel to the z axis. Channels 46 exit surface 44, and the channels are typically tubes which contain fiber optics (not shown in the diagram) for transmitting light that exits from surface 44. The light from the fiber optics illuminates elements of cavity 12, and returning light from the illuminated elements is used by camera 38 to generate an image of the elements, as described below. Alternatively, in some embodiments light channels 46 are fiber optics.


Camera 38 also comprises a working channel 50 which traverses tube 14, and which, at the distal end of the tube, is approximately parallel to the z axis. Working channel 50 is typically larger than light channels 46, and may be used by the physician to insert a variety of surgical tools, such as a biopsy tool, into cavity 12.


Camera 38 generates its images in a rectangular array 60 of imaging pixels, the array being mounted within enclosure 40. The rectangular array is typically a charge coupled device (CCD) that is formed on a planar substrate 62, the planar substrate acting as a supporting frame for the array. Array 60 has a face 64 which receives radiation forming the images generated by the array. The array has two edges, a first edge 66 having a length “b,” and a second edge 68 having a length “a.”


In embodiments of the present invention the two edges of array 60 are unequal in length, i.e., a ≠ b, and for clarity in the disclosure, edge 66 is assumed to be longer than edge 68, i.e., b>a, so that edge 66 may also be referred to as the longer edge or the width, and edge 68 may also be referred to as the shorter edge or the height. Array 60 has an array aspect ratio (AAR) of b:a, and if the pixels of array 60 are square, then a pixel aspect ratio (PAR) of array 60, corresponding to the ratio of the number of pixels in a row to the number of pixels in a column, is also b:a. Rectangular array 60 has a center of symmetry 70.


In a disclosed embodiment array 60 has b=500 μm and a=280 μm, and the array is formed of 2.5 μm square pixels. In this case the pixel dimensions of the array are 200×112, and AAR=PAR=500:280=200:112. Arrays with dimensions similar to these are known in the art, and may be supplied by imaging sensor providers, such as Forza Silicon Corporation, of Pasadena, CA.


Planar substrate 62 is mounted within enclosure 40 so that axis 48 of the enclosure is parallel to face 64 of the rectangular array, and so that the longer edge of the array is parallel to diameter 42.


As shown in FIGS. 2C and 2D, a right-angle transparent prism 80 is mounted within enclosure 40. Prism 80 has three rectangular faces: a hypotenuse face 82, a base face 84, also herein termed exit face 84, and an upright face 86, also herein termed entrance face 86. The prism also has a first isosceles right-angle triangle face 88 and a second isosceles right-angle triangle face 90. The dimensions of prism 80 are implemented so that exit face 84 has the same dimensions as array 60, i.e., the exit face is a rectangle having edge lengths a and b. Entrance face 86 has the same dimensions as exit face 84, i.e., the entrance face is a rectangle having edge lengths a and b. Entrance face 86 and exit face 84 have a common edge 92 with length a, i.e., the common edge is a longer edge of the exit and entrance faces.


The lengths of the sides forming the right angle of isosceles right-angle triangle face 88, and of isosceles right-angle triangle face 90, correspond to the length of the shorter edge of array 60, so that the two isosceles triangles of faces 88, 90, have side lengths a, a, a√2. Rectangular hypotenuse face 82 has edge lengths a√2 and b.


Prism 80 is mounted onto array 60 so that exit face 84 mates with the array, i.e., so that the shorter edge of the exit face aligns with the shorter edge of the array, and so that the longer edge of the exit face aligns with the longer edge of the array. The mounting of the prism onto the array may be implemented using an optical adhesive, typically an epoxy resin, that cements the prism to the array. Such a mounting reduces undesired reflections from the exit face of the prism, as well as from face 64 of the array.


Optical elements 100, herein termed optics 100, are mounted within enclosure 40 so that they align with the entrance face of prism 80. Typically, optics 100 are cylindrical as illustrated in the figures. Typically, the mounting comprises cementing optics 100 to entrance face 86 using an optical adhesive. Optics 100 have an optic axis 102, and the optics are mounted so that the optic axis, after reflection in hypotenuse 82, intersects center 70 of array 60.



FIG. 3A is a schematic sectional view of a camera 438, and FIG. 3B is a schematic top view of elements of the camera, according to an alternative embodiment of the present invention. Apart from the differences described below, the operation of camera 438 is generally similar to that of camera 38 (FIGS. 2A-2E), and elements indicated by the same reference numerals in both cameras 38 and 438 are generally similar in construction and in operation.


In contrast to camera 38, which uses rectangular array 60 having unequal edges, camera 438 uses a square array 440. Square array 440 is configured to have its edge equal in length to the longer side of exit face 84, i.e., array 440 has an edge length b.


Prism 80 is mounted onto array 440 so that the shorter edges of the exit face align with the edges of the array. The mounting is typically symmetrical, so that as illustrated in FIG. 3B, there are approximately equal sections 442 which do not receive radiation from the exit face, and a rectangular section 444, having dimensions of b×a, which aligns with and is cemented to the exit face so as to receive radiation from the face. Optics 100 are mounted so that optic axis 102, after reflection in hypotenuse 82, intersects a center of symmetry 446 of section 444.


In a disclosed embodiment array 440 has b=500 μm and the array is formed of 2.5 μm square pixels. In this case the pixel dimensions of the array are 200×200. (Arrays with dimensions similar to these are also known in the art, and may be supplied by imaging sensor providers, such as the provider referred to above.) In the disclosed embodiment section 444 has dimensions of 500 μm×280 μm and pixel dimensions of 200×112, corresponding to the parameters of camera 38.


When camera 438 operates, section 444 is an active region of array 440, acquiring images projected onto the section via the completely filled exit face of the prism, whereas sections 442 are inactive regions.



FIG. 4 is a schematic conceptual representation of the operation of optics 100, according to an embodiment of the present invention. The figure has been drawn using the set of xyz axes defined above for FIGS. 2A-2E, and assumes that camera 38 is being considered. Those having ordinary skill in the art will be able to adapt the following description for the case of camera 438. For simplicity, the figure has been drawn without the presence of prism 80, so that array 60 with center 70 is represented by a congruent array 60′ with a center 70′. Array 60′ has edges 66′ and 68′, corresponding to edges 66 and 68 of array 60. Edges 66′ and 68′ are parallel to the y and x axes, and center 70′ is on the z axis. The following description reverts to referring to array 60 with center 70.


From a conceptual point of view, optics 100 may be considered to have the properties of an anamorphic lens, having different magnifications in the x direction and in the y direction. For simplicity the following description assumes that an object 130 is very distant from optics 100, so that the object is effectively at infinity, and those having ordinary skill in the art will be able to adapt the description for objects that are closer to optics 100. Furthermore, object 130 is assumed to be rectangular, with a center 132 on the z axis and edges 134 and 136 respectively parallel to the x and y axes. Edge 134 has a height h and edge 136 has a width w, giving an object aspect ratio of w:h.


Optics 100 is assumed to focus rays from center point 132 of object 130 to a focal point 112 on the z axis, and the optics are positioned so that center 70 of array 60 coincides with focal point 112. Object 130 may thus be completely in focus on array 60.


Typically, optics 100 are configured so that the image of object 130 completely fills the exit face of the prism and completely covers array 60; this configuration utilizes all the pixels of array 60. However, except for the case where w:h=b:a, the complete coverage entails optics 100 distorting the image of object 130, so that the image produced by the optics is no longer geometrically similar to the object. The distortion introduced by the optics is equivalent to the optics behaving as an anamorphic system, i.e., generating magnifications of the image on the array which are different in the x direction and in the y direction.


The magnifications for optics 100 are given by the following equations:











mx = a/h ; my = b/w   (1)







where mx is a height magnification of the optics, in the x direction, and


my is a width magnification of the optics, in the y direction.


A measure of the distortion produced by the optics is given by the ratio of the width:height magnifications in the two directions, i.e., a ratio of the width magnification my to the height magnification mx:









D = my/mx = bh/aw   (2)







where D is a distortion metric for optics 100, equal to the ratio of the width:height magnifications.


As a first numerical example of the distortion introduced by optics 100, assume that object 130 has dimensions of w=4000 μm and h=3000 μm so that the object has an aspect ratio of 4:3. This aspect ratio is a typical value for “standard” imaging optics. Assume further that array 60 has the dimensions of the disclosed embodiment above, i.e., a width of 500 μm and a height of 280 μm. In this case, from equation (1), optics 100 are configured to have the following magnifications:











mx = 280/3000 = 0.093 ; my = 500/4000 = 0.125   (3)







From equation (2), the ratio of the width:height magnifications, the distortion D, of optics 100 in this case is:









D = bh/aw = (500·3000)/(280·4000) = 1.34   (4)







As a second numerical example, assume that object 130 is square, so that w=h, corresponding to an aspect ratio of 1:1. In this case the distortion D introduced by optics 100, from equation (2), is equal to the aspect ratio of array 60, i.e., for the disclosed embodiment above,









D = b/a = 500/280 = 1.79   (5)







As a third numerical example, assume that object 130 has an aspect ratio of b:a, equal to the aspect ratio of array 60. In this case there is no distortion introduced by optics 100, i.e., the magnifications in the x and y directions are equal, mx=my, and D=1.
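The three numerical examples above can be reproduced directly from equations (1) and (2). The following sketch only restates the arithmetic already given (all dimensions in micrometers, using the disclosed-embodiment array with a = 280, b = 500):

```python
# Equations (1) and (2): anamorphic magnifications and distortion metric.
def magnifications(a, b, w, h):
    """Return (mx, my) per equation (1): mx = a/h, my = b/w."""
    return a / h, b / w

def distortion(a, b, w, h):
    """Return D per equation (2): D = my/mx = b*h / (a*w)."""
    mx, my = magnifications(a, b, w, h)
    return my / mx

a, b = 280.0, 500.0  # array edges of the disclosed embodiment, in um

# First example: 4:3 object of 4000 x 3000 um.
mx, my = magnifications(a, b, 4000.0, 3000.0)
print(round(mx, 3), round(my, 3))                   # 0.093 0.125
print(round(distortion(a, b, 4000.0, 3000.0), 2))   # 1.34

# Second example: square object -> D equals the array aspect ratio b:a.
print(round(distortion(a, b, 1000.0, 1000.0), 2))   # 1.79

# Third example: object aspect ratio b:a -> no distortion, D = 1.
print(round(distortion(a, b, 500.0, 280.0), 2))     # 1.0
```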


The description of optics 100 above has referred to the height and width magnifications, mx, my in the x and y directions, required by the optics in order to image object 130 onto array 60. For each specific magnification, there is a corresponding focal length fx, fy of optics 100. An approximation for the focal lengths may be determined from equation (6) for a simple lens:









f = m·do/(m + 1)   (6)







where f is a required focal length of optics 100,


do is the distance from the optics to object 130, and


m is a desired magnification.


Those having ordinary skill in the art will be able to use equation (6), or other equations well known in the optical arts, in order to calculate focal length fx, fy of optics 100, and to calculate other parameters for the optics and for system 10.
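As an illustration of equation (6), the focal lengths corresponding to the magnifications of the first numerical example can be estimated. The object distance of 40 mm used below is hypothetical, chosen only to make the arithmetic concrete; it is not a parameter given in the specification:

```python
# Equation (6), simple-lens approximation: f = m * do / (m + 1).
def focal_length(m, d_o):
    """Required focal length for magnification m at object distance d_o."""
    return m * d_o / (m + 1.0)

d_o = 40000.0           # hypothetical object distance, in um (40 mm)
mx, my = 0.093, 0.125   # magnifications from the first numerical example

fx = focal_length(mx, d_o)  # focal length in the x (height) direction
fy = focal_length(my, d_o)  # focal length in the y (width) direction
print(round(fx), round(fy))  # 3403 4444
```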


Optics 100 may be implemented using individual “conventional” components or lenses, or even as a single lens, by methods which are well-known in the art. For example, U.S. Pat. No. 4,684,222, referenced above, describes a method for producing small anamorphic lenses. Alternatively or additionally, optics 100 may be implemented using gradient-index (GRIN) optics, using methods known in the art. Using GRIN optics allows a face of optics 100 that is to mate with prism entrance face 86 to be made plane, facilitating the cementing of the optics to the entrance face. In addition, GRIN optics may reduce the size of optics 100 compared to the size required by conventional components.


Returning to FIGS. 2C and 2E, circuitry 200, which is typically implemented as an integrated circuit, generates clocking signals which drive array 60 and which are provided to the array by connectors 210. Circuitry 200 is driven by a local processor 205, which has overall control of the operation of the circuitry. Signals generated by the array in response to radiation incident on the array are transferred by connectors 210 to circuitry 200. The signals generated by array 60 are initially in analog form, and circuitry 200, inter alia, amplifies and digitizes the analog signals, typically to form frames of digital images, corresponding to the optical images incident on array 60. Circuitry 200 then transfers the digitized frames to processor 20 (FIG. 1) via conducting elements 220. At least some of elements 220 are typically formed on, or are connected to, a flexible printed circuit board 240 which is installed in tube 14. However, any other method known in the art, such as using fiber optics and/or a wireless transmitter, may be implemented to transfer data generated by circuitry 200 to processor 20.


The digitized images output from array 60 have been optically distorted by optics 100 according to the distortion metric D, defined above with respect to equation (2). In order to display the images acquired by array 60 in an undistorted manner on screen 32, circuitry 200 applies a numerical “un-distortion” factor U to the received digitized images, so that the digitized images received by processor 20 are in an undistorted format. Alternatively, the un-distortion factor U may be applied by processor 20 to the distorted digitized images output by circuitry 200.


An expression for the un-distortion factor U is given by equation (7):









U = 1/D = mx/my   (7)







In other words, from equation (7), the ratio of the width:height magnifications, U, applied to the digitized images output from array 60, for display on screen 32, is the inverse of the ratio of the width:height magnifications, D, generated by optics 100. An example for applying the required magnifications to the digitized images from array 60 is described below.



FIG. 5 is a flowchart 500 describing steps in operation of system 10, according to an embodiment of the present invention. The steps of flowchart 500 assume that camera 38 is being used, and those having ordinary skill in the art will be able to adapt the description, mutatis mutandis, for the case of camera 438.


In an initial step 502, the elements of system 10 are implemented, generally as described above with respect to FIG. 1. The implementation includes forming optics 100, and the optics are typically formed according to equations (1)-(6) and the associated descriptions, for a predetermined object distance from the optics, a predetermined aspect ratio of the object, and a predetermined aspect ratio and size of array 60. Optics 100 are assumed to be anamorphic, having a distortion factor D, as defined by equation (2).


In an irradiation step 504 radiation is irradiated from optical channels 46 into cavity 12. Typically, although not necessarily, the radiation comprises light in the visible spectrum. However, in some embodiments the radiation comprises non-visible components, such as infra-red and/or ultra-violet radiation.


The radiation illuminates objects within cavity 12, including walls of the cavity, and returning radiation from the illuminated objects is acquired by optics 100.


In an image acquisition step 506 optics 100 receive incoming radiation from the illuminated objects. The optics focus the acquired incoming radiation to an image of the illuminated objects, the image being formed on array 60. The focusing of the radiation is performed by the optics transmitting the acquired incoming radiation to array 60 via entrance face 86 of prism 80, hypotenuse face 82 of the prism, and exit face 84 of the prism.


In a digital image step 508 array 60 and circuitry 200 digitize the image focused onto array 60, to form a frame, or set, of pixels of the distorted image. The circuitry then applies an un-distortion factor U, defined above by equation (7), to the frame of pixels, to generate a set of pixels representative of an undistorted image. The application of un-distortion factor U typically involves addition of pixels, removal of pixels, and/or change of value of pixels of the digitized image received from array 60, so as to produce a frame of digitized pixels in an undistorted format. The following examples explain how pixels of an undistorted image are generated.


A first example assumes that optics 100 image object 130, with an aspect ratio of 4:3, onto array 60, and that array 60 corresponds to the array of the disclosed embodiment referred to above, having an aspect ratio of 200:112. The image from array 60 is then "undistorted" by circuitry 200 to be suitable for display on screen 32 as a 768 pixels wide × 576 pixels high image, i.e., as an image having the same aspect ratio as object 130.


Optics 100 are configured to have a distortion factor D corresponding to the first numerical example above, i.e. the ratio of the width:height magnifications is 1.34.


Circuitry 200 "undistorts" the digitized image from array 60 by applying an un-distortion factor U, equal to

U = 1/1.34 ≈ 0.75,

from equation (7). This factor corresponds to the ratio of width:height magnifications introduced by circuitry 200 into the pixels received from array 60.


In the y (width) direction, array 60 generates 200 pixels, and screen 32 displays 768 pixels in this direction, for a width magnification of 3.84.


In the x (height) direction, array 60 generates 112 pixels, and screen 32 displays 576 pixels in this direction, for a height magnification of 5.14. The ratio of the width:height actual magnifications, 3.84/5.14 ≈ 0.75, corresponds to the un-distortion factor U = 0.75 introduced by circuitry 200.
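The arithmetic of this first example can be checked with a short script. (The pixel counts and the distortion factor D = 1.34 are taken from the text above; the variable names are illustrative only.)

```python
# Worked numbers from the first example.
ARRAY_W, ARRAY_H = 200, 112    # pixels generated by array 60 (y and x directions)
SCREEN_W, SCREEN_H = 768, 576  # pixels displayed by screen 32

width_mag = SCREEN_W / ARRAY_W    # 768 / 200 = 3.84
height_mag = SCREEN_H / ARRAY_H   # 576 / 112 = 5.14 (approx.)

# Un-distortion factor U = 1 / D, per equation (7).
D = 1.34
U = 1 / D

# The ratio of the actual width:height magnifications matches U.
print(round(width_mag, 2), round(height_mag, 2))      # 3.84 5.14
print(round(width_mag / height_mag, 2), round(U, 2))  # 0.75 0.75
```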


A second example assumes that screen 32 has pixel dimensions of 1280×720, for an aspect ratio of 16:9. This aspect ratio substantially corresponds to the aspect ratio of array 60 (200:112). Thus an object with aspect ratio 16:9 may be imaged without distortion onto array 60, and no "undistortion" is required in generating the 1280×720 pixels for screen 32. Since there is no distortion introduced by optics 100, the optics in this case may be spherical optics, or equivalent to spherical optics. In this second example the width magnification for screen 32 is 1280/200, and the height magnification is 720/112, both magnifications having the same value of approximately 6.4.
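The second example can be verified in the same way (pixel counts taken from the text; variable names illustrative): equal magnifications in the two directions are what make undistortion unnecessary.

```python
ARRAY_W, ARRAY_H = 200, 112     # pixels from array 60
SCREEN_W, SCREEN_H = 1280, 720  # pixels on screen 32 (16:9)

width_mag = SCREEN_W / ARRAY_W    # 1280 / 200 = 6.4
height_mag = SCREEN_H / ARRAY_H   # 720 / 112 = 6.43 (approx.)

# Both magnifications are approximately 6.4, so no undistortion is needed.
print(round(width_mag, 1), round(height_mag, 1))  # 6.4 6.4
```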


Consideration of the values above shows that for these examples circuitry 200 introduces pixels into the digitized values received from array 60, so as to produce a frame of pixels representative of an undistorted image. The introduction is typically by interpolation between the values from the array. Thus, in the y direction, circuitry 200 interpolates between the 200 values received to generate 768 pixels for the first example, or 1280 pixels for the second example, corresponding to the number of columns displayed by screen 32. Similarly, in the x direction, circuitry 200 interpolates between the 112 values received to generate 576 pixels for the first example, or 720 pixels for the second example, corresponding to the number of rows displayed by screen 32. The method of interpolation implemented by circuitry 200 may comprise any convenient interpolation method known in the art.
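The interpolation described above can be sketched as follows. The patent leaves the method open ("any convenient interpolation method known in the art"); this sketch assumes separable linear interpolation using NumPy, and the function name undistort_frame is a hypothetical label, not taken from the disclosure.

```python
import numpy as np

def undistort_frame(frame, out_rows, out_cols):
    """Resample a sensor frame to display size by separable linear
    interpolation, as circuitry 200 might (illustrative sketch only)."""
    in_rows, in_cols = frame.shape
    # Interpolate each row along the y (width) direction: in_cols -> out_cols.
    col_src = np.linspace(0, in_cols - 1, out_cols)
    widened = np.stack([np.interp(col_src, np.arange(in_cols), row)
                        for row in frame])                    # (in_rows, out_cols)
    # Interpolate each column along the x (height) direction: in_rows -> out_rows.
    row_src = np.linspace(0, in_rows - 1, out_rows)
    return np.stack([np.interp(row_src, np.arange(in_rows), widened[:, c])
                     for c in range(out_cols)], axis=1)       # (out_rows, out_cols)

# First example: a 112x200 frame from array 60 becomes a 576x768 display frame.
sensor = np.arange(112 * 200, dtype=float).reshape(112, 200)
display = undistort_frame(sensor, 576, 768)
print(display.shape)  # (576, 768)
```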


Those having ordinary skill in the art will be able to adapt the examples above to evaluate width and height magnifications introduced by circuitry 200 for other object aspect ratios, and for other array aspect ratios.


In a final display step 510, processor 20 receives from circuitry 200 a frame of pixels corresponding to an undistorted image of object 130, and displays the undistorted image on screen 32.


It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.

Claims
  • 1. An endoscope camera, comprising: a cylindrical enclosure having an enclosure diameter; a rectangular array of imaging pixels formed on a planar substrate acting as a supporting frame for the array, which has a first side of a first length and a second side of a second length, wherein the first length is longer than the second length, and the array is mounted within the enclosure so that a plane face of the array is parallel to the enclosure diameter; a right-angle transparent prism having a rectangular entrance face, an exit face, and an hypotenuse configured to reflect radiation from the entrance face to the exit face, the entrance face having a first edge of the first length and a second edge of the second length, the prism being mounted within the enclosure so that the first edge is parallel to the enclosure diameter with the exit face cemented to the plane face of the array; and anamorphic optics, configured to receive incoming radiation from an object, mounted so as to transmit the incoming radiation to the array via the entrance and exit faces of the prism and to form an image on the array that is distorted responsively to an aspect ratio of the exit face.
  • 2. The endoscope camera according to claim 1, wherein the optics comprise gradient-index (GRIN) optics.
  • 3. The endoscope camera according to claim 1, wherein the optics have a circular cross-section.
  • 4. The endoscope camera according to claim 1, wherein the optics focus the incoming radiation to have a first magnification and a second magnification orthogonal to and different from the first magnification.
  • 5. The endoscope camera according to claim 4, wherein an optics-ratio of the first magnification to the second magnification is responsive to a prism-ratio of the first edge to the second edge.
  • 6. The endoscope camera according to claim 4, wherein a ratio of the first magnification to the second magnification is responsive to an aspect ratio of an object imaged by the camera.
  • 7. The endoscope camera according to claim 1, and comprising a processor which applies an un-distortion factor to the distorted image so as to produce an undistorted image of the object.
  • 8. The endoscope camera according to claim 7, wherein the processor is configured to apply the un-distortion factor as a numerical factor.
  • 9. The endoscope camera according to claim 1, wherein the right-angle transparent prism comprises an isosceles prism.
  • 10. The endoscope camera according to claim 1, wherein the array is mounted so that an axis of the cylindrical enclosure is parallel to the plane face of the array.
  • 11. A method for forming an endoscope camera, comprising: providing a cylindrical enclosure having an enclosure diameter; mounting a rectangular array of imaging pixels formed on a planar substrate acting as a supporting frame for the array, which has a first side of a first length and a second side of a second length, wherein the first length is longer than the second length, and the array is mounted within the enclosure so that a plane face of the array is parallel to the enclosure diameter; mounting a right-angle transparent prism within the enclosure, the prism having a rectangular entrance face, an exit face, and an hypotenuse configured to reflect radiation from the entrance face to the exit face, the entrance face having a first edge of the first length and a second edge of the second length, the prism being mounted within the enclosure so that the first edge is parallel to the enclosure diameter with the exit face cemented to the plane face of the imaging array; configuring anamorphic optics to receive incoming radiation from an object and to form an image on the array that is distorted responsively to an aspect ratio of the exit face; and mounting the optics so as to transmit the incoming radiation to the array via the entrance and exit faces of the prism.
  • 12. The method according to claim 11, wherein the optics comprise gradient-index (GRIN) optics.
  • 13. The method according to claim 11, wherein the optics have a circular cross-section.
  • 14. The method according to claim 11, and comprising the optics focusing the incoming radiation to have a first magnification and a second magnification orthogonal to and different from the first magnification.
  • 15. The method according to claim 14, and comprising determining an optics-ratio of the first magnification to the second magnification in response to a prism-ratio of the first edge to the second edge.
  • 16. The method according to claim 14, and comprising determining a ratio of the first magnification to the second magnification in response to an aspect ratio of an object imaged by the camera.
  • 17. The method according to claim 14, and comprising applying an un-distortion factor to the distorted image so as to produce an undistorted image of the object.
  • 18. The method according to claim 17, wherein the un-distortion factor is a numerical factor.
  • 19. The method according to claim 11, wherein the right-angle transparent prism comprises an isosceles prism.
  • 20. The method according to claim 11, and comprising mounting the array so that an axis of the cylindrical enclosure is parallel to the plane face of the array.
US Referenced Citations (221)
Number Name Date Kind
3321656 Sheldon May 1967 A
3971065 Bayer Jul 1976 A
4253447 Moore et al. Mar 1981 A
4261344 Moore et al. Apr 1981 A
4278077 Mizumoto Jul 1981 A
4429328 Jones, Jr. et al. Jan 1984 A
4467361 Ohno et al. Aug 1984 A
4491865 Danna et al. Jan 1985 A
4555768 Lewis, Jr. et al. Nov 1985 A
4569335 Tsuno Feb 1986 A
4573450 Arakawa Mar 1986 A
4576146 Kawazoe et al. Mar 1986 A
4602281 Nagasaki et al. Jul 1986 A
4604992 Sato Aug 1986 A
4622954 Arakawa et al. Nov 1986 A
4625236 Fujimori et al. Nov 1986 A
4633304 Nagasaki Dec 1986 A
4643170 Miyazaki et al. Feb 1987 A
4646721 Arakawa Mar 1987 A
4651201 Schoolman Mar 1987 A
4656508 Yokota Apr 1987 A
4682219 Arakawa Jul 1987 A
4684222 Borrelli et al. Aug 1987 A
4692608 Cooper et al. Sep 1987 A
4697208 Eino Sep 1987 A
4710807 Chikama Dec 1987 A
4713683 Fujimori et al. Dec 1987 A
4714319 Zeevi et al. Dec 1987 A
4720178 Nishioka et al. Jan 1988 A
4741327 Yabe May 1988 A
4745470 Yabe et al. May 1988 A
4746203 Nishioka et al. May 1988 A
4757805 Yabe Jul 1988 A
4768513 Suzuki Sep 1988 A
4784133 Mackin Nov 1988 A
4803550 Yabe et al. Feb 1989 A
4803562 Eino Feb 1989 A
4809680 Yabe Mar 1989 A
4819065 Eino Apr 1989 A
4827907 Tashiro May 1989 A
4827909 Kato et al. May 1989 A
4831456 Takamura et al. May 1989 A
4832003 Yabe May 1989 A
4832033 Maher et al. May 1989 A
4857724 Snoeren Aug 1989 A
4862873 Yajima et al. Sep 1989 A
4866526 Ams et al. Sep 1989 A
4869256 Kanno et al. Sep 1989 A
4873572 Miyazaki et al. Oct 1989 A
4884133 Kanno et al. Nov 1989 A
4890159 Ogiu Dec 1989 A
4905670 Adair Mar 1990 A
4926257 Miyazaki May 1990 A
4934339 Kato Jun 1990 A
4939573 Teranishi et al. Jul 1990 A
4953539 Nakamura et al. Sep 1990 A
4967269 Sasagawa et al. Oct 1990 A
4986642 Yokota et al. Jan 1991 A
4998972 Chin et al. Mar 1991 A
5010875 Kato Apr 1991 A
5021888 Kondou et al. Jun 1991 A
5022399 Biegelisen Jun 1991 A
5029574 Shimamura et al. Jul 1991 A
5122650 McKinley Jun 1992 A
5144442 Ginosar et al. Sep 1992 A
5166787 Irion Nov 1992 A
5184223 Mihara Feb 1993 A
5187572 Nakamura et al. Feb 1993 A
5191203 McKinley Mar 1993 A
5216512 Bruijns Jun 1993 A
5219292 Dickirson et al. Jun 1993 A
5222477 Lia Jun 1993 A
5233416 Inoue Aug 1993 A
5264925 Shipp et al. Nov 1993 A
5294986 Tsuji et al. Mar 1994 A
5301090 Hed Apr 1994 A
5306541 Kasatani Apr 1994 A
5311600 Aghajan et al. May 1994 A
5323233 Yamagami et al. Jun 1994 A
5325847 Matsuno Jul 1994 A
5335662 Kimura et al. Aug 1994 A
5343254 Wada et al. Aug 1994 A
5363135 Inglese Nov 1994 A
5365268 Minami Nov 1994 A
5376960 Wurster Dec 1994 A
5381784 Adair Jan 1995 A
5408268 Shipp Apr 1995 A
5430475 Goto et al. Jul 1995 A
5432543 Hasegawa et al. Jul 1995 A
5436655 Hiyama et al. Jul 1995 A
5444574 Ono et al. Aug 1995 A
5450243 Nishioka Sep 1995 A
5471237 Shipp Nov 1995 A
5494483 Adair Feb 1996 A
5498230 Adair Mar 1996 A
5512940 Takasugi et al. Apr 1996 A
5547455 McKenna et al. Aug 1996 A
5557324 Wolff Sep 1996 A
5575754 Konomura Nov 1996 A
5577991 Akui et al. Nov 1996 A
5588948 Takahashi Dec 1996 A
5594497 Ahern et al. Jan 1997 A
5598205 Nishioka Jan 1997 A
5603687 Hori et al. Feb 1997 A
5604531 Iddan et al. Feb 1997 A
5607436 Pratt et al. Mar 1997 A
5668596 Vogel Sep 1997 A
5673147 McKinley Sep 1997 A
5700236 Sauer et al. Dec 1997 A
5712493 Mori et al. Jan 1998 A
5728044 Shan Mar 1998 A
5734418 Danna Mar 1998 A
5751341 Chaleki et al. May 1998 A
5754280 Kato et al. May 1998 A
5784098 Shoji et al. Jul 1998 A
5792045 Adair Aug 1998 A
5797837 Minami Aug 1998 A
5819736 Avny et al. Oct 1998 A
5827176 Tanaka et al. Oct 1998 A
5847394 Alfano et al. Dec 1998 A
5905597 Mizouchi et al. May 1999 A
5907178 Baker et al. May 1999 A
5909633 Haji et al. Jun 1999 A
5928137 Green Jul 1999 A
5929901 Adair et al. Jul 1999 A
5940126 Kimura Aug 1999 A
5944655 Becker Aug 1999 A
5984860 Shan Nov 1999 A
5986693 Adair et al. Nov 1999 A
6001084 Riek et al. Dec 1999 A
6006119 Soller et al. Dec 1999 A
6009189 Schaack Dec 1999 A
6010449 Selmon et al. Jan 2000 A
6039693 Seward et al. Mar 2000 A
6043839 Adair et al. Mar 2000 A
6075235 Chun Jun 2000 A
6099475 Seward et al. Aug 2000 A
6100920 Miller et al. Aug 2000 A
6124883 Suzuki et al. Sep 2000 A
6128525 Zeng et al. Oct 2000 A
6129672 Seward et al. Oct 2000 A
6130724 Hwang Oct 2000 A
6134003 Tearney et al. Oct 2000 A
6139490 Breidenthal et al. Oct 2000 A
6142930 Ito et al. Nov 2000 A
6148227 Wagnieres et al. Nov 2000 A
6156626 Bothra Dec 2000 A
6177984 Jacques Jan 2001 B1
6178346 Amundson et al. Jan 2001 B1
6184923 Miyazaki Feb 2001 B1
6206825 Tsuyuki Mar 2001 B1
6240312 Alfano et al. May 2001 B1
6260994 Matsumoto et al. Jul 2001 B1
6281506 Fujita et al. Aug 2001 B1
6284223 Luiken Sep 2001 B1
6327374 Piironen et al. Dec 2001 B1
6331156 Haefele et al. Dec 2001 B1
6409658 Mitsumori Jun 2002 B1
6416463 Tsuzuki et al. Jul 2002 B1
6417885 Suzuki et al. Jul 2002 B1
6449006 Shipp Sep 2002 B1
6459919 Lys et al. Oct 2002 B1
6464633 Hosoda et al. Oct 2002 B1
6476851 Nakamura Nov 2002 B1
6485414 Neuberger Nov 2002 B1
6533722 Nakashima Mar 2003 B2
6547721 Higuma et al. Apr 2003 B1
6659940 Adler Dec 2003 B2
6670636 Hayashi et al. Dec 2003 B2
6692430 Adler Feb 2004 B2
6697110 Jaspers et al. Feb 2004 B1
6900527 Miks et al. May 2005 B1
6943837 Booth Sep 2005 B1
6976956 Takahashi et al. Dec 2005 B2
6984205 Gazdzinski Jan 2006 B2
7019387 Miks et al. Mar 2006 B1
7030904 Adair et al. Apr 2006 B2
7106910 Acharya et al. Sep 2006 B2
7116352 Yaron Oct 2006 B2
7123301 Nakamura et al. Oct 2006 B1
7127280 Dauga Oct 2006 B2
7133073 Neter Nov 2006 B1
7154527 Goldstein et al. Dec 2006 B1
7189971 Spartiotis et al. Mar 2007 B2
7308296 Lys et al. Dec 2007 B2
7347817 Glukhovsky et al. Mar 2008 B2
7355625 Mochida et al. Apr 2008 B1
7804985 Szawerenko et al. Sep 2010 B2
8179428 Minami May 2012 B2
8194121 Blumzvig et al. Jun 2012 B2
8263438 Seah et al. Sep 2012 B2
8438730 Ciminelli May 2013 B2
20010017649 Yaron Aug 2001 A1
20010031912 Adler Oct 2001 A1
20010040211 Nagaoka Nov 2001 A1
20010051766 Gazdzinski Dec 2001 A1
20020089586 Suzuki et al. Jul 2002 A1
20020103417 Gazdzinski Aug 2002 A1
20020154215 Schechterman et al. Oct 2002 A1
20020198439 Mizuno Dec 2002 A1
20030171648 Yokoi et al. Sep 2003 A1
20030171649 Yokoi et al. Sep 2003 A1
20030171652 Yokoi et al. Sep 2003 A1
20030174208 Glukhovsky et al. Sep 2003 A1
20030174409 Nagaoka Sep 2003 A1
20040019255 Sakiyama Jan 2004 A1
20040197959 Ujiie et al. Oct 2004 A1
20050165279 Adler et al. Jul 2005 A1
20050259487 Glukhovsky et al. Nov 2005 A1
20050267328 Blumzvig et al. Dec 2005 A1
20060158512 Iddan et al. Jul 2006 A1
20080229573 Wood et al. Sep 2008 A1
20090147076 Ertas Jun 2009 A1
20090266598 Katagiri et al. Oct 2009 A1
20090294978 Ota et al. Dec 2009 A1
20100283818 Bruce et al. Nov 2010 A1
20110210441 Lee et al. Sep 2011 A1
20120161312 Hossain et al. Jun 2012 A1
20120274705 Petersen et al. Nov 2012 A1
20130129334 Wang et al. May 2013 A9
20140249368 Hu Sep 2014 A1
Foreign Referenced Citations (14)
Number Date Country
4243452 Jun 1994 DE
19851993 Jun 1999 DE
1661506 May 2006 EP
1215383 Dec 1970 GB
5944775 Mar 1984 JP
6215514 Jan 1987 JP
0882751 Mar 1996 JP
H09-173288 Jul 1997 JP
2006141884 Jun 2006 JP
9417493 Aug 1994 WO
9641481 Dec 1996 WO
9848449 Oct 1998 WO
0033727 Jun 2000 WO
2013073578 May 2013 WO
Non-Patent Literature Citations (5)
Entry
International Application # PCT/US14/42825 Search Report dated Oct. 6, 2014.
International Application PCT/US2014/042826 Search Report dated Sep. 8, 2014.
Finkman et al., U.S. Appl. No. 13/933,145, filed Jul. 2, 2013.
U.S. Appl. No. 14/179,577 Office Action dated Oct. 13, 2015.
JP Application # 2015-563153 Office Action dated Jul. 15, 2016.
Related Publications (1)
Number Date Country
20150015687 A1 Jan 2015 US