Method and apparatus for displaying images in combination with taking images

Information

  • Patent Application
  • Publication Number
    20030193599
  • Date Filed
    March 27, 2002
  • Date Published
    October 16, 2003
Abstract
An imaging apparatus includes a display means for displaying an image and a camera means distributed throughout the display means to take images of objects in view of the display means. A screen includes a display for displaying an image, a plurality of pinholes distributed in the display, and a plurality of sensors with at least one sensor of the plurality of sensors in alignment with and corresponding with one pinhole of the plurality of pinholes to receive light passing through the pinhole to image a ray of a specific size and a specific direction out from the display. A method for imaging is also disclosed.
Description


FIELD OF THE INVENTION

[0001] The present invention is related to displays that have a camera positioned to allow a viewer to look at the display and be seen as looking directly at the camera. More specifically, the present invention is related to a display having sensors distributed throughout the display to form an image of a viewer looking at the display.



BACKGROUND OF THE INVENTION

[0002] A basic problem with video communication equipment is that the display device and the imaging device (camera) are physically separated, eliminating the ability for direct, natural eye contact and transparent gaze awareness. Extensive work has been done researching this problem and proposing physical and/or computational solutions (see references below). The need to both image and display is fundamental to many activities, including communicating, documenting, document scanning, security, etc.


[0003] This invention combines an electronic display with an imaging device in a way that permits the point of view of the imaging device to be placed directly behind the display itself, enabling direct, natural eye contact. In addition, the device has the ability to scan like a flatbed scanner, and functionally zoom, pan, tilt, and shift without moving parts within the limits of its design, like a PTZ camera.


[0004] Prior art for the present invention includes displays with integrated cameras, such as CRTs, notebook computers, PDAs, monitor top cameras, etc. Devices using partially silvered mirrors, projectors, CRTs with cameras integrated into the tube, and other approaches have been investigated in the past, but did not have any integration of display and imaging at the pixel level.



SUMMARY OF THE INVENTION

[0005] The present invention pertains to an imaging apparatus. The apparatus comprises a display means for displaying an image. The apparatus comprises a camera means distributed throughout the display means to take images of objects in view of the display means.


[0006] The present invention pertains to a screen. The screen comprises a display for displaying an image. The screen comprises a plurality of pinholes distributed in the display. The screen comprises a plurality of sensors with at least one sensor of the plurality of sensors in alignment with and corresponding with one pinhole of the plurality of pinholes to receive light passing through the pinhole to image a ray of a specific size and a specific direction out from the display.


[0007] The present invention pertains to a method for imaging. The method comprises the steps of applying a first image on a first portion of a screen. There is the step of receiving light from objects at a second portion of the screen. The second portion is distributed in the first portion. There is the step of forming a second image from the light from objects received at the second portion.


[0008] The present invention allows the creation of a combination display/imaging device which functions like a window, allowing a remote “outside” person to look in, while allowing the person “inside” to see out. The combination of functions is natural and solves the well-known eye-gaze problem in visual communication. The design takes advantage of electronic integration techniques to reduce part count, cost, and moving parts, and enables the development of display/imaging devices with very wide application areas.







BRIEF DESCRIPTION OF THE DRAWINGS

[0009] In the accompanying drawings, the preferred embodiment of the invention and preferred methods of practicing the invention are illustrated in which:


[0010] FIG. 1 is a schematic representation of an apparatus of the present invention.


[0011] FIG. 2 is a schematic representation of a screen of the present invention.


[0012] FIG. 3 is an image taken by the apparatus.







DETAILED DESCRIPTION

[0013] Referring now to the drawings wherein like reference numerals refer to similar or identical parts throughout the several views, and more specifically to FIG. 1 thereof, there is shown an imaging apparatus 10. The apparatus 10 comprises a display means 12 for displaying an image. The apparatus 10 comprises a camera means 14 distributed throughout the display means 12 to take images of objects in view of the display means 12.


[0014] The present invention pertains to a screen 16, as shown in FIG. 2. The screen 16 comprises a display 18 for displaying an image. The screen 16 comprises imaging means 11 for forming an image, such as a plurality of pinholes 20 or lenses distributed in the display 18. The screen 16 comprises a plurality of sensors 22 with at least one sensor of the plurality of sensors 22 in alignment with and corresponding with one pinhole of the plurality of pinholes 20 to receive light passing through the pinhole to image a ray of a specific size and a specific direction out from the display 18.


[0015] Connected to each sensor is preferably image processing means 15 having algorithms to enhance the image by addressing diffraction artifacts and light fall-off. Preferably, the algorithms are embedded in the screen and the data from the sensors are fed to them by connections built into the screen, as is well known to one skilled in the art.


[0016] Preferably, the sensors 22 are single pixel sensors 22. The sensors 22 are preferably arranged to converge to a point. Preferably, the sensors 22 are arranged to define an effective point of view behind the display 18. There are preferably multiple sensors 22 in alignment with each pinhole to combine multiple rays of light behind each pinhole. Preferably, each sensor is about 0.2 mm from the respective pinhole.


[0017] The present invention pertains to a method for imaging. The method comprises the steps of applying a first image on a first portion 24 of a screen 16. There is the step of receiving light from objects at a second portion 26 of the screen 16. The second portion 26 is distributed in the first portion 24. There is the step of forming a second image from the light from objects received at the second portion 26.


[0018] Preferably, the forming step includes the step of converging rays of the light to a desired point. The receiving step preferably includes the step of receiving the light at the second portion 26 having sensors 22 distributed in the first portion 24. Preferably, the receiving step includes the step of receiving rays of light at the sensors 22 that pass through pinholes 20 in the second portion 26.


[0019] In the operation of the invention, flat panel display technology, such as organic polymer displays, is used to integrate a large number of single-pixel (3-color) pinhole “cameras” across the display 18 surface. Using IC fabrication techniques, the single-pixel sensors 22 are fabricated and aligned with accurately positioned and shaped pinholes, so that each pixel accurately images a ray of a specific size in a specific direction out from the screen 16. By selecting and ordering a set of imaging pixels with rays that converge to a point, an image can be generated. The effective point of view of this image is (typically) behind the panel, providing a more natural image without wide-angle view distortion, while maintaining a thin panel profile. Alternatively, lenses can be used instead of the pinholes. The lenses are formed in a like manner as the pinholes.
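
As a rough illustration of this geometry, the ray imaged by one sensor/pinhole pair follows directly from the sensor's offset behind its pinhole. The sketch below only models that relationship; the 0.2 mm gap and the function name are assumptions for illustration, not values or interfaces defined by the design.

```python
# Illustrative sketch: the ray imaged by one single-pixel pinhole "camera".
# A sensor element offset behind its pinhole images a ray whose direction is
# fixed by that offset and by the pinhole-to-sensor distance.
import numpy as np

PINHOLE_SENSOR_GAP_MM = 0.2   # assumed, matching the ~0.2 mm spacing discussed later

def ray_for_sensor(pinhole_xy_mm, sensor_offset_xy_mm, gap_mm=PINHOLE_SENSOR_GAP_MM):
    """Return (origin, unit direction) of the ray imaged by one sensor element.

    pinhole_xy_mm       -- pinhole position in the panel plane (z = 0)
    sensor_offset_xy_mm -- sensor position relative to its pinhole, in the
                           sensor plane at z = -gap_mm (behind the panel)
    """
    pinhole = np.array([pinhole_xy_mm[0], pinhole_xy_mm[1], 0.0])
    sensor = np.array([pinhole_xy_mm[0] + sensor_offset_xy_mm[0],
                       pinhole_xy_mm[1] + sensor_offset_xy_mm[1],
                       -gap_mm])
    direction = pinhole - sensor          # the ray leaves the panel away from the sensor
    return pinhole, direction / np.linalg.norm(direction)

# A sensor offset 0.01 mm to the left of its pinhole images a ray tilted
# slightly to the right of the panel normal.
origin, direction = ray_for_sensor((10.0, 5.0), (-0.01, 0.0))
print(origin, direction)                  # direction ≈ [0.05, 0.00, 1.00] after normalizing
```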


[0020] By having many such single pixel “cameras” and intelligently designing the size, direction, and sensor distance of the pinholes 20, the angle of view (zoom), direction of view (pan and tilt), and position of view (shift) can be controlled by selection and order of a subset of pixels. Focus is fixed by the pinhole effect to be equal to the resolution of the selected set of pixels. By placing more than 1 pixel's sensors 22 behind a pinhole, and varying the geometry of the sensors 22, multiple rays can be combined behind a single pinhole.
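
One way to picture the selection step is as a filter over the full set of pixel rays: pan and tilt correspond to the chosen view direction, zoom to the angular window, and shift to moving the convergence point. The sketch below is a simplified model of that selection, with assumed parameter names and tolerances rather than anything specified by the design.

```python
# Illustrative sketch: choosing the subset of pixel rays that forms one virtual view.
import numpy as np

def select_pixels(origins, directions, convergence_pt, view_dir, half_angle_deg,
                  max_miss_mm=0.5):
    """Return indices of rays forming one virtual image.

    origins, directions -- (N, 3) ray origins on the panel and unit ray directions
    convergence_pt      -- virtual point of view (typically behind the panel);
                           moving it sideways gives "shift"
    view_dir            -- unit vector giving pan/tilt of the virtual camera
    half_angle_deg      -- half the angle of view; a smaller value means more zoom
    max_miss_mm         -- how close a ray's line must pass to the convergence point
    """
    rel = convergence_pt - origins
    miss = np.linalg.norm(np.cross(rel, directions), axis=1)   # point-to-line distance
    inside = directions @ view_dir >= np.cos(np.radians(half_angle_deg))
    return np.nonzero((miss <= max_miss_mm) & inside)[0]
```

The selected pixels would then be ordered by ray direction to lay them out as an image grid.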


[0021] Use of the panel as a scanner is possible by using all pixels in physical order due to the nearness of the imaged plane. The display 18 light can be used to illuminate the paper or other object placed on the panel. By scanning display 18 light across the panel, it is possible to increase contrast and provide data appropriate for fingerprint/hand recognition. Some image processing may be necessary to deal with the small pixel size compared with the total screen 16 area.
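
A scanning pass of this kind might look like the following sketch, in which a bright display stripe is stepped down the panel and only the sensor rows under the stripe are kept at each step. The driver calls show_stripe and read_sensors are hypothetical placeholders, not interfaces defined here.

```python
# Illustrative sketch: using the display itself as the scan illuminator.
import numpy as np

def scan_document(show_stripe, read_sensors, n_rows, stripe_height=8):
    """show_stripe(y0, h) -- hypothetical call that lights display rows y0..y0+h
    read_sensors()      -- hypothetical call returning the sensor array as a 2-D image
    """
    scan = None
    for y0 in range(0, n_rows, stripe_height):
        show_stripe(y0, stripe_height)       # illuminate one band of the page
        frame = read_sensors()               # capture while only that band is lit
        if scan is None:
            scan = np.zeros_like(frame)
        scan[y0:y0 + stripe_height, :] = frame[y0:y0 + stripe_height, :]
    return scan
```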


[0022] By using the data from pixels of non-convergent rays, distance and position information can be developed, providing a method for 3D position sensing.
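
For example, two pixels whose rays are not part of the same converging set but which both see the same object point can be triangulated. The following sketch uses the standard closest-point-between-two-lines construction and is only an illustration of the idea.

```python
# Illustrative sketch: estimating a 3-D position from two pixel rays.
import numpy as np

def triangulate(p1, d1, p2, d2):
    """p1, p2: ray origins on the panel; d1, d2: unit ray directions.
    Returns the midpoint of the shortest segment between the two rays."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                    # near zero if the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return ((p1 + s * d1) + (p2 + t * d2)) / 2.0
```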


[0023] A light pen, where the pen device emits a known color or modulated pattern, could be used to implement graphic tablet functionality. Changing the modulation of the pen could be used to indicate different color pens, varying pressure, the user's identity, etc.
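
One plausible detection scheme (an assumption, since the modulation is not specified here) is to correlate each sensor's time series against the known pen frequency and take the strongest-responding pixel as the pen position.

```python
# Illustrative sketch: locating a light pen that blinks at a known frequency.
import numpy as np

def locate_pen(frames, frame_rate_hz, pen_freq_hz):
    """frames: (T, H, W) stack of sensor frames; returns (row, col) of the pen."""
    t = np.arange(frames.shape[0]) / frame_rate_hz
    ref = np.exp(2j * np.pi * pen_freq_hz * t)    # complex tone, insensitive to phase
    demod = frames.reshape(frames.shape[0], -1)
    demod = demod - demod.mean(axis=0)            # drop the DC (ambient light) term
    score = np.abs(ref @ demod)                   # per-pixel response at the pen frequency
    return np.unravel_index(np.argmax(score), frames.shape[1:])
```

Different pens could use different frequencies, so the same correlation run against several reference tones would distinguish them.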


[0024] Stereo imaging can be performed by appropriately designing and selecting the sensor pixel rays.


[0025] The convergence point for a set of rays can be behind or in front of the panel, providing some control of effective point of view, and allowing a thin device close to the viewer to have a more “comfortable” point of view well behind the screen 16.


[0026] Many other effects can be generated by using the data from a superset of the pixels required for an image. The actual design of the imaging pixel patterns would be somewhat application specific, with tradeoffs being made for cost, resolution, zoom/pan/tilt/shift range, and special functions such as stereo imaging and 3D position sensing.


[0027] The basic geometry of a single-pixel pinhole camera, assuming a 640×480 resolution, square pixels, and a 70° horizontal angle of view, requires 70°/640≈0.109° per pixel. With the size of an RGB CCD element set at about 0.01 mm (0.007 mm for each color in current CCDs), the pinhole needs to be about 0.2 mm in front of the sensors 22, which is practical to implement. The size of the pinhole would need to be determined by the exposure and diffraction tradeoffs for a given application. No reflections should be generated from the sides of the path between the pinhole and the sensor. The refraction indices of the materials involved would change these values slightly. The biggest issue is the diffraction effects from the small pinhole size, which must be dealt with, or the geometry changed, to achieve an optimal image.
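
These numbers can be checked with simple arithmetic. The pinhole-diameter estimate below uses the common d ≈ √(2.44·λ·f) rule of thumb for a diffraction-limited pinhole, which is an assumption of this sketch; the text above leaves the diameter as an application-specific exposure/diffraction tradeoff.

```python
# Worked numbers for the single-pixel pinhole geometry.
import math

h_pixels, h_fov_deg = 640, 70.0
deg_per_pixel = h_fov_deg / h_pixels                 # 70° / 640 ≈ 0.109° per pixel
print(f"{deg_per_pixel:.3f} deg per pixel")

gap_mm = 0.2                                         # pinhole-to-sensor distance from the text
wavelength_mm = 550e-6                               # ~green light (assumed)
d_opt_mm = math.sqrt(2.44 * wavelength_mm * gap_mm)  # rule-of-thumb optimal diameter
print(f"optimal pinhole diameter ≈ {d_opt_mm * 1000:.0f} µm")   # ≈ 16 µm at a 0.2 mm gap
```

The very small diameter that falls out of such a short pinhole-to-sensor distance is one way to see why diffraction is the dominant issue here.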


[0028] The panel area consumed by the pinholes 20 is, ideally, the entire surface. This would eliminate the mismatch between the ray imaged by a single element and the equivalent ray from a traditional camera located at the convergence point. Note that a panel of nothing but holes is a window. The practical requirement of using a small portion of this area (so that the display 18 retains sufficient image quality) introduces some tradeoffs. The rays that are imaged by a series of pinholes 20 spaced around the panel are somewhat different from the optimal view from the convergence point. If the distance to the imaged object is known, this can be dealt with; otherwise there will be an un-imaged gridline region close to the screen 16 and overlapping pixels farther away. A compromise can be reached here based on the practical use distance of the screen 16; note that the un-imaged gridlines are similar to the incomplete coverage of the sensors 22 in a traditional CCD array with a focused image. Pinholes 20 that are significantly smaller than a display 18 pixel (<0.01″) would not significantly affect the viewability of a 100 DPI display 18. Also of interest is the transparency characteristic of the newer organic display 18 technologies, which permits placing an imaging panel behind the display 18 panel and synchronizing image capture with display 18 pixel blanking. Micro-machine techniques to adjust the image characteristics can also be used.
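
The gridline/overlap tradeoff can be pictured with a very simplified model in which each ray's footprint grows as z·tan(θ) while the pinhole pitch stays fixed; the pitch and angle below are assumed values for illustration only.

```python
# Illustrative arithmetic: where ray footprints stop leaving gaps.
import math

pinhole_pitch_mm = 2.0              # assumed spacing between imaging pinholes
ray_angle_deg = 0.109               # angular size of one imaged ray (from above)
crossover_mm = pinhole_pitch_mm / math.tan(math.radians(ray_angle_deg))
# Closer than this, footprints are smaller than the pitch (un-imaged gridlines);
# farther away, they are larger (overlapping pixels).
print(f"crossover ≈ {crossover_mm / 1000:.1f} m")     # ≈ 1.1 m for these values
```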


[0029] Image quality of pinhole cameras is limited by the tradeoff between the diffraction of a small pinhole and the geometric resolution imposed by the pinhole size. Unlike film, in the electronic domain there is the ability to work with the data that comes from the sensors 22 in real time, doing a convolution or other filtering to compensate for the near-field (Fresnel) and far-field (Fraunhofer) diffraction effects. If necessary, this can be done for multiple frequencies based on the 3-color detectors, compensating for or using the positional offset of the triplet sensors 22 to improve the image further.
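
As one concrete possibility (the text does not fix the filter), the per-channel compensation could be a Wiener-style deconvolution against a modeled blur kernel; the Gaussian stand-in for the diffraction pattern below is an assumption.

```python
# Illustrative sketch: frequency-domain compensation of pinhole diffraction blur.
import numpy as np

def wiener_deblur(image, psf, noise_to_signal=1e-2):
    """image: 2-D sensor data for one color channel.
    psf: blur kernel, same shape as image, centered at [0, 0]."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))

def gaussian_psf(shape, sigma_px=1.5):
    """Rough stand-in for the diffraction blur of a small pinhole."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma_px ** 2))
    return np.fft.ifftshift(psf / psf.sum())   # move the center to [0, 0] for fft2
```

Because the blur scales with wavelength, each of the three color channels could use its own kernel width, matching the per-frequency compensation mentioned above.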


[0030] Images were taken with a small CCD board camera having a ¼″ CCD array. An image was taken with the supplied 3.8 mm lens, and a roughly corresponding image was taken using a good quality 100 micron (0.004″) pinhole. This pinhole diameter is optimal for a 2-3 mm focal length. The image was generated by handholding the pinhole above the CCD sensor, with minimal shielding of the sensor from stray light, which reduced the contrast somewhat. FIG. 3 shows the resulting image taken by the array.


[0031] Interconnection and interfacing to the imaging pixels is done by a scanning technique similar to existing CCDs. However, the potential for multiple megapixels will require extensions to enable real-time video performance. This is accomplished by parallel scanning of several rows at a time, smart grouping of pixels into subsets that are not used at the same time, or an addressing technique. The interface from the panel to the host should not be burdened with extraneous pixel data. Use of an FPGA device utilizing Adaptive Computing techniques to select, order, and otherwise process the data coming from the sensors 22 allows full flexibility and performance to be maintained. An LVDS electrical interface to the host is provided for interconnect, in parallel with the display 18 drive interface.
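
A rough bandwidth estimate shows why sending only the selected subset matters; the pixel counts, frame rate, and byte depth below are assumptions for illustration.

```python
# Illustrative arithmetic: host-link bandwidth for subset readout vs. full readout.
def link_mbps(n_pixels, bytes_per_pixel=3, fps=30):
    return n_pixels * bytes_per_pixel * fps * 8 / 1e6

print(link_mbps(640 * 480))       # ≈ 221 Mbit/s for one VGA virtual view
print(link_mbps(4_000_000))       # ≈ 2880 Mbit/s if every imaging pixel were sent
```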


[0032] The image processing addresses diffraction artifacts and light fall-off due to each lens or pinhole. The imaging data from each sensor is enhanced by image processing algorithms that are well known to one skilled in the art for this purpose. The algorithms themselves are preferably implemented in a part of the screen itself, which is really just a very large chip.


[0033] Although the invention has been described in detail in the foregoing embodiments for the purpose of illustration, it is to be understood that such detail is solely for that purpose and that variations can be made therein by those skilled in the art without departing from the spirit and scope of the invention except as it may be described by the following claims.


Claims
  • 1. An imaging apparatus comprising: a display means for displaying an image; and a camera means distributed throughout the display means to take images of objects in view of the display means.
  • 2. A screen comprising: a display for displaying an image; a plurality of imaging means for forming an image distributed in the display; and a plurality of sensors with at least one sensor of the plurality of sensors in alignment with and corresponding with one imaging means of the plurality of imaging means to receive light passing through the imaging means to image a ray of a specific size and a specific direction out from the display.
  • 3. A screen as described in claim 2 wherein each imaging means is a pinhole.
  • 4. A screen as described in claim 3 wherein the sensors are single pixel sensors.
  • 5. A screen as described in claim 4 wherein the sensors are arranged to converge to a point.
  • 6. A screen as described in claim 5 wherein the sensors are arranged to define an effective point of view behind the display.
  • 7. A screen as described in claim 6 wherein there are multiple sensors in alignment with each pinhole to combine multiple rays of light behind each pinhole.
  • 8. A screen as described in claim 7 wherein each sensor is about 0.2 mm from the respective pinhole.
  • 9. A screen as described in claim 2 wherein each imaging means is a lens.
  • 10. A screen as described in claim 4 including image processing means connected to each sensor for enhancing the image.
  • 11. A screen as described in claim 10 wherein the image processing means compensates for diffraction artifacts and light fall off.
  • 12. A screen as described in claim 11 wherein the image processing means is in contact with the display.
  • 13. A method for imaging comprising the steps of: applying a first image on a first portion of a screen; receiving light from objects at a second portion of the screen, the second portion distributed in the first portion; and forming a second image from the light from objects received at the second portion.
  • 14. A method as described in claim 13 wherein the forming step includes the step of converging rays of the light to a desired point.
  • 15. A method as described in claim 14 wherein the receiving step includes the step of receiving the light at the second portion having sensors distributed in the first portion.
  • 16. A method as described in claim 15 wherein the receiving step includes the step of receiving rays of light at the sensors that pass through pinholes in the second portion.