The invention relates to user interface systems for use in an imaging device.
In a conventional film and/or digital camera, a photographer views an image of a scene to be captured by observing the scene through an optical viewfinder. The viewfinder focuses light from a portion of the scene onto the eye of the photographer, defining the area of the scene that will be included in an image captured with the current camera settings. Traditionally, cameras are held in a fixed position relative to a photographer's eyes during image composition and capture so that the photographer can view the focused light provided by the viewfinder.
Recently, hybrid film/digital cameras, digital cameras and video cameras have begun to incorporate electronic displays that are operable in a mode that allows such cameras to present a “virtual viewfinder”, which captures images electronically during composition and presents to the photographer a stream of the captured images on an electronic display. When the virtual viewfinder shows an image of the scene that is pleasing to the photographer, the photographer can cause an image of the scene to be stored. While some of the displays that are used for virtual viewfinder purposes are incorporated into a camera like a conventional optical viewfinder, it is more common for cameras to present the virtual viewfinder images on a display that is external to the camera. When an external display is used as a virtual viewfinder, a photographer must typically position the camera at a distance from the photographer's face so that the photographer can see what is being displayed.
It will be appreciated that, while a camera is so positioned, it can be challenging for the photographer to operate camera controls while also watching the virtual viewfinder. Thus, what is needed in the art is a camera that allows a photographer to compose an image in the virtual viewfinder mode of operation without requiring that the photographer operate a plurality of controls. Of particular interest in the art is the ability of a photographer to rapidly and intuitively adjust the field of view of the image capture system of such a camera, such as by adjusting the zoom settings, without requiring the photographer to make adjustments using manual controls.
It will further be appreciated that, as hybrid, digital, and video cameras become smaller, there is a general desire in the art of camera design to reduce the number of manual controls that are required to operate the camera, as each manual control on the camera requires at least a minimum amount of camera space in which to operate. Accordingly, there is a need for cameras that provide user controls, such as a user controlled zooming capability, but that do so without requiring independent controllers for zoom and/or aspect ratio adjustment.
One approach to meeting this need is to combine multiple camera functions into a single camera controller, as described in U.S. Pat. No. 5,970,261, entitled “Zoom Camera, Mode Set Up Device And Control Method For Zoom Camera”, filed by Ishiguro et al. on Sep. 11, 1997. However, this approach is confusing for novice users and still requires users to make zoom adjustments using a manual controller.
In the art of controlling display devices, it is known to monitor the movement of people and things within a space so that control inputs can be made in response to sensed movement. U.S. Patent Publication No. 2003/0210255, entitled “Image Display Processing Apparatus, Image Display Processing Method and Computer Program”, filed by Hiraki on Mar. 13, 2003, describes an image display processing method and program that determines what is to be presented on an image display based upon the three-dimensional movement of a controller. This system allows a user to scroll about in an image to be presented on a display by moving the controller. Gesture based methods for controlling an image display are also known. For example, the EyeToy camera and PlayStation video game console sold by Sony Computer Entertainment America Inc. (SCEA), San Mateo, Calif., USA allows a user to control action in a video game based upon body movements of the user.
Such techniques are not well suited for use during an image capture operation, as the gesticulations and movements required thereby can interfere with the scene image being captured, can interfere with the photographer's physical ability to capture an image, and can consume substantial amounts of the electrical power and processing power necessary to operate the camera.
What is needed in the art, therefore, is a camera control system and method for operating a camera, such as a digital camera, that allows a user to execute control inputs to the camera, such as selecting a zoom setting and/or an aspect ratio, in a more intuitive manner.
In one aspect of the invention, a method is provided for operating an imaging system capable of forming images based upon adjustable image capture settings and having a viewing frame in which evaluation images of a scene are observable. In accordance with the method, an initial viewing distance from the viewing frame to an anatomical feature of the user is detected, and an initial image capture setting is determined.
A change is detected in the viewing distance, and a revised image capture setting is determined based upon the initial image capture setting and an extent of the change in the viewing distance. The image capture setting is adjusted based upon the revised image capture setting.
In another aspect of the invention, a method is provided for operating an image capture system having an image capture device. In accordance with this method, a field of view in a scene is determined based upon a portion of the scene that is observable by a user who views the scene using a viewing frame that is positioned separately from the image capture device; at least one image capture setting is determined based upon the determined field of view; and an image of the scene is captured using the determined image capture setting to provide an image of the field of view.
In still another aspect of the invention, an image capture device is provided. The image capture device has:
an image capture system adapted to receive light and to form an image based upon the received light; a viewing frame allowing a user of the image capture system to view an image of the scene and to define a field of view in the scene based upon what the user views using the viewing frame; and a sensor system sampling a viewing area behind the viewing frame and providing a positioning signal indicative of a distance from the viewing frame to a part of the user's body; and
a controller adapted to determine an image capture setting based upon the positioning signal, to cause an image of the scene to be captured and to cause an output image to be generated that is based upon the determined setting.
In still another aspect of the invention, an image capture device is provided. The image capture device has:
an image capture device adapted to receive light and to form an image based upon the received light, a viewing frame defining a framing area through which a user views a portion of the scene, and a viewing frame position determining circuit adapted to detect the position of the viewing frame.
An eye position determining circuit is adapted to detect the position of an eye.
A controller is adapted to provide an image based upon an image captured by the image capture system, the position of the viewing frame and the position of the eye of the user, so that the image corresponds to the portion of the scene that is within the field of view as observed by the eye of the user.
In yet another embodiment, an image capture device is provided.
The image capture device has a body having an image capture means for capturing an image of a scene in accordance with at least one image capture setting.
A viewing frame is provided for allowing a user to observe a sequence of images depicting a portion of a scene during image composition.
Means are provided for determining a viewing distance from the viewing frame to the user, and for determining at least one image capture setting based upon any detected change in the viewing distance during image composition.
A setting means is provided for setting the image capture means in accordance with the determined image capture setting.
Lens system 23 can be of a fixed focus type or can be manually or automatically adjustable. In the embodiment shown in
The focus position of lens system 23 can be automatically selected using a variety of known strategies. For example, in one embodiment, image sensor 24 is used to provide multi-spot autofocus using what is called the “through focus” or “whole way scanning” approach. In such an approach the scene is divided into a grid of regions or spots, and the optimum focus distance is determined for each image region. The optimum focus distance for each region is determined by moving lens system 23 through a range of focus distance positions, from the near focus distance to the infinity position, while capturing images. Depending on the design of digital camera 12, between four and thirty-two images may need to be captured at different focus distances. Typically, capturing images at eight different distances provides suitable accuracy.
The captured image data is then analyzed to determine the optimum focus distance for each image region. This analysis begins by band-pass filtering the sensor signal using one or more filters, as described in commonly assigned U.S. Pat. No. 5,874,994 “Filter Employing Arithmetic Operations for an Electronic Synchronized Digital Camera” filed by Xie et al. on Dec. 11, 1995, the disclosure of which is herein incorporated by reference. The absolute value of the bandpass filter output for each image region is then peak detected, in order to determine a focus value for that image region, at that focus distance. After the focus values for each image region are determined for each captured focus distance position, the optimum focus distances for each image region can be determined by selecting the captured focus distance that provides the maximum focus value, or by estimating an intermediate distance value, between the two measured captured focus distances which provided the two largest focus values, using various interpolation techniques.
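By way of an illustrative sketch only, the following Python fragment shows the general shape of this selection step; the simple [-1, 2, -1] band-pass kernel and the parabolic interpolation around the peak are assumptions chosen for illustration, not the specific filters of the '994 patent:

```python
import numpy as np

def focus_value(region, kernel=(-1.0, 2.0, -1.0)):
    """Band-pass filter an image region and peak-detect the absolute
    response to yield one focus value (higher means sharper)."""
    response = np.abs(np.convolve(region.ravel(), kernel, mode="valid"))
    return float(response.max())

def best_focus_distance(images_by_distance, region):
    """Pick the focus distance whose capture maximizes the focus value
    for one image region, refining between samples by a parabolic fit."""
    distances = sorted(images_by_distance)
    values = [focus_value(images_by_distance[d][region]) for d in distances]
    i = int(np.argmax(values))
    if 0 < i < len(values) - 1:
        denom = values[i - 1] - 2.0 * values[i] + values[i + 1]
        if denom != 0.0:
            # Estimate an intermediate distance between sampled positions.
            shift = 0.5 * (values[i - 1] - values[i + 1]) / denom
            return distances[i] + shift * (distances[i + 1] - distances[i])
    return distances[i]
```

Here images_by_distance maps each sampled focus distance to its captured frame, and region is a tuple of slices selecting one grid region.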
The lens focus distance to be used to capture a digital image can now be determined. In a preferred embodiment, the image regions corresponding to a target object (e.g. a person being photographed) are determined. The focus position is then set to provide the best focus for these image regions. For example, an image of a scene can be divided into a plurality of sub-divisions. A focus evaluation value representative of the high frequency component contained in each subdivision of the image can be determined and the focus evaluation values can be used to determine object distances as described in commonly assigned U.S. Pat. No. 5,877,809 entitled “Method Of Automatic Object Detection In An Image”, filed by Omata et al. on Oct. 15, 1996, the disclosure of which is herein incorporated by reference. If the target object is moving, object tracking may be performed, as described in commonly assigned U.S. Pat. No. 6,067,114 entitled “Detecting Compositional Change in Image” filed by Omata et al. on Oct. 26, 1996, the disclosure of which is herein incorporated by reference. In an alternative embodiment, the focus values determined by “whole way scanning” are used to set a rough focus position, which is refined using a fine focus mode, as described in commonly assigned U.S. Pat. No. 5,715,483, entitled “Automatic Focusing Apparatus and Method”, filed by Omata et al. on Oct. 11, 1998, the disclosure of which is herein incorporated by reference.
In one embodiment, bandpass filtering and other calculations used to provide auto-focus information for digital camera 12 are performed by digital signal processor 26. In this embodiment, digital camera 12 uses a specially adapted image sensor 24, as is shown in commonly assigned U.S. Pat. No. 5,668,597 entitled “An Electronic Camera With Rapid Automatic Focus Of An Image Upon A Progressive Scan Image Sensor”, filed by Parulski et al. on Dec. 30, 1994, the disclosure of which is herein incorporated by reference, to automatically set the lens focus position. As described in the '597 patent, only some of the lines of sensor photoelements (e.g. only ¼ of the lines) are used to determine the focus. The other lines are eliminated during the sensor readout process. This reduces the sensor readout time, thus shortening the time required to focus lens system 23.
In an alternative embodiment, digital camera 12 uses a separate optical or other type (e.g. ultrasonic) of rangefinder 27 to identify the subject of the image and to select a focus position for lens system 23 that is appropriate for the distance to the subject. Rangefinder 27 can operate lens driver 25, directly or as shown in
A feedback loop is established between lens driver 25 and camera controller 32 so that camera controller 32 can accurately set the focus position of lens system 23.
Lens system 23 is also optionally adjustable to provide a variable zoom. In the embodiment shown, lens driver 25 automatically adjusts the position of one or more mobile elements (not shown) relative to one or more stationary elements (not shown) of lens system 23, based upon signals from signal processor 26, an automatic rangefinder system 27, and/or controller 32, to provide a zoom magnification. Lens system 23 can alternatively be of fixed magnification, can be manually adjustable, and/or can employ other known arrangements for providing an adjustable zoom.
Light from the scene that is focused by lens system 23 onto image sensor 24 is converted into image signals representing an image of the scene. Image sensor 24 can comprise a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, or any other electronic image sensor known to those of ordinary skill in the art. The image signals can be in digital or analog form.
Signal processor 26 receives image signals from image sensor 24 and transforms the image signals into an image in the form of digital data. The digital image can comprise one or more still images and/or a stream of apparently moving images such as a video segment. Where the digital image data comprises a stream of apparently moving images, the digital image data can comprise image data stored in an interleaved or interlaced image form, a sequence of still images, and/or other forms known to those of skill in the art of digital video.
Signal processor 26 can apply various image processing algorithms to the image signals when forming a digital image. These can include but are not limited to color and exposure balancing, interpolation and compression. Where the image signals are in the form of analog signals, signal processor 26 also converts these analog signals into a digital form. In certain embodiments of the invention, signal processor 26 can be adapted to process the image signals so that the digital image formed thereby appears to have been captured at a different zoom setting than that actually provided by the optical lens system. This can be done by using a subset of the image signals from image sensor 24 and interpolating that subset to form the digital image. This is known generally in the art as “digital zoom”. Such digital zoom can be used to provide electronically controllable zoom adjustment in fixed focus, manual focus, and even automatically adjustable focus systems.
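For illustration, a minimal single-channel sketch of such a digital zoom, assuming a NumPy image array; the bilinear resampling here stands in for whatever interpolation the signal processor actually applies:

```python
import numpy as np

def digital_zoom(frame, zoom_factor):
    """Emulate a longer focal length: crop the central 1/zoom_factor of
    a single-channel frame, then bilinearly interpolate the crop back
    to the original pixel count."""
    h, w = frame.shape
    ch, cw = max(2, int(h / zoom_factor)), max(2, int(w / zoom_factor))
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = frame[top:top + ch, left:left + cw].astype(float)
    ys, xs = np.linspace(0, ch - 1, h), np.linspace(0, cw - 1, w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, ch - 1), np.minimum(x0 + 1, cw - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    # Weighted sum of the four neighboring sensor samples.
    return (crop[y0][:, x0] * (1 - wy) * (1 - wx)
            + crop[y0][:, x1] * (1 - wy) * wx
            + crop[y1][:, x0] * wy * (1 - wx)
            + crop[y1][:, x1] * wy * wx)
```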
Controller 32 controls the operation of image capture system 10 during imaging operations, including but not limited to the operation of image capture system 22, display 30 and memory such as memory 40. Controller 32 causes image sensor 24, signal processor 26, display 30 and memory 40 to capture, present and store original images in response to signals received from a user input system 34, data from signal processor 26 and data received from optional sensors 36. Controller 32 can comprise a microprocessor such as a programmable general purpose microprocessor, a dedicated microprocessor or microcontroller, a combination of discrete components or any other system that can be used to control operation of image capture system 10.
Controller 32 cooperates with a user input system 34 to allow image capture system 10 to interact with a user. User input system 34 can comprise any form of transducer or other device capable of receiving an input from a user and converting this input into a form that can be used by controller 32 in operating image capture system 10. For example, user input system 34 can comprise a touch screen input, a touch pad input, a 4-way switch, a 6-way switch, an 8-way switch, a stylus system, a trackball system, a joystick system, a voice recognition system, a gesture recognition system or other such systems. In the digital camera 12 embodiment of image capture system 10 shown in
Sensors 36 are optional and can include light sensors and other sensors known in the art that can be used to detect conditions in the environment surrounding image capture system 10 and to convert this information into a form that can be used by controller 32 in governing operation of image capture system 10. Sensors 36 can include audio sensors adapted to capture sounds. Such audio sensors can be of conventional design or can be capable of providing controllably focused audio capture such as the audio zoom system described in U.S. Pat. No. 4,862,278, entitled “Video Camera Microphone with Zoom Variable Acoustic Focus”, filed by Dann et al. on Oct. 14, 1986. Sensors 36 can also include biometric sensors adapted to detect characteristics of a user for security and affective imaging purposes. Where a need for illumination is determined, controller 32 can cause a scene illumination system 37 such as a light, strobe, or flash system to emit light.
Controller 32 causes an image signal and a corresponding digital image to be formed when a trigger condition is detected. Typically, the trigger condition occurs when a user depresses shutter trigger button 60; however, controller 32 can determine that a trigger condition exists at a particular time, or at a particular time after shutter trigger button 60 is depressed. Alternatively, controller 32 can determine that a trigger condition exists when optional sensors 36 detect certain environmental conditions, such as optical or radio frequency signals. Further, controller 32 can determine that a trigger condition exists based upon affective signals obtained from the physiology of a user.
Controller 32 can also be used to generate metadata in association with each image. Metadata is data that is related to a digital image or a portion of a digital image but that is not necessarily observable in the image itself. In this regard, controller 32 can receive signals from signal processor 26, camera user input system 34 and other sensors 36 and, optionally, generate metadata based upon such signals. The metadata can include but is not limited to information such as the time, date and location that the original image was captured, the type of image sensor 24, mode setting information, integration time information, taking lens unit setting information that characterizes the process used to capture the original image, and processes, methods and algorithms used by image capture system 10 to form the original image. The metadata can also include but is not limited to any other information determined by controller 32 or stored in any memory in image capture system 10, such as information that identifies image capture system 10 and/or instructions for rendering or otherwise processing the digital image with which the metadata is associated. The metadata can also comprise an instruction to incorporate a particular message into the digital image when presented. Such a message can be a text message to be rendered when the digital image is presented or rendered. The metadata can also include audio signals. The metadata can further include digital image data. In one embodiment of the invention, where digital zoom is used to form the image from a subset of the captured image, the metadata can include image data from portions of the image that are not incorporated into the subset that is used to form the digital image. The metadata can also include any other information entered into image capture system 10.
The digital images and optional metadata can be stored in a compressed form. For example, where the digital image comprises a sequence of still images, the still images can be stored in a compressed form such as by using the JPEG (Joint Photographic Experts Group) ISO 10918-1 (ITU-T.81) standard. This JPEG compressed image data is stored using the so-called “Exif” image format defined in the Exchangeable Image File Format version 2.2 published by the Japan Electronics and Information Technology Industries Association, JEITA CP-3451. Similarly, other compression systems such as the MPEG-4 (Moving Picture Experts Group) or Apple QuickTime™ standard can be used to store digital image data in a video form. Other image compression and storage forms can be used.
The digital images and metadata can be stored in a memory such as memory 40. Memory 40 can include conventional memory devices including solid state, magnetic, optical or other data storage devices. Memory 40 can be fixed within image capture system 10 or it can be removable. In the embodiment of
In the embodiment shown in
Signal processor 26 and/or controller 32 also use image signals or the digital images to form evaluation images which have an appearance that corresponds to original images stored in image capture system 10 and are adapted for presentation on display 30. This allows users of image capture system 10 to use a display such as display 30 to view images that correspond to original images that are available in image capture system 10. Such images can include, for example, images that have been captured by image capture system 22 and/or that were otherwise obtained, such as by way of communication module 54, and stored in a memory such as memory 40 or removable memory 48.
Display 30 can comprise, for example, a color liquid crystal display (LCD), organic light emitting display (OLED) also known as an organic electro-luminescent display (OELD) or other type of video display. Display 30 can be external as is shown in
Signal processor 26 and/or controller 32 can also cooperate to generate other images such as text, graphics, icons and other information for presentation on display 30, allowing interactive communication between controller 32 and a user of image capture system 10, with display 30 providing information to the user and the user using user input system 34 to interactively provide information to image capture system 10. Image capture system 10 can also have other displays such as a segmented LCD or LED display (not shown) which can also permit signal processor 26 and/or controller 32 to provide information to the user. This capability is used for a variety of purposes such as establishing modes of operation, entering control settings and user preferences, and providing warnings and instructions to a user of image capture system 10. Other systems such as known systems and actuators for generating audio signals, vibrations, haptic feedback and other forms of signals can also be incorporated into image capture system 10 for use in providing information, feedback and warnings to the user of image capture system 10.
Typically, display 30 has less imaging resolution than image sensor 24. Accordingly, signal processor 26 reduces the resolution of the image signal or digital image when forming evaluation images adapted for presentation on display 30. Down sampling and other conventional techniques for reducing the overall imaging resolution can be used. For example, resampling techniques such as those described in commonly assigned U.S. Pat. No. 5,164,831, “Electronic Still Camera Providing Multi-Format Storage Of Full And Reduced Resolution Images”, filed by Kuchta et al. on Mar. 15, 1990, can be used. The evaluation images can optionally be stored in a memory such as memory 40. The evaluation images can be adapted to be provided to an optional display driver 28 that can be used to drive display 30. Alternatively, the evaluation images can be converted into signals that can be transmitted by signal processor 26 in a form that directly causes display 30 to present the evaluation images. Where this is done, display driver 28 can be omitted.
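A minimal sketch of such a resolution reduction, assuming a NumPy array and simple stride-based down sampling rather than the resampling of the '831 patent:

```python
def make_evaluation_image(full_res, display_h, display_w):
    """Reduce a full-resolution array to roughly display resolution by
    keeping every Nth pixel; crude but cheap, as a stand-in for better
    resampling filters."""
    h, w = full_res.shape[:2]
    return full_res[::max(1, h // display_h), ::max(1, w // display_w)]
```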
Image capture system 10 can obtain original images for processing in a variety of ways. For example, in a digital camera embodiment, image capture system 10 can capture an original image using an image capture system 22 as described above. Imaging operations that can be used to obtain an original image using image capture system 22 include a capture process and can optionally also include a composition process and a verification process.
During the composition process, controller 32 provides an electronic viewfinder effect on display 30. In this regard, controller 32 causes signal processor 26 to cooperate with image sensor 24 to capture preview digital images during composition and to present corresponding evaluation images on display 30.
In the embodiment shown in
The capture process is executed in response to controller 32 determining that a trigger condition exists. In the embodiment of
During the verification process, an evaluation image corresponding to the original digital image is optionally formed for presentation on display 30 by signal processor 26 based upon the image signal. In one alternative embodiment, signal processor 26 converts each image signal into a digital image and then derives the corresponding evaluation image from the original digital image. The corresponding evaluation image is supplied to display 30 and is presented for a period of time. This permits a user to verify that the digital image has a preferred appearance.
Original images can also be obtained by image capture system 10 in ways other than image capture. For example, original images can be conveyed to image capture system 10 when such images are recorded on a removable memory that is operatively associated with memory interface 50. Alternatively, original images can be received by way of communication module 54. For example, where communication module 54 is adapted to communicate by way of a cellular telephone network, communication module 54 can be associated with a cellular telephone number or other identifying number that another user of the cellular telephone network, such as the user of a telephone equipped with a digital camera, can use to establish a communication link with image capture system 10 and transmit images, which can be received by communication module 54. Accordingly, there are a variety of ways in which image capture system 10 can receive images and, therefore, in certain embodiments of the present invention, it is not essential that image capture system 10 have an image capture system so long as other means such as those described above are available for importing images into image capture system 10.
An initial viewing distance between user 6 and a viewing frame through which user 6 observes an image is then determined (step 84). The initial viewing distance is a relative measure of the degree of separation between a selected body feature of user 6, such as a head, face, neck or chest, and the viewing frame. In the embodiment illustrated in
The initial viewing distance is measured using a user rangefinder 70. As is seen in
In still another embodiment, user rangefinder 70 can comprise an optional user imager 72 that is adapted to capture images of the presentation space for display 30 and that can provide these images to controller 32 and/or signal processor 26 so that the viewing distance of user 6 relative to display 30 can be determined by analysis of these images. Additionally, the degree of separation may be determined from the apparent dimensions of a particular feature, such as the separation between the eyes of user 6.
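As a sketch of the eye-separation variant, under a pinhole-camera assumption and an assumed average interpupillary distance, the apparent separation of the eyes in the user imager shrinks in proportion to the viewing distance:

```python
ASSUMED_IPD_MM = 63.0  # assumed average adult interpupillary distance

def viewing_distance_mm(eye_separation_px, focal_length_px,
                        ipd_mm=ASSUMED_IPD_MM):
    """Pinhole model: separation_px = focal_px * IPD / distance, so the
    viewing distance follows from the measured pixel separation."""
    return focal_length_px * ipd_mm / eye_separation_px
```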
After an initial viewing distance is determined, the initial viewing distance is associated with an initial image capture setting. Typically, the initial setting is an image capture setting that is used to obtain the initial evaluation image. For example, the initial setting can comprise a zoom setting that helps to define the initial field of view of an initial evaluation image 96 shown in
In the embodiment shown in
There are a variety of ways in which an image capture setting can be determined based on a change in viewing distance.
As shown in
Similarly, as is illustrated in
In one embodiment, controller 32 causes adjustments to the zoom setting to be made in relation to the change in viewing distance. In other embodiments, signal processor 26 or other circuits and systems can cause zoom adjustments to be made. The relative extent to which the zoom level is adjusted based upon the change in viewing distance can be preprogrammed or can be manually set by user 6. This relation can be linear or it can follow other useful functional relationships including, but not limited to, logarithmic and other non-linear functions. Controller 32 can also consider other factors in determining the relative extent of the zoom adjustment to make in response to a detected change in the viewing distance. In one example, the relative extent of zoom adjustment per unit change in viewing distance can be established based upon a particular mode setting, such as a portrait or so-called macro mode setting. Alternatively, the relative extent of zoom adjustment per unit change in viewing distance can be determined based upon a determined distance from camera 12 to a subject of a scene, so that when images or video are captured at relatively short distances, for example in a macro, portrait, or close-up image capture mode, only a modest change in the viewing position is necessary to effect a given degree of change in zoom magnification, while when images are captured at a relatively long distance to the subject of a scene, for example in a panoramic or landscape mode, a comparatively larger change in position can be necessary to effect the same degree of change in zoom magnification. Other factors, including but not limited to the time rate of change of the viewing distance, can also be considered by controller 32 in determining the distance that viewing frame 66 must be moved for controller 32 to cause a specific degree of adjustment in zoom settings.
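One way such a relation might look in code is sketched below; the logarithmic mapping and the per-mode gains are illustrative assumptions, not values taken from the disclosure:

```python
# Hypothetical gains: millimeters of viewing-distance change needed to
# double (or halve) the zoom magnification in each capture mode.
MODE_GAIN_MM = {"macro": 40.0, "portrait": 80.0, "landscape": 160.0}

def revised_zoom(initial_zoom, delta_distance_mm, mode="portrait",
                 zoom_min=1.0, zoom_max=10.0):
    """Logarithmic relation: each MODE_GAIN_MM[mode] of outward movement
    doubles magnification; inward movement halves it. Close-range modes
    use a smaller gain so only modest motion is needed."""
    zoom = initial_zoom * 2.0 ** (delta_distance_mm / MODE_GAIN_MM[mode])
    return min(max(zoom, zoom_min), zoom_max)
```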
FIGS. 7A-7B, 8A-8B and 9A-9B illustrate one example of a way to determine the extent of variation in zoom settings based upon a detected change in viewing distance. In the embodiment of
When, as is shown in
When, as shown in
Accordingly, when transmissive type viewing frame 110 is positioned more distantly from the user, camera 12 is prepared to capture an image that is magnified (telephoto) to an extent that is defined generally by what the user actually desires to include in the image. Similarly, when the transmissive type viewing frame 110 is positioned more closely to user 6, camera 12 is prepared to capture a wide angle view.
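The geometry behind this behavior can be sketched as follows; the function names and the small-angle matching of fields of view are assumptions for illustration:

```python
import math

def frame_fov_deg(frame_width_mm, eye_to_frame_mm):
    """Horizontal angle the transmissive frame subtends at the eye: a
    distant frame subtends a narrow (telephoto) angle, a near frame a
    wide one."""
    return math.degrees(2.0 * math.atan(0.5 * frame_width_mm / eye_to_frame_mm))

def zoom_to_match(frame_width_mm, eye_to_frame_mm, widest_sensor_fov_deg):
    """Approximate zoom factor so the capture field of view matches what
    the user sees through the frame."""
    return widest_sensor_fov_deg / frame_fov_deg(frame_width_mm, eye_to_frame_mm)
```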
It will be noted that in
Either an image generating type viewing frame 64 or a transmissive type viewing frame 110 can be fixed to digital camera 12 or, as shown in
A variety of approaches are known for compensating for the conventional parallax problems that occur when a viewfinder system is provided having a different optical path than an image capture optical system. In one solution, lens driver 25 can be adapted to adjust the optical axis of lens system 23 and, if necessary, the zoom position of lens system 23, so that the field of view of scene 120 provided by lens system 23 at image sensor 24 approximates the field of view observed by user 6 through viewing frame 110. In other solutions, when controller 32 determines that there is a separation between the optical axis of the view of user 6 through viewing frame 110 and the optical axis of lens system 23, controller 32 can cause lens driver 25 to widen the field of view of lens system 23 to an extent that encompasses at least a significant portion of the field of view of the scene that is observable to the viewer through the viewfinder. Controller 32 and/or signal processor 26 can cooperate to form an image based only upon signals from the portion of the image sensor that has received light from the portion of the scene that corresponds to the portion that is observable to user 6 via viewing frame 110, or at least the portion of the scene that is estimated to correspond to the portion that is observable to user 6 via viewing frame 110.
Alternatively, controller 32 and/or signal processor 26 can receive an image from image sensor 24 containing more than the portion of the image that is visible to user 6 through the viewfinder and can cause an image to be formed by extracting the corresponding portion and, optionally, resampling the extracted portion. It will be appreciated that, in a typical imaging situation, the optical axis of the viewfinder system is fixed relative to the optical axis of the image capture system. This greatly simplifies the correction scheme that must be applied. However, there is a need for a system that can determine the field of view that is visible to a user 6 through a separate transmissive viewing frame at a moment of capture and cause an image to be captured that reflects that field of view.
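A minimal sketch of this extract-and-crop correction, assuming the frame's line of sight has already been mapped to a pixel location on the deliberately wide capture:

```python
def parallax_crop(sensor_image, frame_center_px, frame_fov_deg, sensor_fov_deg):
    """Cut from a wide capture the sub-window estimated to match what
    the user sees through the offset viewing frame."""
    h, w = sensor_image.shape[:2]
    frac = min(frame_fov_deg / sensor_fov_deg, 1.0)  # fraction of the wide view
    ch, cw = max(1, int(h * frac)), max(1, int(w * frac))
    cx, cy = frame_center_px
    top = min(max(cy - ch // 2, 0), h - ch)
    left = min(max(cx - cw // 2, 0), w - cw)
    return sensor_image[top:top + ch, left:left + cw]
```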
In accordance with the method, a user 6 directs digital camera 12 to enter composition mode (step 130). An initial evaluation image is then observable using transmissive viewing frame 110 (step 132) and an initial position of the eyes 8 of user 6 is determined (step 134). This can be done in a variety of ways.
In one embodiment, the position of the eyes 8 of user 6 is determined based upon a fixed relationship between the eyes 8 and the camera image capture system 22. For example, as shown in FIG. 10, user 6 is shown wearing body 20 containing image capture system 22 of camera 12. In this embodiment, there is a generally consistent X-axis and Y-axis relationship between the position of eyes 8 and the position of image capture system 22. Accordingly, in this embodiment, the relationship between image capture system 22 and eyes 8 of user 6 can be preprogrammed or customized by user 6. Alternatively, a user image capture system 72 can be provided in camera housing 20 or with viewing frame 110 to capture images of user 6 from which the position of the eyes 8 of user 6 relative to image capture system 22 or to viewing frame 110 can be determined. In the latter alternative, viewing frame 110 can provide user images for analysis by signal processor 26 and/or controller 32 by way of a wired or wireless connection.
An initial position of viewing frame 110 is then determined (step 134). In this embodiment, the initial position of viewing frame 110 is determined based upon the positional relationship between image capture system 22 and transmissive viewing frame 110. This can be done in a variety of ways. In one embodiment, image capture system 22 can be adapted to capture an evaluation image of a scene with a field of view that is wide enough to observe the relative position of the transmissive viewing frame 110 with respect to image capture system 22, and a distance from the eyes 8 of user 6 is determined based upon such an image. Alternatively, a multiple position rangefinder 27 can be calibrated so as to detect the location of transmissive viewing frame 110 relative to camera 12. Such a multi-position rangefinder 27 can be adapted to have zones that are beyond the maximum field of view of image capture system 22 and arranged to sense both an X-axis and a Y-axis distance to the transmissive viewing frame.
In still another embodiment, transmissive viewing frame 110 can be equipped with a source of an electromagnetic, sonic, or light signal that can be sensed by a sensor 36 in camera 12, such as a radio frequency, sonic or light receiving system, that can determine signal strength and a vector direction from image capture system 22 to transmissive viewing frame 110 in a manner that allows for the computation of X-axis and Y-axis distances for use in determining an initial position of transmissive viewing frame 110.
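By way of a hedged sketch, one way the signal strength and vector direction might be turned into X-axis and Y-axis offsets, assuming a free-space path-loss model with exponent 2 for a radio beacon:

```python
import math

def frame_offset_mm(rssi_dbm, rssi_at_1m_dbm, bearing_deg, elevation_deg):
    """Range from received signal strength (free-space path loss,
    exponent 2), resolved along the sensed direction into X (lateral)
    and Y (vertical) offsets in millimeters."""
    range_mm = 1000.0 * 10.0 ** ((rssi_at_1m_dbm - rssi_dbm) / 20.0)
    x = (range_mm * math.cos(math.radians(elevation_deg))
         * math.sin(math.radians(bearing_deg)))
    y = range_mm * math.sin(math.radians(elevation_deg))
    return x, y
```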
Camera settings are adjusted based upon the relative positions of the viewing frame and the eyes of the user so that an image captured by image capture system 22 has a field of view that generally corresponds to the field of view of the evaluation image (step 140). If no trigger signal is detected (step 142), the method returns to step 134. If the trigger signal is detected, an image is captured (step 144) and an image that corresponds to the image viewed through transmissive type viewing frame 110 is provided (step 146). In one embodiment, the adjustments made to settings are made in a manner that causes the image as captured by digital camera 12 to have an appearance that corresponds to the appearance seen through the viewfinder. In another embodiment, the captured image is modified in accordance with the settings to more closely correspond to the field of view of the evaluation image.
It will be appreciated that user 6 is capable of viewing scene 120 using a transmissive type viewing frame 110 from a variety of angular positions along the Y-axis and Z-axis shown in
It will also be appreciated that the methods of the invention can be used for a variety of other purposes and to set a number of other camera settings. For example, the methods described herein can be used to help select from between a variety of potential focus distances when automatic focus is used to capture an image or to set camera flash intensity settings.
The embodiment shown in
One embodiment of such a display 160, which is transparent during composition and then appears to freeze the image as desired, provides a transparent OLED panel as the display. The OLED panel is manufactured with transistors that are fabricated with substantially transparent materials. Thus, the display is transparent in the composition mode when the display is off, and becomes emissive after capture of an image. An active diffuser, such as LCD privacy glass, may be provided behind the OLED panel so that the effect of the background is minimized when the OLED is displaying the captured image. The diffuser is off and transmissive when in composition mode, but becomes opaque when turned on in display mode.
This embodiment and others described herein help to meet a need experienced by many amateur photographers to be able to capture an image while still being able to experience an event or moment exactly as seen with one's own eyes, without the interference of hardware control selections, viewfinders, screen navigation, and the like (what you see is what you get). The captured image may be instantly shared with others, either by looking at it on the display or by looking at its transmitted copy on other displays.
The position of the viewing frame relative to the eyes 8 of user 6 can be determined in any of a number of ways. When user 6 triggers capture, the distance and position of hand 16 relative to the eyes 8 of user 6 are used to determine the zoom setting and/or the field of view. In one embodiment, there is no zoom setting, owing to the lack of zoom optics in the hand. In this case, only the angular relationship of hand 16 to the eyes 8 of user 6 is important, not the distance. The field of view is fixed, and the position of the hand is used only to determine what portion of the surroundings of the user is to be captured. In another embodiment, an image of a large area is captured and digitally zoomed to correspond more closely to the field of view defined by hand 16 as viewed by user 6.
A more complex embodiment adds the step of determining the distance from the hand 16 to the eyes and uses this distance to determine zoom setting. The farther the hand is from the eyes, the higher the magnification used.
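A sketch of that proportional rule, with the reference hand-to-eye distance an assumed constant rather than a disclosed value:

```python
REFERENCE_HAND_DISTANCE_MM = 400.0  # assumed typical hand-to-eye distance

def hand_zoom(hand_to_eye_mm, base_zoom=1.0,
              reference_mm=REFERENCE_HAND_DISTANCE_MM):
    """The farther hand 16 is held from the eyes, the narrower the angle
    its frame subtends, so magnification scales up proportionally."""
    return base_zoom * hand_to_eye_mm / reference_mm
```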
A calibration step may be needed to provide good correlation between the viewing area defined by hand 16 and the portion of the scene that is captured by the camera. In calibration, a known target such as that shown in
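A single-point calibration of this kind might reduce to a correction factor, sketched below under the assumption that the intended and captured framing widths of the known target can both be measured in pixels:

```python
def calibration_factor(intended_width_px, captured_width_px):
    """Ratio of the target width the user framed to the width the camera
    actually captured; applied to later zoom or crop computations."""
    return intended_width_px / captured_width_px

def corrected_zoom(computed_zoom, factor):
    """Apply the stored calibration factor to a computed zoom setting."""
    return computed_zoom * factor
```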
A camera that can cooperate with a transmissive type viewing frame can be placed on a necklace such as shown in
The invention has been described in detail with particular reference to certain preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the invention.