A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
1. Field of the Invention
This invention relates to video systems and the composition of multiple digital images. The invention is more particularly related to the composition of multiple digital images, each captured by an individual camera of a camera array, and to a method of warping at least portions of images so that the images can be combined, without overlap or inconsistent pixels, into a single image. The invention also relates to the registration of multiple cameras and the determination of a transformative equation that allows fast warping and summation of plural images from the multiple cameras to produce a single image. The invention is also particularly related to digital panning and zooming of a scene captured by a multiple camera array, and to the automatic control of pan and zoom of an individually controlled camera or camera array.
2. Discussion of the Background
Remote and locally located cameras typically include devices for camera control. Such devices include stepping motors or other mechanisms configured to point the camera or an image capturing device toward a scene or point of interest. Examples include teleconferencing applications, surveillance cameras, security cameras, cameras that are remotely controlled or activated by motion, light, or other stimuli, and remote sensing cameras such as those placed on robotic means (examples including those used in space exploration, deep sea diving, and for sensing areas or scenes too dangerous or inaccessible for normal camera operations, such as inside nuclear reactor cores, inside pipes, or on police cars or law enforcement robots, for example).
Normally, cameras are manually operated by a human operator, either on site or by remotely controlling the camera via a steering input (a joystick or mouse, for example). In the case of remotely steered cameras, steering inputs generally activate a control program that sends commands to a stepping motor or other control device to steer the camera toward an object, item, or area of interest. General zooming functions of the camera may also be activated either on site or remotely.
In the case of teleconferencing applications (meetings, lectures, etc.), a variable angle camera with mechanical tilt, pan, focal length, and zoom capability is normally used. Such devices generally require a human operator to orient, zoom, and focus a video or motion picture camera. In some cases, conference participants may be required to activate a specific camera or signal for the attention of a camera configured to zoom in or focus on selected areas of a conference room.
Multiple cameras have been utilized in a number of applications. For example, Braun et al., U.S. Pat. No. 5,187,571, “TELEVISION SYSTEM FOR DISPLAYING MULTIPLE VIEWS OF A REMOTE LOCATION,” teaches an NTSC camera array arranged to form an aggregate field, and Henley, U.S. Pat. No. 5,657,073, “SEAMLESS MULTI-CAMERA PANORAMIC IMAGING WITH DISTORTION CORRECTION AND A SELECTABLE FIELD OF VIEW,” teaches a system for production of panoramic/panospheric output images.
Applications for multiple or steerable cameras include teleconferencing systems that typically direct a camera toward a speaker who is then broadcast to other teleconference participants. Direction of the camera(s) can be performed manually, or may utilize a tracking mechanism to determine a steering direction. Some known tracking mechanisms include Wang et al., “A Hybrid Real-Time Face Tracking System,” in Proc. ICASSP 98, and Chu, “Superdirective Microphone Array for a Set-Top Videoconferencing System,” in Proc. ICASSP 97.
However, technical challenges and costs have prevented such systems from becoming common or entering widespread use.
Systems attempting to integrate multiple images have failed to meet the needs or goals of users. For example, McCutchen, U.S. Pat. No. 5,703,604, “IMMERSIVE DODECAHEDRAL VIDEO VIEWING SYSTEM,” teaches an array of video cameras arranged in a dodecahedron for a complete spherical field of view. Images are composed at the receiving end by using multiple projectors on a hemispherical or spherical dome. However, the approach taught in McCutchen suffers problems at image boundaries, as the multiple images do not register perfectly, resulting in obvious “seams.”
In another example, Henley et al., U.S. Pat. No. 5,657,073, “SEAMLESS MULTI-CAMERA PANORAMIC IMAGING WITH DISTORTION CORRECTION AND SELECTABLE FIELD OF VIEW,” teaches combination of images from radially-arranged cameras. However, Henley fails to disclose any but radially-arranged cameras, and does not provide details on image composition methods.
The present inventors have realized the utility of an array of cameras for image and video capturing of a scene, including a fixed camera array capable of zooming and panning to any of selected areas within the scene. Roughly described, the present invention utilizes a camera array to capture plural piecewise continuous images of a scene. The images are combined, via at least warping and fading techniques, to produce a single seamless image of the scene. Selected areas of the scene are then zoomed in on, or panned to, by taking portions of the seamless image and displaying them to a user.
In one embodiment, images are combined from an array of inexpensive video cameras to produce a wide-field sensor. The wide field sensor is utilized to locate people or regions of interest (image them for teleconferencing purposes, for example). By tracking shape, motion, color, and/or audio cues, the location of people or other items of interest in the room or scene being captured by the cameras can be estimated.
The present inventors have planned to make the present invention economical both in manufacturing cost and in the computational cost (speed and calculations) required to combine the separate images from the camera array. In a primary embodiment, the camera array is a digital camera array, each camera having a fixed position. The cameras are registered such that the processing equipment used to combine the images knows which parts of each image overlap or capture the same points of the scene, so that the images may be precisely combined into a single image or scene.
Each of the captured images is warped so that the same points of a scene captured by two or more cameras are combined into a single point or set of points appropriately positioned according to the scene being captured. Blending techniques are then applied to the edges of each of the images to remove any brightness or contrast differences between the images being combined.
Once the scene is combined into a single image, portions of the single image are selected for panning or zooming. An output or display device then provides the image to a user or to another device (for transmission to a remote location, for example).
The present inventors have also enhanced the invention by utilizing an automatic selection of images to be displayed via panning, zooming, or other imaging technique. For example, in one embodiment, directional microphones may be utilized to determine a direction in which activity is taking place in the scene, and the camera array is automatically panned to that portion of the scene. In another embodiment, motion detectors are utilized to determine motion and direct a panning operation of the camera array.
It is worth noting that the camera array is not panned in the normal sense of camera panning; rather, the panned image is selected from a scene image composed from the several cameras.
Unlike prior art systems using steerable cameras, in the present invention, if properly configured, subjects are always in view of the camera array. Digitally combining array camera images results in a seamless high-resolution image, and electronically selecting a region of the camera array view results in a rapidly steerable “virtual camera.”
New CMOS camera chips will be both better and far less expensive than the current widely used CCD camera chips. By tracking shape, motion, and/or color cues, the location of subjects in the room can be determined. Unlike prior art tracking systems using steerable cameras, the subject is always in view and can't get “lost.” The camera array can be used as a sensor to control conventional steerable cameras. Additionally, video images from adjacent cameras are digitally composited to give a seamless panoramic video image. This can be used to electronically “pan” and “zoom” a “virtual camera”.
Virtual cameras are controlled electronically without physical motion, and so do not suffer from the physical limitations of mechanically controlled cameras, such as finite rates of zoom or pan. An additional benefit is that a plurality of images can be extracted, so that different users can view different regions of the same scene, which is impossible for a conventional camera.
Because images are composited in the digital domain, they are available for additional processing and sensing, unlike prior-art analog approaches. In this system, an appropriate camera view can be automatically determined by finding motion or human images. Thus the system can serve as an automatic camera operator, by steering a real or virtual camera at the most likely subjects. For example, in a teleconference, the camera can be automatically steered to capture the person speaking. In a lecture application, the camera can automatically detect both the lecturer's location and when new slides or other visuals are displayed. When a new slide is displayed, the virtual camera can be steered to encompass the entire slide; when the lecturer moves or gestures the virtual camera can be zoomed in for a closer shot. Also, it is possible for remote viewers to control their own virtual cameras; for example, someone interested in a particular feature or image on a projected slide could zoom in on that feature while others see the entire slide.
The present invention includes methods for steering a virtual camera to interesting regions as determined by motion and audio captured from multiple sensors. Additionally, the present invention includes methods for real-time recording and playback of a panoramic video image, arbitrary sensor array geometries, and methods of calibrating multiple cameras. The present invention also includes software and/or devices capable of each of the above and for providing automatic image selection from a panoramic video image using motion analysis; automatic image selection using audio source location; automatic image selection using a combination of audio and motion analysis; each of the above features augmented with a face- or person-tracking system; a system for real-time recording and playback of panoramic video; arbitrary camera array configurations, including planar and linear; and methods for calibrating multiple cameras fixed with respect to each other.
The present inventors have also realized the utility of combining the camera array as described above along with a camera that is mechanically panned or steered (pointing a capture lens (optics) of the camera toward the object or area being panned or steered to). For example, the camera array may determine a direction to which the mechanically steered camera is pointed.
The present invention may be realized in a method, comprising the step of warping a set of images synchronously captured from a camera array into a common coordinate system of a composite image. The invention is also a method of controlling a virtual camera having a view selected from an array of cameras, comprising the steps of combining images from each of said cameras into a panoramic view, detecting motion in said panoramic view, and directing a view of said virtual camera based on said motion.
The present invention may be embodied in a device, or a camera array, comprising, a set of cameras mounted in an array, an image combining mechanism configured to combine at least two of images captured from said set of cameras into a composite image, a view selection device configured to select a view from the composite image, and an output mechanism configured to display the selected view. The invention also includes a method of registering a camera array, comprising the steps of, placing at least one registration point in a field of view of at least two cameras of said camera array, identifying a location of each registration point in a field of view of each camera of said array, and maintaining information about each registration point in relation to each camera such that images may be combined in relation to said registration points.
A more complete appreciation of the invention and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts, and more particularly to
Other configurations of cameras are also possible. For example,
For many applications, such as video conferencing, it is neither desirable nor possible to transmit a full-motion super-resolution image. The present invention therefore provides methods for extracting a normal-resolution image using measures of motion, audio source location, or face location within the image. However, since multiple cameras and plural images of the same scene are envisioned, super-resolution processes (obtaining higher resolution, e.g., more pixels, from multiple images of the same scene) may be applied. When the multiple images are registered, any normal-sized sub-image can then be selected by excerpting a frame from the larger composite, as shown in
Because the present invention allows any desired sub-image to be selected, changing the selected region is equivalent to steering a “virtual camera.” The camera array can be constructed with long-focal-length cameras, such that each array element is “zoomed in” on a small region. Combining many of these zoomed-in images and reducing the resolution is equivalent to zooming out. Thus all the parameters of conventional cameras such as pan (selecting a different area of the panoramic image), zoom (combining or discarding images and changing resolution), and tilt (selecting images from cameras at specific angles, i.e., elevated cameras, for example) can be effectively duplicated with an array of fixed cameras.
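By way of illustration, the following sketch (in Python with NumPy) shows one way such a “virtual camera” view might be excerpted from a composite panoramic image; the function name, nearest-neighbor resampling, and parameters are illustrative assumptions rather than a required implementation.

    import numpy as np

    def virtual_camera_view(panorama, center_xy, out_size, zoom=1.0):
        # Extract a normal-resolution "virtual camera" frame from the composite panorama.
        # center_xy selects the pan position; zoom > 1 narrows the source window (zoom in),
        # zoom < 1 widens it (zoom out).  Assumes panorama is an H x W x C array.
        h, w = panorama.shape[:2]
        out_w, out_h = out_size
        src_w, src_h = max(1, int(out_w / zoom)), max(1, int(out_h / zoom))
        cx, cy = center_xy
        # Clamp the source window so it stays inside the panorama.
        x0 = int(np.clip(cx - src_w // 2, 0, w - src_w))
        y0 = int(np.clip(cy - src_h // 2, 0, h - src_h))
        window = panorama[y0:y0 + src_h, x0:x0 + src_w]
        # Nearest-neighbor resampling to the output resolution.
        yi = np.arange(out_h) * src_h // out_h
        xi = np.arange(out_w) * src_w // out_w
        return window[yi][:, xi]

Changing center_xy over time pans the virtual camera; changing zoom duplicates the effect of a zoom lens without any mechanical motion.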
Unlike mechanically-steered cameras which have a finite slew rate limited by motor speed and inertia, a virtual camera can be instantaneously panned anywhere in the camera array's field of view. Multiple virtual cameras can be simultaneously extracted from the same array, allowing multiple users to view different areas at the same time. This might be particularly useful in telepresence applications, for example a sports event, where each user could control a “personal virtual camera.”
For example, one embodiment of the present invention is illustrated in
A camera array 260 is trained on, for example, a sporting event 262. Video streams from the cameras are then packaged, compressed, and prepared for broadcast at a broadcast station 264, and broadcast via cable, Internet, airwaves (broadcast tower 264, for example), or other broadcasting media/modes. At a receiving end, a receiving device (antennas 280, for example) receives the broadcast signal, and a user station 282 combines the images received in the broadcast into a single panoramic view. A user (not shown) selects a desired view via control device 284, resulting in a display 286, for example.
To save broadcast bandwidth, signals from control device 284 may be broadcast back to a source (broadcast station 264 in this example) to identify only the specific video streams needed to produce the display 286. In another alternative the entire display 286 may be composed at the source and only the display 286 is broadcast to the user. A specific configuration would be selected based on the availability of broadcast bandwidth and processing power available at each of the broadcast station 264 and user station 282. As will be appreciated by those skilled in the art, any number of combinations or modifications may be implemented, consistent with the invention as described herein.
Camera arrays can have any arrangement, such as radial (
Although the cameras are normally fixed on a platform or other substrate, it is consistent with this invention to have a camera array with cameras movable with respect to each other and to employ a registration process that re-registers each camera in the camera array after movement. In addition, facilities may be provided for additional or auxiliary cameras to be attached to or included in the array, again with a registration process so that any cameras added to the array can benefit from the fast warping/transformation techniques described herein.
Referring now to
A similar array could be mounted on a mobile platform for other telepresence applications. The ability to instantaneously pan without distracting camera motion is an improvement over current telepresence systems. For example, another application, police robotics, requires a mechanized camera to proceed into a hostile environment for reconnaissance purposes. Mobile and equipped with a video camera, remote panning means, and microphones, the devices proceed into a hostile environment. When a threat is detected, the camera pans toward the threat to check it out; however, the device is limited by pan and zoom time parameters. Alternatively, a radial array according to the present invention, mounted on the same robotic device, would be able to instantly pan to evaluate the threat, have multiple redundancy of cameras, and be able to self-locate the threat via microphone or motion detection as described hereinbelow.
Camera arrays can be constructed in any desired configuration and placed anywhere convenient. A two-dimensional camera array may be constructed to facilitate electronic zooming as well as panning. For example, a first dimension of an array might cover, or be directed toward, a full view of a conference room or scene, while a second array would have cameras directed toward specific points of interest within the scene (see
The prior art has failed to adequately resolve the challenging technical constraints of a camera array. Several major problems must be addressed: combining multiple video streams into a seamless composite, calibrating the array cameras, and handling the extreme data rate of the composite high-resolution video.
Frame Compositing:
Stitching adjacent frames together is accomplished using a method or combination of methods that combine separate images into a panoramic, or combined image. In one embodiment, a spatial transformation (warping) of quadrilateral regions is used, which merges two images into one larger image, without loss of generality to multiple images. First, a number of image registration points are determined; that is, fixed points that are imaged at known locations in each sub-image. This can be done either manually or automatically. In either case the process involves pointing the array at a known, structured scene and finding corresponding points. For example,
Alternatively, only one of the images needs to be warped to match the coordinate system of the other image. For example, warping of quadrilateral area EFGH may be performed via a perspective transformation. Thus quadrilateral EFGH in Frame 1 can be transformed to E′F′G′H′ in the coordinate system of Frame 2.
In another embodiment, bilinear warping (transformation) of piecewise-contiguous quadrilateral regions is used. Referring now to
Square patches on the cylinder (640, for example) are imaged as quadrilateral regions by one or more cameras. The imaged quadrilateral regions are illustrated, for example, as quadrilateral region ABCD as seen in
Bilinear transformations are used to warp the quadrilateral regions (bilinear warping). Equation 1 below transforms the homogeneous coordinate system u, v to the warped (square) coordinate system x, y.
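As one illustrative form consistent with this description (eight coefficients mapping (u, v) to (x, y)), Equation 1 may be written as:

    \begin{pmatrix} x \\ y \end{pmatrix} =
    \begin{pmatrix} a_0 & a_1 & a_2 & a_3 \\
                    b_0 & b_1 & b_2 & b_3 \end{pmatrix}
    \begin{pmatrix} 1 \\ u \\ v \\ uv \end{pmatrix}
    \qquad \text{(Equation 1, illustrative form)}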
Equation 1 is a transformation matrix having 8 unknown coefficients that are determined by solving the simultaneous equations given by the reference points (ABCD, in the coordinate system u, v, and A′B′C′D′, in the coordinate system x, y). The four points in each system provide 8 scalar values with which to solve for the 8 unknown parameters. If more correspondence points (correspondence points referring to the points encompassing the quadrilateral region, ABCD in this example) are present, an overdetermined set of equations results, which can be solved using least-squares (pseudoinverse) methods for more robust estimates.
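As a sketch of this coefficient-solving step (Python with NumPy, assuming the illustrative bilinear form above), the following computes the eight coefficients from four or more correspondence points, using the least-squares solution when the system is overdetermined:

    import numpy as np

    def bilinear_warp_coefficients(uv_pts, xy_pts):
        # uv_pts: N x 2 corner points (A, B, C, D, ...) in the camera (u, v) system.
        # xy_pts: N x 2 corresponding points (A', B', C', D', ...) in the composite (x, y) system.
        # With exactly four correspondences the system is exactly determined; with more,
        # lstsq returns the least-squares (pseudoinverse) estimate described above.
        uv = np.asarray(uv_pts, dtype=float)
        xy = np.asarray(xy_pts, dtype=float)
        u, v = uv[:, 0], uv[:, 1]
        M = np.stack([np.ones_like(u), u, v, u * v], axis=1)   # rows: [1, u, v, u*v]
        a, *_ = np.linalg.lstsq(M, xy[:, 0], rcond=None)       # x = a0 + a1*u + a2*v + a3*u*v
        b, *_ = np.linalg.lstsq(M, xy[:, 1], rcond=None)       # y = b0 + b1*u + b2*v + b3*u*v
        return a, b   # eight-coefficient calibration set for this patch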
The above processes are repeated for every patch (each patch captured by one or more cameras; at least two sets of patches along the borders of the images, one from each image to be combined; each patch determined along a border of the combined images), and using Equation 1 (or an equivalent equation performing the same function for the areas selected), a set of warping coefficients (eight coefficients in this example) is computed for every patch. These, as well as the location of the square destination region in the composite image, are referred to as warping coefficients or a calibration set.
To calculate a pixel value in the warped coordinate system x, y, the above equations are inverted by solving for u, v in terms of x, y. This allows for what is termed “inverse mapping.” For every pixel in the warped coordinate system, the corresponding pixel in the unwarped system is found and its value is copied.
The coefficients (of equation 1, for example) are stored in a table and utilized to warp images “on the fly.” Because the cameras are fixed, and registered (or calibrated, see later section), the same equations are utilized over and over for the same patches (i.e., no need to find new correspondence or registration points, or recalculate coefficients).
Because the warping is a continuous function rather than discrete, the reverse mapping will generally yield non-integral unwarped coordinates. For this reason, the pixel value is interpolated from the neighbors using bilinear interpolation. This uses a linear combination of the four closest integral points to produce the interpolated value. Because the necessary perspective warping is never extreme for this application, there is no need for additional interpolation or filtering to reduce aliasing effects.
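A minimal sketch of this inverse mapping with bilinear interpolation follows (Python with NumPy; the coefficient names a_inv and b_inv are assumptions, fit in the reverse direction, from destination (x, y) to source (u, v)):

    import numpy as np

    def inverse_map_patch(src, a_inv, b_inv, out_w, out_h):
        # For every pixel of the square destination patch, find the corresponding
        # (generally non-integral) source location and bilinearly interpolate its
        # value from the four closest integral points.
        ys, xs = np.mgrid[0:out_h, 0:out_w].astype(float)
        u = a_inv[0] + a_inv[1] * xs + a_inv[2] * ys + a_inv[3] * xs * ys
        v = b_inv[0] + b_inv[1] * xs + b_inv[2] * ys + b_inv[3] * xs * ys
        u0 = np.clip(np.floor(u).astype(int), 0, src.shape[1] - 2)
        v0 = np.clip(np.floor(v).astype(int), 0, src.shape[0] - 2)
        du, dv = np.clip(u - u0, 0.0, 1.0), np.clip(v - v0, 0.0, 1.0)
        if src.ndim == 3:                      # color image: broadcast weights over channels
            du, dv = du[..., None], dv[..., None]
        p00, p01 = src[v0, u0], src[v0, u0 + 1]
        p10, p11 = src[v0 + 1, u0], src[v0 + 1, u0 + 1]
        top = p00 * (1 - du) + p01 * du
        bottom = p10 * (1 - du) + p11 * du
        return top * (1 - dv) + bottom * dv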
Other embodiments include different types of spatial transformations to warp patches from the captured images (u, v coordinate system) to a composite grid (x, y coordinate system). Any spatial transformation altering the captured images to fit into a composite grid would be consistent with the present invention. For example, affine or perspective transformations may be utilized.
A later section discusses finding registration points. In the current embodiment, registration is performed manually by inspecting each camera's image. This is not an excessive burden as it need only be done once, and can be automated.
The present invention also includes correction for lens distortions in the fast warping equations. For example, the camera lenses utilized in
Once the source polygons have been warped to a common coordinate system, a cross-fade can be utilized to combine them. Pixels for the merged image are determined by interpolating the two warped polygons. A linear interpolation can do this. The number and size of polygons can be adjusted as necessary to give a good mapping and reduce the appearance of “seams” in the composite image. “Seams” can be put at arbitrary locations by changing the interpolation function, for example, by using a piecewise linear interpolation function. This is especially useful when face locations are known, because the image combination can be biased such that seams do not cut across faces.
In addition to warping, a significantly better image can be obtained by “cross-fading” patches, as illustrated by
Patches for fading may be of any size or shape within the confines of a camera view. Therefore any geometric shape or outline of an object or other selected area within any two or more camera views may be selected for fading.
Alternatively, individual corresponding pixels in overlapping regions, after warping and matching those images, may be summed and averaged in some manner and then faded or blurred at edges or throughout the overlapping regions. As will be appreciated by those skilled in the art, in light of the present disclosure, many procedures, including fading, blurring, or averaging may be implemented to smooth transitions between the warped abutting and/or combined images.
Also note the effects of cross-fading to produce a seamless image. Quadrilateral regions 1002 and 1012, and 1004 and 1014, of the camera images are combined to produce grid squares E2 and E3 of the composite. Quadrilateral regions 1002 and 1004 of CH1 are darker than corresponding regions 1012 and 1014 of CH2, as is common in similar views from different cameras. However, when combined, the rightmost portions of grid squares E2 and E3 are light (as in quadrilateral regions 1012 and 1014), while the leftmost portions of grid squares E2 and E3 are dark (as in quadrilateral regions 1002 and 1004), and no seams are present.
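A sketch of such a linear cross-fade over a pair of corresponding warped patches is given below (Python with NumPy, illustrative only); as noted above, a piecewise-linear ramp could be substituted to move the effective seam away from faces.

    import numpy as np

    def cross_fade(patch_left_cam, patch_right_cam):
        # Blend two warped patches of the same square size with a linear ramp:
        # full weight to the left camera at the left edge, full weight to the
        # right camera at the right edge, so brightness differences between
        # cameras blend smoothly and no seam is visible.
        h, w = patch_left_cam.shape[:2]
        alpha = np.linspace(1.0, 0.0, w)               # fade weight for the left camera
        alpha = alpha[None, :, None] if patch_left_cam.ndim == 3 else alpha[None, :]
        return alpha * patch_left_cam + (1.0 - alpha) * patch_right_cam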
Automatic Control of Virtual Cameras
Mechanically-steered cameras are constrained by the limitations of the mechanical systems that orient them. A particular advantage of virtual cameras is that they can be panned/zoomed virtually instantaneously, with none of the speed limitations due to moving a physical camera and/or lens. In addition, moving cameras can be distracting, especially when directly in the subject's field of view, like the conference-table camera shown in
In this system, we can select one or more normal-resolution “virtual camera” images from the panoramic image. Mechanical cameras are constrained by the fact that they can be pointed in only one direction. A camera array suffers no such limitation; an unlimited number of images at different pans and zooms can be extracted from the panoramic image. We use information from the entire composite image to automatically select the best sub-images using motion analysis, audio source location, and face tracking. To reduce the computation load, parts of the panoramic image not used to compose the virtual images could be analyzed at a slower frame rate, resolution, or in greyscale.
A useful application of camera arrays is as a wide-field motion sensor. In this case, a camera array is fixed at a known location in a room. Areas of the room will correspond to fixed locations in the image plane of one or more cameras. Thus using a lookup table or similar method, detecting motion in a particular region of a video image can be used to find the corresponding spatial location of the motion. This is enough information to point another camera in the appropriate direction, for example. Multiple cameras or arrays can be used to eliminate range ambiguity by placing their field of view at right angles, for example, at different room corners.
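For illustration only, such a lookup-table mapping might look like the following sketch (the table contents, region size, and return values are hypothetical):

    def locate_motion(motion_centroid_px, region_table, region_size=32):
        # region_table: built once for the fixed array, mapping (row, col) image
        # regions to pan/tilt angles or room coordinates usable to point a
        # conventional steerable camera.
        x, y = motion_centroid_px
        key = (int(y) // region_size, int(x) // region_size)
        return region_table.get(key)   # e.g. (pan_degrees, tilt_degrees), or None if unmapped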
Another useful system consists of conventional steerable cameras and a camera-array motion sensor. Motion in a particular location would set appropriate camera pan/zoom parameters such that a subject is captured. For example, in
Depth Map Creation from Parallax
Because it is impractical to make a camera array with coincident optics (that is, all the cameras optically in the same place), camera pairs will have a small but significant baseline separation. This is a problem when combining images of objects at different distances from the baseline, as a single warping function will only work perfectly for one particular distance. Objects not at that distance will be warped into different places and will appear doubled (“ghosted”) or truncated when the images are merged. In practice, this is not a problem, as the camera array can be calibrated for a typical distance (say one meter for teleconferencing) and patch blending minimizes objectionable artifacts from objects not at that distance. Another solution is to calculate a number of warping coefficients for objects at different distances; the appropriate set could then be selected.
A better solution is to take advantage of the parallax (stereo disparity) to find a range of the objects being imaged. In this solution, the camera array is calibrated such that patches at infinity can be combined with no disparity. Looking at each patch (or smaller subdivision of larger patches) the stereo disparity can be found by finding how much to shift one patch to match the other. This type of camera calibration greatly simplifies the stereo matching problem, turning it into essentially a one-dimensional search. Because patches are warped into corresponding squares, all that is necessary is to find the shift that will match them. This solution avoids complexity due to lens distortion. A one-dimensional (1-D) correlation will be a maximum at the lag with greatest overlap across a row of pixels in a patch.
The height of the maximum peak indicates the confidence of the image match. Attempting to match smooth or featureless regions will result in a low peak, while richly textured images will have a sharp peak. If it is desired to find the range of moving objects such as humans, the above technique can be used on the frame-by-frame pixel difference for more robust disparity estimates.
The lag of the correlation peak depends directly on the distance of the object in that patch. This will not have high resolution due to the small camera baseline, and will often be noisy, but still is able to detect, for example, the position of humans sitting around a conference table. Patches are small, on the order of 10–50 pixels, and can be overlapped for greater spatial detail, as there will be only one disparity estimate per patch. The result is a grid of disparity estimates and their associated confidence scores. These are smoothed using spatial filtering such as a median filter, and low-confidence points either ignored or replaced with an estimate derived from neighboring points.
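The per-patch one-dimensional search might be sketched as follows (Python with NumPy; the maximum lag is an illustrative parameter, and the circular shift is a simplification at patch borders):

    import numpy as np

    def patch_disparity(patch_a, patch_b, max_lag=16):
        # Because patches are already warped into corresponding squares, matching
        # reduces to finding the horizontal shift of patch_b that best matches
        # patch_a.  Returns (disparity, confidence), where confidence is the
        # height of the correlation peak.
        a = patch_a.astype(float) - patch_a.mean()
        b = patch_b.astype(float) - patch_b.mean()
        scores = [(a * np.roll(b, lag, axis=1)).sum() for lag in range(-max_lag, max_lag + 1)]
        best = int(np.argmax(scores))
        return best - max_lag, float(scores[best])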
The disparity estimates have a number of applications. An immediate application is to estimate the location and number of objects or humans in the array's field of view. This is done by looking for local peaks of the right spatial frequency in the smoothed disparity map (a smoothing of the grid of disparity estimates for each patch), using template matching or a similar technique. Once the positions are estimated, the virtual cameras can be “snapped to” the object locations ensuring they are always in the center of view.
Another application is to ensure that camera seams do not intersect objects. Because the individual camera views overlap by several patches, it is straightforward to blend only those patches that do not have significant disparity estimates, that is, contain only background images instead of objects.
Yet another application is to segment the panoramic images into foreground and background images. Once again, this task is simplified by the fact that the camera array is fixed with respect to the background. Any image motion is thereby due to objects and objects alone. Patches with no significant motion or disparity are background images. As long as foreground objects move enough to reveal the background, the background can be robustly extracted. The difference between any given image and the background image will be solely due to foreground objects. These images can then be extracted and used for other applications; for example, there is no need to retransmit the unchanging background. Significant bandwidth savings can be gained by only transmitting the changing object images (as recognized in the MPEG-4 video standard).
Camera Control Using Video Analysis
In one embodiment, a motion analysis serves to control a virtual camera; that is, to select the portion of the panoramic image that contains the moving object. Motion is determined by computing the frame-to-frame pixel differences of the panoramic image. This is thresholded at a moderate value to remove noise, and the center of gravity (first spatial moment) of the resulting motion image is used to update the center of the virtual image. The new virtual image location is computed as the weighted average of the old location and the motion center of gravity. The weight can be adjusted to change the “inertia,” that is, the speed at which the virtual camera changes location. Giving the previous location a large weight smooths jitter from the motion estimate, but slows the overall panning speed. A small weight means the virtual camera responds quickly to changes in the motion location, but may jitter randomly due to small-scale object motion.
Tracking can be further improved by adding a hysteresis value such that the virtual camera is changed only when the new location estimate differs from the previous one by more than a certain amount. The motion centroid is averaged across both a short and a long time span. If the short-time average exceeds the long-time average by a preset amount, the camera view is changed to that location. This accounts for “false alarm” events from both stable sources of image motion (the second hand of a clock or fluorescent light flicker, for example) and short-term motion events (such as a sneeze or a dropped pencil). This smooths jitter, but constant object motion results in a series of jumps in the virtual camera position as the hysteresis threshold is exceeded.
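The following sketch (Python with NumPy, assuming grayscale panoramic frames; all thresholds, weights, and averaging constants are illustrative assumptions) combines the centroid, inertia, and short/long-term hysteresis ideas described above:

    import numpy as np

    class MotionSteeredVirtualCamera:
        def __init__(self, start_xy, inertia=0.9, noise_thresh=25, hysteresis=40.0):
            self.view = np.array(start_xy, dtype=float)       # current virtual-camera center
            self.short_avg = np.array(start_xy, dtype=float)  # short-time centroid average
            self.long_avg = np.array(start_xy, dtype=float)   # long-time centroid average
            self.inertia = inertia
            self.noise_thresh = noise_thresh
            self.hysteresis = hysteresis

        def update(self, prev_frame, frame):
            # Frame-to-frame difference, thresholded at a moderate value to remove noise.
            moving = np.abs(frame.astype(int) - prev_frame.astype(int)) > self.noise_thresh
            if not moving.any():
                return self.view
            ys, xs = np.nonzero(moving)
            centroid = np.array([xs.mean(), ys.mean()])       # first spatial moment
            self.short_avg = 0.5 * self.short_avg + 0.5 * centroid
            self.long_avg = 0.95 * self.long_avg + 0.05 * centroid
            # Move only when the short-time average departs from the long-time average.
            if np.linalg.norm(self.short_avg - self.long_avg) > self.hysteresis:
                self.view = self.inertia * self.view + (1.0 - self.inertia) * centroid
            return self.view

A larger inertia value smooths jitter at the cost of slower panning, as described above.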
Other enhancements to the motion detection algorithm include spatial and temporal filtering, for example, emphasizing hand gestures at the expense of nodding or shifting. In operation, the virtual camera is initially zoomed out or put in a neutral mode, which typically includes everything in the camera's view. If a radial array is used, as in
Alternatively, the neutral position could be a “Brady Bunch” view where the large image is broken into units to tile the normal-aspect frame. The output of a face tracker ensures that all participants are in view, and that the image breaks do not happen across a participant's face.
If motion is detected from more than one region, several heuristics can be used. The simplest is to just choose the region with the largest motion signal and proceed as before. Another option might be to zoom back the camera view so that all motion sources are in view. In the case of conflicting or zero motion information, the camera can be changed back to the default neutral view.
Another useful heuristic is to discourage overlong scenes of the same location, which are visually uninteresting. Once the virtual camera location has been significantly changed, a timer is started. As the timer value increases, the motion change threshold is decreased. This can be done in such a way that the mean or the statistical distribution of shot lengths matches some pre-determined or experimentally determined parameters. Another camera change resets the timer. The net effect is to encourage human-like camera operation. For example, if the camera has been focused on a particular speaker for some time, it is likely that the camera would cut away to capture a listener nodding in agreement, which adds to the realism and interest of the video, and mimics the performance of a human operator.
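One possible form of this timer heuristic (purely illustrative; the decay law and constants are assumptions) is:

    def motion_change_threshold(base_threshold, seconds_since_last_cut, decay=0.5):
        # The longer the current shot has lasted, the lower the threshold for
        # changing the virtual-camera view; a camera change resets the timer
        # (the caller passes seconds_since_last_cut = 0 again).
        return base_threshold / (1.0 + decay * seconds_since_last_cut)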
All the above techniques can be combined with the object locations estimated from the disparity map.
Audio Control of Virtual Cameras
Using microphone arrays to determine the location and direction of acoustic sources is known. These typically use complicated and computationally intensive beamforming algorithms. However, a more straightforward approach may be utilized for determining the direction of a speaker at a table, or which side of a room (presenter or audience) speech is coming from. With this information, perhaps combined with video cues, a camera can be automatically steered to capture the speaker or speakers (using the methods of the previous sections).
While conventional beamforming relies on phase differences to estimate the direction and distance of an acoustic source, the present inventors have realized that a good estimate can be obtained by using the amplitude of an acoustic signal. The system presented here uses an array of directional microphones aligned around a circle as shown in
Adverse effects, such as acoustic reflections (not least off walls and tables), and the imperfect directionality of real microphones (most cardioid microphones have a substantial response at 180 degrees to their axis) are minimized by the present invention.
In one embodiment, the present invention utilizes pre-filtering of the acoustic signal to frequencies of interest (e.g., the speech region), which helps to reject out-of-band noise such as ventilation hum or computer fan noise.
In addition, lateral inhibition is utilized to enhance microphone directionality. In one embodiment, lateral inhibition is done by subtracting a fraction of the average signal magnitude from each neighboring microphone. The time-averaged magnitude from each microphone is denoted |M| as illustrated in
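A sketch of this lateral-inhibition step for a circular array of directional microphones follows (Python with NumPy; the inhibition fraction is an assumed parameter):

    import numpy as np

    def lateral_inhibition(mic_magnitudes, fraction=0.5):
        # mic_magnitudes: time-averaged magnitudes |M| for microphones arranged
        # around a circle.  A fraction of the average of each microphone's two
        # neighbors is subtracted, sharpening the effective directionality.
        m = np.asarray(mic_magnitudes, dtype=float)
        neighbors = 0.5 * (np.roll(m, 1) + np.roll(m, -1))   # circular array: ends wrap
        return np.maximum(m - fraction * neighbors, 0.0)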
In one embodiment, the system is normalized for ambient conditions by subtracting the ambient energy incident on each microphone due to constant sources such as ventilation. When the system is running, each microphone will generate a real-time estimate of the acoustic energy in its “field of view.” It might be possible to get higher angular resolution than the number of microphones by interpolation.
A more robust system estimates the location of an acoustic source by finding peaks or corresponding features in the acoustic signals from each microphone. Because of the finite speed of sound, the acoustic signal will arrive first at the microphone closest to the source. Given the time delay between peaks, the first-arriving peak will correspond to the closest microphone. Given delay estimates to microphones at known locations, geometrical constraints can be used to find the angular direction of the source.
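For two microphones with known spacing, the geometric constraint reduces to a simple relation; the sketch below (illustrative, under a plane-wave assumption) converts an estimated inter-microphone delay into a bearing:

    import numpy as np

    def delay_to_bearing(delay_s, mic_spacing_m, speed_of_sound=343.0):
        # For a plane wave arriving at angle theta from broadside,
        # delay = spacing * sin(theta) / c, so theta = arcsin(c * delay / spacing).
        s = np.clip(speed_of_sound * delay_s / mic_spacing_m, -1.0, 1.0)
        return np.degrees(np.arcsin(s))   # bearing in degrees from broadside

Delays estimated from several microphone pairs can then be combined to resolve the source direction around the full array.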
In a complex environment with many reflections, the statistics of reflections may be learned from training data to characterize the angular location of the source. Combining this with the amplitude cues above will result in an even more robust audio location estimate.
A system with particular application to teleconferencing consists of one or more desk microphones on flexible cords. In use, microphones are placed in front of each conference participant. Each microphone is equipped with a controllable beacon of visible or invisible IR light. The beacon is set to flash at a rate comparable to ½ the video frame rate. Thus there will be frames where the beacon is illuminated in close temporal proximity to frames where the beacon is dark. Subtracting these frames will leave a bright spot corresponding to the beacon; all other image features will cancel out. From this method the location of the microphones in the panoramic image can be determined. Audio energy detected at each particular microphone can give a clue to shift the virtual image to that microphone.
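A sketch of this beacon-detection step is given below (Python with NumPy; the threshold value is an assumed parameter):

    import numpy as np

    def find_beacon(frame_beacon_on, frame_beacon_off, threshold=50):
        # Subtracting a frame with the beacon dark from a temporally close frame
        # with the beacon lit cancels static image features; the centroid of the
        # remaining bright spot is the microphone location in panorama coordinates.
        diff = frame_beacon_on.astype(int) - frame_beacon_off.astype(int)
        if diff.ndim == 3:
            diff = diff.max(axis=-1)
        ys, xs = np.nonzero(diff > threshold)
        if xs.size == 0:
            return None
        return float(xs.mean()), float(ys.mean())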
Camera Control Using Audio
Given the angular direction and the magnitude signals from the various microphones, cameras can be controlled using algorithms similar to those described for motion above. Short-term and long-term averages can accommodate fixed noise sources like computer fans. A number of heuristics are used to integrate face and/or motion detection with the audio source location. The audio direction is used as an initial estimate of speaker location to start the face tracking system. Additionally, both face tracking and audio source location can be used in concert. Thus an object must both be recognized as a face and be an audio source before the camera is steered towards it. This is particularly useful for the automatic teleconferencing system that aims to display the image of the person speaking.
Stereo Ranging
Since the present invention utilizes multiple cameras, stereo ranging of objects in the scene to be imaged may be performed. Conference room walls would be considered to be at maximum range, and any object closer to the camera would be considered a more likely subject for moving the focus of a virtual view of the image. If stereo ranging determines that an object is closer than the conference room walls, and it is moving (detected via video motion analysis, for example), it is more likely to be a subject. If audio detection is also added, the object can be determined with an even higher degree of certainty to be a subject for zooming in. The present invention includes an embodiment utilizing a combination of all analysis functions (audio, video motion detection, and stereo ranging) to determine a likely subject for camera zooming.
Data Manipulation and Compression
Multiple video cameras require techniques to cope with the sheer amount of generated video data. However, it is quite possible to composite multiple video streams in real time on a common CPU, and this should scale with increasing processor speed and parallelization. It is possible to stream each camera to a plurality of analog or digital recording devices, such that all camera views are recorded in real time. The recorded streams can then be composited using the same methods at a later time. Another approach is to store the composited high-resolution video image in a format that can support it.
Many common video formats such as MPEG support arbitrarily large frame sizes. Recording a full-resolution image has many advantages over the prior art: first, multiple views can still be synthesized from the high-resolution image, which may support varied uses of the source material. For example, in a videotaped lecture, one student might prefer slide images while a hearing-impaired but lip-reading student might prefer the lecturer's image. Recording a full-resolution image also allows better automatic control. Because a real-time camera control algorithm cannot look ahead to future events, it is possible to get better control using a lag of several seconds to a minute. Thus switching to a different audio or motion source could be done instantaneously rather than waiting for the short-term average to reach a threshold.
Existing standards like MPEG already support frames of arbitrary resolution provided they are rectangular. It is possible to composite images using MPEG macroblocks rather than in the pixel domain for potentially substantial savings in both storage and computation. The multi-stream approach has the advantage that only the streams needed for a particular application need be considered.
For example, when synthesizing a virtual camera from a circular array (“B” of
A reverse embodiment is also envisioned: extracting normal-resolution video from a super-resolution MPEG stream is merely a matter of selecting and decoding the appropriate macroblocks. Given bandwidth constraints, a panoramic video image may be efficiently transmitted by sending only those regions that have changed significantly. This technique is commonly used in low-bandwidth video formats such as H.261. A novel adaptation of this method is to only store or transmit image regions corresponding to moving faces or a significant audio source such as a speaker.
Automatic Camera Registration
In order to merge overlapping images from different cameras, they must be registered such that lens and imaging distortion can be identified and corrected. This is particularly important with embodiments of the present invention that utilize the matrix coefficients, as they are premised on registered cameras. Generally, it is envisioned that cameras in the arrays will be fixed with respect to one another, and that registration will be performed at time of manufacture.
The present invention includes registering array cameras that are fixed with respect to each other. Registering cameras involves finding points that correspond in each image. This can be done manually, by observing two views of the same scene and determining which pixels in each image correspond to the same point in the image plane. Because cameras are fixed with respect to each other, this need only be done once and may be performed automatically. Manual registration involves locating registration points manually, say by pointing the camera array at a structured image such as a grid, and locating grid intersection points on corresponding images. Using machine-vision techniques, this could be done automatically.
In one embodiment, registration is performed using a “structured light” method (e.g. a visible or IR laser spot swept over the camera array's field of view, as shown in
Because the spot 1410 is orders of magnitude brighter than any projected image, detection can be performed by thresholding the red channel of the color image (other detection methods are also envisioned, color differences, or analyzing a combination of shape and brightness, for example). The spot 1410 also needs to be moved to find multiple registration points. This could be done using a rotating mirror or other optical apparatus, using multiple lasers (which are inexpensive), or by affixing a laser to a mechanically steered camera as described previously.
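The red-channel thresholding mentioned above might be sketched as follows (Python with NumPy; the threshold value is an assumption); repeating this for each camera that sees the spot yields a correspondence point for registration:

    import numpy as np

    def detect_laser_spot(rgb_frame, red_threshold=240):
        # The spot is far brighter than the scene, so thresholding the red channel
        # isolates it; the centroid of the thresholded pixels is the registration
        # point location in this camera's image.
        red = rgb_frame[..., 0].astype(int)
        ys, xs = np.nonzero(red > red_threshold)
        if xs.size == 0:
            return None          # spot not visible in this camera
        return float(xs.mean()), float(ys.mean())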
Another version of this system uses bright IR or visible LEDs affixed to a rigid substrate. Lighting the LEDs in succession provides registration points. The substrate can be moved to the approximate imaging distance so that parallax is minimized at those points.
One embodiment of the present invention is illustrated in the block diagram of
The present invention may be conveniently implemented using a conventional general purpose or a specialized digital computer or microprocessor programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art.
Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art. The invention may also be implemented by the preparation of application specific integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be readily apparent to those skilled in the art.
The present invention includes a computer program product which is a storage medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention. The storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
Stored on any one of the computer readable medium (media), the present invention includes software for controlling both the hardware of the general purpose/specialized computer or microprocessor, and for enabling the computer or microprocessor to interact with a human user or other mechanism utilizing the results of the present invention. Such software may include, but is not limited to, device drivers, operating systems, and user applications. Ultimately, such computer readable media further includes software for performing the present invention, as described above.
Included in the programming (software) of the general/specialized computer or microprocessor are software modules for implementing the teachings of the present invention, including, but not limited to, warping and combining images captured from a camera array, registering and calibrating cameras, selecting and steering virtual camera views, performing motion, audio, and stereo-disparity analysis, and the display, storage, or communication of results according to the processes of the present invention.
Obviously, numerous modifications and variations of the present invention are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.
Number | Name | Date | Kind |
---|---|---|---|
4393394 | McCoy | Jul 1983 | A |
4395093 | Rosendahl | Jul 1983 | A |
4484801 | Cox | Nov 1984 | A |
4558462 | Horiba | Dec 1985 | A |
4583117 | Lipton | Apr 1986 | A |
4707735 | Busby | Nov 1987 | A |
4745479 | Waehner | May 1988 | A |
4772942 | Tuck | Sep 1988 | A |
4819064 | Diner | Apr 1989 | A |
4862388 | Bunker | Aug 1989 | A |
4868682 | Shimizu et al. | Sep 1989 | A |
4947260 | Reed | Aug 1990 | A |
4974073 | Inova | Nov 1990 | A |
4985762 | Smith | Jan 1991 | A |
5023720 | Jardins | Jun 1991 | A |
5023725 | McCutchen | Jun 1991 | A |
5040055 | Smith | Aug 1991 | A |
5068650 | Fernandez | Nov 1991 | A |
5083389 | Alperin | Jan 1992 | A |
5115266 | Troje | May 1992 | A |
5130794 | Ritchey | Jul 1992 | A |
5140647 | Ise | Aug 1992 | A |
5142357 | Lipton | Aug 1992 | A |
5153716 | Smith | Oct 1992 | A |
5170182 | Olson et al. | Dec 1992 | A |
5175616 | Milgram | Dec 1992 | A |
5185667 | Zimmerman | Feb 1993 | A |
5187571 | Braun et al. | Feb 1993 | A |
5231673 | Elenga | Jul 1993 | A |
5258837 | Gormley | Nov 1993 | A |
5291334 | Wirth | Mar 1994 | A |
5302964 | Lewins | Apr 1994 | A |
5369450 | Haseltine | Nov 1994 | A |
5424773 | Saito | Jun 1995 | A |
5444478 | Lelong | Aug 1995 | A |
5465128 | Wah Lo | Nov 1995 | A |
5465163 | Yoshihara | Nov 1995 | A |
5469274 | Iwasaki | Nov 1995 | A |
5499110 | Hosogai | Mar 1996 | A |
5502309 | Davis | Mar 1996 | A |
5508734 | Baker | Apr 1996 | A |
5510830 | Ohia | Apr 1996 | A |
5539483 | Nalwa | Jul 1996 | A |
5548409 | Ohta | Aug 1996 | A |
5563650 | Poelstra | Oct 1996 | A |
5602584 | Mitsutake | Feb 1997 | A |
5608543 | Tamagaki | Mar 1997 | A |
5611033 | Pitteloud | Mar 1997 | A |
5619255 | Booth | Apr 1997 | A |
5625462 | Ohta | Apr 1997 | A |
5646679 | Yano | Jul 1997 | A |
5649032 | Burt et al. | Jul 1997 | A |
5650814 | Florent | Jul 1997 | A |
5657073 | Henley | Aug 1997 | A |
5657402 | Bender | Aug 1997 | A |
5666459 | Ohta | Sep 1997 | A |
5668595 | Katayama | Sep 1997 | A |
5680150 | Shimizu | Oct 1997 | A |
5682198 | Katayama | Oct 1997 | A |
5689611 | Ohta | Nov 1997 | A |
5691765 | Schieltz | Nov 1997 | A |
5699108 | Katayama | Dec 1997 | A |
5703604 | McCutchen | Dec 1997 | A |
5721585 | Keast | Feb 1998 | A |
5745305 | Nalwa | Apr 1998 | A |
5760826 | Nayar | Jun 1998 | A |
5768443 | Michael et al. | Jun 1998 | A |
5790181 | Chahl | Aug 1998 | A |
5812704 | Pearson | Sep 1998 | A |
5838837 | Hirosawa | Nov 1998 | A |
5841589 | Davis | Nov 1998 | A |
5848197 | Ebihara | Dec 1998 | A |
5870135 | Glatt | Feb 1999 | A |
5903303 | Fukushima | May 1999 | A |
5920376 | Bruckstein | Jul 1999 | A |
5920657 | Bender | Jul 1999 | A |
5940641 | McIntyre | Aug 1999 | A |
5956418 | Aiger et al. | Sep 1999 | A |
5966177 | Harding | Oct 1999 | A |
5973726 | Iijima | Oct 1999 | A |
5978143 | Spruck | Nov 1999 | A |
5982452 | Gregson | Nov 1999 | A |
5982941 | Loveridge | Nov 1999 | A |
5982951 | Katayama | Nov 1999 | A |
5987164 | Szeliski | Nov 1999 | A |
5990934 | Nalwa | Nov 1999 | A |
5990941 | Jackson | Nov 1999 | A |
6002430 | McCall | Dec 1999 | A |
6005987 | Nakamura | Dec 1999 | A |
6009190 | Szeliski | Dec 1999 | A |
6011558 | Hsieh | Jan 2000 | A |
6028719 | Beckstead | Feb 2000 | A |
6040852 | Stuettler | Mar 2000 | A |
6043837 | Driscoll | Mar 2000 | A |
6044181 | Szeliski et al. | Mar 2000 | A |
6064760 | Brown | May 2000 | A |
6078701 | Hsu et al. | Jun 2000 | A |
6097430 | Komiya | Aug 2000 | A |
6097854 | Szeliski | Aug 2000 | A |
6111702 | Nalwa | Aug 2000 | A |
6115176 | Nalwa | Sep 2000 | A |
6118474 | Nayar | Sep 2000 | A |
6128416 | Oura | Oct 2000 | A |
6144501 | Nalwa | Nov 2000 | A |
6148118 | Murakami | Nov 2000 | A |
6173087 | Kumar et al. | Jan 2001 | B1 |
6175454 | Hoogland | Jan 2001 | B1 |
6188800 | Okitsu | Feb 2001 | B1 |
6195204 | Nalwa | Feb 2001 | B1 |
6211911 | Komiya | Apr 2001 | B1 |
6215914 | Nakamura | Apr 2001 | B1 |
6452628 | Kato | Sep 2002 | B1 |
6526095 | Nakaya et al. | Feb 2003 | B1 |