Embodiments of the invention relate generally to multiple imager video systems that produce images through a process of stitching multiple images together.
Video/imaging systems may be used to produce video with a wide field-of-view and may be classified as single imager/sensor systems or multiple imager systems. Embodiments of the present invention relate to such video/imaging systems comprising multiple imagers or sensors.
Image stitching is a feature common to multiple imager systems that produce images with a wide field-of-view. With image stitching, individual images from the input imagers or sensors are stitched together to form a global image that has the required field-of-view.
It is known for imager systems to include a zoom feature used to zoom into particular areas of interest in the resultant global image.
Embodiments of the present invention disclose an intelligent zoom method which, advantageously, may be used to determine the left and right extremities in a global image to serve as the left and right boundaries, respectively, of the region of interest for a zoom function.
In one embodiment, said left and right extremities are set or determined dynamically based on the detection/non-detection of a face in the global image. Advantageously, the left extremity is set to the coordinates of the face determined to be the “left most” face in the global image, and the right extremity is set to the coordinates of the face determined to be the “right most” face in the global image. Thus, advantageously, the area that serves as the region of interest for a zooming function based on said extremities will always contain the faces detected in the global image. This is useful, for example, where an event associated with the global image happens to be a meeting, so that a zoom feature may be implemented to effectively zoom into an area of the meeting that includes only the faces of the meeting participants.
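The extremity selection described above may be sketched as follows. This is a hypothetical illustration only; the (x, y, w, h) box format, the margin parameter, and the fallback to the full image when no face is detected are assumptions, not details taken from the disclosure.

```python
# Hypothetical sketch: derive a zoom region of interest from detected faces.
# Face boxes are assumed to be (x, y, w, h) tuples in global-image pixel
# coordinates, with x increasing to the right.

def zoom_region(face_boxes, image_width, image_height, margin=20):
    """Return (left, right) X extremities bounding all detected faces."""
    if not face_boxes:
        # No faces detected: fall back to the full global image width.
        return 0, image_width
    left = min(x for x, y, w, h in face_boxes)        # "left most" face
    right = max(x + w for x, y, w, h in face_boxes)   # "right most" face
    # Pad by a small margin, clamped to the image bounds.
    return max(0, left - margin), min(image_width, right + margin)

print(zoom_region([(100, 50, 40, 40), (600, 80, 50, 50)], 1920, 480))
```

Any zoom window derived from these two extremities necessarily contains every detected face.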
Other aspects of the invention will be apparent from the detailed description below.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that the invention can be practiced without these specific details. In other instances, structures and devices are shown in block diagram form only in order to avoid obscuring the invention.
The present invention, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The drawings are provided for purposes of illustration only and merely depict exemplary embodiments of the invention. These drawings are provided to facilitate the reader's understanding of the invention and shall not be considered limiting of the breadth, scope, or applicability of the invention. It should be noted that for clarity and ease of illustration these drawings are not necessarily made to scale.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments.
Moreover, although the following description contains many specifics for the purposes of illustration, anyone skilled in the art will appreciate that many variations and/or alterations to said details are within the scope of the present invention. Similarly, although many of the features of the present invention are described in terms of each other, or in conjunction with each other, one skilled in the art will appreciate that many of these features can be provided independently of other features. Accordingly, this description of the invention is set forth without any loss of generality to, and without imposing limitations upon, the invention.
As will be understood by one of ordinary skill in the art, the geometric arrangement or configuration of the sensors 102 may vary. For example, for panoramic video imaging, the sensors may be positioned along an arc configured to cover a field-of-view ranging from 120° to 360°.
Bayer raw images from the sensors 102 may be output into image signal processors (ISPs) 104. Each ISP may include a face detection engine indicated by reference numeral 104A. As will be appreciated by one of ordinary skill in the art, the ISPs 104 may be configured to perform a variety of image processing functions and/or image enhancements, including color format conversion, noise reduction, etc., to produce an image frame for a selected color-space, e.g., RGB or YUV. The face detection engine 104A may be configured to detect and track faces in an image sequence. In some embodiments, the face detection functionality may be implemented as part of a stitching engine 106. Functionally, the stitching engine 106 may be configured to implement techniques to stitch the images output by each ISP 104 in order to produce a wide field-of-view image.
In one embodiment, in order to perform the intelligent zooming method of the present invention, the image output by each ISP 104 is divided into local zones which are then mapped to global zones in a global or stitched image.
In one embodiment, the ISPs 104 may be the commercially available AP 1302 ISP made by ON Semiconductor.
For purposes of implementing the intelligent zooming method of the present invention, the hardware 100 also includes a zoom engine 108. In one embodiment, the face detection engine 104A may generate a bounding box around a detected face and provide the image pixel coordinates of the bounding box. In one embodiment, the zoom engine 108 may be configured to read the bounding box coordinates for all faces detected from an image array. Advantageously, the zoom engine 108 may be configured to calculate a region of interest in the scene being viewed. In one embodiment, based on the region of interest, the zoom engine 108 may compute scaling parameters. In one embodiment, these parameters may include a scaling factor and a starting X coordinate for a stitched image, and the zoom engine may program a scaler to produce an image frame which covers only the region of interest. The scaling parameters may be input into a frame formatter 110 that produces a zoomed image based on said parameters.
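A rough sketch of the scaling-parameter computation follows. The function name, the output-width convention, and the form of the scaling factor are illustrative assumptions; the disclosure states only that the parameters include a scaling factor and a starting X coordinate.

```python
# Hypothetical sketch of the zoom engine's scaling-parameter computation.
# roi_left/roi_right are the region-of-interest boundaries in stitched-image
# pixel coordinates; output_width is the desired width of the zoomed frame.

def scaling_params(roi_left, roi_right, output_width):
    """Return (scaling factor, starting X coordinate) for the scaler.

    The frame formatter would use these parameters to produce a frame
    that covers only the region of interest."""
    roi_width = roi_right - roi_left
    scale = output_width / roi_width   # > 1.0 means zooming in
    return scale, roi_left

scale, start_x = scaling_params(80, 670, 1920)
```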
It has been observed by the inventor that face detection has a low probability of generating a false positive and a high probability of generating a false negative, i.e., failing to detect a valid face.
In one embodiment of the invention, a mapping function is implemented as follows: the mapping function is based on the top left coordinates of a rectangle each ISP draws around a detected face. If the mapping function detects a face in a zone, it marks the zone as a true zone, otherwise it marks the zone as a false zone. In other embodiments, other mapping methods may be used. For example, instead of creating zones, the exact pixel coordinates associated with a face may be used to mark pixels as true or false (a pixel's mark is true if it corresponds to a detected face).
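The zone-mapping function described above might be sketched as follows. The uniform zone width, the zone count, and the box format are assumptions made for illustration; only the use of the top-left corner of the face rectangle comes from the text.

```python
# Hypothetical sketch: mark local zones True/False based on the top-left
# corner of each face bounding box reported by the ISP.

def map_faces_to_zones(face_boxes, num_zones, zone_width):
    """Return a list of booleans, one per zone (True = face detected)."""
    zones = [False] * num_zones
    for x, y, w, h in face_boxes:
        # The zone containing the rectangle's top-left corner is marked True.
        zone = min(x // zone_width, num_zones - 1)
        zones[zone] = True
    return zones

print(map_faces_to_zones([(130, 40, 50, 50), (610, 70, 60, 60)], 8, 240))
```

The per-pixel variant mentioned in the text would replace the zone array with a per-pixel mask indexed by the face coordinates directly.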
In one embodiment, entry and exit criteria may be implemented to mark a local zone as a true zone based on parameters T-enter, N-enter, T-exit, and N-exit, as follows:
In one embodiment, the entry criteria may be configured as follows: T-enter = 3 seconds and N-enter = 90%.
Based on the above configuration, if, in the last 90 (3 sec×30 fps) iterations of the mapping function, a face was detected in a particular local zone in at least 90% of the frames (90% of 3 sec×30 fps; 81 frames with face detection returning a positive result), then that zone is marked as a True zone.
In one embodiment, the exit criteria may be configured as follows: T-exit = 9 seconds and N-exit = 99%.
This means that if, in the last 270 (9 sec×30 fps) iterations of the mapping function, no face was mapped into a local zone in at least 99% of the frames (99% of 9 sec×30 fps; 267 frames with face detection returning a negative result), then the zone is marked as a False zone.
In one embodiment, detection of a face resets the exit criteria counters; similarly, satisfaction of the exit criteria resets the entry criteria counters. In one embodiment, once the mapping is complete, an auto zoom function is called to adjust the zoom and pan levels.
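The entry/exit hysteresis described above may be sketched as follows. The class structure and counter bookkeeping are illustrative assumptions; the parameter values follow the worked example in the text (entry: 90% of 3 s at 30 fps; exit: 99% of 9 s at 30 fps), and a face detection resets the exit counters while a completed exit resets the entry counters.

```python
# Hedged sketch of per-zone entry/exit hysteresis for marking True/False zones.

FPS = 30
T_ENTER, N_ENTER = 3 * FPS, 0.90   # entry window (frames) and hit ratio
T_EXIT,  N_EXIT  = 9 * FPS, 0.99   # exit window (frames) and miss ratio

class Zone:
    """Tracks one local zone's True/False state across mapping iterations."""

    def __init__(self):
        self.state = False                           # True zone / False zone
        self.enter_frames = self.enter_hits = 0      # entry-criteria counters
        self.exit_frames = self.exit_misses = 0      # exit-criteria counters

    def update(self, face_seen):
        """Feed one frame's detection result; return the zone's state."""
        if face_seen:
            # Per the text: detecting a face resets the exit counters.
            self.exit_frames = self.exit_misses = 0
        self.enter_frames += 1
        self.enter_hits += face_seen
        self.exit_frames += 1
        self.exit_misses += (not face_seen)

        if self.enter_frames >= T_ENTER:
            if self.enter_hits >= N_ENTER * T_ENTER:
                self.state = True
            self.enter_frames = self.enter_hits = 0

        if self.exit_frames >= T_EXIT:
            if self.exit_misses >= N_EXIT * T_EXIT:
                self.state = False
                # Per the text: the exit criteria reset the entry counters.
                self.enter_frames = self.enter_hits = 0
            self.exit_frames = self.exit_misses = 0
        return self.state
```

For example, 90 consecutive frames with a detected face mark the zone True, and a subsequent run of 270 face-free frames marks it False again.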
Numerous specific details may be set forth herein to provide a thorough understanding of a number of possible embodiments of a digital imaging system incorporating the present disclosure. It will be understood by those skilled in the art, however, that the embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the embodiments. It can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments.
Although the invention is described above in terms of various exemplary embodiments and implementations, it should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead can be applied, alone or in various combinations, to one or more of the other embodiments of the invention, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing: the term “including” should be read as meaning “including, without limitation” or the like; the term “example” is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof; the terms “a” or “an” should be read as meaning “at least one,” “one or more” or the like; and adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future.
This application claims the benefit of priority to U.S. Provisional Patent Application No. 62/465,644 entitled “Intelligent Zoom for a Panoramic Video System” filed Mar. 1, 2017, which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
8045047 | Nikkanen | Oct 2011 | B2 |
8908057 | Yoshizumi | Dec 2014 | B2 |
20020122113 | Foote | Sep 2002 | A1 |
20040027451 | Baker | Feb 2004 | A1 |
20040179719 | Chen | Sep 2004 | A1 |
20070092245 | Bazakos | Apr 2007 | A1 |
20110310214 | Saleh | Dec 2011 | A1 |
20130010084 | Hatano | Jan 2013 | A1 |
20130063596 | Ueda | Mar 2013 | A1 |
20140126819 | Doepke | May 2014 | A1 |
20170140791 | Das | May 2017 | A1 |
20180288311 | Baghert | Oct 2018 | A1 |
Number | Date | Country | |
---|---|---|---|
62465644 | Mar 2017 | US |