BACKGROUND OF THE INVENTION
This invention relates to a system for and method of processing an acquired image. More particularly, the invention relates to a system for, and method of, processing an image in a wide variety of ways not previously attainable in the prior art, to provide results enhanced over those which the prior art has been able to achieve.
Systems are now in use for processing an acquired image. For example, systems are now in use for processing an acquired image to determine the entrance of individuals into, and their departure from, a defined area such as an enclosure (e.g., a room). Systems are also in use for determining the identity of individuals and objects in an enclosure. Systems are further in use for tracking the movement and variations in the positioning of individuals in an enclosure. These are only a few examples of the different types of processing and uses of acquired images.
As of now, different processing and uses of acquired images require different types of systems to be constructed. For example, the same system cannot be used both to identify an individual in a crowd and to track the movement of the identified individual in the crowd, particularly the movement of the individual in a defined area such as an enclosure or from one defined area to another. Nor can the same system be used to magnify a particular portion of an acquired image and process that magnified portion. Since different systems are required to perform different functions, costs to individuals and organizations have increased, available space has become limited and complexities of operation have been magnified.
BRIEF DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION
A software development kit prioritizes certain aspects of an acquired image and introduces the prioritized aspects to a main processor. Alternatively, a coprocessor, or the coprocessor and the development kit, manipulate(s) the acquired image and introduce(s) the manipulated image to the processor. The reprogramming of either the development kit or the coprocessor may be initiated by either of them or by the processor, and the programming may be provided by the main processor.
A central station and a gate array may also be individually reprogrammed by the main processor, which sets up, programs and controls an intelligent imaging platform in accordance with the individual reprogrammings.
A reprogramming of an audio acquisition stage may also be initiated by that stage, by any of the other stages or by the processor, and may be provided by the processor. The audio information may be related to the acquired image.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A and 1B are partial views which together form one complete view of a diagram, primarily in block form, of a system (hardware and software) constituting a preferred embodiment of the invention for processing acquired images on a global basis where the system can be used to process the images in a wide variety of different ways;
FIG. 2 is an expanded diagram, primarily in block form, of the software portion of the system shown in FIGS. 1A and 1B;
FIG. 3 is an expanded diagram, partially in block form, showing how different portions of the hardware in FIGS. 1A and 1B are interconnected by buses;
FIG. 4 is an expanded diagram, partially in block form, showing how different portions of the system shown in FIGS. 1-3 are reprogrammable, and how the different reprogrammable stages are connected to one another by interfaces;
FIG. 5 is a schematic view showing the relationship between a reprogrammable stage and a non-reprogrammable stage, the reprogrammable stage being capable of providing a plurality of generic operations and a plurality of custom defined operations and the non-reprogrammable stage being capable of providing only a plurality of generic operations;
FIG. 6 is a schematic diagram showing how different primary blocks in the system shown in FIGS. 1-5 can be combined in different patentable combinations depending upon the results desired to be obtained from the processing of the acquired image and also including a chart showing different combinations of the blocks shown in FIG. 6; and
FIGS. 7A-7D are partial views which together form one complete view of a chart illustratively showing a number of individual functions that may be provided by the system shown in FIGS. 1-6 to accomplish different desired results in processing an acquired image.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
FIGS. 1A-1B show a circuit diagram, primarily in block form, of a system generally indicated at 10 and constituting a preferred embodiment of the invention. The system 10 is shown as being divided by broken lines 12 into a central station generally indicated at 14 and by broken lines 16 into an intelligent imaging platform generally indicated at 18. A communications arrangement formed by one or more communications channels and generally indicated at 20 is disposed in the intelligent imaging platform 18. The intelligent imaging platform 18 is in turn indicated by broken lines 22 as including a software section generally indicated at 24. Substantially everything within the broken lines 16 (except for the communications arrangement 20 and the software section 24) constitutes hardware generally indicated at 28a, 28b and 28c. The hardware section 28a may be considered to include software which interfaces with the hardware in the section.
The hardware section in FIGS. 1A-1B also includes an image acquirer 30 for receiving an image and converting the image to signals, preferably electrical, in a conventional manner. The hardware section also includes an audio codec or audio acquirer 32 for receiving audio information which may be related to the video information. The audio codec or acquirer 32 may include an audio coder decoder designated as a “codec”. Signals pass through a bus 31 between the image acquirer 30 and a field programmable gate array 34 which may be constructed in a conventional manner. Signals also pass through a bus 33 between the gate array 34 and a coprocessor 36. The gate array 34 and the coprocessor 36 are disposed in the hardware section 28a.
The audio signals preferably pass through a bus 35 between the audio codec or acquirer 32 and the field reprogrammable gate array 34. However, the audio signals could pass through a bus between the audio codec or acquirer 32 and the coprocessor 36. The system is more flexible when the audio signals pass between the audio codec or acquirer 32 and the field reprogrammable gate array 34 than when the audio signals pass between the audio codec or acquirer 32 and the coprocessor 36. The ability of the signals from the audio acquirer 32 to pass to either the gate array 34 or the coprocessor 36 may be seen by the extension of the bus 35 to the audio/video interface for the hardware section 28a.
Signals pass between the hardware section 28a and the software section 24 through a bus 38. Signals also pass between a miscellaneous input/output stage 40 (considered as hardware) and the software section 24 through a bus 42. Signals also pass between the hardware section 28b and the software section 24 through a bus 44. The hardware section 28b includes a compact flash card interface 46, a PC card interface 48 and a PCI interface 50. The hardware section 28b provides information storage and includes a capacity for providing information storage expansion and other (non-storage) expansion.
The software section 24 includes a video manipulator 52, an audio manipulator 54, an event generator 56, an event responder 58, a platform user interface 60 and a kernel operating system 62. Each of these stages has an arrow 64 disposed in an oblique direction at the bottom right corner of the stage. The oblique arrow 64 indicates that the stage is capable of being reprogrammed. The reprogramming of any stage with the arrow 64 can be initiated by any stage, whether or not the initiating stage itself has the arrow 64 indicating that it is capable of being reprogrammed. For example, the reprogramming of any of the stages 34, 36 and 52-62 (even numbers only) can be self-initiated and can be initiated by any of the other stages 34, 36 and 52-62 (even numbers only) and by any other stages such as the stages 70, 72 and 74. Thus, each of the stages 34, 36 and 52-62 (even numbers only) is illustratively able to be reprogrammed, and the stages 34, 36, 52-62 (even numbers only) and 65 receiving such reprogramming have an enhanced flexibility in operation in comparison to the stages which do not. Each reprogrammable stage, including the stages 34, 36, 52-62 (even numbers only) and 65, can also initiate reprogramming of itself. The reprogramming of each reprogrammable stage can be initiated by almost any stage in the system, except for the image acquirer 30, the audio acquirer or codec 32, the miscellaneous input/output stage 40 and the storage and expansion stage 28b.
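By way of illustration only, the reprogramming relationship just described may be pictured in the following Python sketch. The class and method names are hypothetical and are not part of the disclosed hardware; the sketch merely assumes, as stated above, that any stage may initiate a request while only the main processor 66 provides the new programming.

```python
# Illustrative sketch of the reprogramming model described above.
# All class and method names are hypothetical; the patent discloses
# the behavior (any stage may initiate, the main processor provides
# the reprogramming), not this code.

class Stage:
    """A stage that can request reprogramming of another stage."""

    def __init__(self, name, reprogrammable):
        self.name = name
        self.reprogrammable = reprogrammable  # has the oblique arrow 64
        self.program = "generic-default"

    def request_reprogram(self, main_processor, target, new_program):
        # Any stage, reprogrammable or not, may initiate a request,
        # including a request aimed at itself.
        main_processor.reprogram(target, new_program, initiator=self)


class MainProcessor(Stage):
    """Only the main processor actually provides the new programming."""

    def __init__(self):
        super().__init__("main processor 66", reprogrammable=True)

    def reprogram(self, target, new_program, initiator):
        if not target.reprogrammable:
            raise ValueError(f"{target.name} is not reprogrammable")
        target.program = new_program
        print(f"{initiator.name} initiated reprogramming of "
              f"{target.name} to '{new_program}'")


main = MainProcessor()
gate_array = Stage("field programmable gate array 34", reprogrammable=True)
coprocessor = Stage("coprocessor 36", reprogrammable=True)

# The coprocessor initiates reprogramming of the gate array;
# the main processor carries it out.
coprocessor.request_reprogram(main, gate_array, "sharpen-and-denoise")
# A stage may also initiate reprogramming of itself.
gate_array.request_reprogram(main, gate_array, "audio-passthrough")
```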
A software development kit 65 is indicated by a cloud designated as "platform configuration" with an arrow 64 in the upper left corner. The output from the software development kit 65 is introduced to a main processor 66 to control the operation of the main processor. The software development kit may be considered to be within the main processor 66. The main processor 66 reprograms individual ones of the stages 34, 36 and 52-62 (even numbers only) and the software development kit 65 to process the image acquired by the stage 30 in the intelligent imaging platform 18 and the audio acquired by the stage 32 in the intelligent imaging platform 18.
The field programmable gate array 34 provides reprogrammable arrays of gates to clarify and/or sharpen or otherwise process the video data acquired from the image acquisition stage 30 and introduces the clarified image to the coprocessor 36. The coprocessor 36 manipulates the audio and video data depending upon the results desired to be obtained from the system 10. For example, different manipulations may be provided by the coprocessor 36 when the image is targeted on a single person, on a group of people or on an inanimate object. The miscellaneous input/output stage 40 provides such information as motion sensing to indicate to an alarm panel that the camera has observed and detected motion in a scene. The stage 40 can also indicate to the intelligent imaging platform that some external device has detected motion and wishes to inform the intelligent imaging platform that an event worth observing is taking place. In addition, the stage 40 may signal a lens to change the size of its iris. It will be appreciated that the stage 40 may perform a considerable number of functions other than motion detecting.
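A minimal sketch of the miscellaneous input/output functions just described may be helpful. All names are hypothetical, and the behaviors are assumptions drawn only from the examples in the preceding paragraph (motion reporting to an alarm panel, external triggers and iris control).

```python
# Hedged sketch of the miscellaneous input/output stage 40.
# Names and value ranges are illustrative assumptions.

class MiscIO:
    def __init__(self):
        self.alarm_panel_line = False
        self.iris_size = 0.5  # normalized lens iris opening

    def report_motion(self, detected):
        """Tell the alarm panel the camera detected motion in the scene."""
        self.alarm_panel_line = detected

    def external_trigger(self):
        """An external device reports an event worth observing."""
        return {"event": "external-motion", "observe": True}

    def set_iris(self, size):
        """Instruct the lens to change the size of its iris."""
        self.iris_size = max(0.0, min(1.0, size))

io = MiscIO()
io.report_motion(True)
io.set_iris(0.8)
print(io.alarm_panel_line, io.iris_size, io.external_trigger())
```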
The video manipulator stage 52 may manipulate an image to clarify the image, as by correcting for color or extracting facial features. This is especially important when faces are in the image and the faces are to be matched against a database identifying a particular face. A similar type of manipulation is provided by the stage 54 with respect to audio information, such as when a person is speaking. The event generator 56 matches the image from the stage 52 against the images in the database. This is important when the images are faces. The event responder stage 58 provides a response depending upon the matching, or lack of matching, of the acquired image from the stage 52 against the image in the database. Although the matching has been discussed with reference to faces, the matching can be with respect to any physical object or any perceived state independent of a physical object.
The event responder 58 acts upon the output from the event generator 56 in accordance with the processing which is provided to obtain the desired results. The platform user interface 60 takes the information that the intelligent imaging platform 18 sees, processes that information and presents the processed information to the user. It also allows the user to adjust the settings of the intelligent imaging platform. The platform configuration 65 allows the user of the system to write code for customizing the intelligent imaging platform to provide the desired result. The kernel operating system 62 provides for the basic operation of the intelligent imaging platform. It is well known in the art.
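The division of labor between the event generator 56 and the event responder 58 may be illustrated by the following hedged sketch. The similarity metric and threshold are illustrative assumptions; the patent describes matching against a database, not any particular algorithm.

```python
# Hedged sketch of the event generator / event responder split.
# The matching metric and threshold are illustrative assumptions.

def event_generator(extracted_features, database, threshold=0.9):
    """Match extracted features against a database; yield match events."""
    for name, stored in database.items():
        # Toy similarity: fraction of agreeing feature values.
        agree = sum(a == b for a, b in zip(extracted_features, stored))
        score = agree / len(stored)
        if score >= threshold:
            yield {"event": "face-match", "identity": name, "score": score}


def event_responder(events):
    """Respond to events from the generator, e.g., alert a server."""
    matched = False
    for ev in events:
        matched = True
        print(f"ALERT: matched {ev['identity']} (score {ev['score']:.2f})")
    if not matched:
        print("No match: no response issued.")


database = {"alice": [1, 0, 1, 1, 0], "bob": [0, 1, 1, 0, 1]}
acquired = [1, 0, 1, 1, 1]  # features extracted by the video manipulator 52
event_responder(event_generator(acquired, database, threshold=0.8))
```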
Although the stages 52-62 (even numbers only) and 65 constitute software, they may be disposed in the hardware section 28c, since they control the operation of the main processing hardware 66. The main processing hardware 66 is sometimes referred to in this application as a "main processor". The main processor 66 is connected by the bus 75 to communication stages or channels in the intelligent imaging platform 18. The intelligent imaging platform 18 includes a subset of communication channels, for example, channels 70 (Ethernet), 72 (serial) and 74 (FireWire), in the communications arrangement 20. The channel 70 receives information from an Ethernet source. The channel 72 receives serial information from an external source. The channel 74 receives high speed information under a protocol known as FireWire and communicates this information to the main processing hardware 66. The channels 70, 72 and 74 are representative of the different types of information that may be acquired by the currently active communication channels in the intelligent imaging platform 18. The representative channels such as the channels 70, 72 and 74 also receive information from the main processor 66 and supply information to the main processor.
The intelligent imaging platform 18 in turn communicates through the communications network 76 with the central station 14. As shown in FIG. 1A, the central station 14 is reprogrammable and can initiate reprogramming of itself and of any other reprogrammable stage. The central station 14 is shown as including a station user interface 80, a station configuration 82, storage 84 and a platform setup, programming and control 86. The platform setup 86 may include setup and configuration information for event generation, event response, platform configuration, platform user interface, field programmable gate array and coprocessor corresponding to what is shown in the intelligent imaging platform 18. The platform setup 86 is shown as being included in the central station 14, but it controls the state of the stages 34, 36 and 52-62 (even numbers only) and of the main processor 66 in the intelligent imaging platform 18.
The stage 30 acquires an image and introduces the acquired image to the field programmable gate array 34. The gate array 34 processes the image in accordance with the processing desired to be provided for the image and introduces the signals representing the processed image to the coprocessor 36. The coprocessor 36 manipulates the clarified image dependent upon the desired result to be obtained from the system shown in FIGS. 1A-1B. For example, the coprocessor 36 may manipulate the image to focus on an individual in a crowd and may track the movements of the individual. Alternatively, the coprocessor may manipulate the image to concentrate on what happens in a particular corner of a room. The coprocessor 36 is also able to manipulate the audio from the codec 32 to conform to the manipulation of the video. However, as indicated previously, the audio information may be clarified by the field reprogrammable gate array 34 before it is introduced to the coprocessor 36.
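Purely as an illustration of the signal flow just described, the stages may be pictured as a chain of processing steps. Each function below stands in for a hardware or software stage, and the specific operations (contrast stretching, cropping, thresholding) are assumptions chosen only to make the sketch concrete.

```python
# Illustrative sketch of the acquisition pipeline; each function stands
# in for a hardware or software stage, and the operations are assumptions.

def image_acquirer(scene):
    """Stage 30: convert the scene to digital samples."""
    return [float(px) for px in scene]

def gate_array(samples):
    """Stage 34: clarify/sharpen, e.g., a simple contrast stretch."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0
    return [(px - lo) / span for px in samples]

def coprocessor(samples):
    """Stage 36: manipulate toward the desired result, e.g., crop to a
    region of interest such as a corner of a room."""
    return samples[: len(samples) // 2]

def video_manipulator(samples):
    """Stage 52: further enhancement, e.g., thresholding for features."""
    return [1 if px > 0.5 else 0 for px in samples]

scene = [10, 40, 80, 120, 200, 30, 90, 250]
for stage in (image_acquirer, gate_array, coprocessor, video_manipulator):
    scene = stage(scene)
print(scene)  # features ready for the event generator 56
```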
The signals from the coprocessor 36 are further manipulated by the stages 52 and 54. The video manipulator 52 further enhances the quality of the acquired image or otherwise processes the acquired image. For example, the video manipulator 52 may identify individual faces in a crowd and may extract facial features of an individual. The event generator 56 may match the facial features against a database to identify the individual on the basis of this matching against the database.
The system 10 shown in FIGS. 1A-1B has certain important advantages. It provides reprogrammable stages (e.g., the stages 34, 36, 52-62 (even numbers only) and 65) in the intelligent imaging platform 18 to control the operation of the platform. In this way, the reprogramming of these stages can occur instantaneously in the intelligent imaging platform 18, and the resultant changes in the output from the stages can directly control the operation of the intelligent imaging platform. This is in contrast to the prior art, where output signals have been introduced to controls outside of the intelligent imaging platform. In the prior art, the signals are then processed outside the intelligent imaging platform and introduced to the controls outside of the intelligent imaging platform. As will be appreciated, the passage of the signals outside of the platform and the subsequent processing of the signals outside of the platform produce a degradation in system performance.
The degradation of the signal resolution with increases in distance is particularly troublesome when analog signals are processed. Many of the systems of the prior art have processed analog signals. In contrast, the system of this invention operates on a digital basis. Coupled with the disposition of the controls in the intelligent imaging platform 18, the digital operation of the system of this invention enhances the sensitivity and the reliability and functionality of the system 10.
The system 10 also has other advantages. These result in part from the flexibility in the construction and operation of the system. For example, all of the stages 34, 36, 52-62 (even numbers only) and 65 are reprogrammable. Furthermore, each of the stages 34, 36, 52-62 (even numbers only) and 65 can be reprogrammed on the basis of a decision from that stage or from any other of these stages. This flexibility in reprogramming provides an enhanced sensitivity and reliability in the adjustments that can be made in the operation of the intelligent imaging platform 18, thereby providing an enhanced performance of the platform.
FIG. 2 illustrates the software in additional detail. It includes the video manipulator 52, which is shown within broken lines 90 in FIG. 2. As shown in FIG. 2, the video manipulator 52 includes a preprocessor 92 and an analyzer 94. The preprocessor 92 converts the acquired image from the stage 30 in FIG. 1B to a format that the user wishes to provide. For example, the preprocessor 92 may correct, fix or establish colors in the acquired image or may select only a small portion of the image. The analyzer 94 may illustratively look for something specific in the image or in a portion of the image. For example, the analyzer 94 may look for an individual having particular facial features. Alternatively, the analyzer 94 may extract facial features or may detect motion of an image or an object. The operation of the event generator 56 and the event responder 58 has been indicated previously in connection with FIGS. 1A-1B.
The output of the analyzer 94 is stored or archived as at 96 in FIG. 2, and the stored or archived output is introduced to a post processor 98. The post processor 98 illustratively provides for a modification of the image based upon the outputs of the analyzer 94 and the event responder 58. For example, the post processor 98 may emphasize image portions that have changed in position with time. The output of the post processor 98 is introduced to one of the stages 70, 72 and 74 in the communications arrangement 20 in FIG. 1A, and the output of the communications stage is provided to the communications network 76, also shown in FIG. 1A.
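The preprocess, analyze, archive and post-process chain of FIG. 2 may be sketched as follows. Frame differencing is an assumed stand-in for "emphasize image portions that have changed in position with time"; the stage numbers in the comments refer to FIG. 2.

```python
# Sketch of the chain in FIG. 2: preprocessor 92 -> analyzer 94 ->
# storage 96 -> post processor 98. Frame differencing is an assumed
# stand-in for "emphasize image portions that have changed."

from collections import deque

archive = deque(maxlen=100)  # storage/archive 96

def preprocess(frame):
    """Preprocessor 92: illustrative color/level correction."""
    return [min(255, int(px * 1.1)) for px in frame]

def analyze(frame, previous):
    """Analyzer 94: flag pixels that moved since the previous frame."""
    if previous is None:
        return [0] * len(frame)
    return [abs(a - b) > 20 for a, b in zip(frame, previous)]

def post_process(frame, motion_mask):
    """Post processor 98: emphasize the portions that changed."""
    return [255 if moved else px for px, moved in zip(frame, motion_mask)]

previous = None
for raw in ([10, 10, 10, 10], [10, 90, 10, 10], [10, 90, 95, 10]):
    frame = preprocess(raw)
    mask = analyze(frame, previous)
    archive.append(frame)          # archived as at 96
    out = post_process(frame, mask)
    print(out)                     # sent on to the communications stages
    previous = frame
```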
The miscellaneous input/output stage 40 in FIG. 1B is also shown in additional detail in FIG. 2 within a block 106 in broken lines. The miscellaneous input/output stage 40 includes miscellaneous inputs 108, such as triggers, and miscellaneous outputs 110, such as relays, light emitting diodes and an iris control port for the lens of the intelligent imaging platform 18. The audio manipulator 54 in FIG. 1B is also shown in FIG. 2 within a box 100 formed from broken lines. The audio manipulator 54 includes a preprocessor 102 and an analyzer 104 which respectively operate on the audio in a manner similar to the operation of the preprocessor 92 and the analyzer 94 on the video in FIG. 2.
FIG. 2 also includes the platform user interface 60 and the kernel operating system 62 shown in FIG. 1A. The platform user interface 60 includes commands and web pages and the kernel operating system 62 includes timer tasks and load control. The “load” refers to the work in performing the software tasks on the processor and the “load control” refers to the acts of organizing the tasks to make certain that all of the tasks are provided with an opportunity to occur. FIG. 2 also includes the platform configuration 65 also shown in FIG. 1B. The platform configuration 65 includes code load, set-up, original equipment manufacturers (OEM) requirements and the software development kit (SDK). The platform configuration 65 and all of the other stages in FIG. 2 include the diagonal line 64 to indicate that each of the stages can communicate with any of the other stages in FIG. 2 and can be reprogrammed by the main processor 66 on the basis of an initiation by any of the reprogrammable stages shown in FIG. 2.
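The notion of "load control" may be illustrated with a simple round-robin pass, sketched below. The scheduling policy is an illustrative assumption; the kernel operating system 62 may organize its tasks in any conventional manner.

```python
# Hedged sketch of "load control" in the kernel operating system 62:
# a round-robin pass that gives every software task an opportunity
# to run. The scheduling policy is an illustrative assumption.

from collections import deque

def load_control(tasks, slices=6):
    """Rotate through tasks so each one gets a turn on the processor."""
    queue = deque(tasks)
    for _ in range(slices):
        if not queue:
            break
        name, work = queue.popleft()
        work()                      # perform one unit of the task's work
        queue.append((name, work))  # re-queue so it runs again later

load_control([
    ("timer task", lambda: print("tick")),
    ("video manipulator 52", lambda: print("manipulate frame")),
    ("event responder 58", lambda: print("check events")),
])
```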
FIG. 3 shows the acquisition of an image illustratively in either photonic (light) or analog form. FIG. 3 also indicates the flow of video and audio data through the hardware shown in FIGS. 1A-1B. The video path includes the image acquisition of light illustratively in analog form as at 30 in FIGS. 1B and 3. The image is converted inside the image acquirer 30 into digital form either by using a combination of a lens and imager (for light) or by using an analog decoder—for example, an NTSC decoder—(for analog). The resultant signals flow through the video bus 31 to the field reprogrammable gate array 34 also shown in FIG. 1B. The gate array 34 also receives audio signals flowing through a bus 122 from the audio codec or acquirer 32 also shown in FIG. 1B.
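The two acquisition paths just described (light through a lens and imager, or an analog signal through a decoder) may be sketched as follows; the quantization details are illustrative assumptions.

```python
# Sketch of the two acquisition paths in FIG. 3. The mapping choices
# (8-bit samples, [-1, 1] analog range) are illustrative assumptions.

def lens(photons):
    """Focus light onto the imager (illustrative pass-through)."""
    return photons

def imager(photons):
    """Quantize light intensity to 8-bit digital samples."""
    return [min(255, int(p)) for p in photons]

def ntsc_decode(analog):
    """Digitize an analog (e.g., NTSC) signal (illustrative)."""
    return [int((v + 1.0) * 127.5) for v in analog]  # [-1, 1] -> [0, 255]

def acquire(source, kind):
    """Image acquirer 30: either the light path or the analog path."""
    return imager(lens(source)) if kind == "light" else ntsc_decode(source)

print(acquire([12.3, 250.9, 99.0], "light"))
print(acquire([-1.0, 0.0, 1.0], "analog"))
```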
The video and audio signals then flow through a bus 124 to the coprocessor 36. The output from the coprocessor 36 is provided to a bus 126. These signals then pass through the gate array 34 to the PCI bus 38. The signals on the buses 31 and 122 may also be bypassed through the field reprogrammable gate array 34 to the PCI bus 38 without passing through the coprocessor 36. The signals on the PCI bus 38 pass through the main processor 66 and through a communications bus 129 to the communications network 76 in FIG. 1A. As can be seen, the audio/video data flows through as many as five (5) different buses but only once through each bus. This allows for a streamlined flow of data through the intelligent imaging platform 18.
FIG. 6 is a simplified block diagram of the system shown in the previous Figures. In this Figure, the stages discussed previously in connection with FIGS. 1A-1B and considered as primary are shown. The simplified system includes the image acquirer 30 (designated as A), the field programmable gate array 34 (designated as B), the main processor 66 (designated as C), the software development kit 65 (designated as F), the central station 14 (designated as D) and the coprocessor 36 (designated as E). The software development kit F may be considered as a part of the platform configuration 65 in FIGS. 1B and 2 and is included within the main processor 66.
FIG. 6 also includes a chart showing primary combinations of individual ones of the stages A-F in FIG. 6 and optional combinations of the primary stages with the other stages shown in FIG. 4. As will be seen, there are two (2) primary combinations: (1) a combination of A (the image acquirer), C (the main processor) and F (the software development kit) and (2) a combination of A (the image acquirer), C (the main processor) and E (the coprocessor). Certain optional combinations are also shown involving individual ones of B (the field reprogrammable gate array), D (the central station) and F (the software development kit) for the primary combination of A, C and E. They further include optional combinations of B (the field reprogrammable gate array), D (the central station) and E (the coprocessor) for the primary combination of A, C and F. The combinations designated with a star in the first column may be considered the most critical. It will be appreciated that the combinations shown in FIG. 6 are illustrative only and that a considerable number of other combinations (some even primary) may be provided without departing from the scope of the invention.
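The combination logic of the chart may be illustrated with a small validity check. The two primary sets below come from the chart itself; the validator and its output format are assumptions made only for illustration.

```python
# Sketch of the FIG. 6 combination chart. The two primary combinations
# are A+C+F and A+C+E; remaining stages are optional additions.
# This validator is illustrative only.

PRIMARY = [{"A", "C", "F"}, {"A", "C", "E"}]
ALL_STAGES = set("ABCDEF")  # A=acquirer, B=gate array, C=main processor,
                            # D=central station, E=coprocessor, F=SDK

def classify(combo):
    combo = set(combo)
    if not combo <= ALL_STAGES:
        return "unknown stage"
    for primary in PRIMARY:
        if primary <= combo:
            extras = combo - primary
            return f"valid: primary {sorted(primary)} + optional {sorted(extras)}"
    return "not a combination built on a primary set"

print(classify("ACF"))   # primary combination (1)
print(classify("ACEB"))  # primary combination (2) plus the gate array
print(classify("ABF"))   # lacks a complete primary set
```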
FIG. 4 illustrates the configurability of different ones of applicants' primary stages. The stages correspond to the primary stages shown in FIG. 6—namely, the image acquirer 30, the field programmable gate array 34, the coprocessor 36, the main processor 66, the software development kit 65 (within the main processor 66) and the central station 14. All of these stages (except for the image acquirer 30) are configurable or reprogrammable as indicated by a cloud-like configuration within a rectangular block. Each cloud represents a configurable or reprogrammable entity which can be shaped to the task at hand. Each rectangular block represents the encompassing fixed body which is not unto itself configurable or reprogrammable.
The different blocks are defined and determined by the interfaces of applicants' assignee. These interfaces are as follows (an illustrative sketch of these interface boundaries appears after the list):
1. The video interface 31 between the image acquirer 30 and the field reprogrammable gate array 34;
2. A coprocessor interface 142 between the gate array 34 and the coprocessor 36;
3. The hardware interface 38 between the gate array 34 and the main processor 66;
4. A communications interface 146 between the main processor 66 and the central station 14; and
5. A software interface 148 between the main processor 66 and the software development kit 65.
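The following sketch illustrates the five interface boundaries as abstract protocol definitions. The method names are hypothetical; the patent defines the boundaries between blocks, not a programming interface.

```python
# Hedged sketch of the five interfaces enumerated above. The Protocol
# names and methods are hypothetical; the patent defines the
# boundaries, not an API.

from typing import Protocol

class VideoInterface(Protocol):           # interface 31
    def send_frames(self, frames: bytes) -> None: ...

class CoprocessorInterface(Protocol):     # interface 142
    def manipulate(self, frames: bytes) -> bytes: ...

class HardwareInterface(Protocol):        # interface 38
    def transfer(self, data: bytes) -> None: ...

class CommunicationsInterface(Protocol):  # interface 146
    def exchange(self, payload: bytes) -> bytes: ...

class SoftwareInterface(Protocol):        # interface 148
    def load_configuration(self, code: str) -> None: ...

# Each block in FIG. 4 is a fixed body; only what sits behind these
# interfaces (the "cloud") may be reconfigured or reprogrammed.
```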
The request to initiate reprogramming of a reprogrammable block can come from anywhere in the system with the exception of such stages as the image acquirer 30, the audio acquirer or codec 32 and the miscellaneous input/output stage 40. However, the reprogramming is provided by the main processor 66.
FIG. 5 schematically illustrates in some detail a customizable block, generally indicated at 150, which is also representative of other blocks. The customizable block 150 is reprogrammable, as indicated by the diagonal arrow 64 in the lower right corner. The customizable block 150 includes a sub-block 152 capable of performing a plurality of available generic operations designated in the sub-block as tasks 1 to n. These operations are generic to the block 150 and to many of the other blocks in the system shown in the drawings. When a block provides only available generic operations such as those in the sub-block 152, the block is not reprogrammable. The customizable block 150 may also include a sub-block 154 providing custom defined operations. These are operations individual to the block 150. The sub-block 154 may provide 1 to n custom operations. When the block 150 can provide one (1) or more custom defined operations, the block 150 is said to be reprogrammable and is demarcated by the arrow 64. As will be appreciated from the previous discussion, most of the blocks shown in the drawings are reprogrammable.
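The generic/custom distinction of FIG. 5 may be sketched as follows; the particular operations are illustrative assumptions, and only a block that accepts custom defined operations is treated as reprogrammable.

```python
# Sketch of the FIG. 5 customizable block 150: generic tasks 1..n are
# always available; a block is "reprogrammable" (arrow 64) only if it
# can also accept custom defined operations. Names are illustrative.

class Block:
    def __init__(self, name, accepts_custom):
        self.name = name
        self.accepts_custom = accepts_custom
        self.generic_ops = {"scale": lambda x: x * 2, "invert": lambda x: -x}
        self.custom_ops = {}

    @property
    def reprogrammable(self):
        # A block with one or more custom defined operations available
        # is demarcated with the arrow 64.
        return self.accepts_custom

    def add_custom_op(self, name, fn):
        if not self.accepts_custom:
            raise TypeError(f"{self.name} provides generic operations only")
        self.custom_ops[name] = fn

    def run(self, op, value):
        ops = {**self.generic_ops, **self.custom_ops}
        return ops[op](value)

fixed = Block("non-reprogrammable stage", accepts_custom=False)
custom = Block("customizable block 150", accepts_custom=True)
custom.add_custom_op("threshold", lambda x: 1 if x > 10 else 0)
print(fixed.run("scale", 5), custom.run("threshold", 42))
```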
FIGS. 7A-7D show a chart giving examples of different functions capable of being performed by the system 10. A complete view of the chart is obtained when the partial view of FIG. 7A is placed to the left of the partial view of FIG. 7B and above the partial view of FIG. 7C, and the partial view of FIG. 7D is placed below FIG. 7B and to the right of FIG. 7C. It will be appreciated that FIGS. 7A-7D show only a few of the multitudinous operations that can be performed by the system 10. The first column (designated as "Main Function") in FIGS. 7A and 7C indicates four (4) different functions which can be performed by the system 10. These four (4) functions are:
- (a) “Remote Color Video Monitor with Archive”,
- (b) “Face Print Generation and Upload to Server”,
- (c) “Gun Shot Detection and Server Notification,” and
- (d) “Subject Tracking with Realtime Video Monitor.”
The second column in FIGS. 7A and 7C is designated as “Other/Combined Functions”. It indicates other functions which can be performed in addition to the “main function” specified in the first column. For example, another function or a sub-function such as “Audio Monitoring” can be performed in addition to the main function of “Remote Color Video Monitor with Archive.” As another example, other functions or sub-functions such as (a) “Generate Face Present Audio Alert”, (b) “Recognize Face and Alert Server of Match” and (c) “Recognize Multiple Faces Simultaneously” can be performed with the main function “Face Print Generation and Upload to Server”.
The third column in FIGS. 7A and 7C indicates the function that is performed in the preprocessor 92 in FIG. 2. For example, the preprocessor 92 performs a color correction when the main function is “Remote Color Video Monitor with Archive”. As another example, the pre-processor 92 provides a “Facial Feature Extraction” when the main function is “Face Print Generation and Upload to Server”. The operation of the third (3rd) column in FIGS. 7A and 7C is dependent on the operation of the first and second columns of FIGS. 7A and 7C. This is also true of the operation in the fourth (4th) through eleventh (11th) columns of FIGS. 7A-7D.
FIGS. 7A-7D indicate the operation of the analyzer 94 in the fourth column of FIGS. 7A and 7C for different ones of the main functions in column 1 of FIGS. 7A and 7C. For example, for the main function of "Face Print Generation and Upload to Server", the analyzer 94 operates to provide "Face Print Generation." As another example, for the main function of "Subject Tracking with Realtime Video Monitor," the analyzer 94 operates to provide "Subject Detection/Motion Estimation."
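The dependency of the per-stage columns on the selected main function may be pictured, illustratively, as a configuration table. The entries below are abbreviated from the chart; the table structure itself is an assumption.

```python
# Hedged sketch of the FIGS. 7A-7D chart: each main function (column 1)
# selects the operation each stage performs (later columns). Entries
# are abbreviated from the chart; the table structure is an assumption.

FUNCTION_TABLE = {
    "Remote Color Video Monitor with Archive": {
        "preprocessor 92": "color correction",
    },
    "Face Print Generation and Upload to Server": {
        "preprocessor 92": "facial feature extraction",
        "analyzer 94": "face print generation",
    },
    "Subject Tracking with Realtime Video Monitor": {
        "analyzer 94": "subject detection/motion estimation",
    },
}

def configure(main_function):
    """Return the per-stage operations implied by the main function."""
    return FUNCTION_TABLE.get(main_function, {})

for stage, op in configure("Face Print Generation and Upload to Server").items():
    print(f"{stage}: {op}")
```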
Column 5 in FIGS. 7A and 7C indicates the operation of the event generator 56 in FIGS. 1A and 2 for the different main functions in column 1 of FIGS. 7A and 7C. In like manner, column 6 in FIGS. 7A and 7C indicates the operation of the event responder 58 in FIGS. 1A and 2 for the different main functions in column 1 of FIGS. 7A and 7C. Similarly, column 7 in FIGS. 7B and 7D indicates the operation of the storage member 96 in FIG. 2 for different main functions in column 1 of FIGS. 7A and 7C. The post processor 98 in FIG. 2 provides the operations shown in column 8 of FIGS. 7B and 7D for the different main functions specified in column 1 of FIGS. 7A and 7C. The communication stages 70, 72 and 74 in FIG. 1A perform the operations shown in column 9 of FIGS. 7B and 7D when the main function is as indicated in column 1 of FIGS. 7A and 7C and the other combined functions are as indicated in column 2 of FIGS. 7A and 7C.
Column 10 of FIGS. 7B and 7D indicates the miscellaneous output which is provided when the main function specified in column 1 of FIGS. 7A and 7C is performed. The miscellaneous output is indicated at 40 in FIG. 1B. The audio preprocessor 102 in FIG. 2 provides the operations shown in column 11 of FIGS. 7B and 7D for the different main functions specified in column 1 of FIGS. 7A and 7C. In like manner, the audio analyzer 104 in FIG. 2 provides the operations shown in column 12 of FIGS. 7B and 7D for the different main functions specified in column 1 of FIGS. 7A and 7C. The last column of FIGS. 7B and 7D is designated as "Notes". This column indicates that the field programmable gate array 34 and the coprocessor 36 are utilized for added processing power when "Recognize Multiple Faces Simultaneously" is provided as a sub-function in column 2 of FIGS. 7A and 7C. As another example, the gate array 34 and the coprocessor 36 are utilized when "Multiple Subject Tracking with Digital Pan, Tilt, Zoom" is provided in column 2 of FIGS. 7A and 7C.
Although this invention has been disclosed and illustrated with reference to particular embodiments, the principles involved are susceptible of use in numerous other embodiments which will be apparent to persons of ordinary skill in the art. The invention is, therefore, to be limited only as indicated by the scope of the claims.