The present invention generally relates to cameras and systems using cameras and, in particular, to apparatuses and systems using a number of cameras to provide multiple fields of view.
Cameras are used for a wide variety of functions and in many different situations. For example, cameras are used to monitor activity in spaces within buildings, such as department stores, malls, and businesses. In these areas, although the cameras are generally protected from exposure to natural elements (such as wind, rain, soil, etc.), they may be placed in areas that are not easily accessible and/or may not be regularly maintained for a number of reasons. In such instances, movable parts can wear out or malfunction, which can reduce the effectiveness of the camera or render it inoperable.
Additionally, insects, such as spiders and the like, can obstruct the movement of camera parts. For example, spider webs, the spiders themselves, and/or their victims can become caught in the path of various moving parts of a camera which can also reduce the effectiveness of the camera or render it inoperable.
Cameras can also be used in unprotected environments, such as in the outdoors where the camera can be exposed to various natural elements, insects, and the like. In some instances, the cameras can also be positioned in areas that are not easily accessible and/or where they are not maintained sufficiently. Additionally, replacement parts may not be readily accessible and, therefore, the camera may not be operable for a period of time until replacement parts can be made available.
For example, cameras are often used in aircraft for aerial surveillance of targets on the ground, at sea, etc. In some instances, such as on manned aircraft, although the aircraft has occupants that can perform maintenance on the camera, the parts may not be available while in flight, or may not be available at the aircraft's base of operations. In such situations, the camera may sit idle until the replacement parts arrive.
Cameras are also used on unmanned aircraft. In these situations, the aircraft is inaccessible during flight and if a camera becomes inoperable or its effectiveness reduced, it cannot be fixed until the aircraft returns from its mission. The reduced effectiveness of the camera or its inoperability can also influence the potential for the successful return of the aircraft, since the camera may be used by a remotely located controller to navigate the aircraft to and from a surveillance target.
Further, in such situations, the area available for movement of a camera in order to pan the camera to a different focal point can be restricted due to the small amount of space typically available in unmanned aircraft.
In such instances, digital cameras have been used. Some digital cameras have a large enough resolution to provide a functionality similar to a zoom lens. To accomplish this functionality, a digital camera having a high resolution and a wide field of view can be used. In such cameras, in order to view the entire field of view, some image information is discarded to provide a generally zoomed out resolution. For example, in some devices, every third column of information is discarded.
If a “zoomed in” view of an area is desired, only a portion of the entire field of view is shown in the display, but it is shown with less or none of the image information discarded. For example, in some devices, the full field of view can be segmented into nine displays' worth of information at full pixel resolution (3×3).
In these devices, the ratio of full field of view to small field of view is 3 to 1. In this way, the fine detail of the image can be shown. Additionally, this allows the digital camera to pan over an area that is the size of the “zoomed out” image, for example. In such cameras, the change of field between large and small can be accomplished through the use of computer executable instructions which can select and format the image data from the camera.
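The decimation-and-crop scheme described above can be sketched as follows. This is a minimal illustration assuming the frame is a NumPy array; the function names and the 3-to-1 factor are taken from the example above, not from any particular device:

```python
import numpy as np

def zoomed_out(frame: np.ndarray, factor: int = 3) -> np.ndarray:
    """Wide-field preview: keep every `factor`-th row and column,
    discarding the rest of the image information."""
    return frame[::factor, ::factor]

def zoomed_in(frame: np.ndarray, tile_row: int, tile_col: int,
              factor: int = 3) -> np.ndarray:
    """Return one of the factor x factor full-resolution tiles
    (e.g. 3x3 = nine displays' worth of information)."""
    th, tw = frame.shape[0] // factor, frame.shape[1] // factor
    return frame[tile_row * th:(tile_row + 1) * th,
                 tile_col * tw:(tile_col + 1) * tw]
```

Digital panning then amounts to stepping the tile indices across the full-resolution frame without moving any parts.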
Since the digital camera can obtain both a wide field of view and a detailed small field of view, if a ratio such as 3 to 1 is acceptable, the camera does not have to utilize moving parts for this pan and zoom functionality. However, if other fields of view or resolutions are desired, these types of digital cameras have to make use of lenses and movable parts to accomplish the other fields of view and resolutions.
Additionally, in some situations, such as in unmanned aircraft, weight and size are also important characteristics regarding camera design. In this regard, digital cameras are typically lighter and smaller than cameras with movable components. Further, when using a single movable lens, compromises may have to be made on the amount of zoom available based upon the limitations of the lens selected.
Embodiments of the present invention provide apparatuses and systems having a number of cameras. For example, camera embodiments can include digital surveillance cameras having a first field of view and a second field of view that is wider than the first field of view. The camera can generate image data and an imaging circuit can be used for converting the image data into a digital field of view.
In such embodiments, the camera can be mounted to an imaging circuit board. The imaging circuit can be formed on the imaging circuit board. The imaging circuit can be used to select a portion of the image data for conversion based upon a selected area within a field of view. The imaging circuit can include circuitry and/or computer executable instructions for converting the image data. The camera can also include a signal processing board for performing a conversion of the image data into a signal to be transmitted. Such camera embodiments can also include circuitry and/or computer executable instructions to provide a digital panning capability.
Embodiments of the present invention provide camera arrays including various numbers of cameras. As used herein, the term camera array includes one or more cameras. For example, in various embodiments, the camera array can include a first and a second camera, among others.
In such embodiments, the first camera can have a first field of view, while the second camera has a second field of view that is different than the first field of view. In this way, the multiple cameras can complement each other with respect to the field of view and zoom capabilities available to the overall functionality of the system. For example, the camera fields of view can be combined to provide a larger composite field of view and/or can complement each other by providing varying fields of view and zoom ratios for an area around a focal point.
In some embodiments, the multiple cameras can be fixed with respect to each other. In this way, the structures mounting the cameras to a backing plate or circuit board do not have articulating parts that could become damaged or their movement restricted. In such embodiments, the cameras can be directed to the same focal point or to different focal points as described above.
If digital cameras are used, the cameras themselves do not have to utilize movable parts. This can be beneficial, for example, in circumstances where the camera may not receive regular maintenance and/or in environments that expose the camera to natural elements, such as, water, soil, salt, and the like, among others.
In some embodiments, the digital cameras can include digital magnification and/or demagnification capabilities. In this way, a camera can be used for multiple fields of view, multiple pan factors, and multiple zoom factors. When combined with other cameras, such combinations can provide the user with more field of view, pan, and/or zoom options.
The cameras can also be directed at the same focal point. When multiple cameras are directed to the same focal point, these embodiments provide many field of view choices for the area centered around the focal point. In some embodiments, the camera array can be moved to change the area to be viewed that is aligned with the focal point of the multiple cameras.
The cameras used in the embodiments of the present invention can have any field of view that can be provided by a camera lens. For example, some possible fields of view can include 3.3 degrees, 10 degrees, 30 degrees, and 90 degrees. These exemplary fields of view can also serve as an example of the fields of view available from a four camera array.
Such an array can, for example, provide a continuous ability to zoom from 90 degrees down to approximately one degree. This can be accomplished by manual switching from one camera to another in a camera array, for example. This can also be accomplished through use of computer executable instructions that can switch from one camera to the next, when a low field of view or high field of view threshold is reached.
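The threshold-based switching described above might be sketched as follows. The four field-of-view ranges and the selection logic are illustrative assumptions built from the exemplary 90/30/10/3.3 degree lenses and an approximately 3-to-1 digital zoom per camera; they do not describe any particular array:

```python
# Hypothetical four-camera array: each entry is the (widest, narrowest)
# horizontal field of view, in degrees, that camera can provide.
CAMERA_FOVS = [(90.0, 30.0), (30.0, 10.0), (10.0, 3.3), (3.3, 1.1)]

def select_camera(requested_fov: float) -> int:
    """Pick the camera whose range covers the requested field of view,
    switching to the next camera when a low/high threshold is crossed."""
    for index, (wide, narrow) in enumerate(CAMERA_FOVS):
        if narrow <= requested_fov <= wide:
            return index
    # Requests outside the covered range clamp to the widest or
    # narrowest camera.
    return 0 if requested_fov > CAMERA_FOVS[0][0] else len(CAMERA_FOVS) - 1
```

Executed by computer executable instructions, this gives the continuous zoom from 90 degrees down to approximately one degree without the user selecting cameras manually.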
In some embodiments, an imaging component can be used that includes mega-pixel imaging control and interfacing circuitry. Since a digital picture is a digital interpretation of an actual scene viewed by the camera, the mega-pixel imaging control is used with digital cameras to aid in the construction of the pixels in order to represent the area of a scene that is being replicated in the image data. Interfacing circuitry can be used to connect the various control and imaging circuitry together to allow for intercommunication between the various control and imaging components.
Examples of imaging controls that can be provided in various embodiments include, but are not limited to, shutter time, shutter delay, color gain (e.g., can be in one or more of red, green, blue, monochrome, etc.), black level, window start position, window size, row and column size, white balance, color balance, window management, and algorithms that determine the value of these camera controls for various situations, to name a few. These controls can be accomplished through circuitry and/or computer executable instructions associated with the imaging circuitry. Additionally, circuitry and/or computer executable instructions can be executed on a signal processor, such as a digital signal processor, or other processing component, to implement some of the above functions.
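For illustration only, such a control set could be modeled as a simple parameter structure; every name and default value below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ImagingControls:
    """Illustrative register set for a mega-pixel imaging controller.
    All field names and defaults are hypothetical examples."""
    shutter_time_ms: float = 10.0
    shutter_delay_ms: float = 0.0
    gain_rgb: tuple = (1.0, 1.0, 1.0)   # per-channel color gain
    black_level: int = 16
    window_start: tuple = (0, 0)        # (row, column) of the readout window
    window_size: tuple = (486, 720)     # (rows, columns) to read out
```

An algorithm running on the imaging circuitry or a signal processor would then set these fields for the situation at hand, for example shortening the shutter time in bright scenes.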
In various apparatus embodiments, the apparatus can include a camera array connected to a mounting structure. For example, the camera and/or camera array can be mounted to a fixed mounting structure or a movable mount, for moving the camera array. As discussed above, the camera array can be mounted such that the entire array is moved together. In this way, the varied fields of view and zoom ratios of the cameras can be used in combination to provide a number of pan and zoom options for viewing the area in which the camera array is directed.
The movable mount can be designed to move the camera array in any manner. For example, the movable mount can be designed to rotate in one or more dimensions. In this regard, the movable mount can be designed to rotate 180 degrees, for example, around a center point in one or more dimensions. This can be beneficial when the cameras are to be used to view through a hole in a surface, such as the bottom of an aircraft or a ceiling of a room. However, the invention is not limited to such movement and embodiments can include more, less, or different types of movement.
Additionally, the use of a movable mount can also be used in combination with the digital panning features of a digital camera to allow a user to digitally pan to the edge of a digital field of view and then use a motorized movable mount to pan the camera array beyond the current digitally available field of view. This can be accomplished with a mount controller, such as a processor and computer executable instructions to switch between digital panning and physical panning through use of the movable mount.
In some embodiments, an apparatus can include a camera array having multiple cameras that generate image data. The image data can be handled in various manners. For example, the image data can be stored in memory, displayed on a display, communicated to another apparatus or system, passed to an application program, and/or printed on print media, etc.
Memory can be located proximate to one or more of the cameras (e.g., within a surveillance vehicle) or at a remote location, such as within a remote computing device at a base of operations at which a surveillance mission originated, is controlled, or has ended. In such instances, the image data can be stored in memory, as discussed above. For example, an apparatus provided in a surveillance vehicle can store the image data in memory when in the field. The information can then be sent to a remote device once the vehicle has exited a hostile area or remote area, or has returned from the mission.
The transmission of the image data can be accomplished via wired or wireless communication techniques. In embodiments where image data is transmitted or stored, the apparatus can also include an imaging circuit for converting the image data for storage in memory, and/or into a signal to be transmitted.
In some embodiments, the apparatus can also include a switching circuit for switching between the image data from the one camera to the image data from another camera for conversion into the signal to be transmitted. Additionally, an imaging circuit can be used to select a portion of the image data for conversion based upon a selected field of view. In some embodiments, each camera can have its own imaging circuit.
A remote user, in various embodiments, can select a camera and a field of view and/or zoom ratio (e.g., selected area within a field of view) for the image data that is to be sent to the user. The apparatus can then configure the camera array settings to provide the selected image data to the user.
In some embodiments, the apparatus can use a digital signal processing (DSP) board for performing the conversion of the data to be stored, displayed, transmitted, and/or printed, etc. The DSP board can be a part of, or connected to, the one or more imaging circuits of the apparatus.
The embodiments of the present invention also include a number of multiple camera system embodiments. In some embodiments, the system includes a camera array having multiple cameras. The cameras generate image data that can be provided to a computing device, such as, or including, a DSP board, having computer executable instructions for receiving and/or processing the image data. In some embodiments, the computing device can be located in a remote location with respect to the cameras. The computing device can also be located proximate to one or more of the cameras, in some embodiments.
The computing device can include a display for displaying the received image data and/or a printer for printing the image data. For example, the image data can be sent directly to a printer. The printer can receive the image data and print the received image data on a print medium.
The image data can also be sent to a computing device such as a desktop or laptop computer with a display and the information can be displayed thereon. In some embodiments, a display can be used to aid in navigating the device by allowing a remote controller (e.g., user) to have a view from an unmanned vehicle, such as a marine craft, land craft, or aircraft.
Additionally, the image data can be provided to a computing system having a number of computing devices, such as a desktop computer and a number of peripherals connected to the desktop, such as printers, etc. The computing system can also be a network including any number of computing devices networked together.
In various embodiments, the computing device can include computer executable instructions to send image data requests to the camera array. In this way, the computing device can potentially receive a selected type of image data based upon a request originated at the computing device. For example, the image data requests can include a camera to be selected to obtain a view and a selected field of view, among other such parameters that can be used to determine the type of image data to be provided.
In many such embodiments, the computing device can include a user interface to allow a user to make image data requests. However, in some embodiments the computing device can include computer executable instructions that can be used to automate the selection of a number of different views from the camera array without active user input. For example, a user can make the selections ahead of time, such as through a user interface, and/or computer executable instructions in the form of a program can be designed that can be used to select the same image data when executed. In some embodiments, the program can be a script or other set of instructions that can be directly executed or interpreted. Such programs may be combined with a file or database that can be used to make selections.
Various embodiments can also include a variety of mechanisms for transferring image data and camera control signals. For example, various embodiments can include an image data transceiver. A transceiver can send and receive data. A transmitter and receiver combination can also be used in various embodiments to provide the sending and receiving functions.
The image data transceiver can include computer executable instructions for receiving instructions from a remote device and for selecting a field of view based upon the received instructions. The image data transceiver can also include computer executable instructions for receiving instructions from a remote device and for selecting one of the cameras based upon the received instructions.
Embodiments can also include one or more antennas and transmission formats for sending and/or receiving information. One example of a suitable format is the National Television Systems Committee (NTSC) standard for transmitting image data to a remote device. The Federal Communications Commission established the NTSC standard, which defines the lines of resolution and frames per second for broadcasts in the United States. The NTSC standard combines red, green, and blue signals with an FM frequency for audio. However, the invention is not limited to transmission based upon the NTSC standard or to antennas for communicating NTSC and/or other types of formatted signals.
Various embodiments can also include a mount controller. In various embodiments, the mount controller can include circuitry to receive a signal from a remote device. The mount controller can also include circuitry to move the movable mount based upon the received signal. In some embodiments, the mount controller can include computer executable instructions to receive signals from a remote device and to move the movable mount based upon the received signal.
In such embodiments, the mount controller can include a radio frequency (RF) or other type of antenna for receiving control signals from a remote device, such as from a remote computing device. In such instances, the remote device is equipped with a transmitter or transceiver to communicate with the mount controller.
Those of ordinary skill in the art will appreciate from reading the present disclosure that the various functions provided within the multiple camera embodiments (e.g., movement of the mount, camera selection, field of view selection, pan selection, zoom selection, and the like) can be provided by circuitry, computer executable instructions, antennas, wires, fiber optics, or a combination of these.
Embodiments of the present invention include systems and apparatuses having multiple camera arrays. Embodiments of the present invention will now be described in relation to the accompanying drawings, which will at least assist in illustrating the various features of the various embodiments.
The symbols “M”, “N”, “Q”, “R”, “S”, “T”, and “U” are used herein to represent the numbers of particular components, but should not be construed to limit the number of any other items described herein. Moreover, the numbers represented by each symbol can each be different. Additionally, the terms horizontal and vertical are used to illustrate relative orientation with respect to each other and should not be viewed as limiting the elements of the invention to such directions as they are described herein.
The mounting plate 114 can be used, for example, to attach the camera array to a movable mount, as described in more detail below. The mounting plate 114 can be made of any material and can include a circuit board, such as an imaging circuit board or a DSP circuit board. Additionally, in some embodiments, the mounting plate 114 can itself be such a circuit board.
Elements 116 and 118 illustrate an example of a range of motion for the camera array of
In the embodiment shown in
In the embodiment shown in
Also,
Each of the fields of view of the cameras has an edge. The fields of view can be of any suitable shape. For example, a field of view can be circular or oval in shape, in which case the field of view has one edge. The field of view can be polygonal or irregular in shape, for example, in which case the field of view has three or more edges. In many digital imaging cameras, the imaging sensors are rectangular and, therefore, the field of view is rectangular in shape and has four edges. In various embodiments, the cameras can be positioned such that portions of the edges of at least two fields of view can abut or overlap each other. In this way, a composite image can be created based upon the overlapping or abutting relationship between the fields of view, as will be discussed in more detail with respect to
In some such embodiments, any combination of fields of view can be combined. For example, the fields of view of 112-2 and 112-N can be combined to provide a larger composite field of view 113. In the embodiment shown in
In embodiments such as that discussed with respect to
For example, in order to show a composite image, the data sets from camera 112-2 and 112-N can be used (since they are abutting, there is no duplicate data to ignore or discard). In addition, non-overlapping image data from cameras 112-1 and 112-3 can be added to the image data from cameras 112-2 and 112-N to create a composite image data set for the field of view encompassed within the fields of view of cameras 112-1 to 112-N without any duplicate data therein. In other embodiments, all of the image information for the selected fields of view can be combined.
In some embodiments, duplicate information can then be compared, combined, ignored, and/or discarded. For example, overlapping image data can be compared to determine which image data to use in the composite image, such as through use of an algorithm provided within a set of computer executable instructions. Computer executable instructions can also select a set or average the sets to provide lighting and/or color balance for the composite image, among other such functions.
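One simple policy for combining two horizontally adjacent fields of view, discarding the duplicate columns from the overlap region, can be sketched as follows. This assumes the frames are NumPy arrays of equal height; as noted above, averaging or comparing the overlapping data would be alternative policies:

```python
import numpy as np

def composite_pair(left: np.ndarray, right: np.ndarray,
                   overlap_cols: int) -> np.ndarray:
    """Join two horizontally adjacent frames into a composite.
    For abutting fields of view, overlap_cols is 0 and no data is
    discarded; for overlapping fields, the duplicate columns at the
    left edge of the right frame are discarded."""
    return np.hstack([left, right[:, overlap_cols:]])
```

Repeating this across the array yields a composite image data set for the full field of view without duplicate data therein.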
In some embodiments, the composite field of view can be larger than can be printed or displayed. In such embodiments, a portion of the combined image data can be viewed. For example, as shown in
In these embodiments, imaging circuitry and/or computer executable instructions can be used to collect, combine, discard, and/or ignore the field of view image data for forming the composite image and/or composite image data set. Imaging circuitry and/or computer executable instructions can also be used to select the portion of the composite image to be viewed, allow for the user selection of the portion of the image to be viewed, the selection of the fields of view to be used in forming the composite image, and/or the method of forming the composite image, among other uses.
In the embodiment shown in
Additionally, the DSP circuit board 226 is illustrated in the embodiment of
The embodiment shown in
The cameras can be of any type. For example, the cameras shown in
In contrast to the cameras shown in
In the embodiment shown in
A mounting arm 428 can be used, as shown in
As discussed above, the camera array can be mounted to the mounting arm 428 through use of a mounting plate 414. The mounting arm 428 and mounting plate 414 can be made of any materials. Additionally, as stated above, a circuit board, such as an imaging circuit board, a DSP circuit board, or a combined circuit board, among others, can be used to mount the camera array to the mounting arm 428.
Through use of a motorized mount and a digital camera, some embodiments can have a panning functionality that incorporates both the digital panning capability of the camera and the physical panning capabilities of the motorized mount. For example, a user can instruct the imaging circuitry to digitally pan within the digital field of view of the camera array. When the user pans to the edge of the digital field of view, a mount controller, such as a processor, can activate the motor(s) of the movable mount to move the camera array in the panning direction instructed by the user. In some embodiments, this switching between digital and physical panning can be transparent to the user, so that the user does not know that they have switched between digital and physical panning. However, embodiments are not so limited.
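The hand-off between digital and physical panning might be sketched as follows. This is a minimal one-axis model; the function name, the symmetric digital limit, and the return convention are all illustrative assumptions:

```python
def pan(request_deg: float, digital_offset_deg: float,
        digital_limit_deg: float):
    """Apply a pan request digitally until the edge of the digital
    field of view is reached, then hand the remainder to the motorized
    mount. Returns (new_digital_offset, mount_motion) in degrees."""
    target = digital_offset_deg + request_deg
    if -digital_limit_deg <= target <= digital_limit_deg:
        return target, 0.0            # handled entirely in the imaging circuitry
    clamped = max(-digital_limit_deg, min(digital_limit_deg, target))
    return clamped, target - clamped  # excess goes to the movable mount
```

Because both return values come from one call, a mount controller can make the switch transparent to the user.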
Additionally, the panning and selection of image data to be captured or transmitted can be accomplished in coordination with a guidance apparatus, such as a global positioning system (GPS), inertial navigation system (INS), and/or other such device or system, in order to collect information about a particular target for imaging. In such embodiments, imaging circuitry and/or computer executable instructions can be used to track the camera array position with respect to the location of the target and can adjust the camera array accordingly.
Those of ordinary skill in the art will appreciate that the example structure and type of movement shown in
Image data and control information are passed between the camera assemblies 512-1 to 512-N and a number of circuit boards 526-1, 526-2, 526-3, to 526-T. Each circuit board includes a processor (e.g., 538-1, 538-2, 538-3, to 538-Q), memory (e.g., 540-1, 540-2, 540-3, to 540-R), an image information converter/transceiver (e.g., 546-1, 546-2, 546-3, to 546-S), and a control information converter/transceiver (e.g., 547-1, 547-2, 547-3, to 547-U). These components can be used to select and format image data to be collected, and to process, store, display, transmit, and/or print collected image data. The components can also be used to control the selection of, zoom, pan, and movement of the cameras and camera array.
Memory 540-1, 540-2, 540-3, to 540-R can be used to store image data and computer executable instructions for receiving, manipulating, and sending image data as well as controlling the camera array movement, selecting a camera, a field of view, and/or a zoom ratio, among other functions. Memory can be provided in one or more memory locations and can include various types of memory including, but not limited to RAM, ROM, and Flash memory, among others.
One or more processors, such as processor 538-1, 538-2, 538-3, to 538-Q can be used to execute computer executable instructions for the above functions. The imaging circuitry 524-1, 524-2, 524-3, to 524-P and DSP circuitry on circuit boards 526-1, 526-2, 526-3, to 526-T, as described above, can be used to control the receipt and transfer of image data and can control the movement of the camera array and, in some embodiments, can control selection of cameras, fields of view, and/or zoom ratios. Additionally, these functionalities can be accomplished through use of a combination of circuitry and computer executable instructions.
As discussed above, the information can be directed to other devices or systems for various purposes. This direction of the information can be by wired or wireless connection.
For example, in the embodiment illustrated in
The image information antenna 548 can be of any suitable type, such as an NTSC antenna suited for communicating information under the NTSC standard discussed above. The camera control antenna 550 can also be of any suitable type. For example, antennas for communicating wireless RF information are one suitable type.
The embodiment shown in
Image data and control information are passed between the imaging circuitry 524-1, 524-2, to 524-P and a number of processors 538-1, 538-2, to 538-Q provided on the circuit board 524.
The circuit board 524, shown in
The circuit board 524, shown in
In the embodiment shown in
The embodiment shown in
The density of a frame of image data is measured in pixels, or oftentimes, mega-pixels (one mega-pixel = one million pixels). The table in
The vertical and horizontal dimensions of frames captured and/or transferred can vary from component to component. For instance, under the NTSC standard, the horizontal pixel dimension for the NTSC signal is 720 pixels and the vertical pixel dimension is 486 pixels. As demonstrated in the table of
The ratio of the horizontal pixel density of the camera to the NTSC standard is calculated in column 660. For instance, for the 3 mega-pixel camera, the camera has 2048 horizontal pixels and the NTSC standard has 720 horizontal pixels. Accordingly, the ratio of the horizontal pixel densities is:
NTSC horizontal pixels / camera horizontal pixels = 720/2048 = 0.3515625 (1)
which is shown as 0.35 in column 660.
From such ratios, the fields of view, such as the horizontal fields of view shown in the table of
360 * ArcTan(ratio calculated above * Tan(high horizontal field of view * π/360)) / π = 38.740 (2)
The calculated low horizontal field of view value is the “zoomed” field of view of the camera.
In the embodiments described in the table of
Based upon the table of
(1 / ratio calculated in equation (1)) * Tan(high horizontal field of view * π/360) (3)
Accordingly, for a 3 mega-pixel camera having a 90 degree high horizontal field of view, the zoom ratio is 2.844.
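Equations (1) through (3) can be checked numerically for the 3 mega-pixel, 90 degree example discussed above:

```python
import math

NTSC_H = 720      # NTSC standard horizontal pixel dimension
CAM_H = 2048      # horizontal pixels of the 3 mega-pixel camera
HIGH_FOV = 90.0   # wide ("zoomed out") horizontal field of view, degrees

# Equation (1): ratio of horizontal pixel densities.
ratio = NTSC_H / CAM_H                                   # 0.3515625

# Equation (2): low ("zoomed") horizontal field of view, degrees.
low_fov = 360 * math.atan(
    ratio * math.tan(HIGH_FOV * math.pi / 360)) / math.pi  # ~38.740

# Equation (3): zoom ratio between the high and low fields of view.
zoom = (1 / ratio) * math.tan(HIGH_FOV * math.pi / 360)  # ~2.844
```

Running these three lines reproduces the 0.3515625 ratio, the 38.740 degree low field of view, and the 2.844 zoom ratio given above.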
In the embodiments described in the table of
Based upon the tables of
Further, as was the case with the fields of view calculations provided above, in such embodiments, there is no overlap between the zoom ratios of the multiple cameras that are calculated in the table of
Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that any arrangement calculated to achieve the same techniques can be substituted for the specific embodiments shown. This includes the use of cameras having different pixel densities within a multiple camera array, as well as variation in the lenses used, and orientations of the cameras with respect to each other in a multiple camera array. This disclosure is intended to cover adaptations or variations of various embodiments of the invention. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one.
Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of ordinary skill in the art upon reviewing the above description. The scope of the various embodiments of the invention includes various other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the invention should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.
In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may lie in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.