1. Technical Field
Embodiments of the present disclosure generally relate to systems and methods for processing data, and more particularly to a system and a method for processing image data.
2. Description of Related Art
With the development of computer networks and multimedia applications, video technology, that is, digitally capturing, recording, processing, storing, transmitting, and reconstructing a sequence of still images representing motion, has found widespread application. Such video can be seen as made up of a plurality of images.
As is known, when capturing video, more than one video camera is often preferable, since the images captured by multiple cameras provide varying perspectives of the event or footage being recorded.
However, the images to be integrated may overlap one another. Such redundant overlapping areas increase the storage space required for the video and occupy undue bandwidth during transmission of the video data.
The application is illustrated by way of examples and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean at least one.
In general, the word “module,” as used hereinafter, refers to logic embodied in hardware or firmware, or to a collection of software instructions written in a programming language such as, for example, Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware. It will be appreciated that modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
In one embodiment, the function modules of the data processing device 2 may include a division module 20, an image selection module 21, an identification module 22, a usable area determination module 23, a character mark module 24, an image compression module 25, an image integration module 26, and an image output module 27.
In other embodiments, the system 100 may include more than one data processing device 2 in which the function modules 20-27 are distributed. For example, the division module 20, the image selection module 21, the identification module 22, the usable area determination module 23, and the character mark module 24 can be included in an encoder, and the image compression module 25, the image integration module 26, and the image output module 27 in a decoder.
In one embodiment, at least one processor 28 of the data processing device 2 executes one or more computerized codes of the function modules 20-27. The one or more computerized codes of the functional modules 20-27 may be stored in a storage system 29 of the data processing device 2.
The division module 20 is operable to divide a coordinate plane of the scene into a plurality of partitions according to predetermined division points. In one embodiment, the number of the partitions equals the number of the video cameras 1. An example of such a division of the coordinate plane of the scene is illustrated in the accompanying drawings.
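By way of a non-limiting illustration, the division performed by the division module 20 may be sketched as follows. The sketch assumes a simple vertical-strip layout in which each predetermined division point is an x-coordinate on the coordinate plane; the function name and the strip layout are illustrative assumptions, not the claimed method.

```python
# Illustrative sketch: divide a scene's coordinate plane into vertical
# partitions at predetermined x-axis division points, one partition per
# video camera. The vertical-strip layout is an assumption.

def divide_scene(width, division_points):
    """Return (x_start, x_end) bounds for each partition.

    division_points: sorted interior x-coordinates, so the number of
    partitions is len(division_points) + 1.
    """
    bounds = [0] + sorted(division_points) + [width]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

# Example: a 1200-pixel-wide coordinate plane split for three cameras.
partitions = divide_scene(1200, [400, 800])
# partitions == [(0, 400), (400, 800), (800, 1200)]
```

In this sketch, each partition corresponds to the usable area of one video camera 1, consistent with the embodiment in which the number of partitions equals the number of cameras.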
The image selection module 21 is operable to select an image from the images captured by the video cameras 1. In one embodiment, the selection may be random.
The identification module 22 is operable to identify information of the selected image. In one embodiment, the information includes the video camera 1 on which the selected image was captured. Here, each image captured by the video cameras 1 bears a mark indicating the video camera 1 on which it was captured.
The usable area determination module 23 is operable to determine a usable area of the selected image according to the information and the division of the coordinate plane of the scene, so as to distinguish unusable areas of the selected image. In one embodiment, the usable areas of images captured by different video cameras 1 are different, as in the example of the division of the scene illustrated in the accompanying drawings.
The character mark module 24 is operable to mark each pixel point of the unusable areas of the selected image with a character. The character may be any predetermined character.
The image compression module 25 is operable to compress the selected image by deleting the pixel points marked by the character to generate a compressed image.
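By way of a non-limiting illustration, the operations of the character mark module 24 and the image compression module 25 may be sketched together as follows. The `#` sentinel character, the row-major pixel list, and the assumption that a usable area is an x-coordinate range are all illustrative choices, not limitations of the disclosure.

```python
# Illustrative sketch: pixels outside the usable area are overwritten with
# a sentinel character (character mark module 24), then every marked pixel
# is deleted row by row (image compression module 25).

SENTINEL = "#"  # the character may be any predetermined character

def mark_unusable(image, usable_x_range):
    """Overwrite pixel points whose x-coordinate lies outside usable_x_range."""
    x0, x1 = usable_x_range
    return [[px if x0 <= x < x1 else SENTINEL
             for x, px in enumerate(row)]
            for row in image]

def compress(marked_image):
    """Delete every sentinel-marked pixel point, keeping only the usable area."""
    return [[px for px in row if px != SENTINEL] for row in marked_image]

image = [["a", "b", "c", "d"],
         ["e", "f", "g", "h"]]
compressed = compress(mark_unusable(image, (1, 3)))
# compressed == [["b", "c"], ["f", "g"]]
```

Deleting the marked pixel points in this way removes the redundant overlapping areas before transmission, reducing the required storage space and bandwidth.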
The image integration module 26 is operable to integrate all the compressed images to generate a panoramic image of the scene.
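By way of a non-limiting illustration, the integration performed by the image integration module 26 may be sketched as joining the compressed images (one usable strip per camera) left to right into a single panoramic image. The assumption that all strips share the same height, and the left-to-right layout, are illustrative only.

```python
# Illustrative sketch: concatenate same-height compressed image strips
# row by row into one panoramic image of the scene.

def integrate(strips):
    """Join same-height image strips left to right into one panorama."""
    height = len(strips[0])
    assert all(len(s) == height for s in strips), "strips must align"
    return [sum((strip[y] for strip in strips), []) for y in range(height)]

left = [["a", "b"], ["e", "f"]]
right = [["c", "d"], ["g", "h"]]
panorama = integrate([left, right])
# panorama == [["a", "b", "c", "d"], ["e", "f", "g", "h"]]
```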
The image output module 27 is operable to output the panoramic image of the scene.
In block S10, the video cameras 1 placed at different locations of a scene capture images of the scene.
In block S11, the division module 20 divides a coordinate plane of the scene into a plurality of partitions according to predetermined division points. In one embodiment, the number of partitions equals the number of the video cameras 1.
In block S12, the image selection module 21 selects an image from the images captured by the video cameras 1. In one embodiment, the selection may be random.
In block S13, the identification module 22 identifies information of the selected image, such as the video camera 1 on which the selected image was captured. In the embodiment, each image captured by the video cameras 1 may include a mark indicating the video camera 1 on which the image was captured.
In block S14, the usable area determination module 23 determines a usable area of the selected image according to the information and the division of the coordinate plane of the scene for distinguishing unusable areas of the selected image. In one embodiment, the usable areas of the images captured by different video cameras are different.
In block S15, the character mark module 24 marks each pixel point of the unusable areas of the selected image with a character. The character may be any predetermined character.
In block S16, the image compression module 25 compresses the selected image by deleting the pixel points marked by the character to generate a compressed image.
In block S17, the image selection module 21 determines if all the images captured by the video cameras 1 have been selected. If at least one image has not been selected, block S12 is repeated. If all the images have been selected, block S18 is implemented.
In block S18, the image integration module 26 integrates all the compressed images to generate a panoramic image of the scene.
In block S19, the image output module 27 outputs the panoramic image of the scene.
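By way of a non-limiting illustration, blocks S11 through S18 may be sketched end to end as follows for two cameras. All names, the `#` sentinel character, and the vertical-strip partitioning are illustrative assumptions intended only to clarify the flow of the blocks.

```python
# Illustrative end-to-end sketch of blocks S11-S18: each captured image
# keeps only its own partition of the coordinate plane, and the surviving
# strips are integrated into a panoramic image of the scene.

SENTINEL = "#"

def process(images, partitions):
    compressed = []
    for cam_id, image in enumerate(images):          # blocks S12-S13: select
        x0, x1 = partitions[cam_id]                  # block S14: usable area
        marked = [[px if x0 <= x < x1 else SENTINEL  # block S15: mark
                   for x, px in enumerate(row)]
                  for row in image]
        compressed.append([[px for px in row if px != SENTINEL]  # block S16
                           for row in marked])
    height = len(images[0])
    return [sum((c[y] for c in compressed), [])      # block S18: integrate
            for y in range(height)]

cam0 = [["a", "b", "x", "x"]]  # overlap pixels "x" fall outside its partition
cam1 = [["y", "y", "c", "d"]]  # overlap pixels "y" fall outside its partition
panorama = process([cam0, cam1], [(0, 2), (2, 4)])
# panorama == [["a", "b", "c", "d"]]
```

The overlapping pixels of each camera's image are thus discarded exactly once, so the integrated panoramic image contains each area of the scene only a single time.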
Although certain inventive embodiments of the present disclosure have been specifically described, the present disclosure is not to be construed as being limited thereto.
Various changes or modifications may be made to the present disclosure without departing from the scope and spirit of the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
200910307273.X | Sep 2009 | CN | national |