This application is based upon and claims the benefit of priority from prior Japanese Patent Application No. 2003-283498, filed Jul. 31, 2003, the entire contents of which are incorporated herein by reference.
1. Field of the Invention
The invention relates to an image pickup device such as a network camera, and in particular, to an image pickup device and an image pickup method for transmitting image information in segment regions set by a plurality of users.
2. Description of the Related Art
With the recent widespread use of digital devices, a wide variety of image information devices, such as digital cameras, have been developed and produced. One example of such an image information device is a video system having a network function.
Patent Document 1 (Jpn. Pat. Appln. KOKAI Publication No. 2003-111050) discloses a video distribution server and a video reception client system of the aforementioned type, in which, when image information in a plurality of regions is requested by a plurality of users, frame rates are determined in accordance with levels of interest to perform the information distribution.
However, according to the prior art described above, although the frame rates are determined in accordance with the levels of interest, they are not determined in accordance with other factors. More specifically, in the conventional apparatus, when a plurality of segment regions are set from external devices, process parameters cannot be determined automatically with sufficient consideration of, for example, the areas of the segment regions, desired transfer speed rates, and image quality.
An embodiment of the invention provides an image pickup device comprising: an image pickup portion which picks up an image; a communication portion which communicates with a plurality of external devices on a network; a setting portion which receives a signal from each of the plurality of external devices via the communication portion and sets, for each external device, a segment region of image information to be transmitted to that external device and priority items of process parameters to be used when the image information is transmitted; and a control portion which, upon determining process parameters in accordance with the priority items, transmits the image information in the segment region set for each external device, extracted from the image information of the image picked up by the image pickup portion, to that external device on the network via the communication portion in correspondence with the determined process parameters.
With reference to the drawings, an image pickup device of the invention will be described in detail hereinbelow.
<Network Camera which is an Image Pickup Device of the Invention>
(Configuration)
Using the drawings, an image pickup device of the invention will be described hereinbelow with reference to an example case of a network camera and a PC (personal computer) connected via a network.
Referring to
In addition, the network camera device 10 has an MPU (main processing unit) 20 and a memory 21. The MPU 20 performs total control of processing and operations of the apparatus, and includes a function of a segmentation processing portion which performs segmentation processing and a function of a segment setting portion, which will be described below as features of the invention. The memory 21 is used, for example, to store programs that manage such operations as those mentioned above, to provide work areas usable for execution of the individual processing operations for image signals, and to preserve coordinate information of per-user segment regions and screen data for alarm presentation to be displayed at the time of motion detection and the like.
In addition, the network camera device 10 has an Ethernet communication portion 18 and a wireless LAN (local area network) communication portion 19 that are connected to the MPU 20 via a data bus. The network camera device 10 is thereby enabled to communicate with, for example, each individual external PC 26 via a wired network N or a wireless network.
The network camera device 10 further has a pan driver 22, connected to the MPU 20 via the data bus, for driving the camera unit C in a pan direction; a pan motor 24 formed of a stepping motor or the like; a tilt driver 23 for driving the camera unit C in a tilt direction; and a tilt motor 25 formed of a stepping motor or the like. The camera unit C has at least the objective lens 11 and the solid-state image pickup device 13.
As shown in
Further, as shown in
(Basic Operations)
The network camera device 10 having the configuration described above performs the basic operations described hereunder. Specifically, the network camera device 10 is capable of performing such operations as an image pickup operation in which incident light is received from an object and an image signal corresponding to a screen of an image picked up of the object is supplied via the network or the like; a camera driving operation in which the camera unit C is driven in, for example, the pan direction or the tilt direction; operations (such as a motion detection operation) in various operation modes in accordance with image signals indicative of the picked-up image; various setting operations for producing settings of segment regions for the segmentation processing described below; and a self-test operation.
More specifically, the image pickup operation is performed under the control of the MPU 20, responsive to the operation program stored in the memory 21, upon receipt of an instruction signal from, for example, the PC 26, which is a control unit, via the network N (or the wireless network). Having received incident light from an object through the objective lens 11, the solid-state image pickup device 13 supplies a detection signal corresponding to the incident light to the image processing portion 16. After predetermined image processing is applied, image compression such as JPEG compression or MPEG compression is performed in the image compression portion 17, and the signal is output to the outside via the Ethernet communication portion 18 or the wireless LAN communication portion 19.
Additionally, in the camera driving operation, the MPU 20 always recognizes the current direction of the camera unit C after zero-point adjustment of the pan motor 24 and the tilt motor 25, which are stepping motors. The MPU 20 thereby keeps track of the coordinates of the screen of images currently being picked up by the camera unit C. More specifically, when the camera unit C is driven in the pan direction or the tilt direction in response to an operation control signal supplied from the MPU 20 to the corresponding driver and the image pickup screen is thereby varied, the MPU 20 synchronously recognizes the coordinates of the current image pickup screen at all times. As such, on the screen of the PC 26 or the like connected via the network, the user can move the camera unit C in the pan direction or the tilt direction while viewing an image pickup screen corresponding to the image signals continually supplied from the image pickup device 10, and can view an image pickup screen corresponding to the movement of the camera unit C. Since the MPU 20 recognizes and manages the coordinates of the current image pickup screen, the user can also acquire information on those coordinates through the PC 26 or the like in correspondence to operations.
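For illustration of this coordinate management, a minimal sketch is given below. It assumes hypothetical degrees-per-step resolutions and simply accumulates the step counts issued to the pan and tilt drivers after the zero-point adjustment; the class name and constants are introduced here for explanation only and are not part of the disclosed apparatus.

```python
# Illustrative sketch only: converts accumulated stepping-motor step counts
# (relative to the zero-adjusted home position) into pan/tilt angles so that
# the controller always knows where the current image pickup screen points.

PAN_DEG_PER_STEP = 0.1    # assumed resolution of the pan stepping motor
TILT_DEG_PER_STEP = 0.1   # assumed resolution of the tilt stepping motor

class CameraOrientation:
    def __init__(self):
        self.pan_steps = 0    # steps from the pan home (zero) position
        self.tilt_steps = 0   # steps from the tilt home (zero) position

    def drive(self, pan_steps: int = 0, tilt_steps: int = 0) -> None:
        """Accumulate the steps issued to the pan/tilt drivers."""
        self.pan_steps += pan_steps
        self.tilt_steps += tilt_steps

    def current_angles(self) -> tuple[float, float]:
        """Return the pan/tilt angles of the current image pickup screen."""
        return (self.pan_steps * PAN_DEG_PER_STEP,
                self.tilt_steps * TILT_DEG_PER_STEP)
```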
In each individual operation mode, for example a movement detection operation mode, the image pickup device 10 automatically detects movement of an image in an arbitrary region set by the user from a PC existing as an external device on the network. More specifically, suppose that, in the movement detection operation mode, a movement-detection observation area in an image pickup screen is set by a user operation, and thereafter, within a set time, a variation greater than or equal to a predetermined value is detected in that area of the image pickup screen. In this event, the MPU 20 determines that movement has been detected and performs operations such as a warning operation to output an alarm signal and addition of the alarm image screen data stored in the memory 21 to an image signal before outputting the signal.
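A minimal sketch of such region-limited movement detection is shown below, assuming grayscale frames held as NumPy arrays; the threshold value, region format, and function name are assumptions made for illustration rather than the disclosed implementation.

```python
import numpy as np

def motion_detected(prev_frame: np.ndarray, cur_frame: np.ndarray,
                    region: tuple[int, int, int, int],
                    threshold: float = 12.0) -> bool:
    """Return True when the mean absolute change inside the user-set
    observation area (x0, y0, x1, y1) exceeds the predetermined value."""
    x0, y0, x1, y1 = region
    prev_roi = prev_frame[y0:y1, x0:x1].astype(np.int16)
    cur_roi = cur_frame[y0:y1, x0:x1].astype(np.int16)
    return float(np.abs(cur_roi - prev_roi).mean()) >= threshold

# On detection, the controller would output an alarm signal and overlay the
# alarm screen data held in memory onto the outgoing image signal.
```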
(Setting Operation for Segment Region)
With reference to a flowchart, the operation of setting a segment region for the segmentation processing of image information described below will be described in detail hereinbelow.
The segment region in the network camera device 10 can be set in at least two cases. One case is that, as shown in
An example case of the operation of setting the segment region according to the invention will now be described with reference to the flowchart of
At the outset, the network camera devices 10 of the invention are supplied with an IP address signal specified from a control unit, such as the PC 26 residing on the network. In this case, one of the network camera devices 10 is selected and operates under the control of the PC 26 or the like when the supplied signal is determined to correspond to the Ethernet communication portion 18 or the wireless LAN communication portion 19 of that network camera device 10 (S31). Upon reception of an instruction for the image pickup operation from the PC 26, the image pickup operation is carried out under the control of the MPU 20, and a detection signal corresponding to incident light is supplied from the solid-state image pickup device 13 to the image processing portion 16. In the image processing portion 16, the input image signal undergoes image processing such as sharpness processing, contrast processing, gamma correction, white-balance processing, and pixel addition processing. Thereafter, in the image compression portion 17, the image signal undergoes JPEG compression or MPEG compression and is output via either the Ethernet communication portion 18 or the wireless LAN communication portion 19. The output image signal undergoes decompression processing in the PC 26, and is then displayed therein in the form of a screen of a browser application 31, as shown in
In this stage, when the mode of setting the segment region is selected by the user (S33), a current image pickup screen 37 is displayed together with manipulation icons 32 to 37 in the screen of the browser application 31 shown in
These manipulation icons are provided for a segment-region setting mode. An “ALL ON” icon 32 is used to set an entire screen to a segment region. An “ALL OFF” icon 33 is used to cancel the segment region set for the entire screen. A “RESET” icon 34 is used to cancel a segment region specified using a pointing device such as a mouse to return a set value to a default value. A “Save & Exit” icon 35 is used to confirm a segment region specified by the pointing device such as the mouse and to terminate the segment-region setting mode. A “Close” icon 36 is used to close the screen of that mode. An arrow icon 37 is used to move the camera unit C in the pan or tilt direction.
The setting of the segment region representing the image information according to the invention can thus be implemented in the current display screen A, as shown in
Suppose that, judging the set region to be incorrect, the user operates the mouse or the like and thereby supplies an instruction signal again to the MPU 20 via the network. In this case, the active display is canceled, and the screen is returned to a normal image pickup display equivalent to that in the other regions, thereby enabling very intuitive region specification by the user.
Finally, to confirm the region currently shown in the active display as the segment region 39, the user manipulates, for example, the "Save & Exit" icon 35 and thereby supplies an instruction signal to the MPU 20 via the network. Upon this confirmation instruction, the coordinates of the instructed blocks (in a matrix form) are registered as a new segment region into, for example, the memory 21 (S38).
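Purely for illustration, a sketch of such block-by-block region selection and registration is given below; the grid dimensions and class name are assumptions and do not necessarily reflect the actual block layout of the set screen.

```python
# Illustrative sketch: a segment region held as a matrix of selected blocks.

class SegmentRegion:
    def __init__(self, cols: int = 8, rows: int = 6):
        self.blocks = [[False] * cols for _ in range(rows)]

    def toggle(self, col: int, row: int) -> None:
        """Mouse click on a block toggles it between active and normal display."""
        self.blocks[row][col] = not self.blocks[row][col]

    def save(self) -> list[tuple[int, int]]:
        """'Save & Exit': return the (col, row) coordinates of the active
        blocks, to be registered in memory as the new segment region."""
        return [(c, r)
                for r, row in enumerate(self.blocks)
                for c, on in enumerate(row) if on]
```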
In this manner, in the set screen shown in
Registration items of the segment region and the like will be described in detail hereunder. As shown in
From the plurality of process parameters described above and the volume of image information predicted from the coordinates of the segment region, the optimal compression ratio and frame rate are automatically calculated by the MPU 20 or the like and are stored into the memory 21 or the like.
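As an illustrative sketch only, one possible form of the per-user registration record and of the image-volume prediction is shown below; the field names, units, and the bits-per-block constant are assumptions introduced for explanation.

```python
from dataclasses import dataclass

@dataclass
class UserRegistration:
    """Hypothetical per-user registration record held in the memory 21."""
    user_id: str
    segment_blocks: list[tuple[int, int]]  # coordinates of the registered blocks
    desired_transfer_rate: float           # desired transfer speed rate (bits/s)
    priority_item: str                     # "frame_rate" or "compression_ratio"

def predicted_image_volume(segment_blocks: list[tuple[int, int]],
                           bits_per_block: float) -> float:
    """Predict the image volume of a segment region from the number of
    registered blocks; bits_per_block is an assumed constant."""
    return len(segment_blocks) * bits_per_block
```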
(Communication Method Involving Segmentation Processing)
A communication method involving the segmentation processing of the invention will now be described in detail. Description will be made with reference to
More specifically, as shown in
Subsequently, optimization processing for the process parameters is executed so that the parameters are optimized, for each user, to the transfer environment desired by that user. More specifically, by way of an example, the transfer speed rates Ba to Bj to which the highest priority should be assigned for the individual users are first provided, and it is then verified which of the compression ratio and the frame rate has priority (S24). These priority items are presented merely by way of example, and it is preferable that other process parameters also be held as options selectable as desired by the individual users.
By way of an example, the relationship among the transfer speed rate, the compression ratio, the image volume in the segment region, and the frame rate is expressed by
(Transfer speed rate)=(Compression ratio)×(Image volume in segment region)×(Frame rate).
Accordingly, the compression ratio and the frame rate can be obtained corresponding to a user-desired priority item (the compression ratio, for example) by obtaining the image volume from the coordinate information of the segment region. The frame rate determines how many frames of image information are processed per second.
If the frame rate has priority (S24), the method determines a frame rate that implements the desired transfer speed rate corresponding to the volume of the segment image (S25), and then determines a compression ratio corresponding to the frame rate (S26). In this case, the desired frame rate is guaranteed after the transfer speed rate has been guaranteed.
On the other hand, if the compression ratio has priority (S24), the method determines a compression ratio that implements the desired transfer speed rate corresponding to the volume of the segment image (S27), and then determines a frame rate corresponding to the compression ratio (S28). In this case, the desired compression ratio is guaranteed after the transfer speed rate has been guaranteed.
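A minimal sketch of this priority-dependent determination, based directly on the relationship (Transfer speed rate) = (Compression ratio) × (Image volume in segment region) × (Frame rate), is given below. It assumes that the user's desired value for the prioritized item is supplied as an input (the default values shown are placeholders), which is one possible reading of steps S24 to S28.

```python
def optimize_parameters(transfer_rate: float, image_volume: float,
                        priority_item: str,
                        desired_frame_rate: float = 30.0,
                        desired_compression_ratio: float = 0.1) -> tuple[float, float]:
    """Determine (compression_ratio, frame_rate) so that
    transfer_rate = compression_ratio * image_volume * frame_rate,
    guaranteeing the transfer speed rate first and the priority item next."""
    if priority_item == "frame_rate":
        frame_rate = desired_frame_rate                                   # S25
        compression_ratio = transfer_rate / (image_volume * frame_rate)   # S26
    else:  # the compression ratio has priority
        compression_ratio = desired_compression_ratio                     # S27
        frame_rate = transfer_rate / (image_volume * compression_ratio)   # S28
    return compression_ratio, frame_rate
```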
As another embodiment, when the transfer speed rate is also included among the priority items, the frame rate and the compression ratio can each be set as an absolute value. Alternatively, a method is preferable in which the (image volume in segment region) value is not predicted from the coordinate information of the segment image, but is instead obtained after the segment image has been acquired, whereby the process parameters such as (transfer speed rate), (compression ratio), and (frame rate) are optimized each time the (image volume in segment region) value is obtained.
Under a transmission environment according to the optimized process parameters, the segment image is transmitted to an external device P or the like present on the network (S29). The transmission processing described above is sequentially performed for each of the users a to j (S30).
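Reusing the helper sketches given above (the registration record, the image-volume prediction, and optimize_parameters), the sequential per-user processing of steps S29 and S30 could be outlined as follows; frame_source and transmit stand in for the acquisition and communication portions and are assumptions made for illustration.

```python
def distribute_to_users(registrations, frame_source, transmit):
    """Serve every registered user in turn: predict the segment image volume,
    optimize the process parameters for that user's priority item, acquire the
    segment image, and transmit it (S29); repeat for all users a to j (S30)."""
    for reg in registrations:
        volume = predicted_image_volume(reg.segment_blocks, bits_per_block=2e5)
        ratio, fps = optimize_parameters(reg.desired_transfer_rate, volume,
                                         reg.priority_item)
        segment = frame_source(reg.segment_blocks)   # cut out this user's region
        transmit(reg.user_id, segment, ratio, fps)
```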
As described above, according to the image pickup device of the invention, the image information in the segment region required by each of the plurality of users is transmitted under a transmission environment that has been automatically optimized according to the process parameters desired by each individual user. Thereby, transmission processing for the image information in the segment region can be implemented corresponding to the conditions required by the external device possessed by each of the plurality of users.
The above embodiment has been described with reference to the example cases where the segment images desired by the individual users are secured when the positions of the pan motor and the tilt motor are all different. However, the desired segment images can be similarly secured even in a case where segment images corresponding to the plurality of users can be acquired from image information in the same position. In this case, electronic tilt processing from one-screen image information is possible, thereby enabling the segmentation processing to be implemented without driving, for example, the pan motor and the tilt motor.
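As an illustrative sketch of such motor-less segmentation, the cut-out of several users' segment regions from a single picked-up frame could look as follows, assuming the regions are held as pixel rectangles; the function name and region format are assumptions.

```python
import numpy as np

def crop_segments(frame: np.ndarray,
                  regions: dict[str, tuple[int, int, int, int]]) -> dict[str, np.ndarray]:
    """Electronically cut each user's segment region (x0, y0, x1, y1) out of a
    single picked-up frame, so that no pan or tilt motor drive is needed when
    all of the requested regions lie within the same image pickup screen."""
    return {user: frame[y0:y1, x0:x1].copy()
            for user, (x0, y0, x1, y1) in regions.items()}
```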
According to the invention, requests for segment regions are received from a plurality of users (a plurality of external devices), priority items of process parameters are received together with them, and image information of the segment regions is transmitted in response under optimal transmission environments. More specifically, in addition to segment regions in an image screen, parameters such as transfer speed rates and priority items such as a compression ratio with priority or a frame rate with priority are provided by the plurality of users. Appropriate compression ratios are then determined, and frame rates corresponding thereto are further determined in accordance with, for example, the volumes of image information in the segment regions, the desired transfer speed rates, and the compression ratios with priority. Thereby, not only the simple screen information in the segment region desired by each user, but also individual transmission processing can be automatically implemented: with appropriate image quality when the user desires image quality (compression ratio with priority), and at the desired transfer speed rate with a frame rate that guarantees stable transmission when the user desires stable transmission (frame rate with priority). Consequently, according to the image pickup device of the invention, distribution processing for appropriate image information can be automatically implemented corresponding to the different requests of the plurality of users.
According to the various embodiments described above, those skilled in the art will be able to implement the invention, and various other modified examples of the embodiments will readily occur to those skilled in the art. Further, it will be possible even for those not having sufficient inventive knowledge and skill to adapt the invention by way of various other embodiments. Thus, the invention covers a broad range of applications as long as they do not contradict the principles and novel features disclosed herein; that is, the invention is not limited to the embodiments described hereinabove.
Number | Date | Country | Kind
---|---|---|---
2003-283498 | Jul. 31, 2003 | JP | national