The present invention relates to an imaging system, an imaging method, and a storage medium storing an imaging program, and more particularly to technology for achieving segmented image taking and logically consistent merging processing of segmented images with high efficiency at low cost, thereby enabling easy formation of a high-definition merged image.
This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2009-269035, filed on Nov. 26, 2009, the entire contents of which are incorporated herein by reference.
Heretofore, there have been needs for high-definition images. In particular, for art objects such as paintings, studies have been made to faithfully reproduce the originals in digital images from the viewpoints of research, reuse, and preservation.
On the other hand, in order to reproduce an object realistically in an image, there has been a proposal of technology that involves taking segmented images for obtaining a high-resolution image (segmenting an object and taking an image of each segment of the object), and synthesizing the segmented images thus taken (Refer to NPL 1). The technology uses a layering approach to take images of an object with resolutions changed stepwise, ranging from an image of the whole object (hereinafter called a whole-object image) to segmented images with a finally required resolution (hereinafter called a target resolution). This technology aims to reproduce the object in as realistic an image as possible by using the layering approach, in which the segmented images thus obtained are merged through image merging operations using the whole-object image as a “reference image” for the merging processing of segmented images with the resolution higher by one step, and the merged image is then used as the reference image for the merging processing of segmented images with the resolution higher by another step.
Although not targeted for art objects, there has also been a proposal of technology that involves taking a whole-object image and segmented images, making corrections on the whole-object image, determining the positions of the segmented images relative to the whole-object image before the correction, and merging the segmented images together, based on the determined positions and the corrected whole-object image, thereby generating high-resolution image data (Refer to PTL 1).
However, the conventional technologies have the following problems. Specifically, in the image taking using the layering approach, the segmented images are taken while changing the shooting distance to obtain the layered resolutions. Thus, the viewpoint varies among the resolutions, causing what is called a parallax problem, and an imaging object with an uneven surface (ex. an oil painting or the like), in particular, is prone to cause inconsistency in the image merging. Also, the image taking takes a lot of labor and time because a geometrical pattern is projected onto the imaging object to specify image taking segments in the segmented image taking, and an image of each segment of the object is taken with manual alignment using the projected geometrical pattern as a reference. Further, the image taking takes even more labor and time because of the necessity to take three images together for one segment, namely a segmented image and also images of two types of projected geometrical patterns, which are to be used as a reference for the merging of the segmented images. Instead, segmented images may be taken with a single resolution and merged together based on a whole-object image, thereby directly forming a merged image. In this case, however, if the whole-object image and the segmented images have a large difference in resolution, there may arise problems such as a deterioration in accuracy of pattern matching in the merging operation or an image mismatch after the merging.
Therefore, the present invention has been made in view of the above-described problems, and a principal object of the present invention is to provide technology for achieving segmented image taking and logically consistent merging processing of segmented images with high efficiency at low cost, thereby enabling easy formation of a high-definition merged image.
In order to solve the above-described problems, an imaging system of the present invention is a computer communicably coupled to a digital camera and a motor-driven tripod head therefor and configured to control image taking by the digital camera, and includes the following functional units. Specifically, the imaging system includes a distance calculator that calculates a distance from an imaging object to a fixed position of the digital camera by using a predetermined equation in accordance with a target resolution to be obtained for an image of the imaging object, a CCD resolution of the digital camera and a focal length of a lens, and stores the calculated distance in a storage unit.
Also, the imaging system includes a resolution setting unit that sets layered resolutions ranging from a resolution of a captured image of the whole imaging object to the target resolution, and stores the set resolutions in the storage unit.
Also, the imaging system includes a frame layout unit that executes a process for each of the resolutions stored in the storage unit, the process including forming segment frames into which an image taking range is segmented by dividing an angle of view of the image taking range by a predetermined angle, laying out the segment frames so that focus ranges in each two neighboring segment frames overlap with each other, and storing information on at least a layout of the segment frames and the corresponding resolution as a segmented image taking plan in the storage unit.
Also, the imaging system includes a segmented image taking command unit that transmits a command to take an image from a single viewpoint for the positions of the respective segment frames for each of the resolutions, based on the segmented image taking plan containing the layout information and the resolution information stored in the storage unit, to the digital camera and the motor-driven tripod head fixed at the distance from the imaging object, acquires segmented images taken for the respective segment frames for each of the resolutions from the digital camera, and stores the acquired segmented images in the storage unit.
Also, the imaging system includes a segmented image merging unit that executes a process for each of the segmented image taking plans stored in the storage unit, the process including reading each of captured images of a predetermined segmented image taking plan from the storage unit, forming a base image by enlarging the captured image to the same resolution as that of a segmented image taking plan having a one-step-higher resolution among the layered resolutions, determining a corresponding range on the base image for each of segmented images of the segmented image taking plan having the one-step-higher resolution, based on the position of the segmented image, merging the segmented images together by aligning each of the segmented images through pattern matching between the segmented image and the base image, and storing a merged image in the storage unit.
Also, an imaging method of the present invention is characterized in that a computer communicably coupled to a digital camera and a motor-driven tripod head therefor and configured to control image taking by the digital camera executes the following processes. Specifically, the computer executes a process of calculating a distance from an imaging object to a fixed position of the digital camera by using a predetermined equation in accordance with a target resolution to be obtained for an image of the imaging object, a CCD resolution of the digital camera, and a focal length of a lens, and storing the calculated distance in a storage unit.
Also, the computer executes a process of setting layered resolutions ranging from a resolution of a captured image of the whole imaging object to the target resolution, and storing the set resolutions in the storage unit.
Also, the computer executes a process for each of the resolutions stored in the storage unit, the process including forming segment frames into which an image taking range is segmented by dividing an angle of view of the image taking range by a predetermined angle, laying out the segment frames so that focus ranges in each two neighboring segment frames overlap with each other, and storing information on at least a layout of the segment frames and the corresponding resolution as a segmented image taking plan in the storage unit.
Also, the computer executes a process of transmitting a command to take an image from a single viewpoint for the positions of the respective segment frames for each of the resolutions, based on the segmented image taking plan containing the layout information and the resolution information stored in the storage unit, to the digital camera and the motor-driven tripod head fixed at the distance from the imaging object, acquiring segmented images taken for the respective segment frames for each of the resolutions from the digital camera, and storing the acquired segmented images in the storage unit.
Also, the computer executes a process of executing a process for each of the segmented image taking plans stored in the storage unit, the process including reading each of captured images of a predetermined segmented image taking plan from the storage unit, forming a base image by enlarging the captured image to the same resolution as that of a segmented image taking plan having a one-step-higher resolution among the layered resolutions, determining a corresponding range on the base image for each of segmented images of the segmented image taking plan having the one-step-higher resolution, based on the positions of the segmented images, merging the segmented images together by aligning each of the segmented images through pattern matching between the segmented image and the base image, and storing a merged image in the storage unit.
Also, an imaging program of the present invention is stored in a computer-readable recording medium, and is for use in a computer communicably coupled to a digital camera and a motor-driven tripod head therefor and configured to control image taking by the digital camera, the program causing the computer to execute the following processes. Specifically, the imaging program of the present invention causes the computer to execute: a process of calculating a distance from an imaging object to a fixed position of the digital camera by using a predetermined equation in accordance with a target resolution to be obtained for an image of the imaging object, a CCD resolution of the digital camera, and a focal length of a lens, and storing the calculated distance in a storage unit; a process of setting layered resolutions ranging from a resolution of a captured image of the whole imaging object to the target resolution, and storing the set resolutions in the storage unit; a process of executing a process for each of the resolutions stored in the storage unit, the process including forming segment frames into which an image taking range is segmented by dividing an angle of view of the image taking range by a predetermined angle, laying out the segment frames so that focus ranges in each two neighboring segment frames overlap with each other, and storing information on at least a layout of the segment frames and the corresponding resolution as a segmented image taking plan in the storage unit; a process of transmitting a command to take an image from a single viewpoint for the positions of the respective segment frames for each of the resolutions, based on the segmented image taking plan containing the layout information and the resolution information stored in the storage unit, to the digital camera and the motor-driven tripod head fixed at the distance from the imaging object, acquiring segmented images taken for the respective segment frames for each of the resolutions from the digital camera, and storing the acquired segmented images in the storage unit; and a process of executing a process for each of the segmented image taking plans stored in the storage unit, the process including reading each of captured images of a predetermined segmented image taking plan from the storage unit, forming a base image by enlarging the captured image to the same resolution as that of a segmented image taking plan having a one-step-higher resolution among the layered resolutions, determining a corresponding range on the base image for each of segmented images of the segmented image taking plan having the one-step-higher resolution, based on the positions of the segmented images, merging the segmented images together by aligning each of the segmented images through pattern matching between the segmented image and the base image, and storing a merged image in the storage unit.
According to the present invention, segmented image taking and logically consistent merging processing of segmented images can be achieved with high efficiency at low cost, thereby enabling easy formation of a high-definition merged image.
Embodiments of the present invention will be described in detail below by use of the drawings.
In the embodiment, a computer 100 of the imaging system 10, a digital camera 200, and a motor-driven tripod head 300 are coupled to a network 15. The computer 100 is a server apparatus managed by a company or the like that provides digital archiving of art objects such as paintings, and may be envisaged as an apparatus communicably connected, via various networks 15 such as a LAN (local area network) and a WAN (wide area network), to the digital camera 200 and the motor-driven tripod head 300 placed in an art museum or other shooting location that contains an art object such as a painting as an imaging object 5. Of course, the computer 100, the digital camera 200, and the motor-driven tripod head 300 may be integral with one another, rather than connected together via the network 15.
The digital camera 200 is a camera that forms an image of a subject, that is, the imaging object 5, through a lens 250 on a CCD (Charge Coupled Device), converts the image into digital data by means of the CCD, and stores the digital data in a storage medium 203 such as a flash memory; such a camera is sometimes called a digital still camera. The digital camera 200 includes a communication unit 201 that communicates with the computer 100 via the network 15, and an image taking controller 202 that sets a specified resolution by controlling a zoom mechanism 251 of the lens 250 in accordance with a command to take an image, received from the computer 100 via the communication unit 201, and gives a command to perform an image taking operation to a shutter mechanism 205.
Also, the motor-driven tripod head 300 is an apparatus that movably supports and fixes the digital camera 200. The motor-driven tripod head 300 is the tripod head placed on a supporting member 301 such as a tripod, and includes a mounting portion 302 that mounts and fixes the digital camera 200, a moving portion 303 that acts as a mechanism (ex. a bearing, a gear, a cylinder, a stepping motor, or the like) that allows vertical and horizontal movements of the mounting portion 302, a tripod head controller 304 that gives a drive command (ex. a command for an angle of rotation of the motor or the like) in accordance with a command to take an image from the computer 100, to (the stepping motor or the like of) the moving portion 303, and a communication unit 305 that communicates with the computer 100 via the network 15.
In the embodiment, the digital camera 200 and the motor-driven tripod head 300 are used to perform single-viewpoint segmented image taking, the state of which is schematically illustrated in the drawings.
Next, a configuration of the computer 100 will be described. The computer 100 may be envisaged as the server apparatus that forms the system 10. The computer 100 includes a storage unit 101, a memory 103, a controller 104 such as a CPU (central processing unit), and a communication unit 107, which are coupled to one another by a bus.
The computer 100 loads a program 102 stored in the storage unit 101, such as a hard disk drive, into a volatile memory such as the memory 103, thereby causing the controller 104 to execute the program 102. Also, the computer 100 may include an input unit 105 such as a keyboard and mouse generally included in a computer apparatus, and an output unit 106 such as a display, as needed. Also, the computer 100 has the communication unit 107, such as an NIC (Network Interface Card), that serves to send and receive data to and from other apparatuses, and is communicable with the digital camera 200, the motor-driven tripod head 300 and the like via the network 15.
Next, description will be given with regard to functional units configured and held, for example based on the program 102, in the storage unit 101 of the computer 100. The computer 100 includes a distance calculator 110 that calculates a distance from the imaging object 5 to the fixed position of the digital camera by a predetermined equation in accordance with a target resolution to be obtained for an image of the imaging object 5, a CCD resolution of the digital camera 200, and a focal length of the lens 250, and stores the calculated distance in the storage unit 101.
Also, the computer 100 includes a resolution setting unit 111 that sets layered resolutions ranging from a resolution of a captured image of the whole imaging object to the target resolution, and stores the set resolutions in the storage unit 101.
Also, the computer 100 includes a frame layout unit 112 that executes a process for each of the resolutions stored in the storage unit 101, and the process involves forming segment frames into which an image taking range is segmented by dividing an angle of view of the image taking range by a predetermined angle, laying out the segment frames so that focus ranges in each two neighboring segment frames overlap with each other, and storing information on at least a layout of the segment frames and a corresponding resolution as a segmented image taking plan in the storage unit 101.
Also, the computer 100 includes a segmented image taking command unit 113 that transmits a command to take an image from a single viewpoint for the position of each of the segment frames for each of the resolutions, based on the segmented image taking plan containing the layout information and the resolution information stored in the storage unit 101, to the digital camera 200 and the motor-driven tripod head 300 fixed at the above-described distance from the imaging object 5, acquires a segmented image taken for each of the segment frames for each of the resolutions from the digital camera 200, and stores the acquired segmented image in the storage unit 101.
Also, the computer 100 includes a segmented image merging unit 114 that executes a process for each of the segmented image taking plans stored in the storage unit 101, and the process involves reading each of captured images obtained by a predetermined segmented image taking plan from the storage unit 101, forming a base image by enlarging the captured image to the same resolution as that of a segmented image taking plan having a one-step-higher resolution of the layered resolutions, determining a corresponding range on the base image for each of segmented images obtained by the segmented image taking plan having the one-step-higher resolution, based on the position of each of the segmented images, merging the segmented images together by performing alignment of each of the segmented images by performing pattern matching between each of the segmented images and the base image, and storing a merged image in the storage unit 101.
Incidentally, the resolution setting unit 111 may set layered resolutions ranging from a resolution of a captured image of the whole imaging object to the target resolution, by setting a higher scaling factor from one to another of the resolutions as the resolution becomes higher, and store the set resolutions in the storage unit 101.
Also, the frame layout unit 112 may execute a process for each of the resolutions stored in the storage unit 101, and the process involves forming segment frames into which an image taking range is segmented by dividing an angle of view of the image taking range by a predetermined angle, calculating diameters of circles of confusion (hereinafter called confusion circle diameters) in each of the segment frames, determining in advance, by test image taking or the like, a confusion circle diameter up to which the image is regarded as being in focus (hereinafter called an allowable confusion circle diameter), determining that a range where the confusion circle diameters are equal to or smaller than the allowable confusion circle diameter is a focus range in the segment frame, laying out the segment frames so that the focus ranges in each two neighboring segment frames overlap with each other, and storing information on at least a layout of the segment frames and a corresponding resolution as a segmented image taking plan in the storage unit 101.
The units 110 to 114 in the computer 100 described above may be implemented in hardware, or may be implemented as the program 102 stored in the memory or in an appropriate storage unit 101 such as an HDD (Hard Disk Drive) in the computer 100. In the latter case, the controller 104, such as the CPU of the computer 100, loads the corresponding program from the storage unit 101 at the time of program execution, and executes the program.
Description will be given below with regard to an actual procedure for an imaging method of the embodiment, based on the drawings. Various operations for the imaging method described below are implemented by a program that is loaded into the memory of, mainly, the computer 100 (or the digital camera 200 or the motor-driven tripod head 300) that forms the system 10, and then executed; the program is composed of code to perform the various operations described below. A main flow of the imaging method of the embodiment, as shown in the drawings, runs from creation of a segmented image taking plan, to image taking based on the segmented image taking plan, and then to merging processing of segmented images.
Firstly, description will be given mainly with regard to a process for creating a segmented image taking plan.
Then, the distance calculator 110 receives information on a maximum image-taking resolution, that is, a target resolution, of the imaging object 5 through the input unit 105, and stores the information in the storage unit 101 (at step s101). For example, for digitization of the imaging object 5 (or its conversion into image data) with a resolution of “1200 dpi (dots per inch),” input indicative of “1200” is received through the input unit 105. (Of course, a resolution list previously recorded in the storage unit 101 may be displayed on the output unit 106 in order for a user to select a desired resolution.)
Then, the distance calculator 110 receives information on the size (or the height and width) and the number of pixels (or the numbers of pixels in the height and width directions) of the image pickup device (or the CCD) of the digital camera 200 for use in image taking, through the input unit 105, and stores the information as “camera conditions” in the storage unit 101 (at step s102). Also, the distance calculator 110 receives information on the focal length of the lens 250 for use in image taking with the target resolution and on a minimum image-taking distance of the lens 250 (that is, the distance below which image taking becomes impossible as the lens approaches the imaging object) through the input unit 105, and stores the information as “lens conditions” in the storage unit 101 (at step s103).
Also, the distance calculator 110 calculates a distance, that is, a camera distance, from the imaging object 5 to the fixed position of the digital camera by a predetermined equation in accordance with the target resolution to be obtained for an image of the imaging object 5, the CCD resolution of the digital camera 200, and the focal length of the lens 250, and stores the calculated distance in the storage unit 101 (at step s104).
Calculation of the camera distance z from the target resolution will be described, provided that, in Part (1) of the corresponding drawing, p denotes the target resolution (dpi); Cd, the number of pixels of the CCD in the width direction; Cw, the width (mm) of the CCD; Cp, the resolution (dpi) of the CCD; f, the focal length (mm) of the lens 250; w, the image taking range (mm) in the width direction; z, the camera distance (mm); and b, the distance (mm) from the nodal point of the lens to the CCD.
Here, a relationship of the focal length f of the lens 250, the camera distance z and the distance b from the nodal point to the CCD is derived from a lens formula thereby to obtain “1/f=1/z+1/b,” which then can be transformed into and defined as “b=f*z/(z−f).” A relationship between the image taking range w and the width Cw of the CCD is determined by simple ratio calculations as “w=(z/b)*Cw.” Also, since the target resolution p is the number of pixels per inch in the image taking range, “p=Cd*25.4/w” (with 25.4 mm to the inch).
Then, these equations are combined into “p=Cd*25.4/((z/b)*Cw)=Cd*25.4*b/(z*Cw)=Cd*25.4*(f*z/(z−f))/(z*Cw)=(Cd/Cw*25.4)*f/(z−f).” Also, the resolution of the CCD is “Cp=Cd/Cw*25.4,” and therefore, the above equation is “p=Cp*f/(z−f).” This equation leads to an equation for determination of the camera distance z, which is “z=Cp*f/p+f.” The camera distance z can be determined from this equation, provided that the target resolution p, the CCD resolution Cp and the focal length f of the lens are obtained in advance through the input unit 105. For example, when the target resolution is “1200 dpi,” the resolution of the CCD is “4000 dpi” and the focal length f of the lens 250 is “600 mm,” the camera distance z is “4000*600/1200+600,” that is, “2600 mm.” Incidentally, the distance calculator 110 may execute a process of determining whether or not the camera distance z thus obtained is smaller than the minimum image-taking distance of the lens 250 (previously held as data in the storage unit 101), and displaying, on the output unit 106, information indicating that the corresponding lens is to be excluded from candidates for selection when the camera distance z is smaller than the minimum image-taking distance.
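By way of illustration only, the camera-distance equation above may be evaluated as in the following minimal Python sketch; the function name and the unit choices (dpi for resolutions, millimeters for lengths) are illustrative assumptions, not part of the embodiment.

```python
def camera_distance(p_target_dpi: float, cp_ccd_dpi: float, f_mm: float) -> float:
    """Camera distance z = Cp*f/p + f (Cp and p in dpi, f and z in mm)."""
    return cp_ccd_dpi * f_mm / p_target_dpi + f_mm

# Example from the text: p = 1200 dpi, Cp = 4000 dpi, f = 600 mm -> z = 2600 mm.
print(camera_distance(1200, 4000, 600))  # 2600.0
```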
Incidentally, Part (2) of the corresponding drawing also relates to this calculation.
Then, the resolution setting unit 111 of the computer 100 sets layered resolutions ranging from a resolution of a captured image of the whole imaging object to the target resolution, and stores the set resolutions in the storage unit 101 (at step s105). At this time, preferably, the resolution setting unit 111 sets the layered resolutions by setting a higher scaling factor from one resolution to the next as the resolution becomes higher. For example, for a whole-object image of “75 dpi” and a target resolution of “1200 dpi,” layered resolutions of “75 dpi,” “150 dpi,” “300 dpi,” and “1200 dpi” are set, with scaling factors (“×2,” “×2,” “×4”) that grow as the resolution becomes higher.
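As an illustrative sketch only, the layered resolutions of this example may be generated as follows; the helper name and the representation of the scaling factors as a list are assumptions made for illustration.

```python
def layered_resolutions(whole_dpi: int, factors: list[int]) -> list[int]:
    """Layered resolutions from the whole-object image up to the target;
    each factor scales the previous layer and grows as the resolution rises."""
    layers = [whole_dpi]
    for factor in factors:
        layers.append(layers[-1] * factor)
    return layers

# 75 dpi whole-object image with scaling factors x2, x2, x4 -> 1200 dpi target.
print(layered_resolutions(75, [2, 2, 4]))  # [75, 150, 300, 1200]
```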
Then, the frame layout unit 112 of the computer 100 executes a process for each of the resolutions stored in the storage unit 101, and the process involves forming segment frames into which an image taking range is segmented by dividing an angle of view of the image taking range by a predetermined angle, laying out the segment frames so that focus ranges in each two neighboring segment frames overlap with each other, and storing information on at least a layout of the segment frames and a corresponding resolution as a segmented image taking plan in the storage unit 101. At this time, the frame layout unit 112 may execute a process that involves forming segment frames into which an image taking range is segmented by dividing an angle of view of the image taking range by a predetermined angle, calculating confusion circle diameters in each of the segment frames, determining that a range where the confusion circle diameters are equal to or smaller than the allowable confusion circle diameter is a focus range in the segment frame, and laying out the segment frames so that the focus ranges in each two neighboring segment frames overlap with each other.
The process by the frame layout unit 112 will be described in detail below. First, the frame layout unit 112 sets the resolution of a segmented image taking plan to be created as a target resolution (at step s106), and calculates a layout of image taking ranges for segmented image taking (hereinafter called segment frames) (at step s107). Here, for the segmented image taking with the target resolution, the focal length of the lens 250, the camera distance, and the size and the number of pixels of the CCD are fixed values, and each segment frame is determined by determining the orientation of the digital camera 200. Also, the segment frames can be automatically calculated and arranged in the form of tiles in the image taking range of the whole imaging object by specifying a minimum overlap rate (hereinafter called a minimum overlap). Information on the minimum overlap is needed because, when the overlap rate of the segment frames becomes less than zero, the merging cannot be performed in the image merging processing. Therefore, the user specifies the minimum overlap through the input unit 105, allowing for an error in machine accuracy of the motor-driven tripod head 300, while the frame layout unit 112 receives the specified minimum overlap and stores it in the storage unit 101.
First, the frame layout unit 112 arranges the segment frames equiangularly. Here, the farther the segment frames are from the center of the image taking range of the whole imaging object, the larger the segment frames themselves become when subjected to perspective correction. (The resolution becomes lower since the size of the CCD is fixed.) In this case, however, the segment frames all have the same angle of view (that is, the angle at which a segment frame widens as seen from the digital camera 200). Therefore, the angle of view of the image taking range of the whole imaging object is divided by the angle of view of the segment frames, thereby obtaining a layout in which the segment frames are uniformly arranged in the image taking range of the whole imaging object.
Then, the frame layout unit 112 calculates a first layout, using an image taking angle and an angle of view of the digital camera 200. For example, it is assumed that the angle of view of the image taking range of the whole imaging object in the width direction is “50°,” the angle of view of the segment frames in the width direction is “10°,” and the minimum overlap is “0°.” In this case, attention is given to the width direction of the imaging object; when the segment frames are arranged with camera angles shifted “10°” from each other in the width direction, five segment frames can be laid out in the image taking range of the whole imaging object (see the drawings).
The frame layout unit 112 executes this layout method on the imaging object also in its height direction thereby to lay out the segment frames in the image taking range of the whole imaging object.
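The equiangular layout for one direction may be sketched as follows; this is a simplified illustration that centers the frames on the squarely-facing direction and treats the minimum overlap as an angle, both of which are assumptions for the sketch rather than the embodiment's exact procedure. The same function can be applied to the height direction.

```python
import math

def equiangular_frame_angles(whole_fov_deg: float, frame_fov_deg: float,
                             min_overlap_deg: float = 0.0) -> list[float]:
    """Camera angles (degrees from the squarely-facing direction) at which
    segment frames of angle of view frame_fov_deg tile an overall angle of
    view whole_fov_deg, with neighbors overlapping by at least min_overlap_deg."""
    step = frame_fov_deg - min_overlap_deg
    n = max(1, math.ceil((whole_fov_deg - frame_fov_deg) / step) + 1)
    # Recompute the actual step so the layout stays centered; the actual
    # overlap then never falls below the specified minimum.
    actual_step = 0.0 if n == 1 else (whole_fov_deg - frame_fov_deg) / (n - 1)
    start = -(whole_fov_deg - frame_fov_deg) / 2.0
    return [start + i * actual_step for i in range(n)]

# Example from the text: 50-degree range, 10-degree frames, 0-degree overlap.
print(equiangular_frame_angles(50, 10))  # [-20.0, -10.0, 0.0, 10.0, 20.0]
```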
Next, an example will be given in which the camera angle, rather than the angle of view, is equally divided to lay out segment frames (see Table 1).
In the example shown in Table 1, numbers are assigned to rows and columns of the segment frames, starting at the upper left of the image taking range, and frame names are assigned to the segment frames so that layout positions can be readily seen (“Frame+row number+column number”). Also, the angle of the digital camera 200 in a position in which the digital camera 200 faces squarely the imaging object (that is, in the center of the imaging object) is such that x=0° and y=0°, and a plus angle (+) is set in an upper left direction and a minus angle (−) is set in a lower right direction, thereby to set the camera angle. In the example shown in Table 1, the camera angle is such that x=0° and y=0° at an angle at which “Frame0403” faces the center of the imaging object 5.
Also, for the segmented image taking, the segmented image taking plan also records the image taking conditions described previously (such as image taking resolutions, an image taking range, lens conditions, camera conditions, and focus conditions) (see Table 2).
Next, description will be given with regard to a process for calculating an in-focus range (step s108), executed by the frame layout unit 112. In the embodiment, a range where the confusion circle diameters are equal to or less than a given value is defined as the in-focus range, that is, a focus range. An equation for calculation of the circle of confusion will be given below. Also, a concept of the calculation of the circle of confusion is shown in the drawings.
Equation (1) is “COC=(f^2*|S2−S1|)/(N*S2*(S1−f)),” which is derived from the following relationships.
Equation (2), “C0=A*|S2−S1|/S2,” is based on an equation of similar triangles, where S2 denotes a distance from the subject to the lens; f1, a distance from the lens to the image pickup device; S1, a position (that is, an in-focus position) in which focus is obtained on the position of the image pickup device; A, a lens aperture (diameter); and C0, the diameter of the blur circle in the plane of the in-focus position S1.
A relationship between the confusion circle diameter COC and C0 is derived from calculation of lens magnification as “COC=C0*f1/S1” (Equation (3)).
Equation (4) is derived from a lens formula when the focal length on a point at infinity is defined as f: “1/f=1/S1+1/f1,” that is, “f1=S1*f/(S1−f).”
When an f-number is defined as N, Equation (5), “A=f/N,” is derived from a relationship of the f-number N, the focal length f and the lens aperture A. Substituting Equations (2) to (5) into one another yields Equation (1).
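Under the form of Equation (1) given above, the confusion circle diameter may be computed as in the following sketch; the numeric example is hypothetical and merely reuses the lens values appearing earlier in the text.

```python
def confusion_circle_diameter(f_mm: float, n_number: float,
                              s1_mm: float, s2_mm: float) -> float:
    """Equation (1): COC = f^2*|S2-S1| / (N*S2*(S1-f)), where S1 is the
    in-focus subject distance, S2 the actual subject distance, f the focal
    length and N the f-number (all lengths in mm)."""
    return f_mm ** 2 * abs(s2_mm - s1_mm) / (n_number * s2_mm * (s1_mm - f_mm))

# Hypothetical example: a 600 mm lens at f/8 focused at 2600 mm, with a
# subject point 50 mm farther than the in-focus plane.
print(f"{confusion_circle_diameter(600, 8, 2600, 2650):.4f} mm")
```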
Here, in the embodiment, the digital camera 200 is rotated on the nodal point of the lens 250 to perform image taking of segment frames.
Also in this case, Equation (1) can be used to determine the confusion circle diameter (COC). For example, the subject positions U and D can be calculated by geometric calculation using the camera angle and the camera distance.
Also, for any position other than the subject positions U and D, the distance between the lens and the subject can be determined by geometric calculation using trigonometric functions, thus making it possible to determine the confusion circle diameter COC in every position in the range (or the segment frames) to be subjected to segmented image taking.
In short, the focus range in the segment frame can be determined by calculating the size of the circle of confusion from Equation (1), provided that the angle of the digital camera 200 for the segment frame (that is, the rotation angle θ shown in the drawings), the camera distance, the focal length and the f-number are given.
Also, the frame layout unit 112 determines the allowable confusion circle diameter and the f-number. The frame layout unit 112 calculates the confusion circle diameter COC in each of the segment frames uniformly laid out as described previously, by using Equation (1), determines the value of the allowable confusion circle diameter, and determines that a range where the confusion circle diameters COC are equal to or smaller than the value is a focus range.
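A focus-range determination consistent with this description may be sketched as follows; the assumption that focus is set at the camera distance and the flat-object geometry (subject distance z/cos θ under rotation on the nodal point) are simplifications for illustration, and the numeric values are hypothetical.

```python
import math

def focus_range_deg(f_mm: float, n_number: float, z_mm: float,
                    allowable_coc_mm: float, half_frame_deg: float,
                    steps: int = 200):
    """Scan camera-rotation angles across a segment frame and keep those where
    the confusion circle diameter of Equation (1) stays within the allowable
    value; returns the angular focus range, or None if nothing is in focus."""
    s1 = z_mm  # sketch assumption: focus set at the camera distance
    in_focus = []
    for i in range(-steps, steps + 1):
        theta = math.radians(half_frame_deg * i / steps)
        s2 = z_mm / math.cos(theta)  # flat object, rotation on the nodal point
        coc = f_mm ** 2 * abs(s2 - s1) / (n_number * s2 * (s1 - f_mm))  # Eq. (1)
        if coc <= allowable_coc_mm:
            in_focus.append(math.degrees(theta))
    return (min(in_focus), max(in_focus)) if in_focus else None

# Hypothetical values: 600 mm lens, f/8, 2600 mm camera distance,
# 0.03 mm allowable confusion circle diameter, 5-degree half frame.
print(focus_range_deg(600, 8, 2600, 0.03, 5.0))
```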
Here, for example, the frame layout unit 112 arranges a segment frame in the center of the image taking range, and calculates and arranges the focus ranges of neighboring segment frames so that they overlap with the focus range of the center segment frame (at step s109). Also, the frame layout unit 112 synthesizes the focus ranges of the group of segment frames arranged around the center segment frame, thereby determining that the synthesized focus range is the center focus range (at step s110). The frame layout unit 112 arranges the segment frames in sequence from the center to the periphery so that the focus ranges overlap with each other, until the center focus range thus determined fills up the image taking range, thereby laying out the segment frames so that (the edges of) the focus ranges of all segment frames overlap with each other (“No” at step s111→step s109→step s110→“Yes” at step s111). On the other hand, when the determined center focus range fills up the image taking range (“Yes” at step s111), the frame layout unit 112 determines that proper arrangement of the segment frames in the image taking range for the segmented image taking plan having the target resolution has been finished, and retains information on the segmented image taking plan (that is, the data contained in Tables 1 and 2 described above) in the storage unit 101 (at step s112).
The frame layout unit 112 repeatedly executes steps s108 to s112 until the steps are completed for all resolutions stored in the storage unit 101 (or all segmented image taking plans having the resolutions) (“No” at step s113→step s114→steps s108 to s112→“Yes” at step s113). The process for creating the segmented image taking plan is thus executed.
Incidentally, of the above-described image taking range, camera conditions, lens conditions and focus conditions and the like, the image taking range is always to be set for each imaging object, and the other conditions are values uniquely determined once the digital camera 200 or the lens 250 is determined. Therefore, in the embodiment, the conditions of the digital camera 200 or the lens 250 for use may be recorded (as a database) in the storage unit 101 of the computer 100 in order for the computer 100 to retrieve and select an optimal combination in accordance with the target resolution and the image taking range specified through the input unit 105. (For example, the computer 100 selects the digital camera 200, the lens 250 or the like having the camera conditions, the lens conditions or the focus conditions suited to a target resolution of “1200 dpi.”) Alternatively, specific combinations of conditions may be named and held in the storage unit 101 in order for the computer 100 to select a corresponding combination of conditions in accordance with conditions specified through the input unit 105 (for example, a “1200-dpi image taking set,” a “600-dpi image taking set,” and the like).
Although image taking starting with a segmented image taking plan having any resolution does not affect merging processing of segmented images, in the embodiment, description will be given with regard to a procedure for image taking starting with a segmented image taking plan having a low resolution. The segmented image taking command unit 113 of the computer 100 receives a command from the user through the input unit 105, reads a segmented image taking plan for image taking of the whole imaging object from the storage unit 101, and displays the segmented image taking plan on the output unit 106 (at step s200). The user places the corresponding digital camera 200, lens 250 and motor-driven tripod head 300 in an accurate position (or the position corresponding to the camera distance) in which they face squarely the center of the imaging object 5, in accordance with the image taking conditions indicated by the segmented image taking plan (which are the data indicated for the corresponding resolution in Tables 1 and 2 described above).
Then, the segmented image taking command unit 113 of the computer 100 reads position information on segment frames, and resolution or the like information, indicated by the segmented image taking plan (at step s201). In response to an image-taking start trigger received from the user through the input unit 105, the segmented image taking command unit 113 gives a command for an angle of rotation of the moving portion 303 to the motor-driven tripod head 300 according to the position information on the segment frames, gives a zoom command to a lens zoom mechanism of the digital camera 200 so as to set a focal length according to the resolution (ex. 75 dpi), and gives a shutter press command to the digital camera 200, thereby to orient the lens of the digital camera 200 to the center of the imaging object 5 (at step s202) and start taking a whole-object image (at step s203).
The digital camera 200 acquires captured image data by performing image taking, assigns a name that can uniquely identify a captured image to the captured image data, and stores the image data in a storage medium such as a flash memory. The “name that can uniquely identify an image” may be envisaged as “resolution value+segment frame number” or the like, for example. Incidentally, here, the segmented image taking plan of interest is the plan for taking the whole-object image of the imaging object (that is, the number of images taken is one), and, with a resolution of “75 dpi,” an image name such as “Plan75-f0001” may be assigned to the image data. Alternatively, the segmented image taking command unit 113 of the computer 100 may communicate with the digital camera 200 to acquire captured image data for each image taking by the digital camera 200, assign a name that can uniquely identify an image to the captured image data, and store the image data in the storage unit 101.
Incidentally, the digital camera 200 records a corresponding captured image name in the storage medium, for a segment frame that has undergone image taking, of the segment frames indicated by the segmented image taking plan. In other words, a list of correspondences between the segment frames and captured images related thereto is created. (However, if the image name in itself contains a segment frame name as is the case with the above-described example, each image name may merely be extracted without the need to create a new list.) The digital camera 200 may transfer a list of correspondences between the segment frames and the captured image names obtained therefor to the computer 100. In this case, the segmented image taking command unit 113 of the computer 100 receives the correspondence list and stores the correspondence list in the storage unit 101.
When the taking of the whole-object image of the imaging object 5 is completed as described above (“Yes” at step s204), the segmented image taking plan having the target resolution is not yet finished (“No” at step s205), and therefore, the segmented image taking command unit 113 of the computer 100 selects a segmented image taking plan having a one-step-higher resolution than that of the segmented image taking plan for taking the whole-object image, from the storage unit 101 (at step s206), and returns the processing to step s201.
Then, the segmented image taking command unit 113 of the computer 100 reads a first segment frame indicated by the segmented image taking plan (for example, in ascending or descending order of numeric characters or characters contained in the names of the segment frames, or the like) from the storage unit 101, and reads position information indicated by the first segment frame, and resolution or the like information (at step s201). The segmented image taking command unit 113 gives a command for an angle of rotation of the moving portion 303 to the motor-driven tripod head 300 according to the position information on the first segment frame, gives a zoom command to the lens zoom mechanism of the digital camera 200 so as to set a focal length according to the resolution, and gives a shutter press command to the digital camera 200, thereby to orient the lens of the digital camera 200 to the first segment frame (at step s202) and start segmented image taking (at step s203).
The digital camera 200 acquires captured image data, that is, segmented image data, by performing image taking, assigns a name that can uniquely identify a captured image to the segmented image data, and stores the image data in a storage medium such as a flash memory. The “name that can uniquely identify an image” may be envisaged as “resolution value+segment frame number” or the like, for example. The segmented image taking plan of interest is the plan having a resolution of “150 dpi,” and, if the first segment frame is assigned “f0001,” a segmented image name such as “Plan150-f0001” may be assigned to the image data. Alternatively, the segmented image taking command unit 113 of the computer 100 may communicate with the digital camera 200 to acquire segmented image data for each image taking by the digital camera 200, assign a name that can uniquely identify an image to the segmented image data, and store the image data in the storage unit 101. As is the case with the above, the digital camera 200 records a corresponding segmented image name in the storage medium, for a segment frame that has undergone image taking, of the segment frames indicated by the segmented image taking plan.
When the taking of the first segment frame by the segmented image taking plan having a resolution of “150 dpi” is completed as described above but segment frames still remain (“No” at step s204), the segmented image taking command unit 113 of the computer 100 reads a subsequent segment frame (for example, in ascending or descending order of numeric characters or characters contained in the names of the segment frames, or the like) from the storage unit 101 (at step s204a), and reads position information indicated by the subsequent segment frame, and resolution or the like information (at step s201). The segmented image taking command unit 113 gives a command for an angle of rotation of the moving portion 303 to the motor-driven tripod head 300 according to the position information on the subsequent segment frame, gives a zoom command to the lens zoom mechanism of the digital camera 200 so as to set a focal length according to a resolution of “150 dpi,” and gives a shutter press command to the digital camera 200, thereby to orient the lens of the digital camera 200 to the subsequent segment frame (at step s202) and execute segmented image taking (at step s203).
The digital camera 200 acquires segmented image data by performing image taking, assigns a name that can uniquely identify a captured image to the segmented image data, and stores the image data in a storage medium such as a flash memory. The segmented image taking plan of interest is the plan having a resolution of “150 dpi,” and, if the subsequent segment frame is assigned “f0002,” a segmented image name such as “Plan150-f0002” may be assigned to the image data. Alternatively, the segmented image taking command unit 113 of the computer 100 may communicate with the digital camera 200 to acquire segmented image data for each image taking by the digital camera 200, assign a name that can uniquely identify an image to the segmented image data, and store the image data in the storage unit 101. As is the case with the above, the digital camera 200 records a corresponding segmented image name in the storage medium, for a segment frame that has undergone image taking, of the segment frames indicated by the segmented image taking plan.
Thus, when the taking of the segment frames by the segmented image taking plan having a resolution of “150 dpi” is executed as described above and the taking of the last segment frame is finally completed (“Yes” at step s204), the segmented image taking command unit 113 of the computer 100 determines whether the segmented image taking plan having the target resolution is finished (at step s205). Of course, in the example of the embodiment, segmented image taking plans having a resolution of “300 dpi” and a resolution of “1200 dpi” still remain (“No” at step s205), and therefore, the segmented image taking command unit 113 selects the segmented image taking plan having a one-step-higher resolution of “300 dpi” than that of the segmented image taking plan having a resolution of “150 dpi,” from the storage unit 101 (at step s206), and returns the processing to step s201.
When, as a result of repeated execution of the above process, image taking for the segmented image taking plan having a target resolution of “1200 dpi” is completed (“Yes” at step s205), the segmented image taking command unit 113 brings the processing to an end.
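The image-taking loop of steps s201 to s206 may be summarized in the following schematic; the head and camera methods (rotate_to, set_focal_length, shutter) and the plan and frame attributes are hypothetical stand-ins for the commands described above, not an actual device API.

```python
def run_segmented_image_taking(plans, head, camera, storage):
    """Schematic of steps s201-s206: shoot every plan from low resolution to
    high, one segment frame at a time, storing uniquely named images."""
    for plan in sorted(plans, key=lambda p: p.resolution_dpi):    # low to high
        for frame in plan.frames:                                 # f0001, f0002, ...
            head.rotate_to(frame.angle_x_deg, frame.angle_y_deg)  # step s202
            camera.set_focal_length(plan.focal_length_mm)         # zoom command
            image = camera.shutter()                              # step s203
            name = f"Plan{plan.resolution_dpi}-{frame.name}"      # e.g. "Plan150-f0001"
            storage.save(name, image)                             # unique image name
```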
In the example of the embodiment, first, the segmented image merging unit 114 reads segmented image taking plans from the storage unit 101 (at step s300), and selects a plan for the taking of the whole-object image of the imaging object 5, that is, a segmented image taking plan having the minimum resolution of “75 dpi,” from the segmented image taking plans (at step s301). Also, the segmented image merging unit 114 reads captured image data stored in the storage unit 101 for the segmented image taking plan having a resolution of “75 dpi” (at step s302), and executes a process for eliminating contaminants or distortions produced in an optical system (at step s303). This process involves, for example, eliminating from the captured image data the effects of contaminants present between the lens and the image pickup device of the camera, or correcting the distortions produced in the lens or a diaphragm. Conventional technologies may be adopted for these corrections.
Then, the segmented image merging unit 114 corrects the whole-object image for distortions or misalignments produced during image taking, thereby to form a whole-object image having a similar shape to the actual shape of the imaging object 5 (at step s304). This correction may be envisaged for example as perspective correction, which involves setting the resolution of the shape of the imaging object 5 obtained by actual measurement thereof to the same resolution as that of the whole-object image, and changing the shape of the whole-object image so that four points (for example, four corners of a picture frame) of the whole-object image coincide with those of the imaging object 5.
Also, the segmented image merging unit 114 enlarges the whole-object image corrected at step s304 to a resolution of “150 dpi” indicated by a segmented image taking plan having a one-step-higher resolution, and stores an enlarged image thus obtained as a base image in the storage unit 101 (at step s305).
Then, the segmented image merging unit 114 reads the segmented image taking plan having the one-step-higher resolution, that is, a resolution of “150 dpi,” from the storage unit 101 (at step s306), and reads a piece of segmented image data stored in the storage unit 101 for the read segmented image taking plan, according to a predetermined rule (e.g. in descending or ascending order of the file names, or the like) (at step s307). Of course, the segmented image data here read is raw data that is not yet subjected to subsequent processing. (An unprocessed status may be managed by a flag or the like.)
Then, the segmented image merging unit 114 executes a process for eliminating contaminants or distortions produced in the optical system, on the segmented image data read at step s307 (at step s308). This process is the same as the process of step s303 described above. Also, the segmented image merging unit 114 cuts out an in-focus range from the segmented image, and arranges the in-focus range in a corresponding position on the base image (at step s309). Conventional image processing technology may be adopted for a process for cutting out the in-focus range. Also, arrangement of the segmented image in the corresponding position on the base image can be accomplished for example in the following manner. Information on the angle of rotation of the digital camera 200 and the camera distance (fixed regardless of a segment frame or a segmented image taking plan) in image taking of a segment frame (whose name, such as “f0001,” can be determined from the segmented image name) for the segmented image is read from information on a corresponding segmented image taking plan. The position of a corresponding segmented image on an imaging object plane can be calculated based on the read information. Therefore, the segmented image can be arranged in a position (ex. the same coordinate values on plane coordinate systems on a base image plane) on the base image corresponding to the calculated position (ex. coordinate values on plane coordinate systems on the imaging object plane).
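The position calculation described here may be sketched as follows; the sign convention (a plus angle toward the upper left, following the description accompanying Table 1) and the mapping of millimeters to pixels through the base image's resolution are assumptions made for this sketch.

```python
import math

def frame_center_on_base(angle_x_deg: float, angle_y_deg: float, z_mm: float,
                         base_dpi: float, base_w_px: int, base_h_px: int):
    """Map a segment frame's camera angles to pixel coordinates on the base
    image: the frame center meets the object plane z*tan(angle) millimeters
    from the squarely-faced center, converted to pixels at base_dpi."""
    x_mm = z_mm * math.tan(math.radians(angle_x_deg))
    y_mm = z_mm * math.tan(math.radians(angle_y_deg))
    px = base_w_px / 2 - x_mm / 25.4 * base_dpi  # plus angle points left
    py = base_h_px / 2 - y_mm / 25.4 * base_dpi  # plus angle points up
    return px, py

# Hypothetical example: 5 degrees up-left at 2600 mm on a 150 dpi base image.
print(frame_center_on_base(5, 5, 2600, 150, 6000, 4000))
```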
Then, the segmented image merging unit 114 sets a central range (such as a rectangular range having a predetermined number of pixels) of the segmented image as a pattern matching point (at step s310). Also, the segmented image merging unit 114 performs pattern matching between the set pattern matching point and the base image, and changes the shape of the segmented image and arranges the segmented image, based on the amount of displacement of each pattern matching point (at step s311). In this case, the segmented image merging unit 114 performs pattern matching with the base image at a position that overlaps with the central range of the segmented image as the pattern matching point, and, for example when pixels are displaced from each other although the pixels are derived from the same image taking point, the segmented image merging unit 114 determines a distance between the pixels, that is, the amount of displacement, and shifts the segmented image on the base image by the amount of displacement in a direction in which the position of a target pixel of the segmented image coincides with the position of a corresponding pixel of the base image (that is, perspective correction).
When the amount of displacement is not equal to zero or is not equal to or less than a predetermined value and the number of pattern matching points previously set is not equal to or more than a predetermined value (for example, the number of pattern matching points is not equal to or more than 16) (“No” at step s312), for example, the segmented image merging unit 114 divides the segmented image in the shifted position into four regions, and adds central ranges of the divided regions as pattern matching points (at step s313). Then, the segmented image merging unit 114 performs pattern matching between each of the four pattern matching points and the base image, and determines the amount of displacement and performs perspective correction on the four points, in the same manner as above described (at step s311). As in the case of the above, when the amount of displacement here is not equal to zero or is not equal to or less than the predetermined value and the number of pattern matching points previously set is not equal to or more than the predetermined value (“No” at step s312), the segmented image merging unit 114 further divides the perspective-corrected segmented image into 16 regions (at step s313), performs pattern matching between each of central ranges of the divided regions and the base image, and determines the amount of displacement and performs perspective correction on 16 points in the same manner (at step s311).
The segmented image merging unit 114 performs the above process until the “amount of displacement” becomes equal to zero or becomes equal to or less than the predetermined value and the number of pattern matching points previously set becomes equal to or more than the predetermined value (“No” at step s312→steps s313 and s311→“Yes” at step s312), so that the segmented image comes to have the same shape as that of the corresponding range of the base image.
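The iterative alignment of steps s310 to s313 may be outlined as follows; match_shift, perspective_correct and the segment-image methods are hypothetical stand-ins for the pattern matching and warping routines, and the thresholds mirror the text (displacement at or below a set value, at least 16 matching points).

```python
def align_segment(segment, base, min_points=16, tol_px=0.0):
    """Schematic of steps s310-s313: match the segment's center range against
    the base image, warp, and subdivide (1 -> 4 -> 16 regions) until every
    displacement is within tolerance and enough points have been matched."""
    points = [segment.center_patch()]                        # step s310
    while True:
        shifts = [match_shift(p, base) for p in points]      # step s311
        segment = perspective_correct(segment, points, shifts)
        if max(shifts) <= tol_px and len(points) >= min_points:
            return segment                                   # step s312 satisfied
        # step s313: split into 4x as many regions; match on each center range
        points = [r.center_patch() for r in segment.subdivide(len(points) * 4)]
```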
The segmented image merging unit 114 repeatedly executes the processing from steps s307 to s312 (or s313) until processing for all segment frames for a corresponding segmented image taking plan is completed (“No” at step s314→step s307). On the other hand, when the processing from steps s307 to s312 (or s313) is repeatedly executed and, consequently, the processing for all segment frames for the corresponding segmented image taking plan is completed (“Yes” at step s314), the segmented image merging unit 114 recognizes that perspective correction and alignment of all segmented images are completed, and performs merging processing of all segmented images that have finished changes in their shapes (that is, perspective correction) and alignment (at step s315).
Then, when the segmented images merged at step s315 are not those obtained by a segmented image taking plan having a target resolution (“No” at step s316), the segmented image merging unit 114 forms a base image by enlarging the merged segmented images to a resolution of a segmented image taking plan having a subsequent resolution of “300 dpi” (at step s317).
Then, the segmented image merging unit 114 returns the processing to step s306, and repeatedly executes the processing from steps s306 to s315 in the same manner for each segmented image taking plan having each of the resolutions.
Finally, when the segmented images merged at step s315 are those obtained by the segmented image taking plan having the target resolution (“Yes” at step s316), the segmented image merging unit 114 stores the segmented images finally merged as a merged image having the target resolution in the storage unit 101, and brings the processing to an end.
At least the following will be apparent from the disclosure of this specification. Specifically, the resolution setting unit of the imaging system may hierarchically set resolutions ranging from a resolution of a captured image of the whole imaging object to the target resolution, by setting a higher scaling factor from one to another of the resolutions as the resolution becomes higher, and store the set resolutions in the storage unit.
Also, the frame layout unit of the imaging system may execute a process for each of the resolutions stored in the storage unit, and the process involves forming segment frames into which an image taking range is segmented by dividing an angle of view of the image taking range by a predetermined angle, calculating confusion circle diameters in each of the segment frames, determining that a range where the confusion circle diameters are equal to or smaller than an allowable confusion circle diameter is a focus range in the segment frame, laying out the segment frames so that the focus ranges in each two neighboring segment frames overlap with each other, and storing information on at least a layout of the segment frames and a corresponding resolution as a segmented image taking plan in the storage unit.
As described above, according to the embodiments, segmented image taking and logically consistent merging processing of segmented images can be achieved with high efficiency at low cost, thereby enabling easy formation of a high-definition merged image.
Although the present invention has been specifically described above based on the embodiments thereof, it is to be understood that the invention is not limited to these embodiments, and various changes and modifications could be made thereto without departing from the basic concept and scope of the invention.