The present invention, in an aspect thereof, relates to position designation devices and position designation methods.
There are known techniques for capturing deep depth-of-field images on camera. One such technique captures a plurality of images with different focal point location settings, selects suitable, well-focused pixels from across the captured images, and combines the selected pixels to generate a deep depth-of-field image.
Techniques are also known for capturing wide-dynamic-range images. One such technique captures a plurality of images with different exposure settings, selects suitable pixels having equal exposure levels from across the captured images, and combines the selected pixels to generate a wide-dynamic-range image.
As an example, Patent Literature 1 discloses a method of obtaining, from a plurality of input images of the same subject captured with different exposure levels, a natural-looking image with an improved dynamic range and less degradation.
Patent Literature 1: Japanese Unexamined Patent Application Publication, Tokukai, No. 2012-165259 (Publication Date: Aug. 30, 2012).
Recent development in measurement technology has enabled computation of three-dimensional information for a measurement point in a captured image. For example, a parallax is calculated from a plurality of images captured from different positions. In reference to information on these positions, three-dimensional information is calculated for any measurement point. In another known example, three-dimensional information is calculated for any measurement point in reference to depth information associated with captured images.
In these measurement techniques, the user, for example, checks a captured image displayed on a display device and designates the positions of measurement points in the captured image using an input device. In this process, grayscale saturation can occur in the image, for example, if the image is captured under conventional automatic exposure control. Grayscale saturation is especially likely in images with a wide angle of view. The captured image may also have a shallow depth of field. In such cases, it will be difficult for the user to designate an intended position of a measurement point in parts of a captured image where grayscale has saturated or in parts that are out of focus.
Grayscale saturation in a synthesized image can be restrained, for example, by applying the technology described in Patent Literature 1. According to the technology, the user may easily designate an intended position of a measurement point if the device is configured to enable the user to designate the position of a measurement point in the synthesized image. The technology described in Patent Literature 1, however, necessitates complex image processing to obtain a synthesized image. It is therefore useful to develop, by means of a new configuration, position designation technology allowing for designation of a desired position of a measurement point.
The present invention, in an aspect thereof, has been made in view of these issues and has an object to provide position designation technology for designating a desired position for a measurement point.
The present invention, in one aspect thereof, addresses these issues and is directed to a position designation device including: an image acquisition section configured to acquire a plurality of captured images of the same subject and a synthesized image generated from the captured images; a first position receiving section configured to cause a display device to display the synthesized image and to receive an input of a first position in the synthesized image via an input device; an image selecting section configured to select one of the captured images in accordance with the first position as a selected image; and a second position receiving section configured to cause the display device to display at least a part of the selected image and to receive an input of a second position in the selected image via the input device.
The present invention, in an aspect thereof advantageously enables designation of a desired position of a measurement point.
The following will specifically describe a position designation device 1 in accordance with a first embodiment of the present invention in reference to drawings.
The control section 10 controls the overall operation of the position designation device 1 and serves as an image acquisition section 101, a first position receiving section 102, an image selecting section 103, and a second position receiving section 104. The processes implemented by the control section 10 will be described later in detail.
The control section 10 may be, for example: a processing unit including a processor (not shown) such as a CPU (central processing unit) and a main storage device (not shown) such as a RAM (random access memory), the processing unit running a program stored in the storage device to carry out various processes; a programmable integrated circuit such as an FPGA (field programmable gate array); or any hardware including an integrated circuit that executes various processes.
The display device 11 may be, for example, a CRT, a liquid crystal display device, or an OLED (organic light-emitting diode) display device. The input device 12 may be, for example, a mouse, a pen tablet, or a touch panel.
The position designation device 1 acquires a plurality of captured images of the same subject (which hereinafter may be referred to simply as “captured images”) and a synthesized image generated from these captured images (which hereinafter may be referred to simply as a “synthesized image”). The position designation device 1 then causes the display device 11 to display the synthesized image and receives a designation of a position (first position) in the synthesized image from the user (operator) via the input device 12. Next, the position designation device 1 selects as a selected image one of the captured images in accordance with the received first position. Subsequently, the position designation device 1 causes the display device 11 to display at least a part of the selected image and receives a designation of a position (second position) in the selected image from the user via the input device 12. This configuration enables the user to designate a desired position for a measurement point in the captured image.
The following will describe an example position designation process performed by the position designation device 1 in reference to
The image acquisition section 101 in the position designation device 1 in accordance with the present embodiment acquires a plurality of captured images of the same subject and a synthesized image generated from these captured images.
The “plurality of captured images of the same subject” is not limited in any particular manner and may be, for example, a plurality of images of the same subject captured under different image capturing conditions. Examples of the “image capturing conditions” include an exposure level and/or a focal point location.
Throughout the present embodiment, the “plurality of captured images of the same subject” is a plurality of images of the same subject captured with different exposure settings, and the “synthesized image generated from captured images” is a synthesized image obtained by combining the plurality of captured images in such a manner as to extend the dynamic range of the resultant synthesized image. In an embodiment, the “synthesized image generated from captured images” may be an image in which each pixel has a pixel value averaged over the corresponding pixels of the captured images. Another method of generating a synthesized image with an extended dynamic range is to calculate a contrast difference in a predetermined area around each pixel in each captured image and select one of the captured images that has a maximum contrast difference for each area for use in synthesis.
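The contrast-based synthesis described above can be sketched as follows. This is a minimal illustration, not the method of any cited literature; it assumes grayscale images held as NumPy arrays and evaluates contrast around each pixel as the maximum minus the minimum pixel value in a small window (the window size `size` is an assumed parameter):

```python
import numpy as np

def local_contrast(img, size=3):
    # Contrast difference around each pixel: maximum pixel value minus
    # minimum pixel value in a size x size window (edge-padded).
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return windows.max(axis=(2, 3)) - windows.min(axis=(2, 3))

def synthesize_by_contrast(images, size=3):
    # For each pixel, take the value from the captured image whose local
    # contrast difference at that pixel is greatest.
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    contrast = np.stack([local_contrast(img, size) for img in stack])
    best = np.argmax(contrast, axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

For the averaging variant also mentioned above, `np.mean(np.stack(images), axis=0)` would suffice.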
The image acquisition section 101, in an aspect, may acquire a plurality of captured images and a synthesized image, for example, from an external image capturing device and an external image processing device over a wired or wireless link. Alternatively, the position designation device 1 may include an image capturing section and an image processing section so that the image acquisition section 101 can acquire a plurality of captured images and a synthesized image from the image capturing section and the image processing section.
Next, the first position receiving section 102 causes the display device 11 to display the synthesized image and receives a designation of a position (first position) in the synthesized image from the user (operator) via the input device 12. For example, the user designates a position in the synthesized image displayed on the display device 11 as a first position by manually operating the input device 12 such as a mouse or a touch pad. In response to the designation of the position, the input device 12 outputs input information to the first position receiving section 102, and the first position receiving section 102 receives this input of a position (first position) in the synthesized image.
As shown in (a) of
In contrast, (b) and (c) of
In the captured image 210a shown in (b) of
The captured image 210b shown in (c) of
For these reasons, by the first position receiving section 102 causing the display device 11 to display the synthesized image 210 as in (a) of
The first position receiving section 102 may receive an input of a desired position for a point in the synthesized image 210 or for a region in the synthesized image 210.
As described above, the synthesized image 210 having an extended dynamic range is displayed in step S0102 for the designation of a position. Even when the captured scene includes both very bright and very dark regions, the synthesized image 210 is free from grayscale saturation because of its extended dynamic range. The user can therefore easily recognize the entire image and designate a position (first position) in the displayed synthesized image 210. Since both bright and dark regions can be checked in the same image, the synthesized image 210 is well suited for the user to explore for a position that the user wants to designate, and the user can designate a position (first position) in either the bright or the dark regions of the same synthesized image 210.
Next, the image selecting section 103 selects, as a selected image, one of the captured images in accordance with the first position received in step S0102.
The image selecting section 103 may select an image, for example, by calculating the grayscale saturation level of a subarea including the first position for each captured image and selecting one of the captured images that has a minimum grayscale saturation level as a selected image. The “subarea including the first position” refers to a subarea of a captured image that includes a position (first-position-equivalent position) in the captured image corresponding to the first position. The size of the subarea is not limited in any particular manner and may be, for example, a rectangular region containing a predetermined number of pixels.
More specifically, as the first position receiving section 102 receives an input of the point 211 in the synthesized image 210 as the first position (see (a) of
The image selecting section 103, in an aspect, calculates the number of saturated pixels in the subarea as the “grayscale saturation level.” The image selecting section 103 determines that the grayscale saturation level of a subarea is lower if the subarea contains fewer saturated pixels. Accordingly, the “one of the captured images that has a minimum grayscale saturation level” is one of the captured images that has a subarea containing the fewest saturated pixels.
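The selection criterion described in this and the preceding paragraph can be sketched as follows (a minimal illustration, assuming grayscale images as NumPy arrays; the subarea half-width `half` and the saturation thresholds `lo` and `hi` are assumed parameters, not values given in the text):

```python
import numpy as np

def saturation_level(img, center, half=8, lo=0, hi=255):
    # Number of grayscale-saturated pixels (fully black or fully white)
    # in a rectangular subarea around the first-position-equivalent point.
    r, c = center
    sub = img[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1]
    return int(np.count_nonzero((sub <= lo) | (sub >= hi)))

def select_least_saturated(images, center, half=8):
    # Index of the captured image whose subarea around the designated
    # position contains the fewest saturated pixels.
    return min(range(len(images)),
               key=lambda i: saturation_level(images[i], center, half))
```

For 8-bit images, the defaults `lo=0` and `hi=255` treat fully black and fully white pixels as saturated.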
Next, the second position receiving section 104 causes the display device 11 to display at least a part of the captured image 210a as a selected image (namely, a selected image 210a) and receives a designation of a position (second position) in the selected image from the user via the input device 12.
The second position receiving section 104 may cause the display device 11 to display the whole selected image. Alternatively, the second position receiving section 104 may cause the display device 11 to display a part of the selected image (partial image) that includes the first-position-equivalent position. As another alternative, the second position receiving section 104 may cause the display device 11 to display a partial image at the original scale or at an enlarged scale. The second position receiving section 104 may further alternatively cause the display device 11 to display the selected image alone or the selected image being superimposed on a synthesized image. In the present embodiment, the second position receiving section 104 scales up a partial image including the corresponding point 211a in the captured image 210a serving as the selected image (namely, the selected image 210a′) and causes the display device 11 to display a resultant scaled-up display image 111 being superimposed on the synthesized image 210 as shown in (a) of
The second position receiving section 104 then receives via the input device 12 an input of the position of a point 211a′ (second position) in the partial image of the selected image 210a′ displayed on the display device 11 as the scaled-up display image 111 (see (a) of
The corresponding point 211a in the captured image 210a does not necessarily have the same coordinates as the point 211a′ designated as the second position in the partial image of the selected image 210a′. The user may first designate in the synthesized image 210 an approximate position of the intended point as the first position and subsequently designate the accurate position of that point in the selected image 210a′. In other words, since the corresponding point 211a represents only an approximate position of the intended point, the point 211a′ designated as the second position may have different coordinates from the corresponding point 211a, as a result of the user having confirmed the accurate position of the intended point by checking the selected image 210a′.
The position designation device 1 can improve visibility for the user searching an image for a position that the user wants to designate, by displaying the entire synthesized image that has an extended dynamic range. Furthermore, the position designation device 1 can select a captured image that has a minimum grayscale saturation level as a selected image in accordance with the position (first position) designated by the user in the synthesized image, so that the user can designate a position (second position) in the selected image. Therefore, the user can accurately designate a position that the user wants to designate in an image.
If a plurality of images is captured of the same subject with different exposure settings in order to generate a synthesized image with a wide dynamic range, discrepancies occur between the captured images because the images are captured at different times and the image capturing device or the subject may move during the image capturing period. The synthesized image generated from such captured images can therefore appear as a double image, which results in a failure to accurately designate the position of a measurement point. This problem may be addressed by a known technique disclosed, for example, in Patent Literature 1, which combines a plurality of images of the same subject captured with different exposure settings while preventing positional deviations, in order to acquire a high-quality synthesized image free from degradation caused by positional deviations. Such a technique, however, requires complex image processing. Consequently, if that kind of synthesized image is used to designate the position of a measurement point, the user cannot easily designate a desired position for a measurement point in an image.
In contrast, the position designation device 1 uses a synthesized image in such a manner that the user can explore the synthesized image for a desired position of a measurement point and designate a measurement point in an image selected in accordance with the position (first position) found by the user in the synthesized image. The position designation device 1 therefore does not need to acquire a high-quality synthesized image generated by considering the positional deviations of the subject. Even when the synthesized image appears as a double image, since the measurement point is designated in the selected image, the position designation device 1 is free from the problem that the position of a measurement point cannot be accurately designated in such a synthesized image. In addition, since there occurs no grayscale saturation near the first position in the image selected in accordance with the position (first position) found in the synthesized image, it is possible to accurately designate a position in the selected image. Hence, the position designation device 1 in accordance with the present embodiment enables the user to easily and accurately designate a desired position for a measurement point in an image.
Step S0103 may further include the following substeps:
(a) a step in which the image selecting section 103 corrects the image selected in step S0103 under different correction conditions to obtain a plurality of corrected images (substep S01031, “image correction step”); and
(b) a step in which the image selecting section 103 selects, in accordance with the first position received in the first position receiving step, one of the corrected images obtained under different correction conditions as a selected image (substep S01032, “corrected image selecting step”).
Correcting the contrast of a selected image advantageously improves visibility for the user. A description will be given of an example contrast correction method in reference to
Portion (b) of
Portion (c) of
Examples of the image correction in substep (a) include, in addition to contrast correction, gamma correction and saturation correction. As an example, if the corrected images obtained in substep (a) are contrast-corrected images, the image selecting section 103 may in substep (b) calculate a contrast difference in a subarea including the first position for each corrected image and select one of the corrected images that has a maximum contrast difference as a selected image. The “subarea including the first position” refers to a subarea of a corrected image that includes a position (first-position-equivalent position) in the corrected image corresponding to the first position. The size of the subarea is not limited in any particular manner. Contrast difference in a subarea can be calculated by subtracting a minimum pixel value in the subarea from a maximum pixel value in the subarea.
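The subarea contrast-difference criterion just described can be sketched as follows (a minimal illustration assuming grayscale NumPy arrays; the subarea half-width `half` is an assumed parameter):

```python
import numpy as np

def contrast_difference(img, center, half=8):
    # Maximum pixel value minus minimum pixel value in a rectangular
    # subarea around the first-position-equivalent point.
    r, c = center
    sub = img[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1]
    return float(sub.max()) - float(sub.min())

def select_max_contrast(images, center, half=8):
    # Index of the (corrected) image whose subarea has the greatest
    # contrast difference.
    return max(range(len(images)),
               key=lambda i: contrast_difference(images[i], center, half))
```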
As another example, if the corrected images obtained in substep (a) are saturation-corrected images, for example, the image selecting section 103 may in substep (b) calculate a saturation difference in a subarea including the first position for each corrected image and select one of the corrected images that has a maximum saturation difference as a selected image. Saturation difference in a subarea can be calculated, for example, by converting each pixel value in the subarea to an HSV value and subtracting a minimum saturation S value from a maximum saturation S value in the subarea.
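The HSV-based saturation difference can be sketched as follows (a minimal illustration using the standard-library `colorsys` conversion; it assumes an RGB image as a NumPy array with 8-bit channels, and the subarea half-width `half` is an assumed parameter):

```python
import colorsys

import numpy as np

def saturation_difference(rgb_img, center, half=8):
    # Convert each pixel of the subarea to HSV and subtract the minimum
    # saturation (S) value from the maximum saturation (S) value.
    r, c = center
    sub = rgb_img[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1]
    s = [colorsys.rgb_to_hsv(*(px / 255.0))[1] for px in sub.reshape(-1, 3)]
    return max(s) - min(s)
```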
As a further example, if the corrected images obtained in substep (a) are gamma-corrected images, for example, the image selecting section 103 may in substep (b) calculate a contrast difference in a subarea including the first position for each corrected image and select one of the corrected images and the original image that has a maximum contrast difference as a selected image.
In another embodiment, the image selecting section 103 may in step S0103 (image selecting step) select, in accordance with the first position received in step S0102, one of the captured images having been subjected to image correction as a selected image. For example, the image selecting section 103 may in step S0103 calculate a contrast difference in a subarea including the first position for each contrast-corrected captured image and select one of these captured images that has a maximum contrast difference as a selected image. Selecting one of the contrast-corrected captured images as a selected image advantageously improves visibility for the user. Examples of the image correction to which the captured images are subjected include, in addition to contrast correction, gamma correction and saturation correction.
This configuration enables an input of the second position to be received on a corrected image in subsequent step S0104 (second position receiving step), which in turn enables the user to more accurately designate the second position.
The image selecting section 103 uses publicly known image processing technology to correct the selected image under different correction conditions to generate a plurality of corrected images. The image selecting section 103 selects one of these corrected images in accordance with the first position as a selected image.
The image selecting section 103, for example, calculates a contrast difference in a subarea including the first position for each corrected image and selects one of the corrected images that has a maximum contrast difference as a selected image. Contrast difference in a subarea can be calculated by subtracting a minimum pixel value in the subarea from a maximum pixel value in the subarea.
Step S0104 may further include the following substep:
(a) a step in which the second position receiving section 104 further receives an input of a third position in the selected image, the third position differing from the second position (substep S01041).
The second position receiving section 104 may further receive a designation of a third position in the selected image 210a′ via the input device 12, the third position differing from the second position.
For example, in response to the user designating, via the input device 12, a point 213 (third position) in a partial image of the selected image 210a′ displayed as the scaled-up display image 111 on the display device 11, the point 213 differing from the point 211a′ (second position) (see (b) of
The captured image 210a as the selected image 210a′ (see (b) of
As described earlier, if a plurality of images is captured of the same subject with different exposure settings in order to generate a synthesized image with a wide dynamic range, discrepancies occur between the captured images because the images are captured at different times and the image capturing device or the subject may move during the image capturing period. The synthesized image generated from such captured images can therefore appear as a double image, which results in a failure to accurately designate the position of a measurement point. In view of this problem, the position designation device 1 uses a synthesized image in such a manner that the user can explore the synthesized image for a desired position of a measurement point and designate a measurement point (second position) in an image selected in accordance with the position (first position) found by the user in the synthesized image. The position designation device 1 therefore does not need to acquire a high-quality synthesized image generated by considering the positional deviations of the subject. The position designation device 1 is also free from the problem that the position of a measurement point cannot be accurately designated in a synthesized image that appears as a double image. In addition, since a position (e.g., third or fourth position) that differs from the second position is designated in a single selected image as shown in (b) of
The following will specifically describe a position designation device 1 in accordance with a second embodiment of the present invention in reference to drawings.
Throughout the following description about the position designation device 1 in accordance with the second embodiment of the present invention, the “plurality of captured images of the same subject” is a plurality of images of the same subject captured with different focal point location settings, and the “synthesized image generated from captured images” is a synthesized image with an extended depth of field acquired through synthesis from a plurality of images of the same subject captured with different focal point location settings, as an example.
See the description under the heading, “1. Embodiment 1,” for a general description of the position designation device 1 in accordance with the present embodiment. Differences exist only in that the position designation device 1 acquires a “plurality of captured images” of the same subject captured with different focal point location settings and that the “synthesized image” is an image with an extended depth of field synthesized from these captured images.
The following will describe an example position designation process performed by the position designation device 1 in reference to
The image acquisition section 101 in the position designation device 1 in accordance with the present embodiment acquires a plurality of images of the same subject captured with different focal point location settings and a synthesized image generated from these captured images with an extended depth of field.
Next, the first position receiving section 102 causes the display device 11 to display the synthesized image and receives a designation of a position (first position) in the synthesized image from the user (operator) via the input device 12.
A synthesized image 220 has an increased depth of field as shown in (a) of
Meanwhile, (b) and (c) of
In the captured image 220a in (b) of
Meanwhile, in the captured image 220b shown in (c) of
For these reasons, by causing the display device 11 to display the synthesized image 220 as in (a) of
The first position receiving section 102 may receive an input of a desired position for a point in the synthesized image 220 or for a region in the synthesized image 220.
Next, the image selecting section 103 selects, as a selected image, one of the captured images in accordance with the first position received in step S0102.
The image selecting section 103 may select an image, for example, by calculating a focusing level of a subarea including the first position for each captured image and selecting one of the captured images that has a maximum focusing level as the selected image. The “subarea including the first position” refers to a subarea of a captured image that includes a position (first-position-equivalent position) in the captured image corresponding to the first position. The size of the subarea is not limited in any particular manner and may be, for example, a rectangular region containing a predetermined number of pixels. The selected image prepared in this manner includes an increased number of well-focused subjects in the subarea including the first position.
More specifically, as the first position receiving section 102 receives an input of the point 221 in the synthesized image 220 as the first position (see (a) of
The image selecting section 103, in an aspect, evaluates the “focusing level” by means of contrast difference in a predetermined region around each pixel. The image selecting section 103 determines that the focusing level is higher if the region has a greater contrast difference. Accordingly, the “one of the captured images that has a maximum focusing level” is one of the captured images that has a subarea having a maximum contrast difference.
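The focusing-level evaluation described above can be sketched as follows (a minimal illustration assuming grayscale NumPy arrays; the subarea half-width `half` and window size `win` are assumed parameters). The focusing level of a subarea is taken here as the mean contrast difference over small windows around its pixels, since a well-focused area tends to show larger local contrast differences:

```python
import numpy as np

def focusing_level(img, center, half=8, win=3):
    # Mean contrast difference (max minus min pixel value) over win x win
    # windows around each pixel of the subarea including the first position.
    r, c = center
    sub = np.asarray(img, dtype=np.float64)[max(r - half, 0):r + half + 1,
                                            max(c - half, 0):c + half + 1]
    pad = win // 2
    padded = np.pad(sub, pad, mode="edge")
    w = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    return float((w.max(axis=(2, 3)) - w.min(axis=(2, 3))).mean())

def select_best_focused(images, center, half=8):
    # Index of the captured image with the maximum focusing level.
    return max(range(len(images)),
               key=lambda i: focusing_level(images[i], center, half))
```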
Next, the second position receiving section 104 causes the display device 11 to display at least a part of the captured image 220a as a selected image and receives a designation of a position (second position) in the selected image from the user via the input device 12.
The display device 11 displays the selected image as described earlier under the heading, “1. Embodiment 1.”
The position designation device 1 can improve visibility for the user searching an image for a position that the user wants to designate, by displaying the entire synthesized image that has an extended depth of field. Furthermore, the position designation device 1 can select a captured image that has a maximum focusing level as a selected image in accordance with the position (first position) designated by the user in the synthesized image, so that the user can designate a position (second position) in the selected image. Therefore, the user can accurately designate a position that the user wants to designate in an image. If a plurality of images is captured with different focal point location settings, there may occur positional deviations of the subject between the captured images because the angle of view could change with the changing focal point location. The synthesized image with an extended depth of field generated from images captured with different focal point location settings can therefore appear as a double image even when the position designation device 1 is not shaken and the subject stays still during image capture.
The position designation device 1 uses a synthesized image in such a manner that the user can explore the synthesized image for a desired position of a measurement point and designate a measurement point in an image selected in accordance with the position (first position) found by the user in the synthesized image. The position designation device 1 therefore does not need to acquire a high-quality synthesized image generated by considering the positional deviations of the subject. Even when the synthesized image appears as a double image, since the measurement point is designated in the selected image, the position designation device 1 is free from the problem that the position of a measurement point cannot be accurately designated in such a synthesized image.
The variation examples of the position designation device 1 described earlier under the heading, “1. Embodiment 1,” are also applicable to the position designation device 1 in accordance with the present embodiment.
Throughout the present embodiment, the “plurality of captured images of the same subject” is, as an example, a plurality of images of the same subject captured with different focal point location settings, and the “synthesized image generated from captured images” is, as an example, a synthesized image obtained by combining the plurality of images of the same subject captured with different focal point location settings in such a manner as to extend the depth of field of the resultant synthesized image. These examples may be combined with the exposure settings described earlier under the heading, “1. Embodiment 1.” Specifically, the “plurality of captured images of the same subject” may be a plurality of images of the same subject captured with different focal point location settings and different exposure level settings, and the “synthesized image generated from captured images” may be a synthesized image obtained by combining the plurality of images of the same subject captured with different focal point location settings and different exposure level settings in such a manner as to extend the dynamic range and depth of field of the resultant synthesized image.
When these images are used, an image may be selected as a selected image, for example, by calculating the grayscale saturation level and the focusing level of a subarea including the first position for each captured image and selecting one of the captured images that has a minimum grayscale saturation level and a maximum focusing level. The “subarea including the first position” refers to a subarea of a captured image that includes a position (first-position-equivalent position) in the captured image corresponding to the first position. The size of the subarea is not limited in any particular manner.
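By way of a non-limiting illustration, the selection rule described above may be sketched in Python as follows; the clipped-pixel saturation measure, the contrast-based focus measure, and the function names are assumptions for illustration only and are not part of the embodiment.

```python
def saturation_level(patch, low=0, high=255):
    """Fraction of pixels in the subarea whose gray level is clipped."""
    clipped = sum(1 for p in patch if p <= low or p >= high)
    return clipped / len(patch)

def focusing_level(patch):
    """Simple focus measure: gray-level contrast (max - min) in the subarea."""
    return max(patch) - min(patch)

def select_image(patches):
    """Return the index of the captured image whose subarea around the
    first-position-equivalent position has the minimum saturation level
    and, among equally saturated candidates, the maximum focusing level."""
    return min(range(len(patches)),
               key=lambda i: (saturation_level(patches[i]),
                              -focusing_level(patches[i])))
```

For example, given one fully saturated subarea, one well-exposed high-contrast subarea, and one flat subarea, the rule selects the well-exposed, high-contrast candidate.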
The following will specifically describe a measuring instrument 100 (position designation device) in accordance with a third embodiment of the present invention in reference to drawings.
Referring to
The control section 10 controls the overall operation of the measuring instrument 100 and serves as an image acquisition section 101, a first position receiving section 102, an image selecting section 103, a second position receiving section 104, and a measuring section 105. The processes implemented by the control section 10 will be described later in detail.
The measuring instrument 100 acquires a plurality of captured images of the same subject (which hereinafter may be referred to simply as “captured images”) and a synthesized image generated from these captured images (which hereinafter may be referred to simply as a “synthesized image”). The measuring instrument 100 then causes the display device 11 to display the synthesized image and receives a designation of a position (first position) in the synthesized image from the user (operator) via the input device 12. Next, the measuring instrument 100 selects as a selected image one of the captured images in accordance with the received first position. Subsequently, the measuring instrument 100 causes the display device 11 to display at least a part of the selected image and receives a designation of a position (second position) in the selected image from the user via the input device 12. The measuring instrument 100 then acquires depth information associated with the selected image and calculates the three-dimensional position (coordinates) of a position on the subject, the latter position corresponding to the second position in the selected image. This configuration enables the user to measure the three-dimensional position (coordinates) of a desired measurement point in the captured image.
Since the control section 10 of the measuring instrument 100 serves as the image acquisition section 101, the first position receiving section 102, the image selecting section 103, and the second position receiving section 104 as described earlier under the headings, “1. Embodiment 1” and “2. Embodiment 2,” the user can accurately and easily designate a desired measurement point in an image. By referring to the depth information associated with the selected image on the basis of the position information (second position information) acquired in this manner in the selected image, the measuring instrument 100 can accurately and in an error-reduced manner measure the three-dimensional position (coordinates) of a position (measurement point) on the subject, the latter position corresponding to the second position in the selected image.
The following will describe an example measuring process performed by the measuring instrument 100 in reference to
Steps S0101 to S0104 have been described earlier under the headings, “1. Embodiment 1” and “2. Embodiment 2.” Description thereof is not repeated here.
The measuring section 105 receives depth information. The depth information received by the measuring section 105 is associated with the selected image (in other words, the captured image used in the designation of the second position).
The measuring section 105 calculates the three-dimensional position (coordinates) of a position (measurement point) on the subject, the latter position corresponding to the second position in the selected image, in reference to the second position information and the depth information. The measuring section 105 outputs results of the measurement for storage in a storage device (not shown) and/or for a display on the display device 11.
Depth information may be acquired, for example, by using a pair of stereoscopic images, by measuring the reflection time of ultrasound to calculate distance, by an infrared-light-based TOF (time of flight) method, or by emitting patterned light to calculate distance. Depth information may be combined with a camera parameter such as the focal length of an image, in order to calculate three-dimensional information for a space surrounding the measurement point.
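As one hedged illustration of combining depth information with camera parameters, the following Python sketch back-projects a designated pixel into camera-space three-dimensional coordinates under a pinhole model; the intrinsic parameters (fx, fy, cx, cy) are hypothetical and would in practice come from calibration.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a designated pixel (u, v) with depth z into camera-space
    3-D coordinates under the pinhole model. fx, fy are focal lengths in
    pixels; (cx, cy) is the principal point (illustrative assumptions)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```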
The measuring instrument 100 allows the user to designate the first position in a synthesized image, thereby improving visibility for the user to find the first position. In addition, the measuring instrument 100 can select one of captured images as a selected image in accordance with the position (first position) designated by the user in the synthesized image, so that the user can designate a position (second position) in the selected image. Therefore, the user can accurately designate a position that the user wants to designate in an image. As a result, the measuring instrument 100 is capable of acquiring more accurate three-dimensional information on a desired position.
The variation examples of the position designation device 1 described earlier under the headings, “1. Embodiment 1” and “2. Embodiment 2,” are also applicable to the measuring instrument 100.
The following will specifically describe a measuring instrument 100 (position designation device) in accordance with a fourth embodiment of the present invention in reference to drawings.
The control section 10 controls the overall operation of the measuring instrument 100 and serves as an image acquisition section 101, a first position receiving section 102, an image selecting section 103, a second position receiving section 104, and a measuring section 105. The processes implemented by the control section 10 will be described later in detail.
The measuring instrument 100 acquires a plurality of captured images of the same subject (which hereinafter may be referred to simply as “captured images”) and a synthesized image generated from these captured images (which hereinafter may be referred to simply as a “synthesized image”). The measuring instrument 100 then causes the display device 11 to display the synthesized image and receives a designation of a position (first position) in the synthesized image from the user (operator) via the input device 12. Next, the measuring instrument 100 selects as a selected image one of the captured images in accordance with the received first position. Subsequently, the measuring instrument 100 causes the display device 11 to display at least a part of the selected image and receives a designation of a position (second position) in the selected image from the user via the input device 12. The measuring instrument 100 then further acquires a reference image associated with the selected image and calculates, in reference to the reference image, the three-dimensional position (coordinates) of a position on the subject, the latter position corresponding to the second position in the selected image.
Since the control section 10 of the measuring instrument 100 serves as the image acquisition section 101, the first position receiving section 102, the image selecting section 103, and the second position receiving section 104 as described earlier under the headings, “1. Embodiment 1” and “2. Embodiment 2,” the user can accurately and easily designate a desired measurement point in an image. By referring to a reference image associated with the selected image on the basis of the position information (second position information) acquired in this manner in the selected image, the measuring instrument 100 can accurately and in an error-reduced manner calculate the three-dimensional position (coordinates) of a position (measurement point) on the subject, the latter position corresponding to the second position in the selected image.
The following will describe an example measuring process performed by the measuring instrument 100 in reference to
Steps S0101 to S0104 have been described earlier under the headings, “1. Embodiment 1” and “2. Embodiment 2.” Description thereof is not repeated here.
The measuring section 105 acquires a reference image associated with the selected image. A reference image is an image captured of the same subject as the selected image, but from a different image capturing position, and provides a pair of stereoscopic images when combined with the selected image. The measuring section 105, in an aspect, may acquire in advance reference images each associated with a different one of the captured images and select, as the reference image to be used, the one of the reference images that is associated with the selected image.
The measuring section 105 calculates depth information associated with the second position in reference to the reference image associated with the selected image and calculates the three-dimensional position (coordinates) of a position (measurement point) on the subject, the latter position corresponding to the second position in the selected image, in reference to the second position information and the depth information. The measuring section 105 outputs results of the measurement for storage in a storage device (not shown) and/or for a display on the display device 11.
Depth information may be calculated, for example, by triangulation or block matching.
A description will now be given of a triangulation-based method as an example method of calculating depth information using a pair of stereoscopic images consisting of a selected image and a reference image associated with the selected image in reference to
Referring to
A plurality of search points 42 (e.g., search points 42a, 42b, and 42c) is then specified on the straight line 41. Search straight lines 43 (e.g., search straight lines 43a, 43b, and 43c) are also specified each passing through a focal point 48 of the reference image capturing section 6 and a different one of the search points 42. Each corresponding search point in a captured image (not shown) taken by the reference image capturing section 6 is compared with the corresponding measuring point α′ in the captured image 50 taken by the base image capturing section 4, to detect one of the corresponding search points where the same object is captured in the image. The search point on the straight line 41 that corresponds to this corresponding search point (in this case, the search point 42c) is the three-dimensional coordinates 44 of the measurement point α on the subject E. The three-dimensional coordinates 44 of the measurement point α on the subject E can be calculated relative to the base image capturing section 4 and the reference image capturing section 6, from the positional relationship of three straight lines: namely, the straight line 41, a base line 46 that is a straight line passing through the focal point 47 of the base image capturing section 4 and the focal point 48 of the reference image capturing section 6, and a straight line 45 passing through the focal point 48 and the three-dimensional coordinates 44 of the measurement point α on the subject E.
Specifically, a distance β along the base line 46 between the focal point 47 and the focal point 48 is measured in advance. An angle θ1 between the straight line 41 and the base line 46 and an angle θ2 between the straight line 45 and the base line 46 are then calculated. The three-dimensional coordinates 44 of the measurement point α on the subject E can be calculated from the distance β, the angle θ1, and the angle θ2 by relying on congruence of triangles.
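The calculation above may be illustrated by the following Python sketch, which, as an assumed coordinate convention for illustration only, places the focal point 47 at the origin and the focal point 48 at (β, 0) on the base line 46 and intersects the two lines of sight.

```python
import math

def triangulate(beta, theta1, theta2):
    """Locate the measurement point from the base-line length beta and the
    angles theta1, theta2 (radians) that the two lines of sight make with
    the base line. Focal point 47 is at the origin; focal point 48 is at
    (beta, 0). Returns (x, y): x along the base line, y perpendicular to it."""
    t1, t2 = math.tan(theta1), math.tan(theta2)
    x = beta * t2 / (t1 + t2)   # intersection of y = x*t1 and y = (beta - x)*t2
    y = x * t1                  # distance from the base line (depth)
    return (x, y)
```

For instance, with β = 2 and θ1 = θ2 = 45°, the point lies midway along the base line at unit depth.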
The angle θ1 and the angle θ2 can be calculated using a camera projection model, which will now be described in detail in reference to
An image capturing section is composed of an imaging device, lenses, and other components. The image capturing section may be understood to record an image of a subject at coordinates on a projection face at which a straight line linking a focal point and a subject to be imaged intersects the projection face. Therefore, as shown in
Accordingly, a sensor pixel pitch and a positional relationship between a sensor and a focal point are measured in advance in the base image capturing section 4 and the reference image capturing section 6 as shown in
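As a minimal sketch of this projection model, under the illustrative assumptions of a one-dimensional pixel index and a sensor perpendicular to the optical axis, the angle of a line of sight relative to the optical axis may be computed from the pixel pitch and the focal-point-to-sensor distance as follows.

```python
import math

def sight_line_angle(pixel_index, center_index, pixel_pitch, focal_length):
    """Angle (radians) between the line of sight through a sensor pixel and
    the optical axis, computed from the pixel's offset on the sensor
    (index difference times pixel pitch) and the focal length. Both
    lengths must be in the same unit (e.g., millimeters)."""
    offset = (pixel_index - center_index) * pixel_pitch
    return math.atan2(offset, focal_length)
```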
When images are captured of the same subject with different focal point locations, the positional relationship between the focal point 47 and the projection face differs from one focal point location to another. Accordingly, the positional relationship between the focal point 47 and the projection face is measured in advance for each focal point location in image capturing. Then, the relative position of the straight line 41 can be calculated, which in turn enables calculation of the three-dimensional coordinates 44 of the measurement point α on the subject E.
Next, a description will be given of a block-matching-based method in reference to
The similarity level can be evaluated by means of an evaluation function such as SAD (sum of absolute differences) or SSD (sum of squared differences). An SAD value is calculated as follows. Letting x5 represent a pixel corresponding to the image coordinates 49 of the corresponding measuring point α′ and x5′ represent a pixel corresponding to the image coordinates 54 of the corresponding search point, Equation (1) below is evaluated using the pixel values of the 3×3 pixels surrounding and including x5 or x5′ shown in
If the same object appears at the image coordinates 49 of the corresponding measuring point α′ and at the image coordinates 54 of the corresponding search point, the pixels have close pixel values, which decreases the SAD value. Hence, by selecting one of the search points 42 specified on the straight line 41 (
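The SAD-based detection may be sketched in Python as follows; representing each 3×3 neighborhood as a flattened list of nine gray levels is an illustrative assumption.

```python
def sad(base_block, ref_block):
    """Sum of absolute differences over two equal-size pixel blocks,
    e.g., the 3x3 neighborhoods around x5 and x5' (Equation (1))."""
    return sum(abs(a - b) for a, b in zip(base_block, ref_block))

def best_search_point(base_block, candidate_blocks):
    """Return the index of the corresponding search point whose block
    yields the minimum SAD value, i.e., the most similar block."""
    return min(range(len(candidate_blocks)),
               key=lambda i: sad(base_block, candidate_blocks[i]))
```

A candidate block identical to the base block yields an SAD value of zero and is therefore selected.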
The pair of images captured by the base image capturing section 4 and the reference image capturing section 6 for use as a comparison target is preferably captured in a synchronized manner because the images will have no discrepancy that is attributable to the shaking of the subject and the position designation device 1. Additionally, this synchronized image capturing is preferably performed with the same exposure settings because the results of the SAD-value-based detection become more reliable. The pair of a base image and a reference image used as a comparison target preferably has a greater contrast difference than the other pairs of a base image and a reference image because such a pair is more likely to produce an SAD value difference and lead to improved precision in the calculation of the three-dimensional coordinates 44 of the measurement point α. For example, since an image containing many saturated regions contains many similar regions, errors in the detection can be made less likely to occur by using an image containing few saturated regions as a comparison target.
The variation examples of the position designation device 1 described earlier under the headings, “1. Embodiment 1” and “2. Embodiment 2,” are also applicable to the measuring instrument 100.
Meanwhile, the measuring section 105 may acquire second position information and a pair of stereoscopic images (i.e., a selected image and a reference image associated with the selected image).
Step S0104 may further include the following substep:
(a) a step in which the second position receiving section 104 further receives an input of a third position in the selected image, the third position differing from the second position (substep S01041).
The third position, which differs from the second position, is designated in the selected image as described earlier under the heading, “1. Embodiment 1.”
Step S0106 may further include the following substep:
(a) a step in which the measuring section 105, in reference to a reference image associated with the selected image in which depth information has been calculated for the second position, measures the three-dimensional position (coordinates) of the position (measurement point) on the subject, the latter position corresponding to the third position in the selected image (substep S01061).
The third position, which differs from the second position, is designated in the same selected image as the selected image in which the second position is designated, and a distance between two points is calculated in the single selected image. This configuration can reduce measurement errors that are attributable to movements of the subject, which in turn enables high-precision calculation of a distance between two points. If a fourth position and a fifth position are further designated in the same selected image, and the three-dimensional positions (coordinates) of the positions (measurement points) on the subject, the latter positions corresponding to the fourth position and the fifth position in the selected image, are measured, it becomes possible to calculate an area and other measurements with high precision.
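Once the three-dimensional coordinates of the two measurement points have been obtained, the two-point distance referred to above reduces to a Euclidean distance, as the following minimal Python sketch illustrates.

```python
import math

def point_distance(p, q):
    """Euclidean distance between the 3-D coordinates of two measurement
    points, e.g., those corresponding to the second and third positions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
```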
Depth information is calculated using a pair of stereoscopic images (specifically, a captured image and a reference image) as an example in the present embodiment. Alternatively, a set of images of the same subject captured from three or more different positions (specifically, a combination of a captured image and two or more reference images) may be used in place of a pair of stereoscopic images, which still produces similar effects.
The following will specifically describe a measuring instrument 100 (position designation device) in accordance with a fifth embodiment of the present invention in reference to drawings.
The control section 10 controls the overall operation of the measuring instrument 100 and serves as an image acquisition section 101, a first position receiving section 102, an image selecting section 103, a second position receiving section 104, and a measuring section 105. The processes implemented by the control section 10 will be described later in detail.
The image capturing device 8 includes a base image capturing section 4 (image capturing section), a control section 5, a reference image capturing section 6 (image capturing section), and a synthesis processing section 7 (synthesizing section). The base image capturing section 4 and the reference image capturing section 6 may be built around lenses and imaging devices such as CCDs (charge coupled devices).
The image capturing device 8 in the measuring instrument 100 captures a plurality of images of the same subject under a plurality of predetermined image capturing conditions (e.g., exposure level and focal point location). Next, the image capturing device 8 acquires a plurality of captured images of the same subject and generates a synthesized image from these captured images. The measuring instrument 100 then acquires the captured images of the same subject and the synthesized image. Subsequently, the measuring instrument 100 causes the display device 11 to display the synthesized image and receives a designation of a position (first position) in the synthesized image from the user (operator) via the input device 12. Next, the measuring instrument 100 selects as a selected image one of the captured images in accordance with the received first position. Subsequently, the measuring instrument 100 causes the display device 11 to display at least a part of the selected image and receives a designation of a position (second position) in the selected image from the user via the input device 12. The measuring instrument 100 then further acquires reference images associated respectively with the captured images and calculates, in reference to one of the reference images associated with the selected image, the three-dimensional position (coordinates) of a position on the subject, the latter position corresponding to the second position in the selected image.
Since the control section 10 of the measuring instrument 100 serves as the image acquisition section 101, the first position receiving section 102, the image selecting section 103, and the second position receiving section 104 as described earlier under the headings, “1. Embodiment 1” and “2. Embodiment 2,” the user can accurately and easily designate a desired measurement point in an image. By referring to a reference image associated with the selected image on the basis of the position information (second position information) acquired in this manner in the selected image, the measuring instrument 100 can accurately and in an error-reduced manner calculate the three-dimensional position (coordinates) of a position (measurement point) on the subject, the latter position corresponding to the second position in the selected image.
The following will describe an example measuring process performed by the measuring instrument 100 in reference to
Prior to step S0101 shown in
The base image capturing section 4 and the reference image capturing section 6 capture a plurality of images of the same subject under a plurality of predetermined image capturing conditions (e.g., exposure level and focal point location). In the present embodiment, the base image capturing section 4 and the reference image capturing section 6 capture images with different focal point location settings.
The control section 5 controls shutter timings for the base image capturing section 4 and the reference image capturing section 6 and also controls the diaphragm, sensor sensitivity, shutter speed, focal point location, and other image capturing settings of the base image capturing section 4 and the reference image capturing section 6. The control section 5, upon receiving an input signal from, for example, a shutter button (not shown), controls the base image capturing section 4 and the reference image capturing section 6 to capture a plurality of images under a plurality of predetermined image capturing conditions (e.g., exposure level and focal point location).
The control section 5 controls the base image capturing section 4 and the reference image capturing section 6 to close their shutters approximately simultaneously in order to capture each pair of images in a synchronized manner. If the predetermined image capturing conditions include different focal point locations, a common focal point location setting is used in both the base image capturing section 4 and the reference image capturing section 6, and the setting is varied for each pair of synchronized images, such that the base image capturing section 4 and the reference image capturing section 6 are separated from the subject to be focused on by substantially the same distance, in other words, such that the base image capturing section 4 and the reference image capturing section 6 can focus on the same subject. The reference image capturing section 6 is located to the right of the base image capturing section 4 in the present embodiment. Alternatively, the reference image capturing section 6 may be located to the left of, above, or below the base image capturing section 4. In addition, the present embodiment provides a single reference image capturing section 6, that is, a total of two image capturing sections. Alternatively, the present embodiment may provide two or more reference image capturing sections.
The base image capturing section 4 and the reference image capturing section 6 output captured images to the image acquisition section 101. The base image capturing section 4 also outputs captured images to the synthesis processing section 7. Image data is described in the present embodiment as being sequentially outputted to the synthesis processing section 7 and the image acquisition section 101. Alternatively, the captured images may be temporarily stored in a memory section (not shown) in the measuring instrument 100 so that the synthesis processing section 7 and the image acquisition section 101 can acquire the data from the memory section (not shown).
If the base image capturing section 4 has captured images of the same subject with different focal point location settings, the synthesis processing section 7 aligns the captured images that have different focal point locations, evaluates the focusing level of each pixel, and weighted-averages the pixel values of image pixels that have high focusing levels to generate a synthesized image with an extended depth of field.
The aligning section 71 aligns the captured images that have been acquired from the base image capturing section 4 and that have different focal point locations. As mentioned above, since the angle of view may differ from one image to another, the aligning section 71 compares the coordinates of a feature point in a pair of images and adjusts the coordinates so that the feature point has the same coordinates in the pair of images. This configuration can place the subject at substantially the same position in the pair of images.
Subsequently, the focusing level evaluating section 72 evaluates the focusing level of each pixel in each image by means of contrast difference in a predetermined region around each pixel. The focusing level evaluating section 72 determines that the focusing level is higher if the region has a greater contrast difference.
Next, the image synthesizing section 73 synthesizes an image on the basis of the evaluation of the focusing levels. Images can be synthesized by a common synthesis method. The pixel value of each pixel in the synthesized image can be calculated using Equation (2) below, where N represents the number of captured images, pi represents the pixel value of a pixel under consideration in a captured image i, ci represents the focusing level of the pixel under consideration, and paf represents the pixel value of the pixel under consideration in the synthesized image.
An image can be synthesized with an extended depth of field by focusing-level-weighted averaging the pixel values in the captured images to calculate the pixel value of each pixel. The resultant synthesized image is outputted to the image acquisition section 101 from the synthesis processing section 7 (image synthesizing section 73).
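The focusing-level-weighted averaging described above may be sketched in Python as follows; the 3×3 contrast focus measure, the row-major grayscale list representation, and the equal-weight fallback for regions where every image is flat are illustrative assumptions, not requirements of the embodiment.

```python
def focus_measure(img, w, h, x, y):
    """Focusing level c_i: local contrast (max - min gray level) in the
    3x3 region around pixel (x, y) of a row-major grayscale image."""
    vals = [img[cy * w + cx]
            for cy in range(max(0, y - 1), min(h, y + 2))
            for cx in range(max(0, x - 1), min(w, x + 2))]
    return max(vals) - min(vals)

def synthesize(images, w, h):
    """Per-pixel focusing-level-weighted average over N aligned captured
    images: p_af = sum_i(c_i * p_i) / sum_i(c_i)."""
    out = []
    for y in range(h):
        for x in range(w):
            idx = y * w + x
            cs = [focus_measure(img, w, h, x, y) for img in images]
            total = sum(cs)
            if total == 0:  # every image equally flat here: plain average
                out.append(sum(img[idx] for img in images) / len(images))
            else:
                out.append(sum(c * img[idx]
                               for c, img in zip(cs, images)) / total)
    return out
```

In a region where one captured image has high contrast and another is flat, the synthesized pixel is taken almost entirely from the high-contrast (well-focused) image.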
Steps S0101 to S0104 have been described earlier under the heading, “2. Embodiment 2.” Description thereof is not repeated here.
Steps S0205 and S0106 have been described earlier under the heading, “4. Embodiment 4.” Description thereof is not repeated here.
The variation examples of the position designation device 1 described earlier under the headings, “1. Embodiment 1” and “2. Embodiment 2,” are also applicable to the measuring instrument 100. The variation examples of the measuring instrument 100 described earlier under the headings, “3. Embodiment 3” and “4. Embodiment 4,” are also applicable to the measuring instrument 100 in accordance with the present embodiment.
The base image capturing section 4 and the reference image capturing section 6 have been described in the present embodiment as capturing images with different focal point location settings. Alternatively, the base image capturing section 4 and the reference image capturing section 6 may capture images with different exposure level settings.
In this alternative example, a common exposure level setting is used in step S0201 in both the base image capturing section 4 and the reference image capturing section 6, and the setting is varied for each pair of synchronized images.
In addition, in step S0204, the image synthesizing section 73 aligns the captured images that have different exposure levels, weighted-averages image pixels that have suitable gray levels in each pixel, and adjusts the gray level by considering the exposure level differences across the images, in order to generate a synthesized image with an extended dynamic range. If the synthesized image has an excessive bit count, the synthesized image may be subjected to, for example, tone mapping for gray level conversion to a desired bit count.
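The gray-level-suitability weighting and exposure adjustment described above may be sketched as follows; the well-exposedness weight, the per-image relative-gain normalization, and the clipped-pixel fallback are illustrative assumptions rather than the method of the embodiment.

```python
def well_exposedness(p, low=20, high=235):
    """Weight favoring mid-gray pixels; clipped (saturated) pixels get 0."""
    if p <= low or p >= high:
        return 0.0
    mid = (low + high) / 2.0
    return 1.0 - abs(p - mid) / (mid - low)

def fuse_exposures(images, gains):
    """Weighted-average pixels with suitable gray levels across aligned
    exposures. Each pixel is first divided by its image's relative
    exposure gain so that gray levels are comparable, yielding an
    extended-dynamic-range (linear) result."""
    out = []
    for pixels in zip(*images):
        ws = [well_exposedness(p) for p in pixels]
        total = sum(ws)
        if total == 0:  # every exposure clipped here: fall back to average
            out.append(sum(p / g for p, g in zip(pixels, gains)) / len(pixels))
        else:
            out.append(sum(w * p / g
                           for w, p, g in zip(ws, pixels, gains)) / total)
    return out
```

Where the longer exposure is saturated, the fused value comes entirely from the shorter exposure.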
The measuring instrument 100 not only calculates a distance from the measuring instrument 100 (specifically, the focal point of the base image capturing section 4) to the three-dimensional position of the measurement point, but is also capable of, as an example, calculating a distance between two points. The following will describe how this additional function is implemented.
The following will specifically describe a measuring instrument 100 (position designation device) in accordance with a sixth embodiment of the present invention in reference to drawings.
The control section 10 controls the overall operation of the measuring instrument 100 and serves as an image acquisition section 101, a first position receiving section 102, an image selecting section 103, a second position receiving section 104, and a measuring section 105. The processes implemented by the control section 10 will be described later in detail.
The image capturing device 8 includes a base image capturing section 4 (image capturing section), a control section 5, a reference image capturing section 6 (image capturing section), and a synthesis processing section 7 (synthesizing section).
The image capturing device 8 in the measuring instrument 100 captures a plurality of images of the same subject under a plurality of predetermined image capturing conditions (e.g., exposure level and focal point location). Next, the image capturing device 8 acquires a plurality of captured images of the same subject and generates a synthesized image from these captured images. The measuring instrument 100 then acquires the captured images of the same subject and the synthesized image. Subsequently, the measuring instrument 100 causes the display device 11 to display the synthesized image and receives a designation of a position (first position) in the synthesized image from the user (operator) via the input device 12. Next, the measuring instrument 100 selects as a selected image one of the captured images in accordance with the received first position. Subsequently, the measuring instrument 100 causes the display device 11 to display at least a part of the selected image and receives a designation of a position (second position) in the selected image from the user via the input device 12. The measuring instrument 100 then receives a designation of a third position in the selected image from the user via the input device 12, the third position differing from the second position, and re-selects the selected image in accordance with the third position. Next, the measuring instrument 100 causes the display device 11 to display at least a part of the re-selected image and re-receives a designation of the second and third positions in the selected image from the user via the input device 12.
The measuring instrument 100 then further acquires reference images associated respectively with the captured images and calculates, in reference to one of the reference images associated with the re-selected image, the three-dimensional position (coordinates) of a position on the subject, the latter position corresponding to the second position in the re-selected image, and the three-dimensional position (coordinates) of a position on the subject, the latter position corresponding to the third position in the re-selected image.
Since the control section 10 of the measuring instrument 100 serves as the image acquisition section 101, the first position receiving section 102, the image selecting section 103, and the second position receiving section 104 as described earlier under the headings, “1. Embodiment 1” and “2. Embodiment 2,” the user can accurately and easily designate a desired measurement point in an image. By re-selecting the selected image in accordance with the third position, the measuring instrument 100 enables the user to accurately and easily designate a plurality of measurement points in the image. By referring to the reference image associated with the re-selected image on the basis of information acquired in this manner on positions in the re-selected image (second position information and third position information), the measuring instrument 100 can accurately and in an error-reduced manner calculate the three-dimensional position (coordinates) of a position on the subject, the latter position corresponding to the second position in the re-selected image, and the three-dimensional position (coordinates) of a position on the subject, the latter position corresponding to the third position in the re-selected image.
The following will describe an example measuring process performed by the measuring instrument 100.
Steps S0201 to S0204 have been described earlier under the heading, “5. Embodiment 5.” Description thereof is not repeated here.
Steps S0101 to S0103 have been described earlier under the headings, “1. Embodiment 1” and “2. Embodiment 2.” Description thereof is not repeated here.
The second position receiving section 104 causes the display device 11 to display at least a part of the captured image as a selected image and receives a designation of a position (second position) in the selected image from the user via the input device 12. The second position receiving section 104 then further receives an input of a third position in the selected image via the input device 12, the third position differing from the second position.
Specifically, the second position receiving section 104 receives an input of the positions of two measurement points (second and third positions) via the input device 12 in order to calculate a distance between the two measurement points. The positions of measurement points are designated as described earlier under the heading, “1. Embodiment 1.” When the second measurement point (third position) is designated, since the third position is designated in the same selected image in which the second position was previously designated, the two measurement points whose distance is to be measured are contained in a simultaneously captured image. That enables precise calculation of the distance between the two points.
If images are captured, for example, with different exposure level settings in this situation, the selected image in which the first measurement point (second position) was previously designated is in either one of the following states:
(1) No grayscale saturation is occurring at the measurement point (third position) that the user wants to designate second.
(2) Grayscale has saturated at the measurement point (third position) that the user wants to designate second, and the measurement point thereby appears either black or white.
In state (1), the second position receiving section 104 need only receive an input of a position of the second measurement point (third position) while causing the display device 11 to display the selected image used in the designation of the first measurement point (second position). Meanwhile, in state (2), the image selecting section 103 re-selects a selected image, and the second position receiving section 104 re-receives an input of the positions of the first and second measurement points while causing the display device 11 to display the re-selected image. In other words, step S0104 includes the substeps described below.
The second position receiving section 104 receives a designation of a third position in the selected image from the user via the input device 12, the third position differing from the second position.
The image selecting section 103 determines whether or not the selected image needs to be re-selected in accordance with the third position in the selected image. If the image selecting section 103 has determined that the selected image needs to be re-selected, the image selecting section 103 performs a step of re-selecting the selected image. On the other hand, if the image selecting section 103 has determined that the selected image does not need to be re-selected, the process of receiving the third position is ended.
The image selecting section 103, as an example, automatically determines, from the edge intensity of the scaled-up display image 111, whether or not grayscale saturation is occurring at the third position in the selected image. If the image selecting section 103 determines that no grayscale saturation is occurring at the third position in the selected image, the image selecting section 103 determines that the selected image does not need to be re-selected. On the other hand, if the image selecting section 103 determines that grayscale saturation is occurring at the third position in the selected image, the image selecting section 103 determines that the selected image needs to be re-selected.
A description will now be given of how edge intensity is calculated.
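As a non-authoritative sketch of one common approach (the specification's own formula is not reproduced in this excerpt), edge intensity can be taken as the mean gradient magnitude of the displayed image; a region where grayscale has saturated is flat, so its gradients, and hence its edge intensity, fall toward zero. The function names, the central-difference gradient, and the threshold below are all assumptions:

```python
import numpy as np

def edge_intensity(gray):
    """Mean gradient magnitude of a grayscale image (pixel values 0-255).

    Hypothetical sketch: a central-difference gradient is used here as
    one common choice; the specification may use a different operator.
    """
    g = gray.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    # Central differences approximate horizontal/vertical gradients.
    gx[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0
    gy[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0
    return float(np.hypot(gx, gy).mean())

def looks_saturated(patch, edge_threshold=1.0):
    """Treat a patch whose edge intensity falls below a threshold as
    potentially grayscale-saturated: a saturated region is flat, so its
    gradients vanish. The threshold value is an arbitrary assumption."""
    return edge_intensity(patch) < edge_threshold
```

A uniformly white (clipped) patch thus scores an edge intensity of zero and is flagged, while a patch with visible gradation is not.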
The image selecting section 103 may determine, in accordance with a result of detection of a user operation, whether or not the selected image needs to be re-selected. For example, if grayscale saturation (either black or white) is occurring at the measurement point α2 in the selected image 210a′ used in the designation of the measurement point α1, the user can perform a predetermined operation via the input device 12, in response to which the image selecting section 103 determines that the selected image needs to be re-selected.
The image selecting section 103 re-selects one of the captured images as a selected image in accordance with the third position.
The image selecting section 103 may select one of the captured images as a selected image, for example by calculating the grayscale saturation level of a subarea including the third position for each captured image and selecting one of the captured images that has a minimum grayscale saturation level as a re-selected image. The “subarea including the third position” refers to a subarea of a captured image that includes a position (third-position-equivalent position) in the captured image corresponding to the third position. The size of the subarea is not limited in any particular manner and may be, for example, a rectangular region containing a predetermined number of pixels.
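The selection just described can be sketched as follows. The subarea size (32 × 32 pixels) and the function names are assumptions, and the grayscale saturation level is modeled here as the fraction of clipped pixel values (0 or 255) within the subarea:

```python
import numpy as np

def saturation_level(img, pos, half=16):
    """Fraction of clipped pixels (0 or 255) in a subarea centred on
    `pos` = (row, col). The 32x32 subarea size is an arbitrary choice;
    the specification leaves the size open."""
    r, c = pos
    h, w = img.shape[:2]
    patch = img[max(0, r - half):min(h, r + half),
                max(0, c - half):min(w, c + half)]
    clipped = (patch <= 0) | (patch >= 255)
    return float(clipped.mean())

def reselect_image(captured_images, third_pos):
    """Re-select the captured image whose subarea around the
    third-position-equivalent position has the minimum saturation level."""
    return min(captured_images, key=lambda im: saturation_level(im, third_pos))
```

An image that is fully clipped around the third position scores 1.0 and loses to any image with unclipped grayscale there.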
More specifically, if grayscale saturation is occurring at the measurement point α2 in the selected image 210a′ used in the designation of the measurement point α1, the image selecting section 103 re-selects, from among the captured images, an image in which grayscale saturation occurs at neither the measurement point α1 nor the measurement point α2.
In addition, if the captured images have different focal point locations, an image is selected that is in focus (focused) at both the measurement point α1 and the measurement point α2. Whether or not an image is in focus (focused) at a measurement point can be determined through contrast: the image is determined to be in better focus (better focused) at the measurement point if the surroundings of the measurement point have a higher contrast. Therefore, when the captured images have different focal point locations, an image in which contrast is high at both the measurement point α1 and the measurement point α2 may similarly be selected from the captured images.
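A minimal sketch of this contrast-based selection, assuming local variance as the contrast measure and scoring each captured image by the smaller of its two local contrasts, so that an image blurry at either measurement point is penalized (function names and the subarea size are illustrative):

```python
import numpy as np

def local_contrast(img, pos, half=16):
    """Variance of a subarea around `pos` = (row, col), used as a simple
    contrast measure; the specification only says 'higher contrast',
    so variance is an assumed proxy."""
    r, c = pos
    h, w = img.shape[:2]
    patch = img[max(0, r - half):min(h, r + half),
                max(0, c - half):min(w, c + half)].astype(np.float64)
    return float(patch.var())

def select_focused_image(captured_images, pos_a, pos_b):
    """Pick the captured image with high contrast at BOTH measurement
    points, scored by the smaller of the two local contrasts."""
    return max(captured_images,
               key=lambda im: min(local_contrast(im, pos_a),
                                  local_contrast(im, pos_b)))
```

A sharply textured image thus wins over a defocused (flat) one even if the latter is sharp at only one of the two points.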
The second position receiving section 104 causes the display device 11 to display at least a part of the re-selected image and re-receives a designation of a third position in the re-selected image from the user via the input device 12.
The second position receiving section 104 re-receives a designation of a second position in the re-selected image from the user via the input device 12.
This step enables a designation of the measurement point α1 as a second position and the measurement point α2 as a third position in the same selected image, which in turn enables measurement of a distance between the measurement point α1 and the measurement point α2.
The measuring section 105 further acquires reference images associated respectively with the captured images and selects one of the reference images that is associated with the selected image (or the re-selected image).
The measuring section 105 calculates depth information associated with the second position in reference to the reference image associated with the selected image (or the re-selected image) and calculates the three-dimensional position (coordinates) of a position (measurement point) on the subject, the latter position corresponding to the second position in the selected image, in reference to the second position information and the depth information. The measuring section 105 similarly calculates the three-dimensional position (coordinates) of a position (measurement point) on the subject, the latter position corresponding to the third position in the selected image. The measuring section 105 outputs results of the measurement for storage in a storage device (not shown) and/or for a display on the display device 11.
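The calculation of a three-dimensional position from a designated pixel and its depth information can be sketched with a pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) and the function names are assumptions, since the specification only states that depth information associated with the selected image is referenced:

```python
import numpy as np

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth Z into camera coordinates
    using a pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    The intrinsics are hypothetical; the specification does not give them."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth], dtype=np.float64)

def distance_between(p_second, p_third):
    """Euclidean distance between the two measurement points."""
    return float(np.linalg.norm(p_second - p_third))
```

With both measurement points back-projected from the same re-selected image, the distance between them is a single vector norm.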
A distance can be calculated between two measurement points by calculating the relative positions of the two measurement points. The present embodiment has described calculation of a distance between two points. Alternatively, the number of measurement points may be increased, in which case the present embodiment is still applicable to calculation of, for example, a distance between a point and a straight line, a distance between two straight lines, and an area of a region linking a plurality of points as a result of measurement.
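Once three-dimensional coordinates are available for the measurement points, the extended measurements mentioned above reduce to standard vector formulas. A sketch under that assumption (function names are illustrative, not the specification's):

```python
import numpy as np

def point_to_line_distance(p, a, b):
    """Distance from point p to the infinite line through a and b,
    via the cross product: |(p - a) x (b - a)| / |b - a|."""
    p, a, b = (np.asarray(x, dtype=np.float64) for x in (p, a, b))
    return float(np.linalg.norm(np.cross(p - a, b - a)) / np.linalg.norm(b - a))

def polygon_area(points):
    """Area of a planar polygon given its 3D vertices in order, via the
    vector (shoelace) formula: 0.5 * |sum of v_i x v_{i+1}|."""
    pts = np.asarray(points, dtype=np.float64)
    total = np.zeros(3)
    for i in range(len(pts)):
        total += np.cross(pts[i], pts[(i + 1) % len(pts)])
    return float(0.5 * np.linalg.norm(total))
```

The area formula is origin-independent for a closed planar polygon, so it can be applied directly to measurement points expressed in camera coordinates.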
As described so far, a synthesized image is generated from a plurality of images captured under different image capturing conditions. The synthesized image is then used to let the user explore the image for a desired position for a measurement point. A second position and a third position are designated in an image selected in accordance with a position (first position) found in the synthesized image, or in a re-selected image re-selected in accordance with the third position. This configuration improves the visibility of the entire image for the user exploring for a measurement point. Additionally, there is no need to acquire a high-quality synthesized image generated by considering the positional deviations of the subject. Even when the synthesized image exhibits double vision, since the measurement point is designated in the selected image, a desired position for a measurement point can be designated without being affected by the double vision.
The present embodiment has so far described the measuring instrument as incorporating the image capturing device. Alternatively, the image capturing device may be provided separately from the measuring instrument. For example, the image synthesized from the images captured by the above-described image capturing device may be stored in a storage device such as a RAM, a flash memory, or a HDD, so that the measuring instrument can read out an image from the storage device for a display on a display device. Alternatively, the captured images may be stored so that the measuring instrument can synthesize an image.
The control blocks of the position designation device 1 or of the measuring instrument 100 (particularly, the control section 10) may be implemented by logic circuits (hardware) fabricated, for example, in the form of an integrated circuit (IC chip) or may be implemented by software executed by a CPU (central processing unit).
In the latter form of implementation, the position designation device 1 or the measuring instrument 100 includes among others a CPU that executes instructions from programs or software by which various functions are implemented, a ROM (read-only memory) or like storage device (referred to as a “storage medium”) containing the programs and various data in a computer-readable (or CPU-readable) format, and a RAM (random access memory) into which the programs are loaded. The computer (or CPU) then retrieves and executes the programs contained in the storage medium, thereby achieving an object of the present invention. The storage medium may be a “non-transitory, tangible medium” such as a tape, a disc, a card, a semiconductor memory, or programmable logic circuitry. The programs may be fed to the computer via any transmission medium (e.g., over a communications network or by broadcasting waves) that can transmit the programs. The present invention, in an aspect thereof, encompasses data signals on a carrier wave that are generated during electronic transmission of the programs.
The present invention, in aspect 1 thereof, is directed to a position designation device (1) including: an image acquisition section (101) configured to acquire a plurality of captured images of the same subject and a synthesized image generated from the captured images; a first position receiving section (102) configured to cause a display device (11) to display the synthesized image and to receive an input of a first position in the synthesized image via an input device (12); an image selecting section (103) configured to select one of the captured images in accordance with the first position as a selected image; and a second position receiving section (104) configured to cause the display device (11) to display at least a part of the selected image and to receive an input of a second position in the selected image via the input device (12).
According to this configuration, the position designation device uses a synthesized image to let the user explore the image for a desired position for a measurement point. A measurement point is designated in an image selected in accordance with a position (first position) found in the synthesized image. Therefore, there is no need to acquire a high-quality synthesized image generated by considering the positional deviations of the subject. Even when the synthesized image exhibits double vision, since the measurement point is designated in the selected image, there arises no problem of the position of a measurement point being impossible to designate accurately in a double-vision synthesized image. In addition, since no grayscale saturation occurs near the first position in the image selected in accordance with the position (first position) found in the synthesized image, the user can designate a position in the selected image. Hence, the position designation device in accordance with aspect 1 of the present invention enables the user to easily and accurately designate a desired position for a measurement point in an image.
In aspect 2 of the present invention, the position designation device (measuring instrument 100) of aspect 1 may be configured to further include a measuring section (105) configured to acquire depth information associated with the selected image and to calculate three-dimensional coordinates of a position (measurement point) on a subject in reference to the depth information, the position corresponding to the second position in the selected image.
According to this configuration, the three-dimensional coordinates of a measurement point can be calculated.
In aspect 3 of the present invention, the position designation device (measuring instrument 100) of aspect 1 may be configured to further include a measuring section (105) configured to acquire a reference image associated with the selected image and to calculate three-dimensional coordinates of a position (measurement point) on a subject in reference to the reference image, the position corresponding to the second position in the selected image.
According to this configuration, the three-dimensional coordinates of a measurement point can be calculated.
In aspect 4 of the present invention, the position designation device (measuring instrument 100) of any one of aspects 1 to 3 may be configured to further include: an image capturing section (the base image capturing section 4) configured to capture a plurality of images as the captured images; and a synthesizing section (the synthesis processing section) configured to generate the synthesized image from the captured images.
According to this configuration, the captured images and the synthesized image can be generated by the position designation device itself.
In aspect 5 of the present invention, the position designation device (measuring instrument 100) of any one of aspects 1 to 4 may be configured such that: the second position receiving section (third position receiving section) further receives an input of a third position in the selected image via the input device (12), the third position differing from the second position; the image selecting section (103) re-selects the selected image in accordance with the third position as a re-selected image; and the second position receiving section (third position receiving section) causes the display device (11) to display at least a part of the selected image re-selected by the image selecting section (re-selected image) and re-receives an input of the second and third positions in the selected image via the input device (12).
According to this configuration, the three-dimensional coordinates of a plurality of measurement points can be calculated.
In aspect 6 of the present invention, the position designation device (1) of any one of aspects 1 to 5 may be configured such that the image selecting section (103) calculates a grayscale saturation level of a subarea of each of the captured images, the subarea including the first position, and selects one of the captured images that has a minimum grayscale saturation level as a selected image.
According to this configuration, the position designation device selects, as the selected image, one of the captured images that has a minimum grayscale saturation level in accordance with a position (first position) designated by the user in the synthesized image, so that the user can designate a position (second position) in the selected image. Therefore, the user can accurately designate a position that the user wants to designate in an image.
In aspect 7 of the present invention, the position designation device (1) of any one of aspects 1 to 5 may be configured such that the image selecting section (103) calculates a focusing level of a subarea of each of the captured images, the subarea including the first position, and selects one of the captured images that has a maximum focusing level as a selected image.
According to this configuration, the position designation device selects, as the selected image, one of the captured images that has a maximum focusing level in accordance with a position (first position) designated by the user in the synthesized image, so that the user can designate a position (second position) in the selected image. Therefore, the user can accurately designate a position that the user wants to designate in an image.
The present invention, in aspect 8 thereof, is directed to a method of designating a position, the method including: the first position receiving step (step S0102) of receiving a designation of a first position in a synthesized image generated from a plurality of captured images of the same subject; the image selecting step (step S0103) of selecting one of the captured images in accordance with the first position received in the receiving step (step S0102) as a selected image; and the second position receiving step (step S0104) of receiving a designation of a second position in the selected image selected in the image selecting step (step S0103).
According to this configuration, the user can easily and accurately designate a desired position for a measurement point in an image.
The position designation device of any aspect of the present invention may be implemented on a computer, in which case the present invention encompasses a control program (position designation program) that causes a computer to implement the position designation device by causing the computer to operate as the various units (software elements) of the position designation device, and also encompasses a computer-readable storage medium containing the position designation program.
The present invention is not limited to the description of the embodiments above and may be altered within the scope of the claims. Embodiments based on a proper combination of technical means disclosed in different embodiments are encompassed in the technical scope of the present invention. Furthermore, a new technological feature may be created by combining different technological means disclosed in the embodiments.
The present application claims the benefit of priority to Japanese Patent Application, Tokugan, No. 2016-156917, filed on Aug. 9, 2016, the entire contents of which are incorporated herein by reference.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2016-156917 | Aug 2016 | JP | national |
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/JP2017/028720 | 8/8/2017 | WO | 00 |