1. Technical Field
The present invention relates to a technique for supporting a user operation of designating a partial area in an image.
2. Related Art
A method called area segmentation (segmentation) is known, in which a computer performs digital image processing to divide an image into a portion to be extracted (referred to as the foreground) and the remaining portions (referred to as the background). In some cases, the user is allowed to designate, as an initial value, some pixels or an area to be the foreground or the background, so that higher dividing accuracy is achieved and the segmentation is performed as the user intended. As the user interface for designating an area or pixels on an image, one of the following is generally employed: designating a rectangular area by mouse dragging, selecting pixels by mouse clicking, or tracing the contour of a pixel group or an area with a mouse stroke, as when drawing a free curve in drawing software. Any pixel group on the image can be designated as the foreground or the background through such methods.
However, while the conventional user interface is suitable for roughly designating areas and pixel groups of arbitrary shape, it is prone to erroneous designation, in which unintended pixels are selected as well. Accurately designating a narrow and small area or an area with a complex shape therefore demands high skill and careful operation, and takes a long time. Depending on the use and the installation environment of a system, the area designation might also be difficult because the input device has insufficient functionality or sensitivity, or because the user operation is restricted. For example, when the area designation described above is performed in an image inspection system operating at a manufacturing site, only a keypad and a controller might be available as input devices, or the input operation might have to be performed with dirty fingers or with gloves on. Under such conditions, it is difficult to designate an area as intended.
One or more embodiments of the present invention provide a technique that enables an operation of designating an area in an image to be performed easily and as intended.
One or more embodiments of the present invention employ a user interface in which candidate subareas are overlaid on the target image and presented, and the user selects a desired subarea from among the candidates.
One or more embodiments of the present invention is an area designating method of allowing, when area segmentation processing of dividing a target image into a foreground and a background is performed, a user to designate a part of the target image as an area to be the foreground or the background. The method includes: a subarea setting step, in which a computer sets, in the target image, at least one subarea larger than a pixel; a display step, in which the computer displays, on a display device, a designating image in which the boundary of each subarea is drawn on the target image; and a designating step, in which the computer allows the user to select the area to be the foreground or the background, from the at least one subarea on the designating image, by using an input device.
With this configuration, candidate subareas are recommended by the computer, and the user only needs to select, from the candidates, an area satisfying the desired condition. The area can thus be designated intuitively and easily. Because the boundaries of the subareas are clearly shown and the area is designated in units of subareas, the user's designation is more constrained than in a conventional method in which the user freely inputs any area or pixel group with a mouse or the like. This constraint prevents the erroneous designation of unintended pixels, and thus facilitates designating the area as intended.
The subarea setting step according to one or more embodiments of the present invention includes a segmentation step of segmenting the target image with a predetermined pattern to form a plurality of subareas (this segmentation method is referred to as “pattern segmentation”). Because a predetermined pattern is used, the processing is simple and the subareas can be set promptly. Any pattern can be used for the segmentation. For example, when a lattice (grid) shaped pattern is used, the subareas are regularly arranged, and thus a subarea can be easily selected. According to one or more embodiments of the present invention, the subarea setting step further includes an extraction step of extracting a part of the subareas from the plurality of subareas formed in the segmentation step, and in the display step, only the subareas extracted in the extraction step are drawn in the designating image. By reducing the number of candidates (that is, options) drawn in the designating image, both the decision on which candidate to select and the selection operation itself are simplified. In the extraction step, for example, a subarea with a uniform color or brightness, or a subarea without an edge, is extracted with a higher priority, because such processing tends to exclude subareas lying over both the foreground and the background. Furthermore, according to one or more embodiments of the present invention, in the extraction step, the subareas to be extracted are selected in such a manner that the extracted subareas vary as much as possible in color or brightness features, or in position within the target image. By thus setting the group of candidate subareas, subareas can be sufficiently set in both the foreground portion and the background portion intended by the user.
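A minimal sketch of the pattern segmentation and the uniformity-based extraction described above, in Python with NumPy; the `grid_size` and `std_threshold` parameters are illustrative assumptions, not values fixed by the embodiments:

```python
import numpy as np

def pattern_segment(image, grid_size=32):
    """Divide the target image into lattice-shaped subareas.

    Returns an integer label map with the same height/width as the
    image; every pixel carries the index of the grid cell containing it.
    """
    h, w = image.shape[:2]
    n_cols = -(-w // grid_size)            # cells per row (ceiling division)
    rows = np.arange(h) // grid_size
    cols = np.arange(w) // grid_size
    return rows[:, None] * n_cols + cols[None, :]

def uniform_subareas(image, labels, std_threshold=10.0):
    """Extract, with higher priority, subareas of nearly uniform
    brightness, i.e. cells unlikely to lie over both the foreground
    and the background."""
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    return [lab for lab in np.unique(labels)
            if gray[labels == lab].std() < std_threshold]
```

Changing `grid_size` here corresponds to the user-adjustable subarea size discussed below.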
Furthermore, according to one or more embodiments of the present invention, the subarea setting step includes a segmentation step of forming a plurality of subareas by grouping pixels based on at least one feature among color, brightness, and edges. In this way, subareas with shapes corresponding to the shape, the pattern, the shading, and the like of an object in the target image are formed. For example, each subarea is formed of a pixel group with a similar color or brightness feature, or a pixel group delimited by edges, and is therefore less likely to include pixels of both the foreground and the background. Accordingly, when a subarea thus formed is used as a candidate, even a narrow and small area or an area with a complex shape can be selected easily and as intended. In this method also, according to one or more embodiments of the present invention, the subarea setting step further includes the extraction step of extracting a part of the subareas from the plurality of subareas formed in the segmentation step, and in the display step, only the subareas extracted in the extraction step are drawn in the designating image. As above, reducing the number of candidates (that is, options) drawn in the designating image simplifies both the decision on which candidate to select and the selection operation. For example, in the extraction step, a subarea without an edge, a subarea with a large size or width, or a subarea with a high contrast at its boundary portion may be extracted with a higher priority. Selecting subareas without edges and subareas with a high contrast at the boundary portion with a higher priority excludes subareas including pixels of both the foreground and the background, while selecting subareas with a large size or width with a higher priority excludes subareas that are difficult to select because of their small size. In the extraction step, according to one or more embodiments of the present invention, the subareas to be extracted are selected in such a manner that the extracted subareas vary as much as possible in color or brightness features, or in position within the target image. By thus setting the group of candidate subareas, subareas can be sufficiently set at various positions, in both the foreground portion and the background portion of the target image.
According to one or more embodiments of the present invention, the subarea selected by the user in the designating step as the area to be the foreground or the background is highlighted. The subarea selected to be the foreground or the background can thereby be easily distinguished from the other subareas, which prevents erroneous selection and improves usability.
According to one or more embodiments of the present invention, the size of the subareas with respect to the target image is changeable by the user. This is because the area designation is facilitated by appropriately adjusting the size of the subareas in accordance with the size, the shape, and the like of the foreground portion and the background portion in the target image.
According to one or more embodiments of the present invention, the method further includes an image update step, in which the computer updates the designating image displayed on a screen of the display device in accordance with an instruction from the user to enlarge, downsize, translate, or rotate the image. According to one or more embodiments of the present invention, in the image update step, the subareas are enlarged, downsized, translated, or rotated together with the target image. For example, by enlarging the display, the pixels of the image on which a subarea and its contour are overlaid can be checked in detail, so that even a narrow and small area or a portion with a complex shape can be selected accurately and easily.
When the subareas are set by the pattern segmentation, according to one or more embodiments of the present invention, in the image update step, only the target image is enlarged, downsized, translated, or rotated, without changing the position and the size of the subareas on the screen. For example, when a subarea lies over both the foreground and the background in the initial display, the display can be changed by enlarging, translating, or rotating the target image so that the subarea comes to lie entirely in the foreground or the background. Accurate designation of only the foreground or only the background is thereby facilitated.
According to one or more embodiments of the present invention, the input device includes a movement key and a selection key, and the designating step includes: a step of putting any one of the subareas on the designating image into a selected state; a step of sequentially changing which subarea is in the selected state every time an input of the movement key is received from the user; and a step of designating the subarea currently in the selected state as the foreground or the background when an input of the selection key is received from the user. With such a user interface, the intended subarea can be selected reliably with simple operations of the movement key and the selection key. Here, according to one or more embodiments of the present invention, the subarea currently in the selected state is highlighted, so that it can be easily distinguished from the other subareas; erroneous selection is thereby prevented and usability is improved.
Furthermore, according to one or more embodiments of the present invention, the input device is a touch panel disposed on the screen of the display device, and in the designating step, the user touches a subarea on the designating image displayed on the screen, thereby selecting the area to be the foreground or the background. With such a user interface, the intended subarea can be selected even more intuitively.
One or more embodiments of the present invention is an area designating method including at least one of the processes described above, or an area segmentation method of executing the area segmentation on the target image based on an area designated by the area designating method. One or more embodiments of the present invention is also a program for causing a computer to execute the steps of these methods, or a storage medium storing the program. One or more embodiments of the present invention is also an area designating device or an area segmentation device including at least one of the means that perform the processes described above.
One or more embodiments of the present invention may provide a user interface that enables an operation of designating an area in an image to be performed easily and as intended.
FIG. 6(a) is a diagram showing an example where a captured image is displayed on an inspection area setting screen.
FIGS. 7(a)-7(c) are diagrams for explaining a designating image obtained by pattern segmentation of a first embodiment.
FIGS. 8(a)-8(c) are diagrams for explaining a designating image obtained by over segmentation of a second embodiment.
FIGS. 10(a)-10(b) are diagrams for explaining a designating image of a third embodiment.
FIGS. 11(a)-11(c) are diagrams for explaining an area designation operation in a designating image of a fourth embodiment.
FIGS. 12(a)-12(b) are diagrams for explaining an area designation operation in a designating image of a fifth embodiment.
FIGS. 13(a)-13(b) are diagrams for explaining an area designation operation in a designating image of a sixth embodiment.
Embodiments of the present invention will be described below with reference to the drawings. In the embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid obscuring the invention. One or more embodiments of the present invention relate to an area designating method of allowing, when processing called area segmentation (segmentation) of dividing a target image into a foreground and a background is performed, a user to designate, as an initial value, an area of the target image to be the foreground or the background. The area designating method and the area segmentation method according to one or more embodiments of the present invention can be applied to various fields, such as extracting the area of an inspection target object from an original image in image inspection, trimming only the foreground portion from an original image for background composition in image editing, and extracting only a diagnosed organ or portion from a medical image. In one or more of the embodiments described below, as one application example, the area designating method is implemented in an inspection area setting function (setting tool) of an image inspection apparatus.
(Image Inspection Apparatus)
As shown in FIG. 1, the image inspection apparatus 1 includes an apparatus main body 10, an image sensor 11, a display device 12, a storage device 13, and an input device.
The apparatus main body 10 may be formed of a computer including, as hardware, a CPU (central processing unit), a main storage device (RAM), and an auxiliary storage device (ROM, HDD, SSD, or the like). The apparatus main body 10 includes, as functions, an inspection processing unit 101, an inspection area extraction unit 102, and a setting tool 103. The inspection processing unit 101 and the inspection area extraction unit 102 are functions related to the inspection processing, and the setting tool 103 is a function for supporting the work performed by the user to set the setting information required for the inspection processing. These functions are implemented when a computer program stored in the auxiliary storage device or the storage device 13 is loaded onto the main storage device and executed by the CPU.
(Inspection Processing)
Operations related to the inspection processing performed by the image inspection apparatus 1 will be described below, following the flow of the inspection processing.
In Step S20, an image of an inspection target object 2 is captured by the image sensor 11, and the image data is loaded into the apparatus main body 10. Here, the captured image (original image) is displayed on the display device 12 as appropriate; an example of the original image is shown in the upper section of the figure.
In Step S21, the inspection area extraction unit 102 reads the required setting information from the storage device 13. The setting information at least includes the inspection area definition information and the inspection logic. The inspection area definition information defines the position and shape of the inspection area to be extracted from the original image, and may be of any format. For example, a bitmask with different labels on the inner and outer sides of the inspection area, or vector data expressing the contour of the inspection area with a Bezier curve or a spline curve, may be used as the inspection area definition information. The inspection logic defines the details of the inspection processing; for example, it includes the type of feature quantity used for the inspection and the determination method, as well as the parameters and thresholds used for feature extraction and determination processing.
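As one illustration of the bitmask format mentioned above (a sketch only; the array shapes and the zero-fill convention are assumptions, not part of the embodiments):

```python
import numpy as np

def apply_inspection_area(original, mask):
    """Extract the inspection area image using a bitmask-style
    definition: `mask` is a Boolean array that is True inside the
    inspection area; all pixels outside the area are zeroed out."""
    area_image = original.copy()
    area_image[~mask] = 0
    return area_image
```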
In Step S22, the inspection area extraction unit 102 extracts the portion corresponding to the inspection area from the original image, in accordance with the inspection area definition information; an example of the extracted inspection area image 31 is shown in the middle section of the figure.
In Step S23, the inspection processing unit 101 extracts the required feature quantities from the inspection area image 31, in accordance with the inspection logic. In this example, the colors of the pixels of the inspection area image 31 and their average value are extracted as the feature quantities for inspecting the surface for scratches and color unevenness.
In Step S24, the inspection processing unit 101 determines whether there is a scratch or color unevenness, in accordance with the inspection logic. For example, when a pixel group is detected whose color difference from the average value obtained in Step S23 exceeds a threshold, that pixel group may be determined to be a scratch or color unevenness.
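Steps S23 and S24 can be condensed into a few lines; a sketch, assuming an RGB inspection area image, a Boolean mask of inspection pixels, and an illustrative `threshold` (the actual parameters come from the stored inspection logic):

```python
import numpy as np

def detect_unevenness(area_image, mask, threshold=30.0):
    """Take the mean color of the inspection area as the feature
    quantity (Step S23), then flag pixels whose color distance from
    that mean exceeds `threshold` (Step S24)."""
    pixels = area_image[mask].astype(float)     # inspection pixels only
    mean_color = pixels.mean(axis=0)            # average color feature
    dist = np.linalg.norm(area_image.astype(float) - mean_color, axis=2)
    return (dist > threshold) & mask            # scratch/unevenness candidates
```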
In Step S25, the inspection processing unit 101 displays the inspection result on the display device 12, and stores the inspection result in the storage device 13. This completes the inspection processing for a single inspection target object 2. On the production line, the processing in Steps S20 to S25 is repeated for each inspection target object 2 that is conveyed in.
In the appearance inspection, according to one or more embodiments of the present invention, only the pixels to be inspected are accurately extracted as the inspection area image 31. This is because, when the inspection area image 31 includes the background or an unnecessary portion (the hinge portion 20 and the button portion 21 in the example of the figure), the pixels of those portions act as noise and can degrade the inspection accuracy.
(Setting Processing of Inspection Area)
The functions and operations of the setting tool 103 are described below, following the flowcharts of the inspection area setting processing.
When the setting tool 103 is started, a setting screen is displayed on the display device 12. The setting screen includes an image window 50, an image capture button 51, a segmented display button 52, a foreground/background toggle button 53, an area size adjustment slider 54, an area segmentation button 55, and an enter button 56.
When the user presses the image capture button 51, the setting tool 103 captures an image of a sample of the inspection target object with the image sensor 11 (Step S40). As the sample, according to one or more embodiments of the present invention, an inspection target object of good quality is used, and the image is captured under the same conditions (relative positions of the image sensor 11 and the sample, lighting, and the like) as in the actual inspection processing. The sample image data thus acquired is loaded into the apparatus main body 10. When a sample image captured in advance is available in the auxiliary storage device or the storage device 13 of the apparatus main body 10, the setting tool 103 may read the sample image data from there.
The sample image captured in Step S40 is displayed on the image window 50 of the setting screen.
When the user presses the segmented display button 52, the setting tool 103 generates a grid pattern overlaid image (hereinafter, simply referred to as a designating image) for area designation, and displays the designating image on the image window 50 (Step S42).
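A sketch of how such a designating image could be rendered, assuming an RGB NumPy image and the same illustrative `grid_size` as before; the embodiments do not prescribe a drawing method:

```python
def draw_designating_image(image, grid_size=32, color=(255, 0, 0)):
    """Render the designating image: the boundaries of the
    lattice-shaped subareas drawn over a copy of the target image
    (an RGB NumPy array)."""
    overlay = image.copy()
    overlay[::grid_size, :] = color    # horizontal boundary lines
    overlay[:, ::grid_size] = color    # vertical boundary lines
    return overlay
```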
When the input event is a change of the foreground/background toggle button 53 (Step S51; Y), the setting tool 103 switches between the foreground designating mode and the background designating mode in accordance with the state of the toggle button 53 (Step S52).
When the input event is the selection of a subarea (Step S53; Y), the processing proceeds to Step S54. The subarea may be selected through, for example, an operation of moving the mouse cursor to any of the subareas in the designating image and clicking the mouse button. When the display device 12 is a touch panel display, the subarea can be selected by the intuitive operation of touching the subarea on the designating image. When a subarea is selected, the setting tool 103 checks whether the subarea has already been designated (Step S54). If the subarea has already been designated, the designation is cancelled (Step S55). If the subarea has not been designated, the subarea is designated as the foreground when the current mode is the foreground designating mode (Step S56; Y, Step S57), and as the background when the current mode is the background designating mode (Step S56; N, Step S58). The subarea designated as the foreground or the background may have its boundary and/or interior color changed (highlighted), or have a predetermined mark drawn in it, so as to be distinguished from the undesignated subareas. The color, the manner of highlighting, or the mark may differ between the foreground and the background, so that the two can be distinguished from each other.
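The branch in Steps S54-S58 amounts to a small toggle keyed by the current mode; a sketch, with the dictionary-based bookkeeping and the mode strings being assumptions for illustration:

```python
FOREGROUND, BACKGROUND = "fg", "bg"

def on_subarea_selected(designations, label, mode):
    """Toggle the designation of the selected subarea.

    `designations` maps a subarea label to FOREGROUND or BACKGROUND;
    `mode` reflects the foreground/background toggle button 53."""
    if label in designations:
        del designations[label]              # already designated: cancel (S55)
    elif mode == "foreground":
        designations[label] = FOREGROUND     # foreground designating mode (S57)
    else:
        designations[label] = BACKGROUND     # background designating mode (S58)
    return designations
```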
When the input event is an operation on the area size adjustment slider 54 (Step S59; Y), the processing returns to Step S42, and the designating image is regenerated with the subarea size changed in accordance with the slider operation.
When the input event is the pressing of the area segmentation button 55, the segmented display mode is terminated (Step S60; Y). The segmented display mode may be terminated also when the segmented display button 52 is pressed again or when the image capture button 51 is pressed. When the segmented display mode is to be continued, the processing returns to Step S50.
Returning to the main flow: in Step S44, the setting tool 103 executes the area segmentation processing on the sample image, using the foreground and the background designated in Step S43 as initial values, and extracts the inspection area.
Then, when the user presses the enter button 56, the setting tool 103 generates the inspection area definition information for the extracted inspection area, and stores the inspection area definition information in the storage device 13 (Step S45). When an inappropriate inspection area is extracted in Step S44, the processing may return to the image capturing (Step S40), the foreground/background designation (Step S43), or the like to be redone.
With the configuration described above, a plurality of candidate subareas are recommended by the computer, and the user only needs to select, from the candidates, an area satisfying the desired condition. The area can thus be designated intuitively and easily. Because the boundaries of the subareas are clearly shown and the area is designated in units of subareas, the user's designation is more constrained than in a conventional method in which the user freely inputs any area or pixel group with a mouse or the like. This constraint prevents the erroneous designation of unintended pixels, and thus facilitates designating the area as intended.
When the target image is segmented into a lattice at equal intervals as in the first embodiment, subareas of the same shape are regularly arranged, and thus a subarea can be easily selected. The size of the subareas can be changed by the user with the area size adjustment slider 54, and can thus be adjusted appropriately to the size and shape of the foreground portion (or the background portion) in the target image, which facilitates the area designation. In the first embodiment, the segmentation into subareas is performed with a lattice pattern. However, this should not be construed in a limiting sense; a mesh pattern of polygonal elements such as triangles or hexagons, or any other shape, may be used. The subareas may have uniform or non-uniform shapes and sizes, and may be arranged regularly or randomly.
A second embodiment of the present invention is described by referring to FIGS. 8(a)-8(c).
The segmentation method of the second embodiment segments the image into areas finer than those produced by the area segmentation (the division into the foreground and the background) performed in the later step, and is therefore hereinafter referred to as “over segmentation”. A method called superpixel segmentation, or a method such as clustering or labeling, may be used as the algorithm for the over segmentation. The purpose of the segmentation into subareas is to facilitate the designation of the foreground/background provided as the initial value for the area segmentation processing in the later step. Accordingly, when the over segmentation is performed, whether to group pixels together may be determined based on at least one feature among color, brightness, and edges. In the second embodiment described below, adjacent pixels with similar color or brightness features are grouped into a subarea.
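As one concrete way to realize this over segmentation (a sketch, not the algorithm fixed by the embodiments), the SLIC superpixel implementation in scikit-image groups adjacent pixels of similar color:

```python
from skimage.segmentation import slic

def over_segment(image, n_segments=200):
    """Group adjacent, similarly colored pixels into subareas
    (superpixels); returns an integer label map. Lowering `n_segments`
    yields larger subareas, analogous to recalculating after operating
    the area size adjustment slider 54."""
    return slic(image, n_segments=n_segments, compactness=10, start_label=1)
```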
FIG. 8(a) shows an example of a designating image formed by the over segmentation. With the over segmentation, unlike with the pattern segmentation of the first embodiment, the sizes and shapes of the subareas are non-uniform, and subareas having shapes corresponding to the shape, the pattern, the shading, and the like of the object in the target image are formed. When the subareas formed by the over segmentation are too small and thus difficult to select, the over segmentation may be recalculated with the condition changed by the area size adjustment slider 54.
The configuration of the second embodiment provides the following advantageous effects in addition to those of the first embodiment. Specifically, the subareas formed by the over segmentation have shapes reflecting the shape, the pattern, the shading, and the like of the object, so that even a narrow and small area or an area with a complex shape can be easily selected. Moreover, a subarea formed by the over segmentation consists of a pixel group with similar color or brightness features, or a pixel group delimited by edges, and is therefore less likely to include pixels of both the foreground and the background. The further advantage is thus provided that the erroneous designation of unintended pixels is less likely to occur.
Next, a third embodiment of the present invention will be described. The third embodiment differs from the first and second embodiments, in which all the subareas are displayed on the designating image, in that only a part of the subareas is displayed. Specifically, only the content of the processing of generating the designating image in Step S42 in the flow described above differs; the other processing is the same.
Various rules for extracting subareas to be displayed on the designating image are conceivable.
For example, when the subareas are formed by the pattern segmentation as in the first embodiment, subareas with a uniform color or brightness and subareas without an edge (a portion of high contrast) may be extracted with a higher priority. With the pattern segmentation, the subareas are formed without taking the features of the image into account, and thus some subareas might lie over both the foreground and the background. Such a subarea should not be designated as the foreground or the background, and excluding it from the options in advance both improves user friendliness and prevents its erroneous designation.
The over segmentation of the second embodiment might form extremely narrow and small areas. Such a narrow and small area is not only difficult to select but also degrades the visibility of the designating image, and is thus not preferable. Accordingly, in the case of the over segmentation, a method of extracting subareas with a larger size (area) or width with a higher priority is preferable. When the foreground and the background are almost the same in color or brightness, the over segmentation might form a subarea lying over both the foreground and the background. Therefore, a method of evaluating the contrast within each subarea and the contrast at its boundary portion (contour), and extracting subareas without an edge, subareas with a high contrast at the boundary, and the like with a higher priority, is also preferable; subareas including pixels of both the foreground and the background can thereby be excluded.
As a method applicable to both the pattern segmentation and the over segmentation, according to one or more embodiments of the present invention, the subareas to be extracted are selected in such a manner that the extracted subareas vary as much as possible in color or brightness features, or in position. By thus determining the candidate subareas, subareas can be sufficiently set at various positions, in both the foreground portion and the background portion of the image, as in the sketch below.
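One way to combine these extraction rules, giving priority to large, internally uniform subareas and then greedily spreading the chosen candidates over brightness and position, is sketched below; the scoring formula, the feature vector, and `k` are illustrative assumptions (features are left unnormalized for brevity):

```python
import numpy as np

def extract_candidates(image, labels, k=20):
    """Score each subarea (large and internally uniform scores high),
    seed with the best one, then greedily add the subarea farthest in
    (mean brightness, centroid) space from those already chosen."""
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    ids = np.unique(labels)
    feats, scores = [], []
    for lab in ids:
        m = labels == lab
        ys, xs = np.nonzero(m)
        scores.append(m.sum() / (1.0 + gray[m].std()))  # size up, edges down
        feats.append([gray[m].mean(), ys.mean(), xs.mean()])
    feats = np.asarray(feats)
    chosen = [int(np.argmax(scores))]
    remaining = [i for i in range(len(ids)) if i != chosen[0]]
    while remaining and len(chosen) < min(k, len(ids)):
        dists = [np.linalg.norm(feats[i] - feats[chosen], axis=1).min()
                 for i in remaining]
        chosen.append(remaining.pop(int(np.argmax(dists))))
    return ids[chosen]
```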
FIG. 10(a) shows an example where only the subareas (black points) extracted by such a rule are drawn in the designating image.
FIGS. 11(a)-11(c) show a fourth embodiment of the present invention. The fourth embodiment differs from the embodiments described above, in which a subarea is selected with a mouse or a touch panel, in that a subarea is selected with an input device such as a keyboard or a keypad. Aside from this, the configuration is similar to that of the other embodiments.
The input device of the fourth embodiment is provided with a movement key and a selection key. In the setting screen of the fourth embodiment, a focus frame indicating the subarea currently in the selected state is displayed on the designating image. Every time the user presses the movement key, the focus frame moves to the next subarea, and when the user presses the selection key, the focused subarea is designated as the foreground or the background.
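A sketch of this key handling, assuming the candidate subareas are held in an ordered list and that a hypothetical `highlight` callback redraws the focus frame; the designation logic reuses the toggle of Steps S54-S58:

```python
class SubareaSelector:
    """Cycle a focus frame through candidate subareas with the movement
    key and designate the focused subarea with the selection key."""

    def __init__(self, candidate_labels, highlight):
        self.candidates = list(candidate_labels)
        self.focus = 0               # index of the subarea in the selected state
        self.highlight = highlight  # callback drawing the focus frame on screen
        self.highlight(self.candidates[self.focus])

    def on_movement_key(self):
        """Move the selected state to the next subarea, wrapping around."""
        self.focus = (self.focus + 1) % len(self.candidates)
        self.highlight(self.candidates[self.focus])

    def on_selection_key(self, designations, mode):
        """Designate or cancel the focused subarea (cf. Steps S54-S58)."""
        label = self.candidates[self.focus]
        if label in designations:
            del designations[label]
        else:
            designations[label] = mode    # "fg" or "bg"
        return designations
```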
With an interface as in the fourth embodiment, the intended subarea can be selected reliably with simple operations of the movement key and the selection key. In the fourth embodiment, the subarea in the selected state is highlighted with the focus frame, and can thus be easily distinguished from the other subareas; erroneous selection is thereby prevented and usability is improved. The method of highlighting is not limited to the focus frame; any other method, such as changing the color of the frame of the subarea or the color inside the area, may be employed.
FIGS. 12(a)-12(b) show a fifth embodiment of the present invention. In the fifth embodiment, the target image displayed on the image window 50 can be enlarged, downsized, translated (scrolled), or rotated by the user. These operation instructions may be given, for example, by dragging or using the wheel of a mouse, or by dragging or pinching on a touch panel.
FIG. 12(b) shows a state where the image in FIG. 12(a) has been enlarged. In the fifth embodiment, only the target image is enlarged, downsized, translated, or rotated; the position and the size of the subareas on the screen remain unchanged, so the same subarea comes to cover different pixels of the image after each operation.
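Under this scheme, the pixels covered by a fixed on-screen grid cell must be recomputed from the current view transform; a sketch, with the transform convention (screen = image x scale + offset) being an assumption:

```python
def cell_to_image_rect(cell_rect, scale, offset):
    """Map a fixed screen-space grid cell to target-image coordinates.

    The displayed image is drawn with magnification `scale` and
    translation `offset` = (dx, dy), so the same on-screen cell covers
    different image pixels after every enlarge/translate operation."""
    x0, y0, x1, y1 = cell_rect
    dx, dy = offset
    return ((x0 - dx) / scale, (y0 - dy) / scale,
            (x1 - dx) / scale, (y1 - dy) / scale)
```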
FIGS. 13(a)-13(b) show a sixth embodiment of the present invention. The sixth embodiment differs from the fifth embodiment described above, in which only the image is enlarged/reduced, translated, or rotated while the position and the size of the subareas remain unchanged, in that the enlarging and the like are performed on both the image and the subareas. The operation instructions, such as enlarging, are the same as in the fifth embodiment.
FIG. 13(b) shows a state where the image in FIG. 13(a) has been enlarged. The subareas are enlarged, translated, and rotated together with the target image, so each subarea keeps covering the same pixels of the image.
The embodiments described above are merely examples of the present invention, and do not limit the scope of the present invention. For example, in one or more of the embodiments described above, the example where one or more embodiments of the present invention is applied to the inspection area setting in the image inspection apparatus is described. However, one or more embodiments of the present invention can be applied to any apparatus that uses the area segmentation (segmentation).
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.
Priority claim: Japanese Patent Application No. 2012-057058, filed March 2012 (JP, national).
International filing: PCT/JP2012/079839, filed Nov. 16, 2012 (WO).