This application is a 371 U.S. National Stage filing of PCT/JP2009/063382, filed Jul. 28, 2009, which claims priority to Japanese Patent Application Number JP 2008-201463 filed Aug. 5, 2008, all of which are incorporated herein by reference.
The present invention relates to an image input device including an image pickup function, an image input/output device including an image display function and an image pickup function, and an image processing apparatus and an image processing method applied to a labeling process in such an image input device or such an image input/output device.
Some image displays include touch panels. Touch panel types include, in addition to a resistance type using a change in electrical resistance and a capacitance type using a change in capacitance, an optical type which optically detects a finger or the like. In the optical type touch panel, for example, an image is displayed on a display surface thereof by modulating light from a backlight in a liquid crystal element, and light emitted from the display surface and then reflected from a proximity object such as a finger is received by photoreception elements arranged on the display surface so as to detect the position or the like of the proximity object. Patent Document 1 discloses such an image display. The display disclosed in Patent Document 1 includes a display section including a display means for displaying an image and an image-pickup means for picking up an image of an object.
When such an optical type touch panel detects a plurality of points, in some cases, a process of providing an identification number to each connected region considered as one set of points is performed on data captured as an image from photoreception elements (for example, refer to Patent Document 2). Such a process is called a labeling process.
However, in a related-art labeling process such as that of Patent Document 2, two-dimensional data forming a labeling image is temporarily stored in a frame memory, and the labeling process is performed based on the labeling image. Therefore, it is difficult to perform a real-time process on data obtained from photoreception elements, and it is desirable to achieve a higher speed of the labeling process.
The present invention is made to solve the above-described issue, and it is an object of the invention to provide an image processing apparatus and an image processing method which achieve a higher-speed labeling process than ever before, and an image input device and an image input/output device which include such an image processing apparatus.
An image processing apparatus of the invention includes: a scanning section performing sequential scanning on pixels in an image represented by binarized pixel data; and an information obtaining section performing a process so that during the sequential scanning on the pixels, while label information representing an identification number for each connected region in the image is, as occasion arises, allocated to a target pixel based on values of pixel data of the target pixel and neighboring pixels thereof, position information and area information for each connected region corresponding to each label information are updated as occasion arises, and thereby acquisition of the label information, the position information and the area information about the whole image is completed on completion of the sequential scanning. Herein, "connected region" means a pixel region which is allowed to be considered as one set of points.
An image processing method of the invention includes: performing sequential scanning on pixels in an image represented by binarized pixel data, and performing a process so that during the sequential scanning on the pixels, while label information representing an identification number for each connected region in the image is, as occasion arises, allocated to a target pixel based on values of pixel data of the target pixel and neighboring pixels thereof, position information and area information for each connected region corresponding to each label information are updated as occasion arises, and thereby acquisition of the label information, the position information and the area information about the whole image is completed on completion of the sequential scanning.
An image input device of the invention includes: an input panel including a plurality of photoreception elements arranged along an image pickup surface to receive light reflected from an external proximity object; a scanning section performing sequential scanning on pixels in a picked-up image represented by binarized pixel data, the picked-up image being obtained based on photoreception signals from the photoreception elements; an information obtaining section performing a process so that during the sequential scanning on the pixels, while label information representing an identification number for each connected region in the picked-up image is, as occasion arises, allocated to a target pixel based on values of pixel data of the target pixel and neighboring pixels thereof, position information and area information for each connected region corresponding to each label information are updated as occasion arises, and thereby acquisition of the label information, the position information and the area information about the whole picked-up image is completed on completion of the sequential scanning; and a position detection section obtaining information about one or more of the position, shape and size of the external proximity object based on the label information, the position information and the area information obtained by the information obtaining section.
A first image input/output device of the invention includes: an input/output panel including a plurality of display elements arranged along a display surface to display an image based on an image signal and a plurality of photoreception elements arranged along the display surface to receive light reflected from an external proximity object; a scanning section performing sequential scanning on pixels in a picked-up image represented by binarized pixel data, the picked-up image being obtained based on photoreception signals from the photoreception elements; an information obtaining section performing a process so that during the sequential scanning on the pixels, while label information representing an identification number for each connected region in the picked-up image is, as occasion arises, allocated to a target pixel based on values of pixel data of the target pixel and neighboring pixels thereof, position information and area information for each connected region corresponding to each label information are updated as occasion arises, and thereby acquisition of the label information, the position information and the area information about the whole picked-up image is completed on completion of the sequential scanning; and a position detection section obtaining information about one or more of the position, shape and size of the external proximity object based on the label information, the position information and the area information obtained by the information obtaining section.
A second image input/output device of the invention includes: an input/output panel including a display panel and a position detection section formed in the display panel, the display panel including a liquid crystal layer between a first substrate and a second substrate, the position detection section including a first sensor electrode and a second sensor electrode which are allowed to come into contact with each other when the second substrate is depressed and detecting a depressed position of the second substrate corresponding to the position of an external proximity object by detecting a change in potential caused by contact between the first sensor electrode and the second sensor electrode; a scanning section performing sequential scanning on pixels in an image represented by binarized pixel data, the image being obtained based on photoreception signals from the photoreception elements; an information obtaining section performing a process so that during the sequential scanning on the pixels, while label information representing an identification number for each connected region in the image is, as occasion arises, allocated to a target pixel based on values of pixel data of the target pixel and neighboring pixels thereof, position information and area information for each connected region corresponding to each label information are updated as occasion arises, and thereby acquisition of the label information, the position information and the area information about the whole image is completed on completion of the sequential scanning; and a position detection section obtaining information about one or more of the position, shape and size of the external proximity object based on the label information, the position information and the area information obtained by the information obtaining section.
In the image processing apparatus, the image processing method, the image input device and the image input/output devices of the invention, sequential scanning is performed on pixels in an image (for example, a picked-up image) represented by binarized pixel data. At this time, while label information representing an identification number for each connected region in the image is, as occasion arises, allocated to a target pixel based on values of pixel data of the target pixel and neighboring pixels thereof, position information and area information for each connected region corresponding to each label information are updated as occasion arises. Thereby, acquisition of the label information, the above-described position information and the above-described area information about the whole image is completed on completion of such sequential scanning. In other words, unlike related art, it is not necessary to form a labeling image, and label information and the like about the whole image are obtained by one sequential scanning process.
According to the image processing apparatus, the image processing method, the image input device and the image input/output devices of the invention, sequential scanning is performed on pixels in an image represented by binarized pixel data, and during sequential scanning, while label information representing an identification number for each connected region in the image is, as occasion arises, allocated to a target pixel based on values of pixel data of the target pixel and neighboring pixels thereof, position information and area information for each connected region corresponding to each label information are updated as occasion arises, so label information, the above-described position information and the above-described area information about the whole image are obtainable by one sequential scanning process. Therefore, a higher speed of a labeling process than ever before is achievable.
Embodiments of the invention will be described in detail below referring to the accompanying drawings.
As illustrated in
As illustrated in
The display signal processing section 21 illustrated in
As illustrated in
The light emission side scanner 41 has a function of selecting a light emission cell CW to be driven in response to the light emission timing control signal outputted from the display signal holding control section 40. More specifically, a light emission selection signal is supplied through a light emission gate line connected to each pixel 16 of the input/output panel 11 to control a light-emitting element selection switch. In other words, when a voltage for turning on the light-emitting element selection switch of a given pixel 16 is applied in response to the light emission selection signal, the pixel 16 emits light with a luminance corresponding to a voltage supplied from the display signal driver 42.
The display signal driver 42 has a function of supplying display data to a light emission cell CW to be driven in response to display signals for one horizontal line outputted from the display signal holding control section 40. More specifically, a voltage corresponding to display data is supplied to the pixel 16 selected by the above-described light emission side scanner 41 through a data supply line connected to each pixel 16 of the input/output panel 11. When the light emission side scanner 41 and the display signal driver 42 perform line-sequential operations in conjunction with each other, an image corresponding to arbitrary display data is displayed on the input/output panel 11.
The light reception side scanner 43 has a function of selecting a light reception cell CR to be driven in response to the light reception timing control signal outputted from the display signal holding control section 40. More specifically, a light reception selection signal is supplied through a light reception gate line connected to each pixel 16 of the input/output panel 11 to control a photoreception element selection switch. In other words, as in the case of the operation of the above-described light emission side scanner 41, when a voltage for turning on a photoreception element selection switch of a given pixel 16 is applied in response to the light reception selection signal, a photoreception signal detected by the pixel 16 is outputted to the photoreception signal receiver 45. Thereby, for example, light emitted from a given light emission cell CW and then reflected from an object touching or in proximity to the input/output panel 11 is allowed to be received and detected by the light reception cell CR. Moreover, the light reception side scanner 43 outputs a light reception block control signal to the photoreception signal receiver 45 and the photoreception signal holding section 46, and also has a function of controlling a block contributing to these light reception operations. In addition, in the information input/output device 1 of the embodiment, the above-described light emission gate line and the above-described light reception gate line are separately connected to each of the light-emission/reception cells CWR, and the light emission side scanner 41 and the light reception side scanner 43 are operable independently.
The photoreception signal processing section 13 illustrated in
The photoreception signal receiver 45 has a function of obtaining photoreception signals for one horizontal line from the light reception cells CR in response to the light reception block control signal outputted from the light reception side scanner 43. The photoreception signals for one horizontal line obtained in the photoreception signal receiver 45 are outputted to the photoreception signal holding section 46.
The photoreception signal holding section 46 has a function of reconstructing photoreception signals for each screen (for each field of display) from the photoreception signals outputted from the photoreception signal receiver 45 in response to the light reception block control signal outputted from the light reception side scanner 43, and storing and holding the photoreception signals in, for example, a field memory configured of an SRAM or the like. Data of the photoreception signals stored in the photoreception signal holding section 46 is outputted to a position detection section 47 in the image processing section 14 (refer to
The image processing section 14 (refer to
More specifically, a labeling process section 14a (an image processing apparatus) in the image processing section 14 performs a labeling process as will be described below so as to obtain label information about the whole picked-up image (information representing identification numbers of connected regions in the picked-up image), and position information and area information for each connected region. In other words, as will be described in detail later, the labeling process section 14a performs sequential scanning on pixels in the picked-up image represented by binarized pixel data, and during the sequential scanning, while label information is, as occasion arises, allocated to a target pixel based on values of pixel data of the target pixel and neighboring pixels thereof, position information and area information for each connected region corresponding to each label information are updated as occasion arises, and thereby the above-described label information, the above-described position information and the above-described area information are obtained. In addition, the labeling process section 14a corresponds to a specific example of “a scanning section” and “an information obtaining section” in the invention.
Moreover, the position detection section 47 (refer to
The electronic device body 20 (refer to
As illustrated in
Next, referring to
As illustrated in
The condition determining circuit 141 sequentially obtains binarized data Din as binarized pixel data as illustrated in, for example,
The new label number issuing circuit 142 issues a new label based on a determination result by the condition determining circuit 141. More specifically, in the case where a label is new, an unallocated register number (corresponding to label information) is issued in the address list 143.
For example, as illustrated in
The line buffer control circuit 145 controls writing, reading and the like of the register numbers in the line buffer 144.
The additional information memory 148 associates, for example, additional information illustrated in
For example, as illustrated in
The address list control circuit 146 controls writing, reading and the like of information in the address list 143.
The label memory controller 147 controls writing, reading and the like of the additional information in the additional information memory 148, and outputs the above-described label information about the whole picked-up image, and the above-described position information and the above-described area information for each connected region as label information Dout.
Next, referring to
First, referring to
Display data outputted from the electronic device body 20 is inputted into the display signal processing section 12. The display signal processing section 12 drives the input/output panel 11 so as to display an image on the input/output panel 11 based on the display data.
While the input/output panel 11 displays an image on the display elements 11a through the use of light emitted from the backlight, the input/output panel 11 drives the photoreception elements 11b. Then, when the external proximity object such as a finger touches or comes close to the display elements 11a, an image displayed on the display elements 11a is reflected from the external proximity object, and reflected light is detected by the photoreception elements 11b. By the detection, the photoreception signals are outputted from the photoreception elements 11b. Then, the photoreception signals are inputted into the photoreception signal processing section 13, and the photoreception signal processing section 13 performs a process such as amplification to process the photoreception signals (step S10 in
Next, the picked-up image is inputted from the photoreception signal processing section 13 to the image processing section 14, and the image processing section 14 performs a binarization process on the picked-up image (step S11). In other words, the image processing section 14 stores a preset threshold value, and performs the binarization process in which each pixel of the picked-up image data is set to "0" when its signal intensity is smaller than the threshold value and to "1" when its signal intensity is equal to or larger than the threshold value. Thereby, a part where light reflected from the external proximity object is received is set to "1", and the other part is set to "0".
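As a minimal illustration of this thresholding step (the threshold value, bit depth and array layout below are assumptions; the specification only states that a preset threshold value is stored), the binarization might be sketched in Python as:

```python
import numpy as np

# Hypothetical threshold value; the specification does not fix it.
THRESHOLD = 128

def binarize(picked_up_image: np.ndarray) -> np.ndarray:
    """Set each pixel to 1 when its signal intensity is equal to or larger
    than the threshold (reflected light received), otherwise to 0 (step S11)."""
    return (picked_up_image >= THRESHOLD).astype(np.uint8)
```

A pixel receiving reflected light from the external proximity object then carries "1", and all other pixels carry "0".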
Then, the image processing section 14 removes an isolated point from the binarized picked-up image (step S12). In other words, the image processing section 14 performs noise removal by removing a part set to “1” isolated from the external proximity object in the case where the picked-up image is binarized in the above-described manner.
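The isolated-point removal of step S12 can be read, for example, as clearing any "1" pixel that has no "1" among its eight neighbors; the neighborhood size is an assumption, since the specification leaves it open. A sketch:

```python
import numpy as np

def remove_isolated_points(binary: np.ndarray) -> np.ndarray:
    """Clear every '1' pixel whose eight neighbors are all '0' (step S12)."""
    height, width = binary.shape
    out = binary.copy()
    for y in range(height):
        for x in range(width):
            if binary[y, x] == 1:
                y0, y1 = max(0, y - 1), min(height, y + 2)
                x0, x1 = max(0, x - 1), min(width, x + 2)
                # The 3x3 window sum counts the pixel itself, so a sum of 1
                # means the pixel has no set neighbor and is noise.
                if int(binary[y0:y1, x0:x1].sum()) == 1:
                    out[y, x] = 0
    return out
```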
After that, the image processing section 14 performs a labeling process in the labeling process section 14a (step S13). In other words, the labeling process section 14a performs the labeling process on the part set to "1" in the case where the picked-up image is binarized in the above-described manner. Then, the labeling process section 14a detects a region set to "1" as a region of the external proximity object, and obtains the barycenter or the central coordinates of the region. Such data is outputted to the control section 21 as point information (the above-described label information Dout).
Next, the control section 21 performs a necessary process such as changing a display image through the use of the point information inputted from the image processing section 14. For example, if an operation menu is displayed on a screen, which button in the operation menu is selected by a finger of a user is detected, and a command corresponding to the selected button is executed. Thus, the basic operation in the image input/output device 1 is completed.
Next, referring to
First, for example, as illustrated in
In this case, for example, as illustrated in
In this case, in the case where scanning along one line is not yet completed (step S144: N), for example, as illustrated in
In this case, for example, as illustrated in
Next, for example, as illustrated in
Now, as illustrated in
On the other hand, for example, as illustrated in
Next, for example, as illustrated in
On the other hand, in the case where it is determined in the step S131 that the pixel data of the target pixel is "1" (step S131: Y) and it is determined in the step S133 that only the label of the pixel on the left of the target pixel is valid (step S133: only the pixel on the left is valid), processes in steps S138 and S139 which will be described below are performed. In other words, the same label as that of the pixel on the left is allocated to the target pixel (step S138), and additional information is updated (step S139).
Moreover, for example, as illustrated in
On the other hand, in the case where it is determined that the labels of the pixel above the target pixel and the pixel on the left of the target pixel are different from each other in the step S140 (step S140: Y), an address list integration process which will be described below (step S141) is performed, and the same label as that of one of the pixel above the target pixel and the pixel on the left of the target pixel is allocated (step S142), and additional information is updated (step S143). More specifically, for example, as illustrated in
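The address list integration of step S141 can be pictured as redirecting one register number to the other and folding the two sets of additional information together. The field names, and the choice of which register number survives, are illustrative assumptions rather than details fixed by the specification:

```python
# Hypothetical sketch of the address-list integration (step S141): when the
# pixel above and the pixel on the left carry different register numbers,
# both regions are in fact one connected region, so one entry in the address
# list is redirected to the other and the additional information is merged.
def find(address_list, n):
    """Follow redirections until a register number maps to itself."""
    while address_list[n] != n:
        n = address_list[n]
    return n

def integrate(address_list, additional_info, label_above, label_left):
    keep = find(address_list, label_above)
    drop = find(address_list, label_left)
    if keep != drop:
        address_list[drop] = keep  # the dropped number now resolves to keep
        a, b = additional_info[keep], additional_info[drop]
        additional_info[keep] = {
            "xmin": min(a["xmin"], b["xmin"]),
            "xmax": max(a["xmax"], b["xmax"]),
            "ymin": min(a["ymin"], b["ymin"]),
            "ymax": max(a["ymax"], b["ymax"]),
            "area": a["area"] + b["area"],
        }
    return keep
```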
Thus, when the labeling process indicated by the steps S131 to S148 is performed, for example, as illustrated in
In the labeling process of the embodiment, sequential scanning is performed on pixels in the picked-up image represented by the binarized data Din in such a manner. Then, during the sequential scanning, while a register number is, as occasion arises, allocated to a target pixel based on the values of pixel data of the target pixel and neighboring pixels thereof, additional information (position information and area information) for each connected region corresponding to each label information is updated as occasion arises. Thereby, acquisition of label information about the whole picked-up image, and position information and area information for each connected region is completed on completion of the sequential scanning. In other words, unlike related art, it is not necessary to form a labeling image, and label information and the like about the whole image are obtained by one sequential scanning process.
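Putting the steps above together, the one-pass labeling over the binarized data Din might be sketched as follows. Only a one-line buffer of register numbers is kept in place of a full labeling image; the dictionaries stand in for the line buffer 144, the address list 143 and the additional information memory 148, and their exact layout is an assumption of this sketch, not a detail fixed by the specification:

```python
def one_pass_labeling(image):
    """One-pass labeling of a binarized image (a list of rows of 0/1 values).
    Register numbers, an address list of redirections, and per-region
    additional information (bounding box and area) are maintained during a
    single raster scan; no labeling image is ever formed."""
    width = len(image[0])
    line_buffer = [0] * width   # register numbers of the previous line
    address_list = {0: 0}       # register number -> representative number
    info = {}                   # representative number -> additional info
    next_reg = 1

    def find(n):
        while address_list[n] != n:
            n = address_list[n]
        return n

    for y, row in enumerate(image):
        left = 0
        for x, v in enumerate(row):
            if v == 0:
                left = 0
                line_buffer[x] = 0
                continue
            up = find(line_buffer[x])   # label of the pixel above
            lf = find(left)             # label of the pixel on the left
            if up == 0 and lf == 0:     # neither is valid: issue a new label
                reg = next_reg
                next_reg += 1
                address_list[reg] = reg
                info[reg] = {"xmin": x, "xmax": x, "ymin": y, "ymax": y, "area": 0}
            elif up != 0 and lf != 0 and up != lf:   # integrate two regions
                address_list[lf] = up
                merged = info.pop(lf)
                a = info[up]
                a["xmin"] = min(a["xmin"], merged["xmin"])
                a["xmax"] = max(a["xmax"], merged["xmax"])
                a["ymin"] = min(a["ymin"], merged["ymin"])
                a["ymax"] = max(a["ymax"], merged["ymax"])
                a["area"] += merged["area"]
                reg = up
            else:                        # exactly one valid neighbor label
                reg = up if up != 0 else lf
            r = info[reg]                # update additional information
            r["xmin"] = min(r["xmin"], x)
            r["xmax"] = max(r["xmax"], x)
            r["ymin"] = min(r["ymin"], y)
            r["ymax"] = max(r["ymax"], y)
            r["area"] += 1
            line_buffer[x] = reg
            left = reg
    return info
```

On completion of the single scan, the returned dictionary already holds, for each connected region, its position information (bounding box) and its area, so no second pass over a labeling image is needed.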
Thus, in the embodiment, sequential scanning is performed on pixels in the picked-up image represented by the binarized data Din, and during the sequential scanning, while a register number (label information) representing an identification number of each connected region in the picked-up image is, as occasion arises, allocated to the target pixel based on the values of the pixel data of the target pixel and neighboring pixels thereof, additional information (position information and area information) for each connected region corresponding to each label information is updated as occasion arises, so label information, position information and area information are obtainable by one sequential scanning process. Therefore, a higher speed of a labeling process than ever before is achievable.
Moreover, a high-speed labeling process is achieved, so compared to related art, real time capability of the labeling process is allowed to be improved, and a streaming process is achievable.
Further, unlike related art, it is not necessary to form a labeling image, so a frame memory for holding such an image is also not necessary. In other words, in the embodiment, the labeling process is performed using the line buffer, so compared to related art, a used memory amount is allowed to be reduced. Therefore, the labeling process is easily achieved on hardware.
Next, a second embodiment of the invention will be described below. An image input/output device of the embodiment is the same as the image input/output device 1 of the first embodiment illustrated in
For example, as illustrated in
For example, as illustrated in
Moreover, the additional information memory 148 of the embodiment associates, for example, additional information illustrated in
Next, referring to
First, for example, as illustrated in
In this case, for example, as illustrated in
In this case, in the case where scanning along one line is not yet completed (step S245: N), for example, as illustrated in
On the other hand, in the case where it is determined that scanning along one line is completed (step S245: Y), the condition determining circuit 141 next determines whether or not the label of the pixel on the left of the target pixel is "0" (step S247). In the case where the label of the pixel on the left of the target pixel is "0" (step S247: Y), the labeling process proceeds to a step S250. In addition, in the case where the label of the pixel on the left of the target pixel is "1" (step S247: N), the line buffer 144b and the label memory controller 147 perform the following processes in steps S248 and S249, respectively. In other words, current label information "0" is stored in the additional information memory 148 (step S248), and current label information is erased from the label memory controller 147 (step S249), and then the labeling process proceeds to the step S250.
In the step S250, the condition determining circuit 141 determines whether or not scanning along all lines in the picked-up image is completed (step S250). In the case where scanning along all lines is not yet completed (step S250: N), for example, as illustrated in
On the other hand, for example, as illustrated in
Next, for example, as illustrated in
Next, for example, as illustrated in
On the other hand, for example, as illustrated in
Moreover, for example, as illustrated in
On the other hand, in the case where it is determined that the labels of the pixel above the target pixel and the pixel on the left of the target pixel are different from each other in the step S240 (step S240: Y), processes in steps S241 to S244 which will be described below are performed, and the same label as that of one pixel selected from the pixel above the target pixel and the pixel on the left of the target pixel is allocated to the target pixel, and additional information is updated. More specifically, for example, as illustrated in
The labeling process represented by the steps S231 to S251 is performed in such a manner, thereby, for example, as illustrated in
In this case, also in the labeling process of the embodiment, as in the case of the first embodiment, sequential scanning is performed on pixels in the picked-up image represented by the binarized data Din. Then, during the sequential scanning, while a label number is, as occasion arises, allocated to a target pixel based on the values of pixel data of the target pixel and neighboring pixels thereof, additional information (position information and area information) for each connected region corresponding to each label information is updated as occasion arises. Thereby, acquisition of label information about the whole picked-up image, and position information and area information for each connected region is completed on completion of such sequential scanning. In other words, unlike related art, it is not necessary to form a labeling image, and label information and the like about the whole image are obtained by one sequential scanning process.
Thus, also in the embodiment, the same effects as those in the first embodiment are obtainable by the same functions as those in the first embodiment. In other words, label information, position information and area information about the whole picked-up image are obtainable by one sequential scanning process. Therefore, a higher speed of the labeling process than ever before is achievable.
Moreover, in the embodiment, the address list 143 in the first embodiment is not necessary, and label information is allowed to be directly updated, so compared to the first embodiment, real-time capability is further improved. Therefore, the labeling process on hardware is achieved more easily, and a used memory amount is allowed to be reduced.
Although the present invention is described referring to the first and second embodiments, the invention is not limited thereto, and may be variously modified.
For example, in the above-described embodiments, the case where pixels in two directions, that is, above the target pixel and on the left of the target pixel, are used as the neighboring pixels to perform the labeling process is described; however, the labeling process may be performed using, as the neighboring pixels, pixels in three directions, that is, above the target pixel, on the left of the target pixel and at the upper right of the target pixel.
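The difference between the two-direction and three-direction variants lies only in which previously scanned pixels are consulted for the target pixel; a hypothetical helper (the function name and line-buffer convention are assumptions of this sketch) makes the two neighbor sets concrete:

```python
def neighbor_labels(line_buffer, left_label, x, width, use_upper_right=False):
    """Collect the labels consulted for the target pixel at column x.
    line_buffer holds the previous line's labels; 0 means unlabeled."""
    labels = []
    if line_buffer[x]:                 # pixel above the target pixel
        labels.append(line_buffer[x])
    if left_label:                     # pixel on the left of the target pixel
        labels.append(left_label)
    if use_upper_right and x + 1 < width and line_buffer[x + 1]:
        labels.append(line_buffer[x + 1])  # pixel at the upper right
    return labels
```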
Moreover, in the above-described embodiments, the case where, as the value of the pixel data, "1" is a valid value and "0" is an invalid value is described; however, conversely, "0" may be the valid value and "1" may be the invalid value.
Further, in an example illustrated in
Moreover, in the image input/output devices 1 and 2 described in the above-described embodiments, a configuration using the liquid crystal display panel as the input/output panel 11 is described. However, the information input/output device of the invention may have a configuration using an organic electroluminescence (EL) panel or the like as the input/output panel. An organic EL element has a characteristic of emitting light when a forward bias voltage is applied, and of receiving light and generating a current when a backward bias voltage is applied. Therefore, the organic EL element serves as both a display element 11a and a photoreception element 11b. In this case, the input/output panel 11 is configured by arranging an organic EL element for each pixel; an image is displayed by applying the forward bias voltage to organic EL elements so that they emit light, while other organic EL elements, to which the backward bias voltage is applied, receive reflected light.
Further, in the above-described embodiments, the invention is described referring to the image input/output device 1 which includes the input/output panel 11 including a plurality of display elements 11a and a plurality of photoreception elements 11b as an example; however, the invention is applicable to an image input device (an image pickup device) which includes an input panel including a plurality of photoreception elements 11b.
Moreover, the image processing apparatus of the invention is applicable to not only a picked-up image based on photoreception signals obtained by the photoreception elements 11b but also an image produced by any other technique. More specifically, the image processing apparatus of the invention is applicable to, for example, an image produced in an image input/output device including an input/output panel 5 (with a sectional configuration in a pixel Px) illustrated in
Further, the processes described in the above-described embodiments may be performed by hardware or software. In the case where the processes are performed by software, a program forming the software is installed in a general-purpose computer or the like. Such a program may be stored in a recording medium mounted in the computer in advance.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2008-201463 | Aug 2008 | JP | national |

| Filing Document | Filing Date | Country | Kind | 371c Date |
|---|---|---|---|---|
| PCT/JP2009/063382 | 7/28/2009 | WO | 00 | 3/27/2010 |

| Publishing Document | Publishing Date | Country | Kind |
|---|---|---|---|
| WO2010/016411 | 2/11/2010 | WO | A |

| Number | Name | Date | Kind |
|---|---|---|---|
| 4718101 | Ariga et al. | Jan 1988 | A |
| 4953224 | Ichinose et al. | Aug 1990 | A |
| 5199083 | Takeda | Mar 1993 | A |
| 5717784 | Yanagishita et al. | Feb 1998 | A |
| 5909507 | Yanagishita et al. | Jun 1999 | A |
| 5937091 | Yanagishita et al. | Aug 1999 | A |
| 6483942 | Curry | Nov 2002 | B1 |
| 6973259 | Todaka | Dec 2005 | B1 |
| 7190336 | Fujisawa | Mar 2007 | B2 |
| 8121414 | Yoshino | Feb 2012 | B2 |
| 20070253623 | Ohira et al. | Nov 2007 | A1 |
| 20080136980 | Rho et al. | Jun 2008 | A1 |

| Number | Date | Country |
|---|---|---|
| 1178433 | Feb 2002 | EP |
| 1603024 | Dec 2005 | EP |
| 58-151669 | Sep 1983 | JP |
| 61-145689 | Jul 1986 | JP |
| 07-175925 | Jul 1995 | JP |
| 2002-164017 | Jan 2004 | JP |
| 2008-097172 | Apr 2008 | JP |

| Number | Date | Country |
|---|---|---|
| 20100253642 A1 | Oct 2010 | US |