Field of the Invention
The present invention relates to technologies for detecting touched positions and touchdown.
Description of the Related Art
Conventionally, touch input through an image projected by a projector has been enabled by arranging devices on the screen serving as the projection target. Japanese Patent Application Laid-Open Nos. 2004-272353 and 2001-43021 disclose techniques for coordinates input and touch determination in which means for projecting infrared rays and means for receiving the infrared rays are arranged on a screen, and fingers interrupting the infrared rays are detected by these means so as to perform the coordinates input and the touch determination. Meanwhile, Japanese Patent Application Laid-Open Nos. 2001-236179, 2004-265185 and 2000-81950 disclose techniques for inputting an indication through an image projected by a projector without arranging any device on the screen. Unfortunately, the techniques disclosed in these documents require a dedicated pointing device. The pointing device emits light or ultrasonic waves, which are detected in proximity to the projection area or at a position apart from the screen, thereby enabling coordinates input and touch determination.
A projector, which is one type of display device, has the advantage over a flat panel type display device, e.g., a liquid crystal display device, that it can project onto any place. This feature of “capability of projection onto any place” is incompatible with the requirement of “arranging some devices on a screen”, which impairs convenience. Likewise, the feature is incompatible with the requirement of “a dedicated pointing device”, which also impairs convenience. To reconcile these, Japanese Patent Application Laid-Open No. 2011-118533 discloses a touch input technique in which shadows of hands or fingers are detected using a camera arranged at the side of a projector. The disclosed technique detects a touched position and determines a touchdown when a user touches a certain position on a projected image with a hand.
In the technique disclosed in Japanese Patent Application Laid-Open No. 2011-118533, however, the detection of the touched position and the determination of the touchdown are performed based on a relationship between a real image of a hand illuminated with projection light and a shadow image of the hand generated by the projection light. If there is a black region in the projected image, no effective light beam can be obtained in that region, and consequently neither the shadow image nor the real image can be sufficiently obtained. Accordingly, the touched position and the touchdown cannot be correctly detected by the disclosed technique.
Thus, it is an object of the present invention to reliably obtain a shadow image and to correctly perform detection pertaining to a touch.
A touch detection apparatus of the present invention comprises: a lighting unit configured to project light including invisible wavelength components onto a screen; an image sensing unit configured to have a sensitivity to wavelength components of the light projected from the lighting unit, and to take images of regions of the screen on which the light is being projected by the lighting unit; and a detection unit configured to detect a touch pertaining to the screen based on images taken by the image sensing unit.
The present invention makes it possible to reliably obtain a shadow image and to correctly detect a touch.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Preferred embodiments of the present invention will now be described in detail in accordance with the accompanying drawings.
Exemplary embodiments to which the present invention is applied will hereinafter be described in detail with reference to accompanying drawings. Configurations described in the following embodiments are only examples of the present invention. The present invention is not limited to the following embodiments.
A first embodiment of the present invention will now be described.
A screen 8 is a target on which an image is projected by the image projecting unit 7 of the projector 6, and is the target area of touch input. The imaging area of the image sensing device 1 and the illumination areas of the lighting devices 2 to 5 include a part of or the entire screen 8. The lighting devices 2 to 5 project light that does not include wavelength components of visible light. Here, it is assumed that the lighting devices 2 to 5 project infrared rays. However, the light is not limited thereto. The light may be any type of invisible light.
The image sensing device 1 has sensitivity to at least a part of or the entire wavelength band of the infrared rays projected by the lighting devices 2 to 5. The present invention does not limit whether the wavelength characteristics of the image sensing device 1 include a visible region or not. If the visible region is not included, the device is more robust against ambient light. Conversely, if the visible region is included, an image projected by the image projecting unit 7 can also be captured. Accordingly, closer cooperation can be achieved between the projected image and touch input.
Shadow images 9 and 10 of a hand imaged on the screen 8 in
Image data taken by the image sensing device 1 is temporarily stored in the image data storage unit 62, then output to the first image processing unit 63 and, in turn, to the second image processing unit 64. The image data is exchanged via an image data bus. The second image processing unit 64 calculates indication coordinates data and touch determination data based on the image data input via the first image processing unit 63. The calculated indication coordinates data and touch determination data are temporarily stored in the indication coordinates data/touch determination data storage unit 65 and then transmitted by the coordinates/touch state transmission unit 66. The indication coordinates data and the touch determination data are transmitted, in a wireless or wired manner, to a PC, a smart terminal or the projector 6 itself. The control unit 61 reads necessary programs and data from a recording medium, not illustrated, and executes the programs, thereby realizing the after-mentioned processes of the flowchart illustrated in
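Purely as a reading aid, the data path just described (capture, image data storage unit 62, first image processing unit 63, second image processing unit 64, storage unit 65, transmission unit 66) might be sketched as follows. This is a hypothetical illustration only; the class and method names are mine and are not part of the disclosed apparatus.

```python
# Hypothetical sketch of the data path described above; all names are illustrative only.
import numpy as np

class FirstImageProcessing:
    """Stands in for unit 63: produces reference/shadow data by image subtraction."""
    def difference(self, lit: np.ndarray, unlit: np.ndarray) -> np.ndarray:
        # Cancel ambient light by subtracting the non-lighting capture.
        return lit.astype(np.int32) - unlit.astype(np.int32)

class SecondImageProcessing:
    """Stands in for unit 64: derives indication coordinates and touch determination data."""
    def analyze(self, shadow_pair, references):
        raise NotImplementedError  # normalization, binarization, OR, filtering, P1/P2

class CoordinatesTouchTransmitter:
    """Stands in for unit 66: sends the result to a PC, smart terminal or the projector."""
    def send(self, coordinates, touch_state) -> None:
        print(coordinates, touch_state)  # placeholder for wired/wireless transmission
```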
First, when a user extends a hand above the screen 8, image data illustrated in
Setting as illustrated in
At step S101, the image sensing device 1 performs a process of obtaining images in a state where light is not projected by the first to fourth lighting devices 2 to 5. At step S102, the image sensing device 1 performs a process of obtaining images in a state where no indication input by a hand is performed and light is projected by the first to fourth lighting devices 2 to 5. Here, the process of obtaining images is performed in the state where light is projected with the first to fourth lighting patterns. At step S103, the first image processing unit 63 generates four pieces of reference image data by subtracting the image data taken at step S101 from each of the pieces of image data, corresponding to the first to fourth lighting patterns, taken at step S102. This process is an example of a process in a reference image data generation unit.
At step S104, the image sensing device 1 performs a process of obtaining images in a state where light is not projected by the first to fourth lighting devices 2 to 5. At step S105, the image sensing device 1 performs a process of obtaining images in a state where an indication input by the hand is performed and light is projected by the first to fourth lighting devices 2 to 5. Here, the process of obtaining images is performed in the state where light is projected with the first to fourth lighting patterns.
At step S106, the first image processing unit 63 generates four pieces of shadow image data by subtracting the pieces of image data taken at step S104 from each of the pieces of image data corresponding to the first to fourth lighting patterns taken at step S105. This process is an example of a process in a shadow image data generation unit. At step S107, the first image processing unit 63 selects a set of (two) pieces of shadow image data from among the four pieces of shadow image data generated at step S106. At step S108, the second image processing unit 64 performs after-mentioned image processing.
At step S109, the second image processing unit 64 calculates indication coordinates data and touch determination data based on the result of the image processing. At step S110, the control unit 61 evaluates whether the indication coordinates data and touch determination data calculated at step S109 are correct. At step S111, the control unit 61 determines whether the indication coordinates data and the touch determination data are correct or not, according to the correctness evaluation at step S110. If the indication coordinates data and the touch determination data are correct, the process proceeds to step S112. In contrast, if they are not correct, the process returns to step S104. At step S112, the coordinates/touch state transmission unit 66 transmits the indication coordinates data and the touch determination data to a PC or the like. The control unit 61 then determines whether an indication for finishing the process has been accepted from the user. If the indication for finishing the process is accepted, the process is finished. In contrast, if it is not accepted, the process returns to step S104.
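As a rough outline of the flow described above, the calibration steps S101 to S103 and one pass of the detection loop from step S104 could be organized as below. This is only a sketch; the callables passed in (capture functions, pair selection, processing) are hypothetical stand-ins for the units described in the text.

```python
# Hypothetical outline of the flowchart steps; helper callables are supplied by the caller.
from typing import Callable, List, Sequence
import numpy as np

def calibrate(capture_unlit: Callable[[], np.ndarray],
              capture_lit: Callable[[int], np.ndarray],
              patterns: Sequence[int]) -> List[np.ndarray]:
    """S101-S103: one reference image per lighting pattern, with ambient light removed."""
    r0 = capture_unlit().astype(np.int32)                        # S101: all lighting devices off
    return [capture_lit(p).astype(np.int32) - r0 for p in patterns]   # S102 + S103

def detect_once(capture_unlit, capture_lit, patterns, references, select_pair, process):
    """S104-S109: one detection pass yielding (indication coordinates, touch determination)."""
    a0 = capture_unlit().astype(np.int32)                        # S104: all lighting devices off
    shadows = [capture_lit(p).astype(np.int32) - a0 for p in patterns]  # S105 + S106
    pair = select_pair(shadows)                                  # S107: pick two unobstructed images
    return process(pair, references)                             # S108 + S109
```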
Next, the process of selecting a set of pieces of shadow image data at step S107 will be described in detail. In this embodiment, plural pieces of shadow image data corresponding to the first to fourth lighting patterns are generated. An optimal set of pieces of shadow image data, in which the shadow images are not occluded by the hand or arm of the user, is selected from among them.
Here, the first to fourth lighting patterns are illustrated as respective patterns p01 to p04 in
A piece of non-lighting state image data R0 is data taken at step S101 in a state where light is not projected by the first to fourth lighting devices 2 to 5. Pieces of image data R1 to R4 are data taken at step S102 in a state where no indication input by a hand is performed and light is projected by the first to fourth lighting devices 2 to 5 with the first to fourth lighting patterns. At step S103, pieces of reference image data Ref1 to Ref4, from which ambient light is removed, are generated by subtracting the non-lighting state image data R0 from the image data R1 to R4. The processes are represented in the following Expressions 1-1 to 1-4.
Ref1=R1−R0 Expression 1-1
Ref2=R2−R0 Expression 1-2
Ref3=R3−R0 Expression 1-3
Ref4=R4−R0 Expression 1-4
A piece of non-lighting state image data A0 is data taken at step S104 in the state where light is not projected by the first to fourth lighting devices 2 to 5. Pieces of image data A1 to A4 are data taken at step S105 in the state where an indication input by a hand is performed and light is projected by the first to fourth lighting devices 2 to 5 with the first to fourth lighting patterns. At step S106, pieces of shadow image data K1 to K4 in this example, from which ambient light is removed, are generated by subtracting the non-lighting state image data A0 from the image data A1 to A4. The processes are represented in the following Expressions 2-1 to 2-4.
K1=A1−A0 Expression 2-1
K2=A2−A0 Expression 2-2
K3=A3−A0 Expression 2-3
K4=A4−A0 Expression 2-4
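Expressions 1-1 to 1-4 and 2-1 to 2-4 are the same per-pixel operation: the non-lighting capture is subtracted to cancel ambient light. A minimal NumPy sketch (the signed arithmetic and the clamp to zero are my own assumptions, not stated in the text):

```python
import numpy as np

def remove_ambient(lit: np.ndarray, unlit: np.ndarray) -> np.ndarray:
    """Subtract the non-lighting image so only the projected infrared contribution remains."""
    diff = lit.astype(np.int32) - unlit.astype(np.int32)
    return np.clip(diff, 0, None)   # clamp values driven negative by sensor noise

# Reference images (S103):  Ref_i = remove_ambient(R_i, R0)  for i = 1..4  (Expressions 1-1 to 1-4)
# Shadow images (S106):     K_i   = remove_ambient(A_i, A0)  for i = 1..4  (Expressions 2-1 to 2-4)
```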
At step S107, the set of (two) pieces of shadow image data is selected. Here, it is assumed that the pieces of shadow image data K1 and K2 are selected. The processes at and after step S108 are then executed only on the pieces of image data with subscripts 1 and 2. The selection of the set of pieces of shadow image data may instead be performed at the stage where the pieces of image data A1 to A4 are obtained at step S105.
Next, the image processing at step S108 will be described in detail. At step S108, the second image processing unit 64 first normalizes the set of pieces of shadow image data selected at step S107 with the reference image data generated at step S103. In practice, it is convenient to treat shadowed data as having positive values. Accordingly, the data is inverted and normalized to generate pieces of “normalized shadow image data” KN1 and KN2. The processes are represented in the following Expressions 3-1 and 3-2.
KN1=(Ref1−K1)/Ref1 Expression 3-1
KN2=(Ref2−K2)/Ref2 Expression 3-2
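Expressions 3-1 and 3-2 invert the shadow data so that shadowed pixels take positive values and normalize by the reference. A small sketch, assuming NumPy arrays; the guard against division by zero is my own addition:

```python
import numpy as np

def normalize_shadow(ref: np.ndarray, shadow: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """KN = (Ref - K) / Ref: near 0 where fully lit, approaching 1 where fully shadowed."""
    ref_f = ref.astype(np.float64)
    return (ref_f - shadow.astype(np.float64)) / np.maximum(ref_f, eps)

# KN1 = normalize_shadow(Ref1, K1)   (Expression 3-1)
# KN2 = normalize_shadow(Ref2, K2)   (Expression 3-2)
```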
Next, the second image processing unit 64 performs a process of binarizing each of the pieces of normalized shadow image data KN1 and KN2. Binarization of the normalized shadow image data KN1 and KN2 generates pieces of shadow region image data KB1 and KB2 that include only position information and no level information. The processes are represented in the following Expressions 4-1 and 4-2. These processes are an example of a process in a shadow region image data generation unit.
KB1=Slice(KN1) Expression 4-1
KB2=Slice(KN2) Expression 4-2
where the function Slice( ) returns, for each pixel, one if the input image data exceeds a prescribed threshold and zero otherwise.
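A one-line NumPy rendering of Slice( ); the actual threshold value is not given in the text, so it is left as a parameter here:

```python
import numpy as np

def slice_binarize(kn: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Per pixel: 1 where the normalized shadow value exceeds the threshold, 0 otherwise."""
    return (kn > threshold).astype(np.uint8)

# KB1 = slice_binarize(KN1)   (Expression 4-1)
# KB2 = slice_binarize(KN2)   (Expression 4-2)
```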
Next, the second image processing unit 64 applies an OR operation to the two pieces of shadow region image data to generate OR shadow region image data. The process is represented in the following Expression 5. This process is an example of a process in an OR shadow region image data generation unit.
KG=KB1∨KB2 Expression 5
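In the same NumPy notation, Expression 5 is a per-pixel logical OR of the two binary shadow regions:

```python
import numpy as np

def or_shadow_region(kb1: np.ndarray, kb2: np.ndarray) -> np.ndarray:
    """KG = KB1 OR KB2: a pixel belongs to the merged region if either shadow covers it."""
    return np.logical_or(kb1, kb2).astype(np.uint8)

# KG = or_shadow_region(KB1, KB2)   (Expression 5)
```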
Here, referring to
Next, the second image processing unit 64 performs further image processing based on the OR shadow region image data, and determines two positions P1 and P2 in order to calculate the indication coordinates data and the touch determination data.
As illustrated in
The isolation region removal filter 641 removes an isolation region based on the unclear part of the input image data (OR shadow region image data) KG described with reference to
KG5=(KG3)∧(KG4) Expression 6
The isolation region removal filter 641, the concavity removal filter 642 and the convexity removal filter (1) 643 have a function of removing noise for detecting a finger shape. The convexity removal filter (2) 644 has a function of temporarily removing the finger part and outputting the image data KG4 to be input into the finger extraction logic filter 645. Each of the concavity removal filter 642, the convexity removal filter (1) 643 and the convexity removal filter (2) 644 is a region expansion/diminution filter configured by combining the region expansion filter and the region diminution filter with each other.
Next, referring to
In this embodiment, a region expansion filter (1) 6421 and a region diminution filter (1) 6422 in the concavity removal filter 642 are associated with the same number of taps, which is Tp1=10. A region diminution filter (2) 6431 and a region expansion filter (2) 6432 in the convexity removal filter (1) 643 are associated with the same number of taps, which is Tp2=10. Here, Tp1 and Tp2 do not necessarily have the same value.
In the convexity removal filter (2) 644, the image data KG4 is required to be a little larger, in order not to leave noise owing to the masking in the finger extraction logic filter 645. Accordingly, the number of taps of a region diminution filter (3) 6441 is set such that Tp3_1=40, and the number of taps of a region expansion filter (3) 6442 is set such that Tp3_2=50.
The numbers of taps Tp1, Tp2, Tp3_1 and Tp3_2 are optimized for each application in consideration of the size of the shadow to be treated and the pixel resolution of the obtained image data, thereby allowing stable detection of a finger. In a case where more importance is attached to performance than to calculation time and implementation cost, the convexity removal filter and the concavity removal filter may be stacked redundantly in plural stages instead of one stage. Appropriate selection of the numbers of taps then allows better image processing to be achieved.
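The region expansion and region diminution filters described above correspond to binary morphological dilation and erosion, so the concavity removal filter behaves like a closing and the convexity removal filters like openings. The sketch below uses SciPy's binary morphology; interpreting the number of taps as the number of dilation/erosion iterations is my own assumption, and the exact wiring of filters 641 to 645 follows a figure not reproduced here.

```python
import numpy as np
from scipy import ndimage

def concavity_removal(region: np.ndarray, taps: int = 10) -> np.ndarray:
    """Region expansion followed by region diminution (a closing), as in filter 642."""
    grown = ndimage.binary_dilation(region, iterations=taps)   # region expansion filter
    return ndimage.binary_erosion(grown, iterations=taps)      # region diminution filter

def convexity_removal(region: np.ndarray, shrink_taps: int, grow_taps: int) -> np.ndarray:
    """Region diminution followed by region expansion (an opening), as in filters 643/644."""
    shrunk = ndimage.binary_erosion(region, iterations=shrink_taps)
    return ndimage.binary_dilation(shrunk, iterations=grow_taps)

# Per the text: Tp1 = Tp2 = 10 for filters 642 and 643; Tp3_1 = 40 and Tp3_2 = 50 for
# filter 644, whose slightly enlarged output KG4 is used for masking in the finger
# extraction logic filter 645 (Expression 6: KG5 = KG3 AND KG4).
```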
As illustrated in
If d1<d, the coordinates/touch state transmission unit 66 in
If d2<d≦d1, the coordinates/touch state transmission unit 66 transmits the indication coordinates data representing the coordinate value P0 together with touch determination data representing “no touchdown”.
If d≦d2, the coordinates/touch state transmission unit 66 transmits the indication coordinates data representing the coordinate value P0 together with the touch determination data representing “presence of touchdown”.
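The three distance cases can be summarized in a small helper. The thresholds d1 and d2 are not given numerically in the text and are therefore parameters here, and the action for d1 < d is cut off above, so it is left as an unspecified placeholder.

```python
from typing import Optional, Tuple

def classify_touch(d: float, d1: float, d2: float,
                   p0: Tuple[float, float]) -> Optional[Tuple[Tuple[float, float], str]]:
    """Map the distance d between positions P1 and P2 to the data transmitted with P0.

    Assumes d2 < d1, following the ordering of the three cases in the text.
    """
    if d > d1:
        return None                        # behavior for this case is cut off in the text above
    if d > d2:                             # d2 < d <= d1
        return (p0, "no touchdown")
    return (p0, "presence of touchdown")   # d <= d2
```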
Next, a second embodiment of the present invention will be described.
Next, a third embodiment of the present invention will be described.
Next, a fourth embodiment of the present invention will be described.
According to this embodiment, for instance, for the lighting pattern p01 in
At step S201, the image sensing device 28 and the image sensing device 29 execute a process of obtaining images in the state where light is not projected by the first to fourth lighting devices 2 to 5. At step S202, the image sensing device 28 and the image sensing device 29 execute the process of obtaining images in the state where no indication input by a hand is performed and light is projected by the first to fourth lighting devices 2 to 5 with the first to fourth lighting patterns. At step S203, the first image processing unit 63 generates four pieces of reference image data by subtracting the pieces of image data taken at step S201 from each of the pieces of image data, corresponding to the first to fourth lighting patterns, taken at step S202.
At step S204, the image sensing device 28 and the image sensing device 29 execute the process of obtaining images in the state where light is not projected by the first to fourth lighting devices 2 to 5. At step S205, the image sensing device 28 and the image sensing device 29 execute the process of obtaining images in the state where indication input is performed by a hand and light is projected by the first to fourth lighting devices 2 to 5 with the first to fourth lighting patterns. At step S206, the first image processing unit 63 generates eight pieces of shadow image data by subtracting the pieces of image data taken at step S204 from each of the pieces of image data corresponding to the first to fourth lighting patterns taken at step S205. At step S207, the first image processing unit 63 selects a set of (two) pieces of shadow image data from among eight pieces of shadow image data generated at step S206. Thereafter, steps S208 to S213, which are equivalent to steps S108 to S113 in
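With two image sensing devices, the eight pieces of shadow image data of step S206 are simply the per-camera differences against that camera's own non-lighting capture. A brief sketch (the dictionary indexing convention is my own):

```python
import numpy as np

def shadow_images_two_cameras(unlit_by_camera, lit_by_camera_and_pattern):
    """S204-S206: eight shadow images, one per (camera, lighting pattern) combination.

    unlit_by_camera: {camera_id: image} taken with all lighting devices off (S204).
    lit_by_camera_and_pattern: {(camera_id, pattern_id): image} taken per pattern (S205).
    """
    shadows = {}
    for (cam, pattern), lit in lit_by_camera_and_pattern.items():
        unlit = unlit_by_camera[cam]
        shadows[(cam, pattern)] = lit.astype(np.int32) - unlit.astype(np.int32)
    return shadows  # 2 cameras x 4 patterns = 8 pieces of shadow image data
```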
A fifth embodiment of the present invention will be described.
Next, a sixth embodiment of the present invention will be described.
Next, a seventh embodiment of the present invention will be described.
The above-described embodiments can reliably obtain a shadow image even in the presence of ambient light, and correctly detect a touched position and the presence or absence of a touchdown. The above-described embodiments detect a touched position and the presence or absence of a touchdown based on image data taken in a state where the plurality of lighting devices project light onto a screen. Accordingly, detection can be performed correctly regardless of the positions and pointing directions of the user's hand and arm.
Other Embodiments
Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (e.g., computer-readable medium).
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2012-156904, filed Jul. 12, 2012, which is hereby incorporated by reference herein in its entirety.
Foreign Patent Documents
Number | Date | Country
---|---|---
2000-081950 | Mar 2000 | JP
2001-043021 | Feb 2001 | JP
2001-236179 | Aug 2001 | JP
2004-265185 | Sep 2004 | JP
2004-272353 | Sep 2004 | JP
2008-059253 | Mar 2008 | JP
2011-118533 | Jun 2011 | JP