The present application claims priority to and incorporates by reference the entire contents of Japanese Patent Application No. 2014-085080 filed in Japan on Apr. 17, 2014 and Japanese Patent Application No. 2014-263887 filed in Japan on Dec. 26, 2014.
1. Field of the Invention
Embodiments of the present invention relate to an input-operation detection device, an image display apparatus including the input-operation detection device, a projector apparatus, and a projector system including the projector apparatus. More particularly, embodiments of the invention relate to an input-operation detection device configured to detect an input operation performed by a user, an image display apparatus such as a projector apparatus, an interactive whiteboard apparatus, or a digital signage apparatus including the input-operation detection device, and a projector system.
2. Description of the Related Art
In recent years, what are generally referred to as interactive projector apparatuses have become commercially available. Such apparatuses have a function that allows characters, drawings, and the like to be written onto an image projected on a screen and a function that allows operations such as enlarging/reducing the projected image and page-by-page scrolling. These functions are typically implemented by detecting position and motion of input means and sending a result of detection to a computer or the like. The input means can be a finger(s) of an operator that touches the screen, a pen or a pointing tool held by an operator, or the like.
For instance, Japanese Laid-open Patent Application No. 2013-61552 discloses a projector apparatus including projection means that projects a projection image on a projection surface, imaging means that captures an image of an imaging area including the projection surface using a plurality of image sensors, distance obtaining means that obtains distance information indicating a distance to an object in the imaging area based on a plurality of images output from the plurality of image sensors, input-unit detecting means that detects, if the distance information indicates that the object is within a predetermined distance from the projection surface, the object as an input unit that performs an input operation on the projected image, and analyzing means that analyzes the input operation performed on the projected image based on position or motion of the input unit on the projected image.
The projector apparatus disclosed in Japanese Laid-open Patent Application No. 2013-61552 leaves room for improvement in terms of reducing detection error.
It is an object of the present invention to at least partially solve the problems in the conventional technology.
According to the present invention, there is provided an input-operation detection device for detecting an input operation performed by a user on at least a portion of a displayed image, the input-operation detection device comprising: a light emitter configured to emit detection light toward an area where the input operation is to be performed; an imaging unit that includes an imaging optical system, and an image sensor, and that is configured to capture an image of at least one of the displayed image and the input operation and output a result of image capture; and a processing unit configured to detect position, at which the input operation is performed, or motion, by which the input operation is provided, based on the image capture result output from the imaging unit, a position where the optical axis of the imaging optical system intersects a display surface of the displayed image and center of the displayed image being on the same side relative to a position at which the imaging unit is mounted.
The present invention also provides an image display apparatus comprising the above-described input-operation detection device.
The present invention also provides a projector apparatus configured to operate in accordance with an input operation performed by a user on at least a portion of a projected image, the projector apparatus comprising: a light emitter configured to emit detection light toward an area where the input operation is to be performed; an imaging unit that includes an imaging optical system and an image sensor, and that is configured to capture an image of at least one of the projected image and the input operation and output a result of image capture; and a processing unit configured to detect position, at which the input operation is performed, or motion, by which the input operation is provided, based on the image capture result output from the imaging unit, a position where the optical axis of the imaging optical system intersects a projection surface of the projected image and center of the projected image being on the same side relative to a position at which the imaging unit is mounted.
The present invention also provides a projector system comprising: the above-described projector apparatus; and a control device configured to perform image control based on the position, at which the input operation is performed, or the motion, by which the input operation is provided, obtained by the projector apparatus.
The above and other objects, features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
In recent years, projector apparatuses have become commercially available that have a function that allows characters, drawings, and the like to be written onto an image projected on a screen and a function that allows operations such as enlarging/reducing the projected image and page-by-page scrolling. These functions are typically implemented by detecting position and motion of a finger(s) of an operator that touches the screen, a pen or a pointing tool held by an operator, or the like, and sending a result of detection to a computer. Hereinafter, “a finger(s) of an operator that touches a screen, a pen or a pointing tool held by an operator, or the like” is referred to as “input means”.
As a method for detecting position and motion of input means, a method using a camera is known. For example, in a known method, position and motion of input means are detected by irradiating the entire image-projection area of a screen with laser light and capturing, with a camera, light scattered by the input means. However, this method disadvantageously requires that the laser light emitted from a laser light source travel parallel to, and in close proximity to, the screen, which makes it considerably difficult to arrange the laser light source at an appropriate position. Furthermore, this method is inapplicable to a curved screen, which is disadvantageous from the viewpoint of practical use.
As a technique for overcoming these disadvantages, a method of three-dimensionally detecting position and motion of input means using two cameras is proposed. An example of this method is known from Japanese Laid-open Patent Application No. 2013-61552.
However, a specific example of appropriate layout of the two cameras relative to a projected image is not disclosed in conventional methods using two cameras. Unless the two cameras are arranged at appropriate positions, an optical system having an angle of view wider than necessary is required. As a result, detection becomes susceptible to aberration, which can lead to an increase in detection error.
An embodiment of the present invention is described below with reference to
The projector system 100 includes a projector apparatus 10 and an image management apparatus 30. The projector apparatus 10 is an example of an image display apparatus including an input-operation detection device. An operator (user) performs an input operation on an image (hereinafter, sometimes referred to as “projected image”) projected on a projection surface of a screen 300 by touching either the projection surface or a position near the projection surface with input means such as a finger(s), pen, or a pointing tool.
The projector apparatus 10 and the image management apparatus 30 are placed on a desk, a table, or a dedicated pedestal (hereinafter, “stage 400”). Hereinafter, it is assumed that, in the three-dimensional Cartesian coordinate system, the direction perpendicular to a mount surface of the stage 400 where the apparatuses are placed is the Z-axis direction. It is assumed that the screen 300 is placed on the positive Y side of the projector apparatus 10. The projection surface is a surface of the screen 300 on the negative Y side. Alternatively, any of various surfaces, such as a board surface of a whiteboard or a wall surface, can be used as the projection surface.
The image management apparatus 30 holds a plurality of pieces of image data and sends image information concerning an image to be projected (hereinafter, sometimes referred to as “projection image information”) and the like to the projector apparatus 10 in accordance with an instruction from an operator. Communication between the image management apparatus 30 and the projector apparatus 10 may be either wired communication via a cable such as a USB (universal serial bus) cable or wireless communication. As the image management apparatus 30, a personal computer (PC) on which predetermined program instructions are installed may be used.
When the image management apparatus 30 has an interface for a removable recording medium such as a USB memory or a secure digital (SD) card, an image stored in the recording medium can be used as a projection image.
The projector apparatus 10 is what is generally referred to as an interactive projector apparatus and may include a projection unit 11, a rangefinder unit 13, and a processing unit 15 as illustrated in
In the projector apparatus 10 according to the embodiment, an input-operation detection device according to an aspect of the present invention is made up of the rangefinder unit 13 and the processing unit 15.
The projection unit 11 includes, as do conventional projector apparatuses, a light source, a color filter, and various optical devices and is controlled by the processing unit 15.
The processing unit 15 mutually communicates with the image management apparatus 30 and, upon receiving projection image information, performs predetermined image processing on the projection image information and performs projection on the screen 300 using the projection unit 11.
The rangefinder unit 13 may include a light emitter 131, an imaging unit 132, and a computing unit 133 as illustrated in
The light emitter 131 includes a light source that emits near-infrared light and emits light (detection light) toward a projected image. The processing unit 15 turns on and off the light source. As the light source, an LED (light-emitting diode), a semiconductor laser (LD (laser diode)), or the like can be used. The light emitter 131 may include an optical device and/or a filter for adjusting the light emitted from the light source. When including an optical device and/or a filter, the light emitter 131 can adjust an emission direction (angle) of the detection light or emit structured light (see
The imaging unit 132 may include an image sensor 132a and an imaging optical system 132b as illustrated in
In the embodiment, the image-capture target of the imaging unit 132 can be the projection surface where no projection image is projected, a projected image projected on the projection surface, or the input means and a projected image.
The imaging optical system 132b is what is generally referred to as a coaxial optical system and has a defined optical axis. Hereinafter, the optical axis of the imaging optical system 132b may be referred to as the “optical axis of the rangefinder unit 13” for convenience. In the embodiment, the direction parallel to the optical axis of the rangefinder unit 13 is referred to as “a-axis direction”; the direction perpendicular to both the a-axis direction and the X-axis direction is referred to as “b-axis direction”. The imaging optical system 132b is configured to have an angle of view that allows capturing an image of the entire area of the projected image.
Referring back to
The computing unit 133 calculates distance information indicating the distance to the image-capture target based on information about when light is emitted by the light emitter 131 and information about when the image sensor 132a has captured reflected light, thereby obtaining three-dimensional information about the captured image, that is, a depth map. The center of the thus-obtained depth map lies on the optical axis of the rangefinder unit 13.
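For illustration only, the following is a minimal sketch, in Python, of a generic time-of-flight relation consistent with the above description, in which distance is derived from the emission time and the reception time of the detection light; the actual calculation performed by the computing unit 133 may differ (for example, when intensity-modulated light is used as in a modification described later).

    SPEED_OF_LIGHT_MM_PER_NS = 299.792458

    def distance_mm(emit_time_ns, receive_time_ns):
        # Half the round-trip path length covered at the speed of light.
        return 0.5 * SPEED_OF_LIGHT_MM_PER_NS * (receive_time_ns - emit_time_ns)

    # Example: a round trip of about 2.67 ns corresponds to roughly 400 mm,
    # the standoff distance used in the specific examples below.
    print(distance_mm(0.0, 2.67))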
The computing unit 133 obtains depth maps of the image-capture target at predetermined time intervals (frame rate) and sends the depth maps to the processing unit 15.
The processing unit 15 calculates position or motion of the input means based on the depth maps obtained by the computing unit 133 and calculates input operation information that depends on the position or the motion. Furthermore, the processing unit 15 sends the input operation information to the image management apparatus 30.
Upon receiving the input operation information from the processing unit 15, the image management apparatus 30 performs image control in accordance with the input operation information. Hence, the input operation information is incorporated in the projected image.
A process, performed by the processing unit 15, of calculating input operation information (hereinafter, sometimes referred to as the “input-operation-information detection process”) is described below with reference to the flowchart of
A depth map is obtained in a condition where the input means is not present and held as a reference depth map in a memory (not shown) of the processing unit 15 in advance. It is assumed that the input means is a finger of an operator.
Whether or not a new depth map has been sent from the computing unit 133 is determined at the first step (S401). If a new depth map has not been sent from the computing unit 133 (NO at step S401), processing waits until a new depth map is sent. On the other hand, if a new depth map has been sent from the computing unit 133 (YES at step S401), processing proceeds to S403.
The difference between the depth map sent from the computing unit 133 and the reference depth map is calculated at step S403. Hereinafter, this difference may be referred to as the “depth difference map”.
Whether or not the input means is present is determined based on the depth difference map at the next step (step S405). If no value in the depth difference map exceeds a predetermined threshold, it is determined that “the input means is not present”, and processing goes back to step S401. On the other hand, if the depth difference map contains a value higher than the predetermined threshold, it is determined that “the input means is present”, and processing proceeds to step S407.
A shape of the finger is extracted based on the depth difference map at step S407.
Position of a fingertip of the finger is estimated from the extracted shape at the next step (step S409).
At the next step (step S411), the distance (hereinafter, sometimes referred to as “differential distance”) of the fingertip from the projection surface in the Y-axis direction is calculated, and whether or not the fingertip is in contact with or near the projection surface is determined. If the differential distance is equal to or smaller than a preset value (e.g., 3 mm (millimeters)) (YES at step S411), processing proceeds to step S413.
The input operation information is calculated based on the position of the fingertip at step S413. The input operation information may be, for example, an input operation of clicking an icon as instructed by a projected image projected at the position of the fingertip or an input operation of drawing a letter or a line on a projected image during a period when the fingertip is moving.
The thus-obtained input operation information is sent to the image management apparatus 30 at the next step (step S415). Upon receiving the input operation information, the image management apparatus 30 performs image control in accordance with the input operation information. Put another way, the input operation information is incorporated in the projected image. Processing then goes back to step S401.
If the differential distance is greater than the preset value (e.g., 3 mm) (NO at step S411), processing goes back to step S401.
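For illustration only, the following is a minimal sketch, in Python, of one pass of the above process for a single depth map. The 3 mm contact value corresponds to the preset value named at step S411; the presence threshold, the noise floor, and the fingertip heuristic are assumptions introduced for the sketch and are not specified in the embodiment.

    import numpy as np

    CONTACT_THRESHOLD_MM = 3.0     # preset value used at step S411
    PRESENCE_THRESHOLD_MM = 10.0   # assumed threshold for step S405
    NOISE_FLOOR_MM = 1.0           # assumed floor separating the finger from noise

    def estimate_fingertip(depth_diff):
        # Hypothetical stand-in for steps S407 and S409: treat pixels whose
        # depth difference exceeds the noise floor as the finger region and
        # take the topmost such pixel as the fingertip.
        finger = np.argwhere(depth_diff > NOISE_FLOOR_MM)
        row, col = finger[finger[:, 0].argmin()]
        return int(row), int(col)

    def process_depth_map(depth_map, reference_depth_map, send_to_image_manager):
        depth_diff = reference_depth_map - depth_map               # S403
        if depth_diff.max() <= PRESENCE_THRESHOLD_MM:              # S405: input means absent
            return
        row, col = estimate_fingertip(depth_diff)                  # S407, S409
        differential_distance = depth_diff[row, col]               # S411 (approximation)
        if differential_distance <= CONTACT_THRESHOLD_MM:          # contact or near contact
            send_to_image_manager({"fingertip_pixel": (row, col)}) # S413, S415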
A first specific example and a comparative example are described below.
A point where a straight line parallel to the Z-axis and extending through the center A of the projected image intersects an end of the projected image on the negative Z side is named as point B. A point where the straight line intersects an end of the projected image on the positive Z side is named as point C. A point where a straight line parallel to the X-axis and extending through the center A of the projected image intersects an end of the projected image on the positive X side is named as point D. A point where a straight line parallel to the Z-axis and extending through the point D intersects the end of the projected image on the negative Z side is named as point E. A point where the straight line intersects the end of the projected image on the positive Z side is named as point F. A point where, in the first specific example, a straight line parallel to the X-axis and extending through the point G where the optical axis of the rangefinder unit 13 intersects the projection surface intersects the end of the projected image on the positive X side is named as point H. The center of the imaging optical system 132b is named as point O.
Specific numerical values for the first specific example and the comparative example are determined under a condition where the distance in the Y-axis direction between the center O of the imaging optical system 132b and the projection surface is 400 mm and a 60-inch projected image (screen aspect ratio: 16:9) is projected onto the projection surface.
Numerical examples of the comparative example are given in
Because the screen size is 60 inches and the screen aspect ratio is 16:9, the projected image is 1,328 mm in the X-axis direction and 747 mm in the Z-axis direction. The Z-coordinate of the center A of the projected image is 518.5 mm. In the first specific example, the position G where the optical axis of the rangefinder unit 13 intersects the projection surface is on the negative Z side of the center A of the projected image.
In the comparative example, the maximum half angle of view of the imaging optical system 132b is ∠AOE, which is 62.9 degrees. In contrast, in the first specific example, the maximum half angle of view of the imaging optical system 132b is ∠GOE, which is 60.2 degrees. Thus, the first specific example can reduce the half angle of view by no less than 2.7 degrees relative to the comparative example. The reduction of 2.7 degrees on the wide-angle side where the half angle of view is greater than 45 degrees (total angle of view: 90 degrees) is considerably advantageous in terms of aberration and manufacturing cost.
Reducing the maximum half angle of view can thus be achieved by configuring the rangefinder unit 13 to have such a mount angle that places the position G where the optical axis of the rangefinder unit 13 intersects the projection surface on the negative Z side of the center A of the projected image.
In the first specific example, the values (absolute values) of ∠GOB and ∠GOC are equal to each other. Under this condition, the larger of the two values is at its minimum. Consequently, the angle of view of the imaging optical system 132b in the Z-axis direction is minimized, which is most favorable in terms of optical accuracy of measurement in the Z-axis direction. In other words, in the comparative example, as shown in
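For illustration only, the following Python sketch reproduces the angles given above, assuming the center O of the imaging optical system 132b is at the coordinate origin and taking the point coordinates (in mm) implied by the stated dimensions; the Z coordinate of the point G is not given explicitly in the text and is derived here from the condition that |∠GOB| and |∠GOC| are equal.

    import math

    def angle_deg(u, v):
        # Angle, in degrees, between two vectors drawn from the center O.
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.dist(u, (0, 0, 0)) * math.dist(v, (0, 0, 0))
        return math.degrees(math.acos(dot / norm))

    # 60-inch 16:9 image, 400 mm from O; center A at Z = 518.5 mm.
    A = (0.0, 400.0, 518.5)    # center of the projected image
    B = (0.0, 400.0, 145.0)    # edge of the image on the negative Z side, below A
    C = (0.0, 400.0, 892.0)    # edge of the image on the positive Z side, above A
    E = (664.0, 400.0, 145.0)  # corner on the positive X, negative Z side

    # First specific example: G chosen so that |angle GOB| = |angle GOC|.
    theta_g = 0.5 * (math.atan2(B[2], B[1]) + math.atan2(C[2], C[1]))
    G = (0.0, 400.0, 400.0 * math.tan(theta_g))

    print(round(angle_deg(A, E), 1))  # ~62.9 degrees (comparative example, angle AOE)
    print(round(angle_deg(G, E), 1))  # ~60.2 degrees (first specific example, angle GOE)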
A second specific example is described below with reference to
In the second specific example, the maximum half angle of view of the imaging optical system 132b is ∠GOE, which is 57.5 degrees. Thus, the second specific example can reduce the half angle of view by no less than 5.4 degrees relative to the comparative example. The reduction of 5.4 degrees on the wide-angle side where the half angle of view is greater than 45 degrees (total angle of view: 90 degrees) is considerably advantageous in terms of aberration and manufacturing cost. However, under this condition, the maximum angle of view in the Z-axis direction is increased.
Relationship between size of an image formed on a light-receiving surface of the image sensor 132a when a projected image is captured by the imaging unit 132 and number of pixels of the image sensor 132a is discussed below. Hereinafter, for convenience, the image formed on the light-receiving surface of the image sensor 132a may be referred to as “captured image”.
When the number of pixels of the rectangular image sensor 132a in the X-axis direction is denoted by M1, and that in the Z-axis direction is denoted by M2, and pixel pitch is denoted by P, the size of the image sensor 132a in the X-axis direction is P×M1, and that in the Z-axis direction is P×M2.
A captured image captured by the image sensor 132a should desirably fit within the image sensor 132a so that the image sensor 132a can capture the entire projected image. Put another way, the image sensor 132a should desirably be equal to or larger than the captured image in size as illustrated in
To implement this condition, Equations (1) and (2) should desirably be satisfied.
P×M1≧f×L1 (1)
P×M2≧f×L2 (2)
Equations (1) and (2) can be satisfied by reducing the focal length of the imaging optical system 132b, by increasing the pixel pitch, or by increasing the number of pixels. However, reducing the focal length of the imaging optical system 132b reduces imaging magnification of the projected image, which results in a decrease in spatial resolution of the captured image. Increasing the pixel pitch reduces the spatial resolution of the captured image. Increasing the number of pixels leads to a considerable increase in cost of the image sensor 132a.
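For illustration only, the following Python sketch expresses the condition of Equations (1) and (2) as a simple check, assuming that L1 and L2 denote the size of the captured image per unit focal length in the X-axis direction and in the Z-axis direction, respectively, as the specific examples below imply; the pixel pitch and focal length used in the example call are assumed values, not values from the embodiment.

    def fits_sensor(P, f, M1, M2, L1, L2):
        # Equations (1) and (2): the captured image, of size f*L1 by f*L2,
        # must fit within the sensor, of size P*M1 by P*M2.
        return P * M1 >= f * L1 and P * M2 >= f * L2

    # Example with assumed values: a 640 x 480 sensor with a 6 um pixel pitch
    # and a 1.1 mm focal length satisfies both conditions for L1 = 3.39 and
    # L2 = 0.85 (the values used in the third specific example below).
    print(fits_sensor(P=0.006, f=1.1, M1=640, M2=480, L1=3.39, L2=0.85))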
Described below are approaches for reducing the difference in size between the image sensor 132a and the captured image so that the imaging unit 132 can capture the projected image without causing a decrease in the spatial resolution of the captured image or a considerable increase in cost. Hereinafter, when it is unnecessary to distinguish between L1 and L2, or when either may apply, L1 or L2 may be denoted simply by L. Likewise, when it is unnecessary to distinguish between M1 and M2, or when either may apply, M1 or M2 may be denoted simply by M.
Approach A makes the size of the image sensor 132a in the X-axis direction and that of the captured image equal to each other. More specifically, the approach A is implemented by configuring the image sensor 132a and the captured image to satisfy Equation (3) below. Equation (4) below derives from Equation (3).
P×M1=f×L1 (3)
P/f=L1/M1 (4)
To implement this condition, Equation (2) given above should desirably be satisfied in the Z-axis direction. Accordingly, the value of P/f given by Equation (4) should desirably satisfy Equation (5) below as well.
P/f≧L2/M2 (5)
In this way, the approach A optimizes the relationship between the size of the captured image and the number of pixels of the image sensor 132a in the X-axis direction.
Approach B makes the size of the image sensor 132a in the Z-axis direction and that of the captured image equal to each other. More specifically, the approach B is implemented by configuring the image sensor 132a and the captured image to satisfy Equation (6) below. Equation (7) below derives from Equation (6).
P×M2=f×L2 (6)
P/f=L2/M2 (7)
To implement this condition, Equation (1) given above should desirably be satisfied in the X-axis direction. Accordingly, the value of P/f given by Equation (7) should desirably satisfy Equation (8) below as well.
P/f≧L1/M1 (8)
In this way, the approach B optimizes the relationship between the size of the captured image and the number of pixels of the image sensor 132a in the Z-axis direction.
Approach C makes the size of the image sensor 132a equal to that of the captured image in both the X-axis direction and the Z-axis direction. More specifically, the approach C is implemented by configuring the image sensor 132a and the captured image to satisfy Equations (4) and (7) given above. Equation (9) below derives from Equations (4) and (7).
L1/M1=L2/M2 (9)
In this way, the approach C optimizes the relationship between the size of the captured image and the number of pixels of the image sensor 132a in the X-axis direction and in the Z-axis direction.
Meanwhile, the image sensor 132a can be used in an orientation of being rotated 90 degrees about the a-axis. Under this condition, Equations (10) and (11) should desirably be satisfied.
P×M1≧f×L2 (10)
P×M2≧f×L1 (11)
Approaches for reducing the difference in size between the image sensor 132a and the captured image when the image sensor 132a is used in the orientation of being rotated 90 degrees are discussed below.
Approach D makes the size of the side of the image sensor 132a having M2 pixels equal to the size of the captured image in the X-axis direction. More specifically, the approach D is implemented by configuring the image sensor 132a and the captured image to satisfy Equation (12) below. Equation (13) below derives from Equation (12).
P×M2=f×L1 (12)
P/f=L1/M2 (13)
To implement this condition, Equation (10) given above should desirably be satisfied. Accordingly, the value of P/f given by Equation (13) should desirably satisfy Equation (14) below as well.
P/f≧L2/M1 (14)
The approach D optimizes the relationship between the size of the captured image in the X-axis direction and the number of pixels M2 of the image sensor 132a when the image sensor 132a is used in the orientation of being rotated 90 degrees.
Approach E makes the size of the side of the image sensor 132a having M1 pixels equal to the size of the captured image in the Z-axis direction. More specifically, the approach E is implemented by configuring the image sensor 132a and the captured image to satisfy Equation (15) below. Equation (16) below derives from Equation (15).
P×M1=f×L2 (15)
P/f=L2/M1 (16)
To implement this condition, Equation (11) given above should desirably be satisfied. Accordingly, the value of P/f given by Equation (16) should desirably satisfy Equation (17) below as well.
P/f≧L1/M2 (17)
The approach E optimizes the relationship between the size of the captured image in the Z-axis direction and the number of pixels M1 of the image sensor 132a when the image sensor 132a is used in the orientation of being rotated 90 degrees.
Approach F is implemented by configuring the image sensor 132a and the captured image to satisfy Equations (13) and (15) given above. Equation (18) below derives from Equations (13) and (15).
L1/M2=L2/M1 (18)
In this way, the approach F optimizes the relationship between the size of the captured image and the number of pixels of the image sensor 132a in the X-axis direction and in the Z-axis direction when the image sensor 132a is used in the orientation of being rotated 90 degrees.
Some of the six approaches are specifically described below.
Prior to the specific description of the approaches, a third specific example is defined. It is assumed that the image sensor 132a has the VGA (registered trademark) (video graphics array) resolution (640×480), where M1=640 and M2=480. The rangefinder unit 13 is configured as in the first specific example. Specific numerical examples of coordinates of the points a to h and specific numerical examples of L1 and L2 are given in
For example, when the approach A is used, the value of P/f is obtained using Equation (4) as: P/f=3.39/640=0.00529. Because the value of L2/M2 is obtained as: L2/M2=0.85/480=0.00177, Equation (5) given above is satisfied. Under this condition, the image sensor 132a can receive light from the entire captured image efficiently as illustrated in
For another example, when the approach B is used, the value of P/f is obtained using Equation (7) as: P/f=0.85/480=0.00177. Because the value of L1/M1 is obtained as: L1/M1=3.39/640=0.00529, Equation (8) given above is not satisfied. Under this condition, the image sensor 132a cannot receive light from the entire captured image as illustrated in
For another example, when the approach D is used, the value of P/f is obtained using Equation (13) as: P/f=3.39/480=0.00706. Because the value of L2/M1 is obtained as: L2/M1=0.85/640=0.00132, Equation (14) given above is satisfied. Under this condition, if the image sensor 132a is used in the orientation of being rotated 90 degrees, the image sensor 132a can receive light from the entire captured image efficiently as illustrated in
For another example, when the approach E is used, the value of P/f is obtained using Equation (16) as: P/f=0.85/640=0.00132. Because the value of L1/M2 is obtained as: L1/M2=3.39/480=0.00706, Equation (17) given above is not satisfied. Under this condition, even if the image sensor 132a is used in the orientation of being rotated 90 degrees, the image sensor 132a cannot receive light from the entire captured image as illustrated in
Thus, when the approach A or the approach D is used in the third specific example, the imaging unit 132 can capture the projected image without causing a decrease in the spatial resolution of the captured image nor causing a considerable increase in cost.
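For illustration only, the following self-contained Python sketch reproduces the checks of the third specific example: each approach fixes P/f by one equation and must then satisfy the companion inequality.

    M1, M2 = 640, 480      # VGA resolution
    L1, L2 = 3.39, 0.85    # values of the third specific example

    # Each entry: (fixed value of P/f, companion bound that P/f must not fall below).
    approaches = {
        "A (Eq. (4), check Eq. (5))":   (L1 / M1, L2 / M2),
        "B (Eq. (7), check Eq. (8))":   (L2 / M2, L1 / M1),
        "D (Eq. (13), check Eq. (14))": (L1 / M2, L2 / M1),  # sensor rotated 90 degrees
        "E (Eq. (16), check Eq. (17))": (L2 / M1, L1 / M2),  # sensor rotated 90 degrees
    }

    for name, (p_over_f, bound) in approaches.items():
        print(name, round(p_over_f, 5), round(bound, 5), p_over_f >= bound)
    # Approaches A and D satisfy their companion conditions; approaches B and E
    # do not, matching the discussion above.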
A fourth specific example is described below. It is assumed that the image sensor 132a has the VGA resolution (640×480), where M1=640 and M2=480. The rangefinder unit 13 is configured as in the second specific example.
For example, when the approach A is used, the value of P/f is obtained using Equation (4) as: P/f=3.13/640=0.00489. Because the value of L2/M2 is obtained as: L2/M2=0.96/480=0.00200, Equation (5) given above is satisfied. Under this condition, the image sensor 132a can receive light from the entire captured image efficiently as illustrated in
For another example, when the approach B is used, the value of P/f is obtained using Equation (7) as: P/f=0.96/480=0.00200. Because the value of L1/M1 is obtained as: L1/M1=3.13/640=0.00489, Equation (8) given above is not satisfied. Under this condition, the image sensor 132a cannot receive light from the entire captured image.
For another example, when the approach D is used, the value of P/f is obtained using Equation (13) as: P/f=3.13/480=0.00652. Because the value of L2/M1 is obtained as: L2/M1=0.96/640=0.0015, Equation (14) given above is satisfied. Under this condition, if the image sensor 132a is used in the orientation of being rotated 90 degrees, the image sensor 132a can receive light from the entire captured image efficiently.
For another example, when the approach E is used, the value of P/f is obtained using Equation (16) as: P/f=0.96/640=0.0015. Because the value of L1/M2 is obtained as: L1/M2=3.13/480=0.00652, Equation (17) given above is not satisfied. Under this condition, even if the image sensor 132a is used in the orientation of being rotated 90 degrees, the image sensor 132a cannot receive light from the entire captured image.
Thus, when the approach A or the approach D is used in the fourth specific example, the imaging unit 132 can capture the projected image without causing a decrease in the spatial resolution of the captured image nor causing a considerable increase in cost.
Hence, as in the third specific example and the fourth specific example, the imaging unit 132 can capture the projected image without causing a decrease in the spatial resolution of the captured image or a considerable increase in cost by matching the size of the image sensor 132a to the size of the captured image in whichever of the X-axis direction and the Z-axis direction has the larger value of L/M.
Note that the approach C or the approach F is applicable when at least one of the number of pixels of the image sensor 132a and the shape of the captured image is adjustable. Under this condition, the captured image can be tightly enclosed within the image sensor 132a in the X-axis direction and in the Z-axis direction as illustrated in
However, in practice, dimensional error is introduced during assembly of the imaging optical system 132b and the image sensor 132a and/or during machining or the like of the imaging optical system 132b. Accordingly, such a design that makes the size of the captured image precisely identical to the size of the image sensor 132a lacks robustness. For this reason, it may be preferable to configure the image sensor 132a to be several percent larger than the captured image to obtain robustness.
Meanwhile, the value of L/M corresponds to a per-pixel size of the captured image or, put another way, resolution in imaging of the projected image. Therefore, the smaller the value of L/M, the more preferable for high-resolution measurement.
For example, when the approach A is used, the value of P/f is obtained as: P/f=L1/M1=0.00529 in the third specific example; the same is obtained as P/f=L1/M1=0.00489 in the fourth specific example. Accordingly, with the number of pixels of the image sensor 132a fixed, the mount angle of the rangefinder unit 13 of the fourth specific example is more preferable than the mount angle of the rangefinder unit 13 of the third specific example.
More specifically, imaging resolution can be increased by configuring the rangefinder unit 13 so that the tilt angle of its optical axis with respect to the Y-axis direction minimizes the value of L/M in whichever of the X-axis direction and the Z-axis direction has the larger value of L/M.
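For illustration only, the following Python sketch applies this selection rule to the mount angles of the third and fourth specific examples, using the larger of L1/M1 and L2/M2 as the limiting per-pixel size.

    M1, M2 = 640, 480
    candidates = {
        "third specific example":  (3.39, 0.85),
        "fourth specific example": (3.13, 0.96),
    }

    for name, (L1, L2) in candidates.items():
        print(name, "limiting L/M:", round(max(L1 / M1, L2 / M2), 5))

    # The fourth specific example gives the smaller limiting value
    # (about 0.00489 versus about 0.00530), so its mount angle is preferable
    # for imaging resolution, as stated above.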
As described above, the projector apparatus 10 according to the embodiment includes the projection unit 11, the rangefinder unit 13, and the processing unit 15.
The projection unit 11 projects an image on the screen 300 in accordance with an instruction from the processing unit 15. The rangefinder unit 13 includes the light emitter 131, the imaging unit 132, and the computing unit 133 and obtains a depth map of a projection area containing the input means. The light emitter 131 emits detection light toward the projected image. The imaging unit 132 includes the imaging optical system 132b and the image sensor 132a and captures at least one of the projected image and the input means. The computing unit 133 receives a result of image capture from the imaging unit 132. The processing unit 15 obtains input operation information indicated by the input means based on the depth map fed from the rangefinder unit 13.
In the Z-axis direction, the rangefinder unit 13 is configured to have a mount angle that places the position where the optical axis of the rangefinder unit 13 intersects the projected image and the rangefinder unit 13 on the same side of the center of the projected image.
Under this condition, the angle of view of the imaging optical system 132b is prevented from becoming wider than required. As a result, an increase in detection error can be prevented or at least reduced. Furthermore, because the angle of view of the imaging optical system 132b can be reduced, a reduction in cost can be achieved.
In the input-operation-information detection process of the embodiment, it is determined that a fingertip is in contact with a projection surface if the differential distance is equal to or smaller than a preset value (e.g., 3 mm). Performing the determination in this manner allows a desired input operation to be performed even if the distance obtained by the rangefinder unit 13 has an error. Furthermore, performing the determination in this manner is practical because the fingertip is determined as being in contact with the projection surface so long as the fingertip is near the projection surface even if the fingertip is not in contact therewith.
The projector system 100 according to the embodiment includes the projector apparatus 10. Accordingly, the projector system 100 can perform a desired image display operation properly.
In the embodiment, the projector apparatus 10 and the image management apparatus 30 may be configured in one piece.
The projector apparatus 10 of the embodiment may be modified, as a first modification, such that the rangefinder unit 13 is externally and removably attached to the casing via a mounting member (not shown) (see
The embodiment described above may be modified such that at least a part of processing performed by the processing unit 15 is performed by the image management apparatus 30. For instance, when the input-operation-information detection process is to be performed by the image management apparatus 30, the depth map obtained by the rangefinder unit 13 may preferably be sent to the image management apparatus 30 via a cable or wirelessly.
The projector apparatus 10 of the embodiment may be modified, as a second modification, such that the projector apparatus 10 includes a plurality of the rangefinder units 13. For instance, in a situation where the angle of view in the X-axis direction is considerably large, a configuration in which a plurality of the rangefinder units 13 each having a less-wide-angle imaging optical system are arranged along the X-axis direction can be less expensive than a configuration in which the single rangefinder unit 13 having a super-wide-angle imaging optical system covers the entire angle of view. In short, the second modification allows providing a projector apparatus having a super-wide angle of view in the X-axis direction less expensively.
The rangefinder unit 13 of the embodiment may be modified, as a first modification, such that the light emitter 131 emits certain structured light as illustrated in
The rangefinder unit 13 of the embodiment may be modified, as a second modification, such that the light emitter 131 emits light whose intensity is modulated at a predetermined frequency as illustrated in
The rangefinder unit 13 of the embodiment may be modified, as a third modification, such that the light emitter 131 emits texture-imparting light as illustrated in
In the embodiment described above, the projector apparatus 10 is used as being placed on the stage 400; however, applicable layout is not limited thereto. For instance, the projector apparatus 10 may be modified, as a third modification, so as to be used as being hung from a ceiling as illustrated in
The input-operation detection device including the rangefinder unit 13 and the processing unit 15 may also be used to implement other image display apparatuses including an input-operation detection device, such as an interactive whiteboard apparatus or a digital signage apparatus. In either case, the input-operation detection device can prevent, or at least reduce, an increase in detection error.
As described above, the input-operation detection device including the rangefinder unit 13 and the processing unit 15 is suitable for an apparatus having an interactive feature or an apparatus to which an interactive feature is desirably added.
According to an aspect of the present invention, an input-operation detection device can prevent, or at least reduce, an increase in detection error.
Although the invention has been described with respect to specific embodiments for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.