BRIEF DESCRIPTION OF THE DRAWINGS
Other objects, advantages, and novel features of the present invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings.
FIG. 1a shows a schematic diagram of a pointer positioning device according to one embodiment of the present invention.
FIG. 1b shows a configuration of the auxiliary points of the pointer positioning device according to the embodiment of the present invention.
FIG. 1c shows another configuration of the auxiliary points of the pointer positioning device according to the embodiment of the present invention.
FIG. 1d shows a further configuration of the auxiliary points of the pointer positioning device according to the embodiment of the present invention.
FIG. 2a shows a flow chart of a pointer positioning method according to one embodiment of the present invention.
FIG. 2b shows a flow chart of positioning an aiming point according to the first embodiment of the present invention, wherein the pointer positioning is based on absolute coordinates.
FIG. 2c shows part of the flow chart of the pointer positioning method based on absolute coordinates according to the first embodiment of the present invention shown in FIG. 2b.
FIG. 2d shows another part of the flow chart of the pointer positioning method based on absolute coordinates according to the first embodiment of the present invention shown in FIG. 2b.
FIG. 2e shows a further part of the flow chart of the pointer positioning method based on absolute coordinates according to the first embodiment of the present invention shown in FIG. 2b.
FIG. 3a shows a flow chart of positioning an aiming point according to the second embodiment of the present invention, wherein the pointer positioning is based on relative coordinates.
FIG. 3b shows part of the flow chart of the pointer positioning method based on relative coordinates according to the second embodiment of the present invention shown in FIG. 3a.
FIG. 3c shows another part of the flow chart of the pointer positioning method based on relative coordinates according to the second embodiment of the present invention shown in FIG. 3a.
FIG. 4 shows a schematic diagram of a method to obtain the correction vector used in the pointer positioning device and method according to the embodiments of the present invention.
FIG. 5 shows a schematic diagram of a method to obtain the reference distance information used in the pointer positioning device and method according to the embodiments of the present invention.
FIG. 6 shows a schematic diagram of the images of the reference points formed by the pointer positioning device and method according to the embodiments of the present invention while respectively aiming at four corners of the image display.
FIG. 7 shows a schematic diagram of the rotating angle compensation used in the pointer positioning device and method according to the embodiments of the present invention.
FIG. 8 shows a schematic diagram of the projective transformation used in the pointer positioning device and method according to the embodiments of the present invention.
FIG. 9 shows a schematic diagram of the sensitivity adjustment by means of a scale parameter used in the pointer positioning device and method according to the embodiments of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
First, it should be noted that in the following description of the present invention, similar elements are designated by the same reference numerals.
Referring to FIG. 1a, it shows a pointer positioning device 10 according to one embodiment of the present invention, which can be applied to position an aiming point on an image display, e.g. a projection device, a display of a game machine system or a display of a computer system. The image display has a display screen 90 for displaying images, e.g. the display screen 90 may be a part of or the whole image display area of a projection screen, a display screen of a game machine system or a display screen of a computer system. “A”, “B”, “C” and “D” are four points on the display screen 90 or outside the display screen 90, e.g. the four corners of the display screen 90 as shown in FIG. 1a, and the four points form a quadrangle.
The pointer positioning device 10 includes two auxiliary points 111 and 112, an image sensor 12 and an optical filter 13. The auxiliary points 111 and 112 are light sources of a predetermined spectrum, such as IR (infrared) light sources, and they may be emitting light sources or non-emitting light sources. If the auxiliary points 111 and 112 are emitting light sources, they generate the predetermined spectrum themselves, e.g. IR LEDs (light emitting diodes) for generating the IR spectrum; if the auxiliary points 111 and 112 are non-emitting light sources, they reflect the predetermined spectrum, e.g. IR mirrors for reflecting the IR spectrum. If the auxiliary points 111 and 112 are non-emitting light sources, the pointer positioning device 10 preferably further includes a light source 123 for generating the predetermined spectrum to be reflected by the non-emitting light sources (auxiliary points 111 and 112). The light source 123 can be fixed on the image sensor 12, or it can be screwed onto or otherwise integrated on the image sensor 12 before or during operation. The light source 123 can also be disposed at the periphery of the image sensor 12 without being integrated thereon, according to practical requirements. In addition, in other embodiments, ambient light may also be utilized as a light source to provide the predetermined spectrum to be reflected by the non-emitting light sources (auxiliary points 111 and 112).
It should be noted that the photographing distance and the rotating angle of the image sensor 12, e.g. rotation along the arrow shown in FIG. 1, may affect the positions of the detected images on the sensing array of the image sensor 12. In order to increase the accuracy of pointer positioning, two auxiliary points are utilized in the embodiment of the present invention as an example for illustrating the procedure of positioning the aiming point. However, this is not intended to limit the present invention; in practical use, only one auxiliary point may be utilized for assisting pointer positioning. It should also be noted that the sizes of the auxiliary points 111 and 112 may be the same or different, and the detailed reason will be described hereinafter.
Referring to FIGS. 1a to 1d, although the auxiliary points 111 and 112 can be disposed at any position surrounding the image display, they are preferably disposed in the configurations shown in FIGS. 1a to 1d. This is because the images that need to be detected by the image sensor 12 during photographing are the signals generated by the auxiliary points 111 and 112 rather than the whole display area of the display screen 90. If the auxiliary points 111 and 112 are disposed in the configurations shown in FIGS. 1a to 1d, the area that needs to be detected by the image sensor 12 is minimized, and the viewing angle of the image sensor 12 can thereby be decreased. In addition, the auxiliary points 111 and 112 may be integrated on the image display or manufactured as an individual auxiliary positioning device according to different applications.
The image sensor 12 is operated in front of the display screen 90 and may have an optical axis 80 to project an aiming point 14 on the display screen 90, e.g. a spot projected by a projector, a bullet drop point projected by a light gun or a cursor controlled by a mouse. In another embodiment, the optical axis 80 may be a fictitious axis. The image sensor 12 mainly includes a sensing unit 121 and a processing and storage unit 122, and is utilized for detecting optical image signals containing the images of the auxiliary points 111 and 112. The sensing unit 121 may be a CMOS (complementary metal-oxide semiconductor) image sensor or a CCD (charge-coupled device) image sensor, which converts the detected optical image signals into electrical image signals. The processing and storage unit 122 is electrically coupled to the sensing unit 121. After receiving the electrical image signals, it utilizes the pointer positioning method described in the following paragraphs to calculate an initial setup by correcting the aiming point 14 of the image sensor 12 according to the auxiliary points 111 and 112, and to perform the calculation of positioning the aiming point 14. The image sensor 12 of the present invention can be used as a pointer for pointing inside a predetermined range on an image screen, e.g. a pointer of a projection screen system, a light gun of a game machine system or a cursor controller of a computer system.
The optical filter 13 is disposed in front of the image sensor 12 for filtering out spectrum outside the predetermined spectrum generated from the auxiliary points 111 and 112, such that the image sensor 12 can only detect the signals of the predetermined spectrum. In this embodiment, the optical filter 13 is preferably an IR filter (infrared filter). In this manner, since the sensing unit 121 of the image sensor 12 cannot receive signals outside the predetermined spectrum, the information that will be processed by the processing and storage unit 122 only includes the initial setup information of the auxiliary points 111 and 112 and the spatial relationships between the aiming point 14 on the display screen 90 and the auxiliary points 111 and 112. Therefore, the calculating complexity can be significantly decreased and the positioning accuracy can be improved. In addition, the optical filter 13 can be fixed on the image sensor 12 before leaving the factory; it can also be screwed onto or otherwise integrated on the image sensor 12 during operation.
Referring to FIG. 2a, there is disclosed a pointer positioning method according to the embodiment of the present invention. The method can be applied to position an aiming point 14 pointed through the optical axis 80 of the image sensor 12 on a plane. In this embodiment, the plane is formed by the four corners “A”, “B”, “C” and “D” of the display screen 90. The pointer positioning method includes the following steps: disposing two auxiliary points 111 and 112 at the periphery of the display screen 90 for generating a predetermined spectrum (step 170); disposing an optical filter 13 in front of the image sensor 12 for filtering out spectrum outside the predetermined spectrum such that the image sensor 12 can merely receive signals of the predetermined spectrum from the auxiliary points 111 and 112 (step 180); and correcting and positioning the aiming point 14 according to the spatial relationship between the aiming point 14 and the signals of the predetermined spectrum of the auxiliary points 111 and 112 sensed by the image sensor 12 (step 190). The correcting and positioning the aiming point 14 step, i.e. step 190, can be performed by the pointer positioning method based on absolute coordinates according to the first embodiment of the present invention, as shown in FIG. 2b, which comprises the steps of: correcting the optical axis 80 of the image sensor 12 (step 200); correcting the images of the auxiliary points 111 and 112 formed on the image sensor 12 while respectively aiming at the four corners of the display screen 90 (step 300); and positioning an arbitrary aiming point 14 (step 400).
Referring to FIG. 2c, there is disclosed a flowchart of correcting the optical axis 80 of the image sensor 12, i.e. step 200, which comprises the following steps: aiming at a reference point with the image sensor 12 (step 201); photographing a digital image by the image sensor 12 (step 202); identifying positions and sizes of the images of the auxiliary points 111, 112 formed on the digital image (step 203); and obtaining a correction vector of the optical axis 80 and reference distance information (step 204).
Referring to FIGS. 2c, 4 and 5, the details of correcting the optical axis 80 of the image sensor 12, i.e. step 200, are described. It should be noted that step 200 may be a correction procedure performed before the products using the method leave the factory, or it may be a correction procedure performed during operation or setup. First, aim at a reference point, e.g. the auxiliary point 111, through the optical axis 80 of the sensing unit 121 (step 201). The sensing unit 121 then detects an optical image as shown in FIG. 4 (step 202), where the bold cross represents the aiming point 14 of the image sensor 12 and I111, I112 respectively represent the images of the auxiliary points 111 and 112 formed on the sensing unit 121 of the image sensor 12. The optical image is then converted into an electrical image and sent to the processing and storage unit 122, which identifies the positions and sizes of the images I111 and I112 corresponding to the auxiliary points 111 and 112 and stores the identification information. The processing and storage unit 122 can identify whether the aiming point 14 is aiming at the auxiliary point 111 or the auxiliary point 112 through a predetermined principle, e.g. the aiming point 14 is predetermined to aim at the auxiliary point 111 in this embodiment; the processing and storage unit 122 can also identify that the aiming point 14 is aiming at the auxiliary point with the larger area through a predetermined area determining principle, i.e. a principle to determine the aiming point by means of the areas or sizes of the auxiliary points.
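To make the identification step concrete, the following is a minimal sketch (not code from the patent) of how step 203 could be realized: the IR-filtered frame is thresholded, the remaining bright pixels are grouped into blobs, and the centroid and area of each blob give the position and size of the images I111 and I112. The frame format, the threshold value and the use of scipy's labelling routine are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage  # assumed available for connected-component labelling

def find_auxiliary_point_images(frame: np.ndarray, threshold: int = 200):
    """Return [(centroid_x, centroid_y, area), ...] of bright IR blobs."""
    mask = frame >= threshold        # the optical filter leaves only the auxiliary points bright
    labels, n = ndimage.label(mask)  # group connected bright pixels into blobs
    blobs = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        blobs.append((xs.mean(), ys.mean(), xs.size))
    blobs.sort(key=lambda b: b[2], reverse=True)
    return blobs[:2]                 # the two largest blobs correspond to I111 and I112
```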
Referring to FIG. 4 again, it is a digital image detected by the image sensor 12 while aiming at the auxiliary point 111 through the optical axis 80. It can be seen that the aiming point 14 and the image I111 corresponding to the auxiliary point 111 sensed by the image sensor 12 do not overlap with each other; hence, the optical axis 80 has to be corrected such that it can aim at the desired point without displacement, i.e. at the position of I111 in this embodiment. From this digital image, the processing and storage unit 122 calculates a correction vector of the optical axis 80 (step 204), i.e. a vector between the aiming point 14 and the image I111 formed on the image sensor 12, and the correction vector will be stored in a memory (not shown) of the processing and storage unit 122 for use in the following steps.
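As a hedged illustration of step 204, the correction vector can be taken as the offset from the aiming point 14 (the position where the optical axis 80 meets the sensing array) to the identified centroid of I111; the helper below is hypothetical, not part of the patent.

```python
def correction_vector(aiming_point, i111_centroid):
    """Offset from the aiming point 14 to the image I111; stored for later steps."""
    (ax, ay), (bx, by) = aiming_point, i111_centroid
    return (bx - ax, by - ay)
```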
Referring to FIG. 5, reference distance information, including an average coordinate (X, Y) of the images of the auxiliary points 111 and 112 formed on the image sensor 12 while photographing at a predetermined distance, e.g. 3 meters, from the display screen 90 and a distance L between those images, can be stored in the processing and storage unit 122 of the image sensor 12, i.e. step 204, for use in the calculation of pointer positioning. In FIG. 5, I111-ref and I112-ref are the images of the auxiliary points 111 and 112 formed on the image sensor 12 while photographing at the above mentioned distance; I111-any and I112-any are images that need to be corrected, i.e. the images of the auxiliary points 111 and 112 formed on the image sensor 12 while photographing at an arbitrary distance (not the predetermined distance) from the display screen 90 but aiming at the same point. As can be seen, since the detected images I111-any and I112-any are smaller than I111-ref and I112-ref, I111-any and I112-any represent images photographed at a distance larger than the predetermined distance. A distance compensation is then performed by the processing and storage unit 122 according to a proportional relationship between the distance “L” between the images I111-ref and I112-ref and the distance “l” between the images I111-any and I112-any, i.e. the average coordinate (x, y) of the images I111-any and I112-any will be corrected by the equation (x′, y′) = (xL/l, yL/l), where (x′, y′) denotes the average coordinate of the images of the auxiliary points 111 and 112 formed on the image sensor 12 after the distance compensation is performed. If (x′, y′) = (X, Y), the images I111-ref, I112-ref and I111-any, I112-any represent images of the auxiliary points 111 and 112 obtained with the image sensor 12 aiming at the same point on the display screen 90 at different photographing distances. As mentioned above, a correction vector of the optical axis 80, i.e. a vector between the aiming point 14 and the image I111 as shown in FIG. 4, and reference distance information, i.e. the average coordinate (X, Y) of the images of the two auxiliary points 111 and 112 and the distance “L” therebetween, are stored in the processing and storage unit 122 as part of the initial setup of the pointer positioning method after finishing the correcting the optical axis of the image sensor step (step 200).
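The distance compensation can be written directly from the stated relation (x′, y′) = (xL/l, yL/l). The sketch below follows that equation, with the measured spacing l computed from the two detected blob centroids; variable names are illustrative only.

```python
import math

def spacing(p, q):
    """Distance between the images of the two auxiliary points, e.g. L or l."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def distance_compensate(x, y, L_ref, l_measured):
    """(x', y') = (x * L / l, y * L / l) as given in the text."""
    s = L_ref / l_measured
    return x * s, y * s
```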
Referring to FIG. 2d, there is disclosed a flowchart of correcting the images of the auxiliary points 111, 112 formed on the image sensor 12 while respectively aiming at the four corners of the display screen 90, i.e. step 300, which comprises: aiming at the four corners “A”, “B”, “C” and “D” of the display screen 90 with the image sensor 12 (step 301); photographing a digital image by the image sensor 12 (step 302); identifying positions and sizes of the images of the auxiliary points 111, 112 formed on the digital image (step 303); determining whether the images of the auxiliary points 111, 112 formed on the image sensor 12 while respectively aiming at the four corners “A”, “B”, “C” and “D” of the display screen 90 have all been obtained (step 304), and if not, repeating steps 301 to 303, or if yes, proceeding to step 305; compensating the distance and rotating angle of the images of the auxiliary points 111, 112 by using the correction vector of the optical axis 80 and the reference distance information for correction (step 305); calculating coordinates of the four corners “A”, “B”, “C” and “D” of the display screen 90 formed on the digital image (step 306); and calculating a conversion matrix from the coordinates of the four corners “A”, “B”, “C” and “D” of the display screen 90 on the digital image (step 307). It should be noted that step 307 may be omitted according to different applications. If it is performed, the amount of calculation during the correction process, i.e. step 300, is increased, but the calculation of pointer positioning, i.e. step 400, can be simplified and the memory requirement can be decreased.
Referring to FIG. 2d and FIGS. 6 to 8, the details of correcting the images of the auxiliary points 111, 112 formed on the image sensor 12 while respectively aiming at the four corners of the display screen 90, i.e. step 300, are described. It should be noted that step 300 may be a correction procedure performed before the products using the method leave the factory; it may also be performed during setup or operation after the products are sold. Utilize the aiming point 14 to respectively aim at the four corners “A”, “B”, “C” and “D” of the display screen 90 through the optical axis 80, which has been corrected in step 200 (step 301), and photograph a digital image with the image sensor 12 whenever aiming at each of the four corners (step 302). Then identify the positions and sizes of the images of the auxiliary points 111, 112 formed on the digital image (step 303). Since these steps are identical to steps 202 and 203 described above, they will not be described in detail herein. After the images of the auxiliary points 111, 112 formed on the image sensor 12 while respectively aiming at the four corners “A”, “B”, “C” and “D” of the display screen 90 have been obtained, i.e. step 304, a digital image will be formed as shown in FIG. 6, wherein IA111, IB111, IC111 and ID111 denote images of the auxiliary point 111 formed on the image sensor 12 while the aiming point 14 is respectively aimed at the four corners “A”, “B”, “C” and “D” of the display screen 90; IA112, IB112, IC112 and ID112 denote images of the auxiliary point 112 formed on the image sensor 12 while the aiming point 14 is respectively aimed at the four corners “A”, “B”, “C” and “D” of the display screen 90; “A′” is the average coordinate of IA111 and IA112; “B′” is the average coordinate of IB111 and IB112; “C′” is the average coordinate of IC111 and IC112; “D′” is the average coordinate of ID111 and ID112.
Referring to FIG. 7, it shows the method of performing the rotating angle compensation in step 305, where I111-ref and I112-ref are the images of the auxiliary points 111 and 112 formed on the image sensor 12 while photographing at the reference distance, as described in step 204, and they are pre-stored in the memory of the processing and storage unit 122. They are utilized as reference points for calculating the rotating angle of the image sensor 12 during photographing. I111-any and I112-any are images that need to be corrected, e.g. the images of the auxiliary points 111 and 112 detected by the image sensor 12 under an arbitrary rotating angle while aiming at the same point as when the reference image was obtained, i.e. I111-ref and I112-ref. Since a rotating angle deviation θ exists with respect to the reference image, the image will be corrected by the processing and storage unit 122 according to the following equation (1):
wherein θ denotes the rotating angle of the image sensor 12 while photographing with respect to taking the reference image; X and Y denote the average coordinates of the images of the auxiliary points 111 and 112 formed on the digital image before being compensated; X′ and Y′ denote the average coordinates of the images of the auxiliary points 111 and 112 formed on the digital image after being compensated, and the digital image may be an image as shown in FIG. 7. It should be noted that if the auxiliary points 111 and 112 have identical size or area, then when the rotating angle exceeds 180 degrees, the image sensor 12 may not be able to correctly recognize the auxiliary points 111 and 112, thereby causing incorrect rotating angle compensation. In one embodiment, a mercury switch (not shown) may be integrated inside the image sensor 12 so as to solve this problem. In the embodiment of the present invention, the problem is solved by utilizing different auxiliary points 111 and 112, e.g. of different sizes or areas. Therefore, the misrecognition problem caused by the inability to distinguish the auxiliary points 111 and 112 can be solved and the rotating angle compensation can be correctly performed under any rotating angle during photographing.
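Since equation (1) is not reproduced here, the following sketch only illustrates the usual way such a rotating angle compensation can be carried out: θ is estimated from the line joining the detected images I111-any, I112-any versus the line joining the stored reference images I111-ref, I112-ref, and the measured average coordinate is rotated back by θ. The sign convention and the atan2-based angle estimate are assumptions, not the patent's exact formula.

```python
import math

def rotation_angle(p_any, q_any, p_ref, q_ref):
    """Rotating angle of the current image with respect to the reference image."""
    ang_any = math.atan2(q_any[1] - p_any[1], q_any[0] - p_any[0])
    ang_ref = math.atan2(q_ref[1] - p_ref[1], q_ref[0] - p_ref[0])
    return ang_any - ang_ref

def rotate_back(x, y, theta):
    """Rotate the average coordinate (X, Y) by -theta to obtain (X', Y')."""
    c, s = math.cos(theta), math.sin(theta)
    return x * c + y * s, -x * s + y * c
```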
The distance compensation in step 305 is performed based on the reference distance information obtained in step 200 such that the deviation caused by different photographing distances can be compensated. The correction vector of the optical axis 80 should also be added simultaneously so as to obtain the correct coordinates of the four corners “A′”, “B′”, “C′” and “D′” (step 306), which will be stored in the memory of the processing and storage unit 122 of the image sensor 12. In addition, although it is possible to realize the correction of the aiming point 14 with only one auxiliary point, in this embodiment two auxiliary points are utilized to facilitate the distance and rotating angle compensation and to further increase the accuracy of pointer positioning.
Referring to FIG. 8, it shows the method of obtaining the conversion matrix from the coordinates of the four corners of the display screen 90 obtained in step 306. The conversion procedure is also performed by the processing and storage unit 122, wherein A′(xA′, yA′), B′(xB′, yB′), C′(xC′, yC′) and D′(xD′, yD′) represent the average coordinates of the images of the two auxiliary points 111 and 112 formed on the image sensor 12 while the aiming point 14 is respectively aimed at the four corners “A”, “B”, “C” and “D” of the display screen 90. Because of the photographing angle of the image sensor 12 and the distortion of the image during photographing, the quadrangle formed by the images “A′”, “B′”, “C′” and “D′” of the four corners may not be a regular rectangle. By using a conventional projective transformation, a non-regular quadrangle can be converted into a standard unit square, i.e. a square with unit sides, and the conversion matrix will be stored in the processing and storage unit 122 of the image sensor 12 for use in the following steps. Since “A′”, “B′”, “C′” and “D′” are the average coordinates of the images of the four corners of the display screen 90, any point inside the range of the display screen 90 converted through the conversion matrix will appear inside the unit square. As mentioned above, after finishing step 300, correction information (initial setup), including the conversion matrix, distance compensation and rotating angle compensation information, will be stored in the processing and storage unit 122. In this manner, the whole initial setup of the pointer positioning method is finished and it will be utilized in the following steps.
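A conventional projective transformation of this kind can be computed with a standard direct linear transform: build the 3x3 conversion matrix H that maps the measured corners A′, B′, C′, D′ onto the unit square. The sketch below is one common formulation under the assumption that the corners are ordered to correspond to (0,0), (1,0), (1,1), (0,1); it is not code taken from the patent.

```python
import numpy as np

def conversion_matrix(corners):
    """corners: [A', B', C', D'] as (x, y) pairs, mapped to the unit square."""
    dst = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    rows = []
    for (x, y), (u, v) in zip(corners, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)      # homography = null vector of A
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```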
It should be noted that substep 307 of step 300 can be omitted, i.e. the positioning an arbitrary aiming point step (step 400) can still be performed with only the average coordinates of the four corners “A′”, “B′”, “C′” and “D′” of the display screen 90 stored in the processing and storage unit 122 of the image sensor 12. In this manner, the amount of calculation during the correction procedure, i.e. step 300, can be reduced, but the amount of calculation and the memory requirement during the positioning an arbitrary aiming point step, i.e. step 400, are increased.
Referring to FIG. 2e, there is disclosed a flowchart of positioning an arbitrary aiming point, i.e. step 400, which comprises the following steps: aiming at an arbitrary point on the display screen 90 with the image sensor 12 (step 401); photographing a digital image by the image sensor 12 (step 402); identifying positions and sizes of the images of the auxiliary points 111 and 112 formed on the digital image (step 403); compensating the distance and rotating angle of the images of the auxiliary points 111, 112 by using the correction vector of the optical axis 80 and the reference distance information for correction (step 404); and calculating the coordinate of the arbitrary aiming point (step 405).
Referring to FIG. 2e and FIGS. 6 to 8, the details of the positioning an arbitrary aiming point step (step 400) are described hereinafter. Step 400 is performed based on the initial setup information obtained in steps 200 and 300, including the correction vector of the optical axis 80, the reference distance information, the average coordinates of the four corners of the display screen 90 and the conversion matrix. Utilize the aiming point 14 to aim at an arbitrary point on the display screen 90 through the optical axis 80 (step 401), then sequentially perform the photographing a digital image by the image sensor step (step 402), the identifying positions and sizes of the images of the auxiliary points formed on the digital image step (step 403) and the compensating distance and rotating angle of the images of the auxiliary points step (step 404). Since their procedures are identical to substeps 302, 303 and 305 of step 300, they will not be described in detail herein. The coordinate of an arbitrary point calculated by the processing and storage unit 122 is calculated based on the coordinates of the four corners of the display screen 90 or the conversion matrix obtained in step 300, i.e. if the information stored in the memory of the processing and storage unit 122 is the average coordinates of the images of the four corners of the display screen 90, the calculation performed in step 405 utilizes the average coordinates of the four corners of the display screen 90; on the other hand, if the information stored in the memory of the processing and storage unit 122 is the conversion matrix, the calculation performed in step 405 utilizes the conversion matrix. In this manner, the coordinate of an arbitrary aiming point on the display screen 90 can be obtained (step 405), i.e. the coordinate of the image of an arbitrary aiming point is determined by a plane coordinate system formed by the average coordinates of the four corners of the display screen 90 or by the conversion matrix.
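For completeness, a short hypothetical sketch of step 405 for the case where the conversion matrix is stored: the compensated average coordinate of the auxiliary-point images is mapped through H in homogeneous coordinates, yielding the aiming point as a normalized position inside the unit square (and hence on the display screen 90). The homogeneous-coordinate handling is an assumption of this sketch.

```python
import numpy as np

def locate_aiming_point(H, x, y):
    """Map a compensated image coordinate to normalized screen coordinates."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```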
Referring to FIG. 3a, there is disclosed a flowchart of correcting and positioning the aiming point 14 of step 190 according to the second embodiment of the present invention, which utilizes a pointer positioning method based on relative coordinates. The difference between the second embodiment and the first embodiment is that the step of correcting the images of the auxiliary points formed on the image sensor 12 while respectively aiming at the four corners of the display screen 90, i.e. step 300, is not performed in the second embodiment. Instead, a relative reference point on the display screen 90 is defined during the correcting the optical axis of the image sensor step (step 500); the relative reference point may also be selected by a user. The pointer positioning of this embodiment is performed by calculating a spatial relationship between the aiming point 14 aimed through the optical axis 80 and the relative reference point. The pointer positioning method is likewise applied to position an aiming point 14 on the display screen 90, by disposing two auxiliary points 111 and 112 at the periphery of the image display for generating a predetermined spectrum, utilizing the image sensor 12 to receive the signals of the predetermined spectrum generated by the auxiliary points 111, 112, and disposing an optical filter 13 in front of the image sensor 12 so as to filter out the spectrum outside the predetermined spectrum such that the image sensor 12 can merely detect the signals of the predetermined spectrum from the auxiliary points 111 and 112. The pointer positioning method includes the following steps: correcting the optical axis of the image sensor (step 500) and positioning an arbitrary point (step 600). The detailed description will be given hereinafter.
Referring to FIG. 3b and FIGS. 4 to 5, the correcting the optical axis of the image sensor step (step 500) is a correction procedure which can be performed before the products using the method leave the factory; it can also be performed during setup or operation after the products are sold. The correcting the optical axis of the image sensor step comprises: aiming at an arbitrary point on the display screen 90 (step 501); photographing a digital image by the image sensor 12 (step 502); identifying positions and sizes of the images of the auxiliary points formed on the digital image (step 503); and obtaining a correction vector of the optical axis 80 and reference distance information (step 504). Since the operating procedures are similar to those in step 200, they will not be described in detail herein; only the differences between this embodiment and the first embodiment will be illustrated. In step 504, besides obtaining the correction vector of the optical axis 80 and the reference distance information as described in the first embodiment, the point aimed at by the image sensor 12 in step 501 can further be set as a relative reference point, i.e. an origin of the relative coordinate system, and it is utilized as a reference while performing pointer positioning based on relative coordinates (step 504). The position information of the relative reference point is stored in the processing and storage unit 122.
Referring to FIG. 3c and FIGS. 6 to 9, the positioning an arbitrary point step (step 600) comprises the following steps: aiming at an arbitrary point on the display screen 90 (step 602); photographing a digital image by the image sensor 12 (step 603); identifying positions and sizes of the images of the auxiliary points formed on the digital image (step 604); compensating the distance and rotating angle of the images of the auxiliary points 111, 112 by utilizing the correction vector of the optical axis 80 and the reference distance information for correction (step 605); and calculating the position of the aiming point (step 606). Since these procedures are similar to those of step 400 illustrated in the first embodiment, they will not be described in detail, and only the differences therebetween will be described herein. Before performing the positioning an arbitrary point step, in addition to using the relative reference point selected in step 500 as the reference point of the relative coordinate system, a user can define a relative reference point according to his or her usual habit (step 601). For example, in this embodiment a point (x0, y0) is selected as the relative reference point either in step 504 or by the user, as shown in FIG. 9, and the calculation of the movement of the aiming point 14 is based on this reference point. If the relative reference point has been defined in step 504, then step 601 can be omitted. In addition, during the calculating the coordinate of the aiming point step, a scale parameter (Xscale, Yscale) can be inputted to the processing and storage unit 122 for adjusting the moving sensitivity of the average coordinate (x1, y1) of the images of the auxiliary points 111, 112 relative to the relative reference point (x0, y0) on the image sensor 12, and the moving sensitivity can be adjusted according to the following equation (2):
where Xscale and Yscale are adjustable scale parameters which can be adjusted by a user; x0 and y0 are the coordinates of the relative reference point defined by the user or in step 504; x1 and y1 are the average coordinates of the images of the auxiliary points 111 and 112 formed on the image sensor 12 when the aiming point moves; and ΔX and ΔY are the adjusted moving distances. In FIG. 9, D is the moving distance of the current aiming point (x1, y1) with respect to the relative reference point (x0, y0). It can be understood from equation (2) that as Xscale and Yscale become larger, the moving distance of the aiming point has to be relatively larger in order to obtain an identical moving effect.
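Equation (2) itself is not reproduced here; the sketch below encodes one reading consistent with the surrounding description, namely that the displacement from the relative reference point is divided by the scale parameters, so larger Xscale and Yscale require a larger movement of the aiming point for the same cursor motion. Treat the exact form as an assumption.

```python
def relative_move(x1, y1, x0, y0, x_scale=1.0, y_scale=1.0):
    """(dX, dY): adjusted moving distance of (x1, y1) relative to (x0, y0)."""
    return (x1 - x0) / x_scale, (y1 - y0) / y_scale
```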
As shown above, because a conventional pointer positioning device and method has to detect information of the whole display screen, it has difficulty recognizing the image area and requires a video camera having a large viewing angle. As compared to the conventional art, the pointer positioning device and method according to the present invention, as shown in FIGS. 1a, 2a and 3a, utilizes auxiliary points 111, 112 to generate a predetermined spectrum in cooperation with an image sensor 12 integrated with an optical filter 13 to perform pointer positioning. The image sensor 12 can only detect the signals generated from the auxiliary points 111 and 112; therefore, by using the present invention, the viewing angle of the image sensor is decreased, the calculating complexity is simplified, the positioning accuracy is increased and the present invention can be applied to any type of image display.
Although the invention has been explained in relation to its preferred embodiment, it is not intended to limit the invention. It is to be understood that many other possible modifications and variations can be made by those skilled in the art without departing from the spirit and scope of the invention as hereinafter claimed.