The present invention relates generally to three-dimensional (3D) pointing devices, techniques and systems.
A conventional 3D pointing device may generally have a rotational sensor and an accelerometer for generating outputs which a processor may use to determine the movement of the 3D pointing device. However, the costs associated with the rotational sensor and the accelerometer are high, and the calculations involved in determining the movement are complicated.
On the other hand, a system has been disclosed that comprises a hand-held remote and a set of markers disposed on a display device, at which the hand-held remote points in order to control a pointer displayed on the display device. The hand-held remote has an image sensor, an emitter and a processor. The markers may be retro-reflectors, which reflect the light emitted by the emitter in the hand-held remote; the reflected light is captured by the image sensor to form images of the retro-reflectors and the display device, from which the processor determines the position of the hand-held remote relative to the display device. The system has the disadvantage that the hand-held remote may only function with display devices that have a set of markers disposed thereon in a predefined configuration, so that the movement of the hand-held remote may be determined based on the predefined algorithm stored in the hand-held remote.
Examples of the present invention may provide a device that comprises at least one image sensor and a processing unit. The at least one image sensor is configured to consecutively capture a plurality of images at a predetermined rate. The processing unit is configured to identify in each of the plurality of images a first region and a second region, wherein intensities of the first region and the second region are different; determine a displacement of the first region from the first image of the plurality of images to the last image of the plurality of images; and output a first signal comprising the displacement.
Some examples of the present invention may also provide a system that comprises at least one image sensor, a processing unit, and a display device. The at least one image sensor is configured to consecutively capture a plurality of images at a predetermined rate. The processing unit is configured to identify in each of the plurality of images a first region and a second region, wherein intensities of the first region and the second region are different; determine a displacement of the first region from the first image of the plurality of images to the last image of the plurality of images; and output a first signal comprising the displacement. The display device is configured to receive the first signal and to display, on a screen of the display device, a pointer moving in accordance with the displacement in the first signal.
Other objects, advantages and novel features of the present invention will become apparent from the following detailed description of embodiments of the present invention when read in conjunction with the attached drawings.
The foregoing summary as well as the following detailed description of the preferred examples of the present invention will be better understood when read in conjunction with the appended drawings. For the purposes of illustrating the invention, there are shown in the drawings examples which are presently preferred. It is understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown. In the drawings:
Reference will now be made in detail to the present examples of the invention illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like portions. It should be noted that the drawings are made in simplified form and are not drawn to precise scale.
The 3D pointing device 100 may further have a first button 102 for activating and deactivating the image sensor 101, either directly or through the processing unit 104. In accordance with an example of the present invention, a user of the 3D pointing device 100 may press the first button 102 before he begins to motion the 3D pointing device 100 for moving a pointer 201 on a screen of a display device 200 from a first position to a second position, hold the first button 102 while he motions the 3D pointing device 100, and release the first button 102 when the pointer 201 arrives at the second position, where no further movement is desired. Alternatively, a user may first press-and-release the first button 102 to indicate the start of a pointer movement, and, again, press-and-release the first button 102 to indicate the end of the pointer movement. It will be appreciated by those skilled in the art that the method for indicating the activation and deactivation of the image sensor 101 using the first button 102 may be varied, and is not limited to the examples described herein.
The processing unit 104 may receive the images obtained by the image sensor 101, process the images to determine the movement of the pointer 201 indicated by the user using the 3D pointing device 100, and output a signal containing movement information via the communication interface 103. In addition, the distance between the 3D pointing device 100 and an illuminating object 300 may be determined by comparing images obtained by two or more image sensors. The output signal is received by a receiver 110 that is capable of receiving signals from the 3D pointing device 100 and providing the received signal to the display device 200, which is configured to display the pointer movement on the screen in accordance with the received signal.
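By way of a non-limiting illustration, and assuming two horizontally spaced image sensors with a known baseline and focal length (the numerical values and helper names below are illustrative assumptions rather than part of the disclosed design), the distance may be estimated from the horizontal disparity of the illuminating object 300 between the two images:

    import numpy as np

    def brightest_column(image: np.ndarray) -> float:
        """Column index of the brightest pixel, taken as the illuminating object."""
        return float(np.unravel_index(np.argmax(image), image.shape)[1])

    def estimate_distance(left_img: np.ndarray, right_img: np.ndarray,
                          baseline_m: float = 0.05, focal_px: float = 800.0) -> float:
        """Estimate the distance to the illuminating object from the disparity of
        its positions in the two sensor images (simple pinhole model: Z = f * B / d)."""
        disparity = abs(brightest_column(left_img) - brightest_column(right_img))
        if disparity == 0.0:
            return float("inf")  # no measurable disparity: object effectively at infinity
        return baseline_m * focal_px / disparity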
In accordance with an example of the present invention, the 3D pointing device 100 is pointed at the illuminating object 300, which may include but is not limited to a lamp 300, for position reference. An exemplary method for obtaining and processing the images with the 3D pointing device 100 to determine the movements of the 3D pointing device 100 is illustrated in reference to the flow chart in
The method illustrated in
For example, at time t1, the pointer 201 is at a first position on the screen of the display device 200 as shown in block 520. The image sensor 101 obtains a first captured image 510. The region 500a in the first image 510 will be brighter than the rest of the image. The processing unit 104 identifies a dark region 500b which surrounds the bright region 500a and produces a first processed image 510′ from the first captured image 510. The first processed image 510′ comprises at least the identified dark region 500b.
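By way of a non-limiting illustration of this step (the threshold value and the width of the surrounding band are assumptions, not values taken from the present description), the bright region 500a may be found by intensity thresholding, and the dark region 500b may be taken as the band of non-bright pixels immediately around it:

    import numpy as np

    def identify_regions(image: np.ndarray, threshold: int = 200, ring: int = 5):
        """Split a grayscale image into a bright region (cf. 500a) and the dark
        region immediately surrounding it (cf. 500b)."""
        bright = image >= threshold                      # candidate bright region 500a
        grown = bright.copy()
        for _ in range(ring):                            # dilate the bright mask outward
            grown |= (np.roll(grown, 1, axis=0) | np.roll(grown, -1, axis=0) |
                      np.roll(grown, 1, axis=1) | np.roll(grown, -1, axis=1))
        # np.roll wraps at the image borders; acceptable for a sketch only.
        dark = grown & ~bright                           # surrounding dark band 500b
        return bright, dark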
Subsequently, the processing unit 104 tracks the movements of the dark region 500b in the subsequent processed images in order to determine the movements of the 3D pointing device 100. For example, at time t2, the image sensor 101 obtains an Nth captured image 511, and the processing unit 104 obtains an Nth processed image 511′ from the Nth captured image 511. The Nth processed image 511′ also comprises the identified dark region 500b.
By consecutively comparing each of the N processed images obtained between time t1 and time t2 with the processed image that immediately follows the respective one of the N processed images, the processing unit 104 may determine the movements of the 3D pointing device 100 based on the displacement of the identified dark region 500b from the first processed image 510′ to the Nth processed image 511′. The displacement of the identified dark region 500b between each pair of consecutive images is determined by way of digital signal processing.
For example, the first processed image 510′ is compared with the second processed image, the second processed image is compared with the third processed image, and the comparison continues until the (N−1)th processed image is compared with the Nth processed image 511′. Movement information including the distance and direction of the movement may be generated and output via the communication interface 103 to the display device 200. The display device 200 then shows the pointer 201 moving from the position shown in block 520 to the position shown in block 520′.
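One possible, non-limiting realization of this consecutive comparison, assuming each processed image is represented as a Boolean mask of the dark region 500b, is to track the centroid of the dark region frame by frame and accumulate the per-frame shifts into a movement vector whose length and angle constitute the movement information:

    import numpy as np

    def centroid(mask: np.ndarray) -> np.ndarray:
        """Centroid (x, y) of a Boolean region mask."""
        ys, xs = np.nonzero(mask)
        return np.array([xs.mean(), ys.mean()])

    def movement_information(processed_images):
        """Compare each processed image with the one that immediately follows it
        and accumulate the shift of the dark region from the 1st to the Nth image.
        Returns (distance in pixels, direction in degrees)."""
        total = np.zeros(2)
        for prev, curr in zip(processed_images, processed_images[1:]):
            total += centroid(curr) - centroid(prev)
        distance = float(np.hypot(total[0], total[1]))
        direction = float(np.degrees(np.arctan2(total[1], total[0])))
        return distance, direction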
For example, at time t1, the pointer 201 is at a first position on the screen of the display device 200 as shown in block 620. The image sensor 101 obtains a first captured image 610. The processing unit 104 may identify the screen of the display device 200 as the bright region 500a and the border of the display device as the dark region 500b, and produce a first processed image 610′ which comprises at least the identified dark region 500b. The processing unit 104 tracks the displacement of the dark region 500b to determine the movement of the 3D pointing device 100. For example, at time t2, the image sensor 101 obtains an Nth captured image 611, and the processing unit 104 obtains an Nth processed image 611′ from the Nth captured image 611. The Nth processed image 611′ also comprises the identified dark region 500b, which partially surrounds the bright region 500a.
By consecutively comparing each of the N processed images obtained between time t1 and time t2 with the processed image that immediately follows the respective one of the N processed images, the processing unit 104 may determine the movements of the 3D pointing device 100 based on the displacement of the identified dark region 500b from the first processed image 610′ to the Nth processed image 611′.
Movement information including the distance and direction of the movement may be generated and output via the communication interface 103 to the display device 200. The display device 200 then shows the pointer 201 moving from the position shown in block 620 to the position shown in block 621.
A signal having the displacement of the dark region 500b is transmitted to the display device 200 via the communication interface 103 and the receiver 110, and the display device 200 displays the pointer 201 moving in accordance with the displacement in the received signal.
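On the receiving side, applying the displacement carried in the received signal may, by way of a non-limiting sketch, amount to shifting the pointer coordinates and clamping them to the screen bounds (the screen resolution and gain factor below are illustrative assumptions):

    def move_pointer(pointer, displacement, screen=(1920, 1080), gain=1.0):
        """Shift the pointer (x, y) by the displacement in the received signal,
        keeping it within the screen of the display device."""
        x = min(max(pointer[0] + gain * displacement[0], 0), screen[0] - 1)
        y = min(max(pointer[1] + gain * displacement[1], 0), screen[1] - 1)
        return (x, y)

For example, move_pointer((400, 300), (25, -10)) returns (425, 290).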
For example, at time t1, the pointer 201 is at a first position on the screen of the display device 200 as shown in block 640. The image sensor 101 obtains a first captured image 630. The first captured image 630 comprises a plurality of regions 60, 61, 62, 63, 64, 65, and the brightness, luminance and intensity of each region 60, 61, 62, 63, 64, 65 is different from the brightness, luminance and intensity of at least one other region 60, 61, 62, 63, 64, 65. By comparing the brightness, luminance or intensity of each of the plurality of regions 60, 61, 62, 63, 64, 65 with a predetermined threshold value, the processing unit 104 may produce a first processed image 630′, which comprises a plurality of bright regions 500a and a plurality of dark regions 500b.
For example, the processing unit 104 may compare the intensity of each region 60, 61, 62, 63, 64, 65 with a predetermined threshold value. In the example illustrated in
Subsequently, the processing unit 104 tracks the movements of the dark region 500b in the subsequent processed images in order to determine the movements of the 3D pointing device 100. For example, at time t2, the image sensor 101 obtains an Nth captured image 631, which comprises region 60, region 61, region 62, region 63, region 64, region 65 and region 66. The processing unit 104 obtains an Nth processed image 631′ from the Nth captured image 631. The Nth processed image 631′ also comprises a plurality of dark regions 500b and a plurality of bright regions 500a.
By consecutively comparing each of the N processed images obtained between time t1 and time t2 with the processed image that immediately follows the respective one of the N processed images, the processing unit 104 may determine the movements of the 3D pointing device 100 based on the displacement of the plurality of dark regions 500b from the first processed image 630′ to the Nth processed image 631′. Movement information including the distance and direction of the movement may be generated and output via the communication interface 103 to the display device 200. The display device 200 then shows the pointer 201 moving from the position shown in block 640 to the position shown in block 641.
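A non-limiting sketch of this multi-region variant, under the assumptions that the regions 60 to 66 are available as pre-segmented Boolean masks and that a single predetermined threshold is used, classifies each region against the threshold and averages the centroid shifts of the dark regions between consecutive processed images:

    import numpy as np

    def classify_regions(region_masks, image, threshold=128):
        """Label each pre-segmented region as bright or dark by comparing its
        mean intensity with a predetermined threshold value."""
        bright, dark = [], []
        for mask in region_masks:
            (bright if image[mask].mean() >= threshold else dark).append(mask)
        return bright, dark

    def mean_dark_shift(dark_prev, dark_curr):
        """Average centroid shift of the tracked dark regions between two
        consecutive processed images."""
        def centroid(mask):
            ys, xs = np.nonzero(mask)
            return np.array([xs.mean(), ys.mean()])
        shifts = [centroid(b) - centroid(a) for a, b in zip(dark_prev, dark_curr)]
        return np.mean(shifts, axis=0)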
The image capturing device 702 further comprises a communication interface 703, which is capable of communicating with the communication interface 103 of the 3D pointing device 700. When the light-emitting unit 701 is turned on by the first button 102, a signal is sent from the 3D pointing device 700 to the image capturing device 702 via the communication interfaces 103 and 703, so that the image sensor 101 may start to continuously obtain images at a predetermined rate. The image capturing device 702 is set up so that it may capture images of a space, in which a light spot formed by the light-emitting unit 701 moves around when the light-emitting unit 701 is being used for controlling the movement of the pointer 201 displayed in the display device 200. In an example in accordance with the present invention, the image capturing device 702 may be set up to capture images of the entire display device 200 as illustrated in
At time t1, for example, the image capturing device 702 obtains a first captured image 810, and the processing unit 104 obtains a first processed image 810′ from the first captured image 810 by identifying a dark region 800b surrounding the bright region 800a. At time t2, the image capturing device 702 obtains an Nth captured image 811, and the processing unit 104 obtains an Nth processed image 811′ from the Nth captured image 811. Based on the images obtained between time t1 and time t2, the processing unit 104 may determine the movement of the 3D pointing device 700, and generate movement information including the distance and direction of the movement. The movement information may be provided to the display device 200, so that the pointer 201 may be moved from the position shown in block 820 to the position shown in block 821.
The 3D pointing devices 100, 700, 900 in accordance with the present invention provide users the ability to control a pointer on a display device from an arbitrary location. For example, unlike a conventional optical mouse, which must be used on a flat surface, the 3D pointing devices 100, 700, 900 in accordance with the present invention may be motioned in the air. Furthermore, the distance between the 3D pointing devices 100, 900 and the illuminating object 300, and the distance between the 3D pointing devices 700, 900 and the space in which a light spot formed by the light-emitting unit 701 moves around when the 3D pointing device 700, 900 is being used for controlling the movement of the pointer 201 displayed on the display device 200, may range from 0.5 to 8 meters (m). One of ordinary skill in the art would appreciate that the 3D pointing device 100, 900 may, for example, further comprise a lens system for providing variable focal length, so that the range of the distance between the 3D pointing device 100, 900 and the illuminating object 300 may be further expanded or customized.
The 3D pointing devices and systems in accordance with the present invention described in the examples provide versatile uses. For instance, they may be used with any display device that has a communication interface that is compatible with the signal output interface of the receiver or compatible with a communication interface of a computing device. Alternatively, the 3D pointing devices 100, 900 may transmit the signal containing movement information via a Bluetooth® communication interface to a smart TV or computer which comprises a Bluetooth® communication interface, so as to control the movement of the pointer 201 without an external receiver.
In describing representative examples of the present invention, the specification may have presented the method and/or process of operating the present invention as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. As one of ordinary skill in the art would appreciate, other sequences of steps may be possible. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. In addition, the claims directed to the method and/or process of the present invention should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the present invention.
It will be appreciated by those skilled in the art that changes could be made to the examples described above without departing from the broad inventive concept thereof. It is understood, therefore, that this invention is not limited to the particular examples disclosed, but it is intended to cover modifications within the spirit and scope of the present invention as defined by the appended claims.