The disclosure relates in general to a controlling system and a controlling method for virtual display.
Along with the development of interactive technology, various interactive display technologies such as virtual reality (VR), augmented reality (AR), substitutional reality (SR) and mixed reality (MR) have been provided. Interactive display technology has been applied in professional areas such as gaming, virtual shops, virtual offices, and virtual tours. Interactive display technology can also be used in areas such as education to provide a learning experience which is lively and impressive.
Conventional interactive display technology is normally operated through a user interface (UI). However, the user's hand often interferes with object recognition. In conventional interactive display technology, the user cannot control an object located at a remote end. The user normally needs to physically touch the user interface, and therefore has a poor user experience.
Moreover, in interactive display technology, the virtual display should be infinitely extensible. However, an effective cursor controlling method capable of concurrently controlling an object located afar and another object located nearby is still unavailable.
The disclosure is directed to a controlling system and a controlling method for virtual display capable of controlling each object in an infinitely extended virtual display with a cursor by using a visual line tracking technology and a space transformation technology.
According to one embodiment, a controlling system for virtual display is provided. The controlling system for virtual display includes a visual line tracking unit, a space forming unit, a hand information capturing unit, a transforming unit and a controlling unit. The visual line tracking unit is used for tracking a visual line of a user. The space forming unit is used for forming a virtual display space according to the visual line. The hand information capturing unit is used for obtaining a hand location of the user's one hand in a real operation space. The transforming unit is used for transforming the hand location to be a cursor location in the virtual display space. The controlling unit is used for controlling the virtual display according to the cursor location.
According to another embodiment, a controlling method for virtual display is provided. The controlling method for virtual display includes following steps: tracking a visual line of a user; forming a virtual display space according to the visual line; obtaining a hand location of the user's one hand in a real operation space; transforming the hand location to be a cursor location in the virtual display space; and controlling the virtual display according to the cursor location.
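As a minimal sketch of how these steps could be arranged in software (the class and method names below are illustrative assumptions, not part of the disclosed embodiments):

def control_virtual_display(visual_line_tracker, space_former, hand_capturer, transformer, controller):
    # Track the visual line of the user.
    visual_line = visual_line_tracker.track()
    # Form a virtual display space according to the visual line.
    display_space = space_former.form_space(visual_line)
    # Obtain the hand location of the user's one hand in the real operation space.
    hand_location = hand_capturer.capture()
    # Transform the hand location to be a cursor location in the virtual display space.
    cursor_location = transformer.transform(hand_location, display_space)
    # Control the virtual display according to the cursor location.
    controller.control(display_space, cursor_location)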
The above and other aspects of the invention will become better understood with regard to the following detailed description of the preferred but non-limiting embodiment(s). The following description is made with reference to the accompanying drawings.
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawing.
Referring to
To describe it in greater detail, a hand location L0 of the user's one hand in the real operation space S0 will correspond to a cursor location L1 (or a cursor location L2) in the virtual display space S1 (or the virtual display space S2), so that the virtual display is controlled according to the cursor location L1 (or the cursor location L2).
Referring to
The object detection unit 120 is used for detecting the objects O1 and O2. The space forming unit 130 is used for forming the virtual display spaces S1, S2. The hand information capturing unit 140 is used for capturing the hand location L0. The hand information capturing unit 140 can be realized by a combination of a depth image capturing device 141 and a hand recognizer 142.
The transforming unit 150 is used for transforming the hand location L0 to be the cursor locations L1 and L2. The transforming unit 150 can be realized by a combination of a ratio calculator 151 and a mapper 152. The controlling unit 160 is used for controlling the virtual display.
The visual line tracking unit 110, the object detection unit 120, the space forming unit 130, the hand information capturing unit 140, the transforming unit 150, and the controlling unit 160 can each be realized by, for example, a chip, a circuit, firmware, a circuit board, an electronic device, or a recording device storing multiple programming codes. The operations of each element are disclosed below with reference to a flowchart.
Referring to
Next, in step S120, the object detection unit 120 detects an object O1 (or an object O2) at which the user is looking according to the visual line VS1 (or the visual line VS2). In an embodiment, the object detection unit 120 detects at least one contour line of the background in the background image by using an edge detection algorithm, and connects the at least one contour line to form the object O1 (or the object O2). Alternatively, the object detection unit 120 searches a database according to the visual line VS1 (or the visual line VS2) to locate the object O1 (or the object O2) corresponding to the visual line.
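One possible sketch of the contour-based detection described above is given here, using the OpenCV library as an example edge detector; the function name, the thresholds, and the bounding-region test are illustrative assumptions rather than the disclosed implementation:

import cv2

def detect_object_at_gaze(background_image, gaze_point):
    # Detect contour lines of the background by using an edge detection algorithm.
    edges = cv2.Canny(background_image, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Connect the contour lines into candidate objects and keep the one the visual line falls on.
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if x <= gaze_point[0] <= x + w and y <= gaze_point[1] <= y + h:
            return (x, y, w, h)  # bounding region of the object being looked at
    return None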
Then, in step S130, the space forming unit 130 forms a virtual display space S1 (or a virtual display space S2) according to the object O1 (or the object O2) corresponding to the visual line VS1 (or the visual line VS2). The sizes of the virtual display spaces S1 and S2 vary with the objects O1 and O2, but are independent of the distances of the objects O1 and O2. For example, the object O1 is larger, so the virtual display space S1 is also larger; the object O2 is smaller, so the virtual display space S2 is smaller. Besides, the size of the real operation space S0 may be the same as or different from the size of the virtual display space S1 (or the virtual display space S2).
Moreover, the length/width/height ratio of the virtual display spaces S1 and S2 is not fixed but depends on the objects O1 and O2. In an embodiment, the step S120 can be omitted, and the virtual display space S1 (or the virtual display space S2) can be directly formed according to the visual line VS1 (or the visual line VS2).
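A minimal sketch of forming a virtual display space from a detected object, assuming the object is represented by a bounding region (the function name, the corner points, and the depth value are illustrative):

def form_display_space(object_bbox, depth=1.0):
    # The size of the virtual display space follows the size of the object itself,
    # not the distance of the object; a larger object yields a larger display space.
    x, y, width, height = object_bbox
    v1 = (x, y, 0.0)              # reference point of the display space
    v2 = (x + width, y, 0.0)      # end point of the first display axis
    v3 = (x, y + height, 0.0)     # end point of the second display axis
    v4 = (x, y, depth)            # end point of the third display axis
    return v1, v2, v3, v4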
Then, in step S140, the hand information capturing unit 140 obtains a hand location L0 of the user's one hand 700 in the real operation space S0.
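One possible way to obtain the hand location L0 from a depth image is sketched below; the hand recognizer interface and the camera intrinsics are illustrative assumptions, with the depth value back-projected through a standard pinhole camera model:

import numpy as np

def capture_hand_location(depth_image, hand_recognizer, camera_intrinsics):
    # A 2D hand recognizer gives the pixel position of the hand (hypothetical API),
    # and the depth image supplies the remaining coordinate of the real operation space S0.
    u, v = hand_recognizer.locate_hand(depth_image)
    z = float(depth_image[v, u])              # depth at the hand pixel
    fx, fy, cx, cy = camera_intrinsics
    x = (u - cx) * z / fx                     # back-project to 3D camera coordinates
    y = (v - cy) * z / fy
    return np.array([x, y, z])                # hand location L0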
Then, in step S150, the transforming unit 150 transforms the hand location L0 to be a cursor location L1 (or a cursor location L2) in the virtual display space S1 (or the virtual display space S2). Referring to
The virtual display space S1 has a first display axis V1V2, a second display axis V1V3, and a third display axis V1V4. The first display axis V1V2 is a vector formed by point V1 and point V2. The second display axis V1V3 is a vector formed by point V1 and point V3. The third display axis V1V4 is a vector formed by point V1 and point V4. The cursor location vector is a vector formed by point V1 and the cursor location L1.
The angle relationship among the first operation axis, the second operation axis, and the third operation axis may be different from or the same as the angle relationship among the first display axis V1V2, the second display axis V1V3, and the third display axis V1V4. For example, the real operation space S0 may be a Cartesian coordinate system, and the virtual display space S1 may be a non-Cartesian coordinate system (that is, not every angle formed by two axes is a right angle).
In step S150, based on formulas (1) to (3), the ratio calculator 151 calculates a first relative ratio Xrate of the hand location L0 in the first operation axis, a second relative ratio Yrate of the hand location L0 in the second operation axis, and a third relative ratio Zrate of the hand location L0 in the third operation axis. The first hand projection vector is a projection vector of the hand location vector in the first operation axis. The second hand projection vector is a projection vector of the hand location vector in the second operation axis. The third hand projection vector is a projection vector of the hand location vector in the third operation axis.
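Consistently with the above definitions, formulas (1) to (3) can be written as follows, where |·| denotes the length of a vector, Hx, Hy, and Hz denote the first, second, and third hand projection vectors, and Ax, Ay, and Az denote the first, second, and third operation axes (these symbols are illustrative):

Xrate = |Hx| / |Ax|   (1)
Yrate = |Hy| / |Ay|   (2)
Zrate = |Hz| / |Az|   (3)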
Based on formula (4), the mapper 152 calculates a first display coordinate XL1 of the hand location L0 corresponding to the first display axis V1V2 according to the first relative ratio Xrate, a second display coordinate YL1 of the hand location L0 corresponding to the second display axis V1V3 according to the second relative ratio Yrate, and a third display coordinate ZL1 of the hand location L0 corresponding to the third display axis V1V4 according to the third relative ratio Zrate. The point V1 has the first display coordinate XV1, the second display coordinate YV1, and the third display coordinate ZV1.
(XL1, YL1, ZL1) = (XV1, YV1, ZV1) + Xrate*V1V2 + Yrate*V1V3 + Zrate*V1V4   (4)
Thus, the transforming unit 150 can transform the hand location L0 to be the cursor location L1 in the virtual display space S1.
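A compact sketch of this transformation is given below; the function and parameter names are illustrative, and the operation axes are assumed to be given as vectors from a reference point of the real operation space S0 (the axes need not be orthogonal):

import numpy as np

def transform_hand_to_cursor(hand_location, s0_origin, s0_axes, v1, display_axes):
    # Hand location vector in the real operation space S0.
    hand_vector = np.asarray(hand_location, dtype=float) - np.asarray(s0_origin, dtype=float)
    rates = []
    for axis in s0_axes:
        axis = np.asarray(axis, dtype=float)
        # Relative ratio of the hand location along this operation axis (formulas (1) to (3)):
        # length of the hand projection vector divided by the length of the operation axis.
        projection_length = np.dot(hand_vector, axis) / np.linalg.norm(axis)
        rates.append(projection_length / np.linalg.norm(axis))
    # Map the relative ratios onto the display axes of the virtual display space (formula (4)).
    cursor = np.asarray(v1, dtype=float)
    for rate, display_axis in zip(rates, display_axes):
        cursor = cursor + rate * np.asarray(display_axis, dtype=float)
    return cursor  # cursor location in the virtual display space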
Then, in step S160, the controlling unit 160 controls the virtual display according to the cursor location L1 (or the cursor location L2). During the control of the virtual display, the movement of the cursor is adjusted according to the first relative ratio Xrate, the second relative ratio Yrate, and the third relative ratio Zrate. Thus, regardless of whether the objects O1 and O2 are located afar or nearby, the same effect is generated as long as the operations performed in the real operation space S0 are of the same scale. For example, as indicated in
Similarly, during the operation of the virtual display, the movement of the cursor is adjusted according to the first relative ratio Xrate, the second relative ratio Yrate, and the third relative ratio Zrate. Thus, regardless of the sizes of the objects O1 and O2, the same effect is generated as long as the operations performed in the real operation space S0 are of the same scale. As indicated in
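For instance, using the transform_hand_to_cursor sketch above with purely illustrative numbers, the same hand displacement in the real operation space S0 maps to proportionally the same cursor displacement in a large virtual display space and in a small one:

s0_origin = (0.0, 0.0, 0.0)
s0_axes = [(0.4, 0.0, 0.0), (0.0, 0.4, 0.0), (0.0, 0.0, 0.4)]      # a 40 cm operation cube
large_space = [(2.0, 0.0, 0.0), (0.0, 1.5, 0.0), (0.0, 0.0, 1.0)]   # display axes of a large space
small_space = [(0.2, 0.0, 0.0), (0.0, 0.15, 0.0), (0.0, 0.0, 0.1)]  # display axes of a small space

hand = (0.2, 0.1, 0.0)  # hand halfway along the first operation axis, a quarter along the second
transform_hand_to_cursor(hand, s0_origin, s0_axes, (0, 0, 0), large_space)  # about (1.0, 0.375, 0.0)
transform_hand_to_cursor(hand, s0_origin, s0_axes, (0, 0, 0), small_space)  # about (0.1, 0.0375, 0.0)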
Through the above steps, the user can control each object in an infinitely extended virtual display with a cursor by using the visual line tracking technology and the space transformation technology of the interactive display technologies.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary only, with a true scope of the disclosure being indicated by the following claims and their equivalents.