This application represents the second application for a patent directed towards the invention and its subject matter, and claims priority from UK Patent Application Numbers GB1701877.1, filed on 5 Feb. 2017, and GB1718258.5, filed on 3 Nov. 2017.
The present invention relates to navigating a three-dimensional environment, and in particular relates to moving the user's viewpoint in a three-dimensional environment in response to gestures made with an input device.
The computer mouse has revolutionised desktop computing, and the touch screen has more recently revolutionised mobile computing. These two input methods highlight the way that certain devices can transform advanced technologies from being exclusively scientific tools into low cost everyday items that can directly benefit a very large number of people. In spite of diverse research efforts, there is no known universal input device for navigating three-dimensional environments, such as those used for virtual reality, that has had the same enabling effect. Such environments are presented with increasingly high quality due to the continuing decrease in cost of graphics processors in accordance with Moore's Law. Displays more than a meter across are increasingly commonplace consumer products. Virtual environments displayed on them must be navigated using a joystick, or a mouse and keyboard, or any one of several specialised input technologies.
Examples of virtual environments include many kinds of computer games, three-sixty degree videos and photographs, and hybrid systems, such as Google Earth, that combine projections of photography with terrain data to simulate a fly-through. Anyone with a web browser can rotate, zoom and otherwise navigate these virtual environments. In many cases, a keyboard and mouse, or just a keyboard, can be used to rotate and move the user's point-of-view. However, these methods of navigation are very different from the sensation of walking through an environment in the real world. Another kind of virtual environment is a remote environment, where cameras and other sensors supply data to a user's location, such that the user feels as if he or she is actually present in the remote environment. Another kind of virtual environment is the environment of a remotely piloted aircraft, such as a drone. The environment may be presented to the pilot on a display that shows images from a camera on the drone. Alternatively, the pilot flies the drone by looking at it from a distance.
One attempt to present virtual environments more convincingly is to use a stereoscopic headset, replacing most of the user's field of view with a pair of synthetic images, one for each eye. Head movements may be tracked so that the images supplied to each eye are updated as if the user is located in the virtual environment, giving a sense of immersion. Although the sense of immersion can be profound, it is easily broken when moving around in the virtual environment, due to the nature of input devices used to facilitate such movement. Furthermore, a headset isolates the user from their immediate environment, and may be uncomfortable to wear for extended periods of time.
Movement of a user's point of view in a virtual environment is known as locomotion. The problem of locomotion in virtual reality (VR) is widely considered to be a significant obstacle to its adoption. However, more generally, user movement in any kind of three-dimensional environment lacks a universal input device analogous to the mouse or touch screen.
Several solutions to the problem of locomotion in VR have been proposed. For example, the virtual environment can be navigated using room-scale tracking, in which the user walks around a room in the real world, and their location in the virtual world is updated according to their real world location. Room-scale tracking is prohibitive for all but the most dedicated of users, because it requires an entire room to be mostly cleared of obstacles. Furthermore, part of the attraction of virtual environments is that their size is potentially unlimited, and the need to restrict user movement to the area of a room prevents this from being achieved in practice.
Other hardware locomotion solutions include various kinds of joystick input devices, including those present on controllers used with game consoles. Although these are ideal for many kinds of gaming, the resulting way in which the virtual environment is navigated is entirely different from natural movement in the real world. This is because the position of the joystick determines acceleration or velocity, rather than location. If a joystick were to be used to control location, the range of movement would be limited to a very small area.
A further possibility, now widely used in VR gaming, is a software locomotion technique, known as virtual teleportation. In this method of locomotion, the user indicates a distant location, and they are instantly moved to that location, possibly including some kind of animation to show to the user that their location has changed, and in what direction their point of view has been moved. Teleportation greatly reduces the user's sense of immersion; it solves the problem of locomotion by avoiding natural movement entirely.
Another proposed solution is the omnidirectional treadmill, such as the Virtuix Omni™. A treadmill is expensive and large, but it does serve to illustrate the effort that has been applied to solve the problem of locomotion in VR.
In U.S. Pat. No. 6,891,527 B1 a hand-held spherical input device is described. Mouse cursor movements are obtained by tracking the location of a fingertip on a touch-sensitive surface that covers the sphere. However, gestures for navigating a three-dimensional virtual environment are not described. In 2011, a proposal was made for a universal spherical input device, available at http://lauralahti.com/The-Smartball. This hand-held input device is also spherical, and is described as having applications in 3D development and augmented reality, by virtue of the ability to manipulate a virtual object using pinch, pull and grab gestures. However, the use of the device for movement of the user's point of view is not described.
The requirement to wear a VR headset greatly limits the circumstances in which virtual environments can be viewed and navigated. However, a headset does solve the problem of being able to look at the virtual environment from any angle. Clearly it would be preferable to be able to look around just as easily without having to put on a VR headset, and also to move in the virtual environment just as easily as one moves in the real world.
According to an aspect of the present invention, there is provided an apparatus for supplying gestural-data to an external-processing-device thereby allowing the external-processing-device to move a viewpoint in a three-dimensional virtual environment and to render the virtual environment from the viewpoint, implemented as a substantially spherical manually-rotatable input device supported in the hands of a user, comprising a rotation-detector configured to generate gestural-data in response to manual rotation and a wireless transmitter for transmitting the gestural-data to the external-processing-device. Preferably the rotation-detector is an inertial-measurement-unit and the input device further comprises a hand-area-sensor and is configured to generate additional gestural-data in response to measurements made with the hand-area-sensor.
According to another aspect of the present invention, there is provided a method of adjusting the location of a viewpoint in a three-dimensional environment, comprising the steps of generating rotation-data in response to a manual rotation of a substantially spherical hand-supported input device supported in the hands of a user, wirelessly transmitting the rotation-data to an external-processing-device, and moving the location of a viewpoint in the three-dimensional environment in response to the received rotation-data. Preferably the method includes rendering image data from the three-dimensional environment with respect to the viewpoint location and displaying the image data to the user. Preferably, the step of moving the location includes the steps of moving the viewpoint forwards in the virtual environment in response to a pitch-rotation of the input device about an x-axis, translating the viewpoint sideways in the virtual environment in response to a roll-rotation of the input device about a z-axis, and yaw-rotating the viewpoint in the virtual environment in response to a yaw-rotation of the input device about a y-axis.
According to another aspect of the present invention, there is provided a system for presenting immersive images to a user via a display device, in which the user has a viewpoint within a three-dimensional virtual environment, the system comprising a substantially spherical manually-rotatable hand supported input device with a rotation-detector, a device-processor and a wireless transmitter, wherein the device-processor generates gestural-data in response to manual rotation of the input device measured by the rotation-detector, and the transmitter transmits the gestural-data, the system further comprising an external-processing-device, which receives the gestural-data wirelessly, moves the user viewpoint in the three-dimensional virtual environment in response to manual rotation of the input device and renders image-data from the virtual environment with respect to the viewpoint, and a display device, in which the display device presents the image data to a user such that the user experiences locomotion within the virtual environment in response to their rotation of the input device. Preferably the external-processing-device is configured to move the viewpoint forwards in the virtual environment in response to a pitch-rotation of the input device about an x-axis, translate the viewpoint sideways in the virtual environment in response to a roll-rotation of the input device about a z-axis, and yaw-rotate the viewpoint in the virtual environment in response to a yaw-rotation of the input device about a y-axis.
A system for presenting immersive images to a user is shown in
The virtual environment 102 is a simulated three-dimensional environment constructed from data representing various objects, their appearance and physics properties. In an embodiment, the virtual environment 102 is a real environment at a remote location, or a mixture of simulated and real environments. This may include volumetric or visual three-dimensional recordings made at remote locations that the user can navigate at a time of their choosing, as well as real-time data that allows the user 101 to view or interact with remote people and events. In a further embodiment, the virtual environment is a three-sixty degree video or photograph in which the user 101 may adjust their viewpoint 104 by zooming, and/or vertically and/or horizontally panning using the input device 105. When navigating a three-sixty video or photograph, the effect of moving forward or backwards in the virtual environment 102 is achieved by zooming in or out.
In an embodiment, the display 103 is a virtual reality (VR) headset that replaces the user's field of view with stereoscopic images supplied individually to each eye. However, it will be appreciated that an advantage of the system shown in
An external-processing-device 106 receives gestural-data from the input device 105 via a receiver 107, and renders the virtual environment 102. In an embodiment, the virtual environment 102 is rendered and displayed using a laptop computer, and the display 103 is part of the laptop computer. In a further embodiment, the receiver 107 is also part of the laptop computer. In a further embodiment, the receiver is part of a VR headset. In an embodiment, the external-processing-device 106 is part of a VR headset. However, an advantage of the preferred embodiment is that the user 101 feels a sense of immersion without the need for a headset. This is due to the correspondence between gestures made with the input device 105 and resulting adjustments made to the user's viewpoint 104 shown on the display 103.
An SD Card 108 stores instructions for the external-processing-device 106 and the input device 105.
The input device 105 is hand-supported, resulting in a contact-area 109 between the user's hands and the input device 105. The contact-area 109 is the area of the user's hands that imparts a manual rotation to the input device 105. The purpose of the input device's spherical shape is to ensure that it feels substantially the same to the user 101, even after manual rotation. A sphere is the only shape that has this property.
The receiver 107 is oriented with respect to the user's sense of forwards. When the user 101 is viewing the virtual environment 102 on the display 103 it is natural for the user 101 to face the display 103. Thus, the receiver 107 may be aligned with the display 103, by mounting it on the wall in front of the display 103. As a result, the receiver 107 has an orientation with respect to the user 101, when the user 101 is navigating the virtual environment 102.
Components of the external-processing-device 106 shown in
Operation of the external-processing-device 106 detailed in
As a result of the steps shown in
Data in RAM 203 includes gestural-data 407 received from the input device 105. Gestural-data 407 includes hand-area-data 408, which provides an indication of the contact-area 109. Gestural-data 407 further includes rotation-data 409, that describes the orientation of the input device 105 using a quaternion, Q, 410. A quaternion is a vector of four components, defining orientation angles about perpendicular x-, y- and z-axes using three imaginary components i, j and k, plus a real magnitude, w. The quaternion 410 is updated at two hundred times a second, so a manual rotation of the input device 105 results in changing values of the components of the quaternion 410. Gestural-data 407 also includes acceleration-data 411, which has x, y and z components and is used to identify non-rotational gestures made with the input device 105, such as gestures that include tapping on its surface.
Data contents of RAM 203 also include compass-data 412. The compass-data 412 includes a geomagnetic compass bearing, BETA, 413, which defines the forward-facing direction of the user 101 in terms of the Earth's geomagnetic field.
Data in RAM 203 further includes virtual environment data 414. This includes all object data, physics data, bitmaps and so on that are used to define a virtual environment. Virtual environment data 414 also includes location coordinates 415 of the user's viewpoint 104, and viewpoint angles 416. The first of the viewpoint angles 416 is PHI, and describes the rotation of the viewpoint 104 about a vertical axis in the virtual environment 102. The second of the viewpoint angles 416 is THETA, which describes whether the user is looking up or down in the virtual environment 102. THETA defines the rotation of the viewpoint 104 about a horizontal x-axis in the virtual environment 102, that extends through the viewpoint 104, from left to right. Virtual environment data 414 also includes a locomotion-factor, F, 417 and a view-factor, V, 418.
Also in RAM 203 is image data 419, that is generated as the result of rendering the virtual environment data 414. Image data 419, and other data, may be held in memory in the graphics card 205, but is shown in the main memory of
The step 308 of running virtual environment instructions 405 is detailed in
At step 503, the input device driver instructions 403 are executed to obtain new movement and angle data from the gestural-data 407 and compass-data 412. At step 504, virtual environment data 414 is updated, including the coordinates 415 and angles 416 of the viewpoint 104. At step 505, the virtual environment 102 is rendered to generate image data 419.
At step 506, the rendered image data 419 is supplied to the display 103. The receiver 107 is also capable of transmitting data to the input device 105 when necessary. At step 507, haptics commands are transmitted to the input device 105, via the receiver 107. Haptics commands cause the input device 105 to vibrate, providing physical feedback to the user 101.
After completion of step 507, control is directed back to step 501. The steps of
The step 503 of executing input device driver instructions, shown in
The orientation quaternion, Q, 410, is part of the rotation-data 409 received in the gestural-data 407 at step 502. At step 604, the orientation quaternion, Q, 410, is rotated around its vertical axis in response to the compass bearing, BETA, 413. The purpose of this is to interpret user gestures, including forward locomotion gestures, relative to the user's orientation with respect to the display 103. In other words, when the user 101 rolls the input device 105 forwards towards the display 103, the user perceives a forward movement of their viewpoint 104 in the virtual environment 102 as it is shown on the display 103.
At step 605, a previous orientation quaternion, P, is subtracted from Q, 410, to obtain a rotation difference quaternion, R. After R has been calculated, the value of Q is copied into P in preparation for the next loop. A distinction is made between a rotation, which is a circular movement, and an orientation, which can be a static condition. The orientation quaternion, Q, 410, represents the static condition of the input device at the moment in time when its orientation is measured. The rotation quaternion, R, represents the change in orientation that has occurred over the previous five milliseconds.
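For unit quaternions, this "subtraction" amounts to multiplying the current orientation by the inverse of the previous one, which for a unit quaternion is its conjugate. The following sketch is illustrative only and is not the firmware implementation; the function names are hypothetical and the component order (w, x, y, z) is an assumption.

```python
# Minimal sketch: computing the rotation difference R between two unit
# orientation quaternions, assuming components are stored as (w, x, y, z).

def quat_multiply(a, b):
    """Hamilton product of two quaternions a ⊗ b."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def quat_conjugate(q):
    """Conjugate; equal to the inverse for a unit quaternion."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotation_difference(q_current, q_previous):
    """R such that R ⊗ P = Q, i.e. the rotation over the last sample."""
    return quat_multiply(q_current, quat_conjugate(q_previous))

# Example: P is the orientation 5 ms ago, Q is the latest orientation.
P = (1.0, 0.0, 0.0, 0.0)
Q = (0.9998477, 0.0, 0.0174524, 0.0)   # about 2 degrees about the y-axis
R = rotation_difference(Q, P)
P = Q                                   # Q copied into P for the next loop
```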
At step 606, the rotation, R, is converted into changes in pitch, roll, and yaw, represented by DP, DR and DPHI respectively. DP is the change in pitch, which is a forward rotation of the input device 105 about an x-axis with respect to the user's forwards direction. DR is the change in roll, which is a lateral roll of the input device 105 about a forward-facing z-axis with respect to the user's sense of direction. DPHI is the change in yaw, which is a rotation of the input device 105 about a vertical y-axis.
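One conventional way to perform this conversion is the standard quaternion-to-Euler-angle formula, applied to the small rotation difference R. The sketch below is illustrative; the axis assignment (pitch about x, yaw about the vertical y, roll about the forward z) follows the description above, and the names are hypothetical rather than those used in the driver instructions.

```python
import math

# Illustrative sketch: converting a small rotation quaternion R = (w, x, y, z)
# into changes of pitch (DP, about the x-axis), roll (DR, about the forward
# z-axis) and yaw (DPHI, about the vertical y-axis), in degrees.

def rotation_to_deltas(r):
    w, x, y, z = r
    # Angle about the x-axis: forward or backward pitch of the ball.
    dp = math.degrees(math.atan2(2.0 * (w * x + y * z),
                                 1.0 - 2.0 * (x * x + y * y)))
    # Angle about the vertical y-axis: turning the ball left or right.
    dphi = math.degrees(math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x)))))
    # Angle about the z-axis: tilting the ball sideways.
    dr = math.degrees(math.atan2(2.0 * (w * z + x * y),
                                 1.0 - 2.0 * (y * y + z * z)))
    return dp, dr, dphi

DP, DR, DPHI = rotation_to_deltas((0.9998477, 0.0174524, 0.0, 0.0))
print(DP, DR, DPHI)   # about 2 degrees of pitch, negligible roll and yaw
```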
At step 607, the locomotion-factor, F, 417 and the view-factor, V, 418 are derived from an analysis of the hand-area-data 408.
At step 608, the viewpoint rotation and movement are interpolated in response to the values of F and V calculated at step 607 from the hand-area-data 408. This results in updates of variables DTHETA, DPHI, DZ and DX. DTHETA is the change in the up and down pitch-angle of the viewpoint 104 about an x-axis with respect to the user's orientation in the virtual environment 102. DPHI is the change in a yaw-angle of the viewpoint 104 about a vertical y-axis in the virtual environment 102. Together, DTHETA and DPHI completely define the angle of the user's viewpoint 104 in the virtual environment 102. DTHETA is affected by the view-factor, V, 418, such that angular up and down rotations of the viewpoint 104 only occur when the user 101 is manipulating the input device 105 with a large contact-area 109. A large contact-area 109 can be obtained by supporting the device within the palms of both hands. When the contact-area 109 is small, for example when the user manipulates the input device 105 only using their fingertips, the view-factor, V, is low, and the same rotation of the input device results in locomotion. DZ defines forwards and backwards movement of the viewpoint 104 with respect to the user's orientation in the virtual environment 102, and is affected by the locomotion-factor, F, 417, which has an inverse relation to the view-factor 418. DX defines side-to-side movement of the viewpoint 104, also known as strafing. DX is not affected by the view-factor, V, 418.
The calculations performed in step 608 also depend on the calibration factor, C, and a locomotion scaling constant, K. The calibration factor C changes from zero to one over a short time during the calibration gesture identified at step 602. The locomotion scaling constant defines the number of meters moved per degree of rotation, and may be set differently for different kinds of virtual environment.
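The exact expressions used at step 608 are not reproduced here; the sketch below shows one plausible way in which the factors F, V, C and K could be combined with the rotation deltas, and is an assumption for illustration rather than the claimed implementation. The function and variable names are hypothetical.

```python
# Hedged sketch of how DTHETA, DPHI, DZ and DX might be derived from the
# device rotation deltas (dp = pitch, dr = roll, dyaw = yaw, in degrees),
# the locomotion-factor F, the view-factor V, the calibration factor C and
# the scaling constant K (metres per degree).  Assumed form, for illustration.

def update_viewpoint(dp, dr, dyaw, F, V, C, K):
    dz     = C * K * F * dp      # forward/backward locomotion from pitch
    dtheta = C * V * dp          # up/down view rotation from the same pitch
    dx     = C * K * dr          # strafing from roll, independent of F and V
    dphi   = C * dyaw            # viewpoint yaw follows device yaw
    return dtheta, dphi, dz, dx

# Example: a 2-degree forward pitch with fingertip contact (F = 1, V = 0),
# fully calibrated (C = 1) and K = 0.05 metres per degree.
print(update_viewpoint(2.0, 0.0, 0.0, F=1.0, V=0.0, C=1.0, K=0.05))
# -> (0.0, 0.0, 0.1, 0.0): ten centimetres forward, no change of view angle
```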
The result of the calculations performed at step 608, is that the user 101 can easily and naturally move around the virtual environment 102, by rotating the input device. Forward movement is obtained by rotating the input device 105 forwards. The direction of movement can be changed by rotating the input device 105 about its vertical axis. Sideways, strafing movement can be obtained by rotating the device about its forward-facing axis. The user can change the up and down angle of the viewpoint 104 by holding the device in the palms of the hands, resulting in an increased contact-area 109, and then rotating the device forwards or backwards.
The step 601 of analysing acceleration-data 411, shown in
The steps of
The purpose of the calibration gesture is to ensure the accuracy of the compass-data 412. The first-magnetometer 1508, located in the receiver 107, may be subject to magnetic fields from loudspeakers or other sources, reducing its accuracy. Having obtained approximate compass-data 412 from the receiver 107, the user may improve the accuracy of the compass-data 412 by performing the calibration gesture described. In the presence of a large distorting magnetic field, the receiver's magnetometer data may not be usable, in which case the calibration gesture provides the only reliable way of defining the user's forward-facing direction.
The step 603 of performing calibration gesture processing, shown in
At step 801, a question is asked as to whether a substantial device rotation has been detected, by analysing the rotation-data 409, including the orientation quaternion 410. If rotation has been detected, control is directed to step 802, where the average rotation direction is accumulated as a new compass bearing 413. At step 803, the calibration factor, C, is set to a value in proportion to the amount of consistent rotation since the start of the second part of the calibration gesture. C takes a value in the range zero to one, and gradually increases to reintroduce locomotion at step 608 in
At step 804, a question is asked as to whether the calibration factor, C, has reached its maximum value of one. If so, the calibration gesture state is set as complete at step 805. If no significant device rotation was detected at step 801, control is directed to step 806, where the calibration gesture is cancelled.
The step 604 of rotating the orientation quaternion 410, shown in
At step 904, a compass bearing quaternion, B, is updated from the compass bearing angle, BETA, 413. At step 905, the compass-data 412 is subtracted from the rotation-data 409. This is implemented by multiplying the compass bearing quaternion, B, by the orientation quaternion, Q, 410, and updating Q, 410 with the result. This removes the Earth's geomagnetic field from the orientation, Q, so that any rotations about the vertical axis of the input device are then measured with respect to the user's forwards direction. This process of establishing the frame of reference for user gestures, with respect to the user's subjective awareness, may also be referred to as normalisation.
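A minimal sketch of this normalisation is given below, assuming that BETA is a yaw angle about the vertical y-axis and that quaternions are stored as (w, x, y, z); the names and sign conventions are illustrative rather than those of the input device driver instructions 403.

```python
import math

# Sketch: removing the receiver's compass bearing BETA from the orientation
# quaternion Q, so that yaw is measured relative to the user's forward
# direction rather than magnetic north.  Quaternions are (w, x, y, z) with
# y assumed to be the vertical axis.

def quat_multiply(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def bearing_quaternion(beta_degrees):
    """Quaternion B for a rotation of -BETA about the vertical y-axis."""
    half = math.radians(-beta_degrees) / 2.0
    return (math.cos(half), 0.0, math.sin(half), 0.0)

def normalise_orientation(q, beta_degrees):
    """Subtract the compass bearing from Q by multiplying B onto Q."""
    return quat_multiply(bearing_quaternion(beta_degrees), q)

Q = (0.9238795, 0.0, 0.3826834, 0.0)     # device yawed 45 degrees from north
print(normalise_orientation(Q, 45.0))     # yaw now ~0 relative to the user
```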
The step 607 of deriving the view-factor, V, and locomotion-factor, F, shown in
At step 1002, a question is asked as to whether A is greater than T1. If not, F and V are not modified, and no further calculation is required. Alternatively, if the T1 threshold is exceeded, steps 1003 to 1006 are performed. At step 1003, F is interpolated to a value between one and zero, in response to the value of A. At step 1004, the calculation of F is completed by limiting its lowest value to zero. At step 1005, V is interpolated to a value between zero and one, in response to the value of A. At step 1006, the calculation of V is completed by limiting its highest value to one. In these calculations, F and V change inversely with respect to each other. As A increases from T1 to T2, F decreases from one to zero, and V increases from zero to one.
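Expressed as a clamped linear ramp, the interpolation could take the following form; the threshold values used below are assumed for illustration only.

```python
# Sketch of the two-threshold interpolation of the locomotion-factor F and
# view-factor V from the hand-area-data A (0 = no contact, 1 = both palms).
# T1 and T2 are example threshold values; the real values may differ.

T1 = 0.4
T2 = 0.7

def factors_from_hand_area(a):
    if a <= T1:
        return 1.0, 0.0              # small contact-area: pure locomotion
    # Linear ramp between T1 and T2, clamped to the range [0, 1].
    v = min(1.0, (a - T1) / (T2 - T1))
    f = max(0.0, 1.0 - v)
    return f, v

for a in (0.2, 0.4, 0.55, 0.7, 0.9):
    print(a, factors_from_hand_area(a))
# As A rises from T1 to T2, F falls from one to zero while V rises from
# zero to one, so locomotion fades out as view rotation fades in.
```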
The effect of the steps of
If only a single threshold were used to switch between these two kinds of rotation gesture, the abrupt transition between two different modes would be disorienting for the user. Instead, interpolation between two thresholds enables the user 101 to adjust automatically to the transition between locomotion and up and down view rotation, thereby facilitating smooth navigation of the virtual environment 102.
Having established the change in viewpoint location and angle, these are applied to the virtual environment 102 in step 504, shown in
At step 1101 the orientation of the user's viewpoint 104 is updated, as defined by the two angular changes DPHI and DTHETA, calculated at step 608 in
At step 1102, the z and x absolute coordinates of the viewpoint 104 are updated in response to gestural-data 407 and via the calculations performed as described above. At step 1103, additional virtual environment events are generated in response to tap event data generated at step 706.
Manipulation of the input device 105 shown in
The input device 105 has a high sensitivity to rotation 1203, 1204, and even a small rotation results in some degree of movement of the user's viewpoint 104. This results in a sense of immersion for the user 101, even though the virtual environment 102 is displayed to the user 101 on a conventional display 103.
Adjustment of the viewpoint 104 in response to rotation of the input device 105 with a low contact-area 109 is summarised in
In an embodiment, the movement of the location of the viewpoint is implemented by zooming in on an image. This makes it possible to move around the virtual environment 102 even when it is generated from a panoramic image or three-sixty video, such as that provided by a three-sixty camera or multiple images stitched together. Usually, the forward rotation 1301 causes a change in the position of the viewpoint 104. In an embodiment, the forward rotation 1301 causes a change in the velocity of movement of the viewpoint 104. Whichever method is used, the pitch rotation 1301 of the input device 105 causes a forward movement 1303 of the viewpoint 104 along the z-axis 1304.
A roll-rotation 1305 of the input device 105 about a z-axis 1306 in the frame of reference of the user 101, results in strafing movement 1307 of the viewpoint 104 along an x-axis 1308, relative to the viewpoint 104 in the virtual environment 102. Strafing 1307 is not affected by the contact-area 109. Strafing movement 1307 may be referred to as a translation of the viewpoint's coordinates 415. More generally, movement of an object's coordinates in a three-dimensional environment is referred to as translation.
A yaw-rotation 1309 of the input device 105 about a vertical y-axis 1310 in the frame of reference of the user 101, results in a corresponding yaw-rotation 1311 of the viewpoint 104 about a vertical y-axis 1312, relative to the viewpoint 104, in the virtual environment 102. As with strafing, yaw-rotation 1311 is not affected by the contact-area 109.
The user 101 naturally combines all three rotations 1301, 1305, 1309 when moving through the virtual environment 102. Usually one rotation of the three will be much larger than the others, but the other small rotations combine to provide the sense of immersion in the virtual environment 102. It will be understood that the device yaw-rotation 1309 about the y-axis 1310 will result in rotation of the z-axis 1304 and the x-axis 1308 in the global coordinate system of the virtual environment 102.
Rotations of the input device 105 are shown in
Interpolated adjustment of the viewpoint 104 in response to rotation of the input device 105 is detailed in
When sufficient contact-area 109 exists between the user's hands 1201, 1202 and the input device 105, the hand-area-data 408 exceeds T2, giving a locomotion-factor, F, 417 of zero, and a view-factor, V, 418, of one. This condition can be achieved by manipulating the device within the palms of both hands, or with all fingers of both hands in contact with the surface of the input device 105.
Under this condition, a forward pitch rotation 1404 of the input device 105 about its x-axis 1302 gives no locomotion. The input device rotation 1404 is entirely converted into a pitch rotation 1405 of the viewpoint 104 around the viewpoint's x-axis 1308.
The receiver 107 shown in
Other components in the receiver 107 include an MPU-9250 inertial-measurement-unit (IMU) 1507 that includes a three-axis-first-magnetometer 1508 and a three-axis-accelerometer 1509. The MPU-9250 also includes a three-axis-gyroscope, which is not used by the receiver 107. The MPU-9250 is available from InvenSense Inc., 1745 Technology Drive, San Jose, Calif. 95110, U.S.A. The receiver 107 further includes a USB and power supply circuit 1510, which provides an interface to the external-processing-device 106 via a USB connector 1511. Power for the receiver 107 is obtained from the connector 1511.
In an embodiment, the receiver components shown in
Instructions held in the FLASH memory 1503 of the receiver's SOC 1501 shown in
At step 1605 the gestural-data and receiver compass bearing are sent to the external-processing-device 106 via the USB connection 1511. At step 1606 any haptic data is received from the external-processing-device 106, and at step 1607 the haptic data is transmitted to the input device 105.
The input device 105 shown in
Other components of the input device 105 include a battery and power management circuit 1707 and a haptics peripheral 1708, which can be activated to vibrate the input device 105. A hand-area-sensor 1709 detects the contact-area 109 between the user's hands 1201, 1202 and the surface of the input device 105. A rotation-detector 1710 is provided by an MPU-9250 inertial-measurement-unit (IMU). The rotation-detector 1710 includes a three-axis-accelerometer 1711, a three-axis-gyroscope 1712 and a three-axis-second-magnetometer 1713. The accelerometer 1711 and gyroscope 1712 are each configured to generate new x-, y- and z-axis signal data at a rate of two hundred samples a second. The second-magnetometer generates new x-, y- and z-axis signal data at one hundred samples per second. The magnetometer samples are repeated in order to match the sample rate of the accelerometer 1711 and gyroscope 1712. The rotation-detector 1710 includes several sensors 1711, 1712, 1713 that track the orientation of the input device 105. As the user 101 rotates the input device 105, the change in orientation is converted into a rotation at step 605 shown in
Physical construction details of the input device 105 shown in
The first-hemisphere 1802 and the second-hemisphere 1803 provide an area-indicating-capacitance 1807 formed by the electrode 1804 of the first-hemisphere 1802 and the electrode 1805 of the second hemisphere 1803. The area-indicating-capacitance 1807 depends on the contact-area 109 of the user's hands in close proximity to the two electrodes 1804 and 1805. Counter-intuitively, the area-indicating-capacitance 1807 provides a good indication of the overall contact-area 109, even when the input device 105 has been rotated by an arbitrary amount.
It will be appreciated that the first-hemisphere 1802 and second-hemisphere 1803 cannot be covered in a conventional capacitive multitouch sensor, because the grid of wires required to implement such a sensor would make radio communication from the input device 105 impossible. Also included in the physical construction of the input device 105 is an inductive charging coil for charging the battery 1707 inductively. This has been omitted from
The area-indicating-capacitance 1807 shown in
The hand-area-sensor 1709 gives similar output regardless of the orientation of the input device 105. Its immunity to rotation may be understood in the following way. In any orientation of the input device 105, it is natural for the user 101 to manually rotate the input device 105 with a significant contact-area 109 of fingertips or palms on the first-hemisphere 1802 and the second-hemisphere 1803. With an uneven distribution of the same contact-area 109, the first variable capacitance 1902 is increased, and the second variable capacitance 1903 is correspondingly decreased. Although the value of C, given by the capacitance equation 1904, changes somewhat as a result of this new distribution, the difference is not usually noticed by the user 101. Therefore, the area-indicating-capacitance 1807 gives a useful indication of the contact-area 109, regardless of the orientation of the input device 105. In particular, the interpolation performed at step 608 makes it possible for the user 101 to obtain a desired effect, by covering more or less of the input device 105 with their hands 1201 and 1202. This simple hand-area-sensor 1709, in combination with the method of interpolation shown at step 608, permits a robust, reliable and low cost input device 105 to be manufactured.
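This behaviour can be illustrated numerically by modelling the two hand-to-electrode capacitances as acting in series, in parallel with a fixed parasitic capacitance. The model and the values below are assumptions for illustration, not the analysis given for the capacitance equation 1904.

```python
# Assumed model: the two variable capacitances C1 and C2, formed between the
# user's hands and the two hemisphere electrodes, act in series, and a fixed
# parasitic capacitance Cp adds to the result.

CP = 2.0   # picofarads, example parasitic capacitance

def area_indicating_capacitance(c1, c2):
    return CP + (c1 * c2) / (c1 + c2)

# Even contact: 3 pF coupled to each hemisphere.
print(area_indicating_capacitance(3.0, 3.0))   # 3.5 pF
# The same total contact-area redistributed unevenly after rotating the ball.
print(area_indicating_capacitance(4.0, 2.0))   # about 3.33 pF, a small change
```

Under this assumed model, redistributing the same contact-area between the two hemispheres changes the measured capacitance only slightly, which is consistent with the insensitivity to rotation described above.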
In an embodiment, the electrodes 1804 and 1805 take a different form to that shown in
Using the embodiment shown in
The steps performed with the input device 105 shown in
Contents of input device RAM 1703 and FLASH 1704 during operation of step 2104 shown in
Input device RAM 1703 includes IMU signals 2202 comprising three-axis-accelerometer data samples 2203, three-axis-gyroscope data samples 2204 and three-axis-magnetometer data samples 2205. The input-device 105 generates gestural-data 407 by executing the input device firmware instructions 404 on the device-processor 1702. The gestural-data 407 includes hand-area-data 408, rotation-data 409 including the quaternion, Q, 410, and acceleration-data 411. Other data 2206 includes temporary variables used during the generation of the gestural-data 407.
The step 2104 of executing input device firmware instructions shown in
At step 2303 an iteration is performed of a sensor fusion algorithm. This has the effect of combining accelerometer samples 2203, gyroscope samples 2204 and magnetometer samples 2205 such that the orientation of the input device 105 is known with a high degree of accuracy. Sensor fusion is performed using Sebastian Madgwick's sensor fusion algorithm, available at http://x-io.co.uk/open-source-imu-and-ahrs-algorithms. Each time step 2303 is performed, the orientation quaternion 410 is incrementally modified, so that, after a short period of initialisation, it continuously tracks the orientation of the input device 105 with respect to the Earth's gravitational and geomagnetic fields.
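For illustration, the sketch below shows only the gyroscope-integration core of such a fusion step at the two-hundred-hertz sample rate. Madgwick's published algorithm additionally applies a gradient-descent correction derived from the accelerometer and magnetometer samples, which is omitted here, so this is a simplified stand-in rather than the algorithm itself.

```python
import math

# Simplified illustration of one fusion iteration: the gyroscope rates
# (rad/s) are integrated into the orientation quaternion over one 5 ms
# sample.  The accelerometer/magnetometer correction step is omitted.

DT = 1.0 / 200.0   # five milliseconds per sample

def integrate_gyro(q, gx, gy, gz, dt=DT):
    w, x, y, z = q
    # Quaternion derivative: q_dot = 0.5 * q ⊗ (0, gx, gy, gz)
    qdw = 0.5 * (-x * gx - y * gy - z * gz)
    qdx = 0.5 * ( w * gx + y * gz - z * gy)
    qdy = 0.5 * ( w * gy - x * gz + z * gx)
    qdz = 0.5 * ( w * gz + x * gy - y * gx)
    # Integrate and re-normalise to keep a unit quaternion.
    w, x, y, z = w + qdw * dt, x + qdx * dt, y + qdy * dt, z + qdz * dt
    norm = math.sqrt(w * w + x * x + y * y + z * z)
    return (w / norm, x / norm, y / norm, z / norm)

q = (1.0, 0.0, 0.0, 0.0)
q = integrate_gyro(q, 0.5, 0.0, 0.0)   # rotating at 0.5 rad/s about x
```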
At step 2304 a question is asked as to whether there has been no rotation of the input device 105 for two minutes. This period of inactivity can be detected by analysing the rotation-data 409. The analysis includes measuring change magnitudes in the components of the orientation quaternion 410. If none of the quaternion's four components change by more than 0.05 in each five millisecond interval for two minutes, the question asked at step 2304 is answered in the affirmative. The input device 105 is then considered as being not in use, and control is directed to step 2308 to deactivate it. Alternatively, if significant rotations have occurred, the input device 105 is considered as being in use, and control is directed to step 2305.
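A minimal sketch of such an inactivity test is shown below; the class and method names are hypothetical.

```python
# Sketch: detecting two minutes without significant rotation.  At two hundred
# samples per second, two minutes corresponds to 24,000 consecutive samples
# in which no quaternion component changes by more than 0.05.

class IdleDetector:
    IDLE_SAMPLES = 200 * 120     # two minutes of 5 ms intervals
    CHANGE_LIMIT = 0.05

    def __init__(self):
        self.count = 0

    def update(self, q, q_previous):
        """Return True once the device has been still for two minutes."""
        if all(abs(a - b) <= self.CHANGE_LIMIT for a, b in zip(q, q_previous)):
            self.count += 1
        else:
            self.count = 0
        return self.count >= self.IDLE_SAMPLES
```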
At step 2305 the area-indicating-capacitance 1807 of the hand-area-sensor 1709 is measured. A Capacitance-to-Digital-Converter (CDC) for measuring capacitance is built in to the SOC 1701. The CDC generates a single value proportional to the area-indicating-capacitance 1807. Eight such CDC measurements are made, and then averaged, to reduce noise. At step 2306 the CDC value is converted into a floating point value by subtracting an offset and multiplying by a scaling factor. The offset removes the effect of the parasitic capacitance Cp 1901, and the scaling factor normalises the remaining capacitance range of about three picofarads to a range of zero to one. When the hand-area-data 408 takes a value of zero, this corresponds to a contact-area 109 of zero. When the hand-area-data 408 takes a value of one, this corresponds to the maximum contact-area 109 formed by enclosing the input device 105 in the palms of both hands 1201, 1202.
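The conversion from averaged CDC readings to hand-area-data 408 can be sketched as follows; the offset and scale values are assumptions chosen to match the description of an approximately three-picofarad usable range, and the readings are expressed in picofarads here for clarity.

```python
# Sketch of converting eight averaged CDC measurements into hand-area-data
# in the range 0.0 to 1.0.  The offset removes the parasitic capacitance Cp
# and the scale normalises the remaining range of roughly three picofarads.

CDC_OFFSET = 2.0     # picofarads attributable to the parasitic capacitance
CDC_SCALE  = 1.0 / 3.0

def hand_area_from_cdc(samples):
    """Average the CDC measurements and normalise to the range [0, 1]."""
    average = sum(samples) / len(samples)
    value = (average - CDC_OFFSET) * CDC_SCALE
    return max(0.0, min(1.0, value))

print(hand_area_from_cdc([3.4, 3.6, 3.5, 3.5, 3.4, 3.6, 3.5, 3.5]))  # 0.5
```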
The hand-area-data 408, rotation-data 409, and acceleration-data 411 are combined into gestural-data 407 and supplied to the radio 1705 at step 2307. The radio 1705 transmits the gestural-data 407 to the receiver 107 in a single packet. Control is then directed to step 2301, and steps 2301 to 2307 are repeated two hundred times per second, in accordance with the sampling rate of the rotation-detector 1710, for as long as the input device 105 is in use.
When the input device 105 is not in use, control is directed to step 2308, where the device-processor 1702 and other components shown in
The steps of
Locomotion with the input device 105 is summarised in
User locomotion 2401 is achieved by the user 101 in response to their manual rotation and manipulation of the input device 105. Rotations 409 are translated into forwards 2401, backwards or strafing movements, and/or rotations, according to the contact-area 109. The viewpoint 104 is adjusted according to these movements and rotations. The virtual environment 102 is then rendered from the perspective of the user's adjusted viewpoint 104, and displayed to the user 101 on the display 103.
In an embodiment, the input device 105 may be used to facilitate navigation of an aircraft. In
The input device 105 may be held more tightly in the user's hands 1201, 1202, covering a larger area 109 of the input device's surface. Under these conditions, a forward rotation 2504 causes the drone 2501 to fly directly upwards. A reverse rotation causes the drone 2501 to fly directly downwards.
Interpolation between the horizontal and vertical movement of the drone 2501 is performed in accordance with the surface area 109 of the input device 105 covered by the user's hands 1201, 1202, as shown by the calculations performed in
During such operations, the user 101, may view the drone 2501 directly by eye, or by wearing a headset in which images from a camera on the drone are supplied to the user 101, to provide a view from the drone's perspective. In each such case, the user 101 is immersed in a virtual environment provided either by imagining or electronically viewing the real world from the drone's point-of-view. An advantage of the input device 105, is that the psychological sense of immersion is increased beyond that possible using a conventional joystick remote control, because the rotations of the input device 105 are more directly associated with movements of the drone 2501.
As a result, the user 101 is able to navigate the three-dimensional environment occupied by the drone 2501, without the need to learn complex controls. The rotation-detector generates rotational gestural-data in response to the manual rotation 2502, 2504 of the input device 105, and additional gestural-data is generated in response to the area 109 of the user's hands supporting the input device during a manual rotation 2502, 2504. The gestural-data is then transmitted to the drone 2501. In an embodiment, the gestural-data is transmitted in two stages. In a first stage, the input device 105 transmits the gestural-data to an external-processing-device, from which it is retransmitted, using a more powerful radio transmitter, to the drone 2501. In an embodiment, the input device 105 transmits gestural-data directly to the drone 2501, which includes an external-processing-device to process the gestural-data and to update its flight electronics in accordance with the gestures 2502, 2504 made by the user 101.
Number | Date | Country | Kind
---|---|---|---
1701877.1 | 5 Feb. 2017 | GB | national
1718258.5 | 3 Nov. 2017 | GB | national