Virtual reality is becoming an increasingly popular display method, especially for computer gaming but also in other applications. This introduces new problems in the generation and display of image data as virtual reality devices must have extremely fast and high-resolution displays to create an illusion of reality. This means that a very large volume of data must be transmitted to the device from any connected host.
As virtual-reality display devices become more popular, it is also becoming desirable for them to be wirelessly connected to their hosts. This introduces considerable problems with the transmission of the large volume of display data required, as wireless connections commonly have very limited bandwidth. It is therefore desirable for as much compression to be applied to the display data as possible without affecting its quality, as reductions in quality are likely to be noticed by a user.
Finally, it is desirable for virtual-reality devices to be as lightweight as possible, since they are commonly mounted on the user's head. This limits the number of internal devices such as complex decompression circuits and sensors that can be provided.
The invention aims to mitigate some of these problems.
Accordingly, in one aspect, the invention provides a method at a host device for compressing display data forming an image for display on one or more displays of a wearable headset, the method comprising:
receiving, from the wearable headset, information regarding a direction of movement of the wearable headset, including the one or more displays, the direction being between a trailing position and a leading position;
if the direction of the movement is on an arc, compressing the display data forming a trailing portion of the image relative to the display data forming a leading portion of the image, when displayed on the one or more displays that are moving with the wearable headset, wherein the leading portion and the trailing portion may be of any size smaller than the whole image and may change in size on a frame-to-frame basis; and
forwarding the display data forming the image from the host device to the wearable headset for display on the one or more displays thereof.
In one embodiment, the information further comprises a speed of the movement, and compression of the display data forming at least the trailing portion of the image is performed if the speed of the movement is above a minimum threshold. Compression of the display data forming the whole of the image is preferably performed if the speed of the movement is above a minimum threshold. The compression of the display data forming a part of, or the whole of, the image may be based on the speed of the movement above the minimum threshold. The compression of the display data forming a part of, or the whole of, the image may be increased as the speed of the movement above the minimum threshold increases.
According to a second aspect, the invention provides a method at a host device for compressing display data forming an image for display on one or more displays of a wearable headset, the method comprising:
receiving from the wearable headset information regarding a speed of movement of the wearable headset, including the one or more displays;
if the speed of the movement is above a minimum threshold, compressing the display data by an amount based on the speed of the movement above the minimum threshold; and
forwarding the compressed display data from the host device to the wearable headset for display thereon.
In an embodiment, the compression of the display data forming the image may be increased as the speed of the movement above the minimum threshold increases.
Preferably, the information further comprises a direction of movement of the wearable headset, including the one or more displays, the direction being between a trailing position and a leading position, and the method further comprises, if the direction of the movement is on an arc, compressing the display data forming a trailing portion of the image relative to the display data forming a leading portion of the image, when displayed on the one or more displays that are moving with the wearable headset.
In a preferred embodiment, the display data forming a trailing portion of the image is compressed by a higher compression factor than the display data forming a leading portion of the image. Preferably, compression of the display data is increased in portions across the image in the direction from the leading portion to the trailing portion of the image. The trailing portion may, in some cases, increase in size compared to the leading portion of the image as the speed of the movement above the minimum threshold increases.
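Purely as a non-limiting illustration, the Python sketch below shows one way such a leading-to-trailing compression gradient might be computed, with the trailing portion widening as the speed above the threshold increases; the function name, the linear scaling and the numeric constants are assumptions made for this example rather than part of the claimed method.

```python
def compression_levels(num_columns, speed, min_threshold, max_level=9):
    """Return a per-column compression level rising from the leading
    (here right-hand) edge towards the trailing (left-hand) edge.

    The trailing portion widens as the speed above the threshold grows;
    at or below the threshold no extra compression is requested.
    """
    if speed <= min_threshold:
        return [0] * num_columns

    # Fraction of the frame treated as "trailing", capped at half the frame.
    excess = speed - min_threshold
    trailing_fraction = min(0.5, 0.1 + 0.05 * excess)
    trailing_cols = int(num_columns * trailing_fraction)

    levels = []
    for col in range(num_columns):
        if col < trailing_cols:
            ramp = 1.0 - col / max(1, trailing_cols)  # strongest at the edge
            levels.append(round(max_level * ramp))
        else:
            levels.append(0)
    return levels


print(compression_levels(num_columns=10, speed=3.0, min_threshold=1.0))
```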
In an embodiment, the method comprises, at the host device:
determining, from the information, whether the movement is on an arc or linear;
determining, from the information, the speed of the movement of the wearable headset; and
determining whether the speed of the movement is above the minimum threshold.
In another embodiment, the method comprises, at the wearable headset:
determining whether the movement of the wearable headset is on an arc or linear;
determining the speed of the movement of the wearable headset;
determining whether the speed of the movement is above the minimum threshold; and
sending the information to the host device, if the movement is on an arc and the speed is above the minimum threshold.
According to a third aspect, the invention provides a method at a wearable headset for displaying display data forming an image on one or more displays, the method comprising:
sensing movement of the wearable headset indicative of movement of the one or more displays;
determining whether the movement is on an arc or linear;
determining the speed of the movement of the wearable headset;
determining whether the speed of the movement is above a minimum threshold;
sending information regarding the speed and direction of the movement to a host device, if the movement is on an arc and the speed is above the minimum threshold;
receiving, from the host device, the display data forming the image; and
displaying the image on one or more displays.
Preferably, sensing movement of the wearable headset comprises using a gyroscope and an accelerometer in the wearable headset.
In a further aspect, the invention may provide a host device and a wearable headset configured to perform the various appropriate steps of the method described above.
The wearable headset may be a virtual reality headset or an augmented reality set of glasses.
A system comprising a host device and a wearable headset connected to the host device may also be provided.
In another aspect, the invention provides a method of applying adaptive compression to display data according to sensor data indicating that a headset is in motion, the method comprising:
detecting a movement of the headset;
analysing the movement to determine its direction and/or speed;
applying compression selectively to display data according to the results of the analysis;
transmitting the display data for display; and
decompressing and displaying the display data.
The analysis may comprise determining the direction of the movement only and applying localised compression, in which the part of an image assumed to be moving out of the user's vision based on this movement is compressed to a greater degree than the remainder of the image. Alternatively or additionally, the analysis may comprise determining the speed of the movement and applying staged compression, in which compression is applied to a greater extent across the whole frame as the speed of the movement increases.
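By way of a speculative sketch only, the following Python outline shows how a host might choose between these two modes from a single movement sample; the Movement structure, the threshold value and the level formula are invented for illustration and do not appear in the description above.

```python
from dataclasses import dataclass


@dataclass
class Movement:
    on_arc: bool    # True for rotational (head-turning) movement
    direction: str  # e.g. "left" or "right"
    speed: float    # speed reported by the headset sensors


def choose_compression(movement, min_threshold=1.0):
    """Return a hypothetical per-frame compression plan."""
    plan = {"localised": None, "staged_level": 0}

    # Staged compression: whole-frame compression grows with speed.
    if movement.speed > min_threshold:
        plan["staged_level"] = min(9, int(movement.speed - min_threshold) + 1)

    # Localised compression: compress the trailing side more when turning.
    if movement.on_arc and movement.speed > min_threshold:
        plan["localised"] = "left" if movement.direction == "right" else "right"

    return plan


print(choose_compression(Movement(on_arc=True, direction="right", speed=3.5)))
```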
The above-described methods are advantageous as they allow assumptions regarding user gaze to be used to improve compression without requiring additional sensors to be incorporated into a device. This provides compression benefits without increasing the expense and complexity of such devices.
Embodiments of the invention will now be more fully described, by way of example, with reference to the drawings, of which:
The host [11] incorporates, among other components, a processor [13] running an application which generates frames of display data using a graphics processing unit (GPU) on the host [11]. These frames are then transmitted to a compression engine [14], which carries out compression on the display data to reduce its volume prior to transmission. The transmission itself is carried out by an output engine [15], which controls the connection to the headset [12] and may include display and wireless driver software.
The headset [12] incorporates an input engine [17] for receiving the transmitted display data, which also controls the connection [16] to the host [11] as appropriate. The input engine [17] is connected to a decompression engine [18], which decompresses the received display data as appropriate. The decompression engine [18] is in turn connected to two display panels [19], one of which is presented to each of a user's eyes when the headset [12] is in use. When the display data has been decompressed, it is transmitted to the display panels [19] for display, possibly via frame or flow buffers to account for any unevenness in the rate of decompression.
The headset [12] also incorporates sensors [110] which detect user interaction with the headset [12]. There may be a variety of sensors [110], such as position, temperature, heartbeat and angle sensors, but the important sensors [110] for the purposes of this example are those used to detect the movement of the headset [12]. In this example, these comprise an accelerometer for determining the speed at which the headset [12] is moving, and a gyroscope for detecting changes in its angle and position in space. However, other methods can be used, such as an external camera which determines headset movement by detecting the movement of points of interest in the surroundings, or a wireless module that may be able to derive movement from a beamforming signal. In any case, the sensors [110] are connected to the host [11] and transmit data back to it to control the operation of applications on the host [11].
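A minimal sketch of how the headset side might bundle such readings for transmission to the host is given below; read_gyroscope and read_accelerometer are hypothetical placeholders for whatever sensor drivers a particular headset provides, and the message format is an assumption.

```python
import json
import math


def read_gyroscope():
    # Hypothetical driver call: angular rates (rad/s) about each axis.
    return {"yaw": 0.8, "pitch": 0.0, "roll": 0.0}


def read_accelerometer():
    # Hypothetical driver call: linear acceleration (m/s^2) on each axis.
    return {"x": 0.1, "y": 0.0, "z": 0.0}


def build_sensor_message():
    """Bundle raw readings into a single message for the host."""
    gyro = read_gyroscope()
    accel = read_accelerometer()
    return json.dumps({
        "angular_speed": math.sqrt(sum(v * v for v in gyro.values())),
        "gyro": gyro,
        "accel": accel,
    })


print(build_sensor_message())
```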
Specifically, the sensors [110] provide information to the processor [13], since the output of the sensors [110] will affect the display data being generated. For example, when the user moves, the display data shown to him or her must change to match that movement to create the illusion of a virtual world. The sensor data, or a derivative of the sensed data, is also sent to the compression engine [14] according to embodiments of the invention.
As previously mentioned, the headset [12] incorporates two display panels [19], each presented to one of the eyes [24] of a user [23], when in use. These two panels may in fact be a single display panel which shows two images, but this will not affect the operation of these embodiments of the invention.
In any case, the sensors [21, 22] transmit sensor data to the host [11] as previously described, and the host [11] transmits image data for display on the display panels [19] as previously described.
Accordingly, localised relative compression is applied to the left part of the image shown on the display panel [19]. This is shown by the hatched area [32] on the panel shown in the Figure: since the user is looking to the right, the left-hand side of the image—i.e. the trailing side relative to the direction of the movement—will be in the user's peripheral vision, where the human eye has low acuity. He or she will therefore not be aware of any loss of quality if that part of the image is compressed more than the right-hand side [33] at which the user is assumed to be actually looking.
This compression could be applied whenever there is movement, or only after the speed of movement is above a minimum threshold. The speed of the movement could also be used to determine the level of compression used, such that as the speed of the movement increases an increased level of compression is used on the same area [32].
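For example, a simple linear mapping from speed to compression level could be used; the sketch below is one possible form, with arbitrary gain and ceiling values that are not taken from the description.

```python
def trailing_compression_level(speed, min_threshold=1.0, gain=2.0, max_level=9):
    """Map headset speed to a compression level for the trailing area.

    At or below the threshold no extra compression is applied; above it
    the level rises linearly with the excess speed, up to a ceiling.
    """
    if speed <= min_threshold:
        return 0
    return min(max_level, round(gain * (speed - min_threshold)))


for s in (0.5, 1.5, 3.0, 10.0):
    print(s, "->", trailing_compression_level(s))
```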
This allows higher levels of compression to be applied to the trailing part of the frame as compared to the leading part of the frame. Thus, either compression is applied to the trailing area whereas it was not used before, or a compression algorithm that allows greater loss of data may be used.
This method may be used in a similar way to the localised relative compression described in
As indicated in the Figure by the position of the pupil and the direction of the arrow connecting the eye [24] to the display panel [19], the eye [24] is always shown gazing directly ahead: the actual direction of the user's [23] gaze is immaterial when this method is in use, since the same level of compression is applied across the image.
At Step S71 the gyroscope [22] detects a rotational movement of the headset [12] to the right. If the headset [12] is in use, this will indicate that the user [23] is turning his or her head and therefore is likely to be looking in the direction of movement, as described above with reference to
The compression engine [14] receives the sensor data and analyses it at Step S73 to determine the direction of motion. Alternatively, this function may be carried out by a dedicated analysis engine, and analysis may take place before the generation of new display data, rather than both the application and the compression engine [14] performing their own analysis. Furthermore, the application could receive the sensor data and use it to generate instructions or derived data for the compression engine [14], containing information on the direction of movement and potentially predictions of future movements. In any case, the compression engine [14] determines the direction of movement.
In this example, the sensor data is produced by a gyroscope [22] which detects rotational movements. In other embodiments in which a less-specialised sensor or combination of sensors [110] is used, such as analysis of images from an external camera, the host [11] may also have to determine whether the movement is rotational—i.e. in an arc—or some other form of movement. This process might then only continue if the direction of movement is on an arc, i.e. corresponding to a user's head turning.
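As an illustrative assumption of how such a classification might look in code, the following sketch distinguishes rotational from linear movement using separate angular and linear thresholds; the threshold values and function name are invented.

```python
def classify_movement(angular_speed, linear_speed,
                      angular_min=0.2, linear_min=0.05):
    """Classify a movement sample as 'arc', 'linear' or 'none'.

    A dominant angular rate suggests the head is turning (an arc),
    linear acceleration with little rotation suggests straight movement,
    and very small readings are treated as noise.
    """
    if angular_speed >= angular_min:
        return "arc"
    if linear_speed >= linear_min:
        return "linear"
    return "none"


print(classify_movement(angular_speed=0.9, linear_speed=0.02))  # arc
print(classify_movement(angular_speed=0.01, linear_speed=0.3))  # linear
```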
As previously mentioned, the speed of movement may be used in either of the embodiments shown in
In some embodiments, the output from sensors [110] on the headset [12] may be analysed in a processor on the headset [12] to determine speed and direction of movement. This processed data is then transmitted to the host for use in generation and compression of display data.
At Step S74, localised relative compression is applied to compress the display data forming a trailing portion [32] of the image relative to the display data forming the leading portion [33] of the image. This may, for example, involve applying two compression algorithms such that, in each row of pixels received from the application, the first third—comprising the left-hand side [32] of the image—is compressed with a lossy compression algorithm while the remaining two thirds—comprising the right-hand side [33] of the image—are compressed with a lossless compression algorithm. Alternatively, the compression engine [14] may receive the frame as tiles with location information, or split a received frame into tiles with location information, so that it can determine the location of each tile in the frame regardless of the order in which the tiles arrive; the left-hand tiles [32] of the image can then be compressed with the lossy compression algorithm in parallel with the compression of the right-hand tiles [33] with the lossless compression algorithm.
Naturally, the lossless compression algorithm used for the right-hand side [33] of the image may be replaced with a lossy compression algorithm that nonetheless causes less data loss than the compression algorithm used for the left-hand side [32] of the image or no compression could be used for the right-hand side [33] of the image.
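The sketch below illustrates the tile-based variant, using zlib as a stand-in lossless codec and crude bit-depth reduction as a stand-in lossy step; a real headset link would use dedicated image codecs, and the tile layout and thread pool here are assumptions made for the example.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

TILES_PER_ROW = 6  # left third of the row is treated as the trailing side


def lossy_compress(tile_bytes):
    # Crude stand-in for a lossy codec: drop low-order bits, then deflate.
    return zlib.compress(bytes(b & 0xF0 for b in tile_bytes))


def lossless_compress(tile_bytes):
    return zlib.compress(tile_bytes)


def compress_frame(tiles):
    """tiles: list of (column_index, tile_bytes), in any order."""
    def compress_one(item):
        col, data = item
        codec = lossy_compress if col < TILES_PER_ROW // 3 else lossless_compress
        return col, codec(data)

    # Trailing and leading tiles can be compressed in parallel.
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(compress_one, tiles))


frame = [(col, bytes(range(256)) * 4) for col in range(TILES_PER_ROW)]
compressed = compress_frame(frame)
print({col: len(blob) for col, blob in sorted(compressed.items())})
```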
Furthermore, the speed of the movement could also be considered such that compression is not applied unless the movement is above a predetermined speed threshold. This would take into account the possibility that the user [23] is turning his or her head while keeping his or her eyes [24] focussed on a fixed point; this is only possible at slow speeds.
In any case, at Step S75 the compressed display data is sent to the output engine [15] and thence transmitted to the headset [12] to be received by the input engine [17]. Then, at Step S76, it is sent to the decompression engine [18], decompressed as appropriate, and displayed on the display panels [19].
At Step S81, the accelerometer [21] detects a movement [61] of the headset [12], in this example a movement to the right. It transmits data indicating this movement to the host [11] at Step S82.
As previously described, this data is used both for the generation of new display data and—according to this embodiment of the invention—to control compression. As such, the movement data is analysed at Step S83 to determine the speed of the movement [61] in a similar way to the movement data described in
In some embodiments, analysis to determine the speed of the movement may be carried out in a processor on the headset [12] and the determined speed transmitted to the host [11] for use in controlling compression.
In this example, the sensor data is produced by an accelerometer [21] which may not distinguish between straight and rotational movements. In other embodiments in which a less-specialised sensor or combination of sensors is used, such as analysis of images from an external camera, the host [11] may also have to determine whether the movement is rotational—i.e. in an arc—or some other form of movement. It may then amend the application of compression depending on the type of movement.
At Step S84, staged compression is applied.
In any case, at Step S85 the compressed display data is transmitted to the headset [12] and then decompressed and displayed at Step S86.
This combination is especially useful because the user is in fact more likely to be looking in the direction of movement where the movement is fast; small and slow rotations of the head may be carried out with the eye fixed on a point, but this is unlikely to occur with fast movements.
At Step S101, the sensors [110] detect movement of the headset [12]. As previously described, the gyroscope [22] detects rotation and its direction while the accelerometer [21] detects the speed of the movement, and in this case both will be used for adaptive compression. The sensor data is transmitted to the host [11] and received by the compression engine [14] and the application running on the processor [13], as previously described, at Step S102.
At Step S103, the compression engine [14]—or a connected analysis engine—analyses the received sensor data to determine the type, direction, and speed of the movement. It then applies adaptive compression at Step S104. As described in
As previously mentioned, different types of compression might be applied depending on the type of movement. For example, if at Step S103 the compression engine [14] or analysis engine determines that the movement is linear with no rotational component, the localised compression component of adaptive compression might be omitted and the process continue as described in
Thresholds could be used as appropriate. For example, there might be no compression or a low level of compression applied until the speed of the movement is above a minimum threshold, then only staged compression as described in
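One possible expression of such a threshold ladder is sketched below; the two threshold values and the returned plan structure are illustrative assumptions only.

```python
def adaptive_plan(speed, on_arc, low_threshold=1.0, high_threshold=3.0):
    """Return which compression stages to enable for one frame.

    At or below the low threshold nothing extra is done; between the two
    thresholds only whole-frame (staged) compression is used; above the
    high threshold localised compression of the trailing portion is added
    as well, provided the movement is rotational.
    """
    if speed <= low_threshold:
        return {"staged": False, "localised": False}
    if speed <= high_threshold or not on_arc:
        return {"staged": True, "localised": False}
    return {"staged": True, "localised": True}


for s in (0.5, 2.0, 4.0):
    print(s, adaptive_plan(s, on_arc=True))
```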
In any case, at Step S105 the compressed data is sent to the output engine [15] for transmission to the headset [12], where it is decompressed and displayed on the display panels [19] as appropriate at Step S106.
Due to the format of the Figures, all examples have been described in terms of side-to-side movement in two dimensions. This does not limit the range of movement to which the methods of the invention may be applied; changes in the compression area and level as herein described could also be applied to a trailing edge in vertical movement, or part of each of two trailing edges in diagonal movement, as appropriate.
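To illustrate how the trailing region generalises beyond side-to-side movement, the sketch below marks trailing tiles from a two-dimensional direction vector, covering one edge for purely horizontal or vertical movement and parts of two edges for diagonal movement; the grid size, band width and sign conventions are arbitrary choices for this example.

```python
def trailing_tiles(direction, cols=8, rows=8, band=0.25):
    """Return the set of (col, row) tiles on the trailing edge(s).

    direction: (dx, dy) of the head movement. The trailing edge is the
    side the view is moving away from: a rightward turn (dx > 0) marks a
    band of columns on the left, downward movement (dy > 0, with row
    indices increasing downwards) marks a band of rows at the top, and a
    diagonal movement marks parts of both edges.
    """
    dx, dy = direction
    col_band, row_band = int(cols * band), int(rows * band)
    tiles = set()
    for col in range(cols):
        for row in range(rows):
            if (dx > 0 and col < col_band) or (dx < 0 and col >= cols - col_band):
                tiles.add((col, row))
            if (dy > 0 and row < row_band) or (dy < 0 and row >= rows - row_band):
                tiles.add((col, row))
    return tiles


print(len(trailing_tiles((1, 0))))  # 16 tiles: left-hand band only
print(len(trailing_tiles((1, 1))))  # 28 tiles: left-hand and top bands combined
```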
Although only one particular embodiment has been described in detail above, it will be appreciated that various changes, modifications and improvements can be made by a person skilled in the art without departing from the scope of the present invention as defined in the claims. For example, hardware aspects may be implemented as software where appropriate and vice versa. Furthermore, the variations on localised relative compression described above with reference to
Number | Date | Country | Kind
---|---|---|---
1709237.0 | Jun 2017 | GB | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/GB2018/051567 | 6/8/2018 | WO | 00