The present application claims priority from British Patent Application no. 2316631.7 filed Oct. 31, 2023, the contents of which are incorporated herein by reference in their entirety.
The present specification relates to computer-implemented methods for controlling visual and/or aural representation of a virtual avatar in an extended reality environment.
An extended reality (or XR) environment may be a virtual reality (VR) environment, or an augmented reality (AR) environment, or a mixed reality (MR) environment. Thus, the environment may be completely computer-generated or digital, or it may be only partially computer-generated, such that the environment includes real-world elements.
A user is often represented in an XR environment by a virtual avatar. A virtual avatar is a graphical representation of a user, or a user's chosen character, on a digital platform. A virtual avatar can have a two-dimensional form (e.g. an image or icon) or a three-dimensional form (e.g. a character in a 3D computer game).
The use of a virtual avatar, rather than an image or video of the user, allows the user to maintain a degree of anonymity in the digital world.
Aspects of the present disclosure are set out in the accompanying independent and dependent claims. Features from the dependent claims may be combined with features of the independent claims as appropriate, and not merely as explicitly set out in the claims.
According to a first aspect of the present disclosure, there is provided a computer-implemented method, comprising obtaining at least one visual setting and/or at least one aural setting of an extended reality environment, obtaining a visual parameter and/or an aural parameter of a virtual avatar, wherein the virtual avatar is within the extended reality environment or has requested entry to the extended reality environment, comparing the visual parameter of the virtual avatar to the at least one visual setting of the extended reality environment, and/or comparing the aural parameter of the virtual avatar to the at least one aural setting of the extended reality environment, and based on the comparison, adjusting the visual parameter and/or the aural parameter of the virtual avatar.
The present disclosure therefore provides a computer-implemented method that improves the representation of a virtual avatar, or improves control over representation of virtual avatars, in an extended reality (XR) environment.
Throughout the present disclosure, the terms “avatar” and “virtual avatar” are used interchangeably. The virtual avatar may be two-dimensional or three-dimensional.
The virtual avatar may be associated with a user. The user may be referred to as an individual, a person, or a real-world user.
Optionally, the extended reality environment may be an augmented reality (AR) environment.
Optionally, the extended reality environment may be a virtual reality (VR) environment.
Optionally, obtaining at least one visual setting and/or at least one aural setting of the extended reality environment may comprise determining at least one visual setting and/or at least one aural setting of the extended reality environment. Optionally, the setting may be referred to as a property.
Optionally, obtaining at least one visual setting and/or at least one aural setting of the extended reality environment may comprise retrieving (e.g. from a processor, user input device, or other resource) at least one visual setting and/or at least one aural setting of the extended reality environment.
Optionally, adjusting the visual parameter and/or the aural parameter of the virtual avatar mitigates potential visual conflict and/or aural conflict between the virtual avatar and the extended reality environment.
Adjusting the visual parameter and/or the aural parameter of the virtual avatar may therefore comprise managing potential visual and/or aural conflict between the virtual avatar and the extended reality environment by adjusting the visual parameter and/or the aural parameter accordingly.
Optionally, adjusting the visual parameter and/or the aural parameter of the virtual avatar increases or improves a contrast between the extended reality environment and the virtual avatar.
Adjusting the visual parameter and/or the aural parameter of the virtual avatar may therefore comprise managing or optimising a level of contrast (i.e., visual and/or aural contrast) between the extended reality environment and the virtual avatar by adjusting the visual parameter and/or the aural parameter accordingly.
Adjusting the visual parameter and/or the aural parameter of the virtual avatar may comprise conforming the virtual avatar to one or more constraints of the extended reality environment. The one or more constraints may be defined by the at least one obtained visual and/or aural setting.
Optionally, the method includes determining a number of virtual avatars within the extended reality environment and/or a number of virtual avatars that have requested entry to the extended reality environment.
Adjusting the visual parameter and/or the aural parameter of the virtual avatar may be at least partially based on the determined number of virtual avatars within or requesting entry to the extended reality environment.
Optionally, the method includes obtaining a visual parameter and/or an aural parameter of at least one other virtual avatar, wherein the at least one other virtual avatar is within the extended reality environment or has requested entry to the extended reality environment.
Adjusting the visual parameter and/or the aural parameter of the virtual avatar may be at least partially based on the visual parameter and/or an aural parameter of at least one other virtual avatar.
Optionally, obtaining the visual parameter comprises obtaining a specified range of permitted values for the visual parameter. Optionally, the specified range of permitted values is a user-specified range of permitted values.
Optionally, obtaining the aural parameter comprises obtaining a specified range of permitted values for the aural parameter. Optionally, the specified range of permitted values is a user-specified range of permitted values.
Adjusting the visual parameter of the virtual avatar may comprise adjusting the visual parameter of the virtual avatar within the specified range of permitted values.
Adjusting the aural parameter of the virtual avatar may comprise adjusting the aural parameter of the virtual avatar within the specified range of permitted values.
Optionally, the at least one visual setting of the extended reality environment comprises one or more of the following: at least one colour, colour range, colour theme, or background colour of the environment; a frame rate or refresh rate; a size, a scale, or a dimension.
Optionally, the at least one aural setting of the extended reality environment comprises one or more of the following: a pitch, volume, and/or other aural property of a background music track; a maximum and/or a minimum avatar vocal volume; a maximum and/or a minimum avatar vocal pitch; a pitch, volume, and/or other aural property of one or more non-player characters (NPCs).
Optionally, the visual parameter of the virtual avatar comprises any one or more of: a size, a scale, or a dimension of the virtual avatar; a colour; a blinking frequency; a pre-set visual emotional response.
Optionally, the aural parameter of the virtual avatar comprises any one or more of: vocal pitch; vocal volume; a pre-set aural emotional response.
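By way of illustration only, the settings and parameters listed above could be represented as simple data structures. The following Python sketch is purely hypothetical; the class and field names are assumptions introduced here for illustration and do not form part of the claimed subject matter.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EnvironmentSettings:
    """Illustrative visual/aural settings of an XR environment (all fields are assumptions)."""
    colour_theme: Optional[list[str]] = None  # permitted colour palette, e.g. ["#1A1A2E", ...]
    background_colour: Optional[str] = None   # e.g. "#1A1A2E"
    frame_rate: Optional[int] = None          # frames per second
    scale: Optional[float] = None             # maximum avatar size/scale permitted
    max_vocal_volume: Optional[float] = None  # maximum avatar vocal volume (dB)
    min_vocal_pitch: Optional[float] = None   # minimum avatar vocal pitch (Hz)
    max_vocal_pitch: Optional[float] = None   # maximum avatar vocal pitch (Hz)

@dataclass
class AvatarParameters:
    """Illustrative visual/aural parameters of a virtual avatar (all fields are assumptions)."""
    scale: float = 1.0                                # size/scale of the avatar
    colours: list[str] = field(default_factory=list)  # avatar colours
    blink_frequency: float = 0.25                     # blinks per second
    vocal_pitch: float = 180.0                        # Hz
    vocal_volume: float = 60.0                        # dB
```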
Optionally, the at least one aural setting of the extended reality environment comprises a maximum volume setting and the aural parameter of the virtual avatar includes a vocal volume of the virtual avatar. In response to the vocal volume of the virtual avatar exceeding the maximum volume setting of the extended reality environment, adjusting the aural parameter may comprise reducing the vocal volume of the virtual avatar to below the maximum volume setting, or muting the virtual avatar until the vocal volume of the virtual avatar is below the maximum volume setting.
Optionally, the adjustment to the visual parameter and/or the aural parameter of the virtual avatar is applied to each instance of the extended reality environment. Thus, the adjustment to the visual parameter and/or the aural parameter of the virtual avatar may be detectable throughout the extended reality environment by all users in the extended reality environment.
Optionally, prior to adjusting the visual parameter and/or the aural parameter of the virtual avatar, the method comprises outputting an electronic notification to a user associated with the virtual avatar. The electronic notification may inform the user of the changes that will be applied to their avatar.
Optionally, the user has the option to deny permission for the adjustments to their avatar, or to propose alternative adjustments.
In some embodiments, the adjustments to the visual parameter and/or the aural parameter of the virtual avatar may be applied automatically without informing any user(s) in the extended reality environment.
Optionally, the at least one visual setting and/or the at least one aural setting are specified by a user. The user may be associated with a virtual avatar in the extended reality environment, which is not necessarily the virtual avatar to which the adjustments are being made.
Optionally, the adjustments to the visual parameter and/or the aural parameter of the virtual avatar may only be detectable by the user. Thus, in some embodiments, the adjustments may only be visible and/or audible to the user that has specified the applicable setting(s) of the extended reality environment.
Optionally, prior to adjusting the visual parameter and/or the aural parameter of the virtual avatar, the method comprises outputting an electronic notification to the user (i.e. the user that has specified the at least one visual setting and/or the at least one aural setting). The electronic notification may inform the user of the changes that will be applied to the virtual avatar.
Optionally, the user has the option to deny permission for the adjustments, or to propose alternative adjustments.
According to a second aspect of the present disclosure, there is provided a computing device comprising a processor and memory, the memory communicatively coupled to the processor, the memory comprising instructions which, when executed by the processor, cause the processor to carry out the method of any embodiment or example of the first aspect of this disclosure.
Optionally, the computing device may be, but is not limited to, a gaming console, or a desktop PC, or a laptop, or a mobile phone, or a smart TV, or a server.
According to a third aspect of the present disclosure, there is provided a computer program comprising instructions which when executed in a computerized system comprising at least one processor, cause the at least one processor to carry out the method of any embodiment or example of the first aspect of this disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of any embodiment or example of the first aspect of this disclosure.
Embodiments of this disclosure are described hereinafter, by way of example only, with reference to the accompanying drawings, in which like reference signs relate to like elements.
The following detailed description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. As used herein, the words “exemplary” and “example” mean “serving as an example, instance, or illustration.” Any implementation described herein as exemplary or an example is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, or the following detailed description. Features are not shown to scale in the drawings unless it is explicitly stated otherwise.
The XR environment may be a virtual reality (VR) environment, or a mixed reality (MR) environment, or an augmented reality (AR) environment. In some embodiments, the XR environment may be a video game, or a metaverse, but it will be appreciated that the present disclosure is not limited to a particular type of XR environment.
At step 102, the method comprises obtaining at least one visual setting and/or at least one aural setting of the XR environment. In some embodiments, obtaining at least one visual setting and/or at least one aural setting of the XR environment comprises determining or measuring one or more visual and/or aural properties of the XR environment. Additionally or alternatively, obtaining at least one visual setting and/or at least one aural setting of the XR environment may comprise receiving or retrieving said at least one visual setting and/or at least one aural setting. In a non-limiting example, at step 102 at least one visual and/or aural setting of the XR environment is received from a user input, or retrieved from a storage means.
At step 104, the method comprises obtaining a visual parameter and/or an aural parameter of a virtual avatar. The virtual avatar is either within the XR environment or has requested entry to the XR environment. In some embodiments, step 104 comprises receiving or retrieving a user input defining the visual parameter and/or aural parameter of the virtual avatar. In some embodiments, step 104 comprises retrieving the visual parameter and/or aural parameter from a data file, cache, memory or other resource associated with said virtual avatar.
Optionally, the method then proceeds to step 105, wherein a number of virtual avatars within, and/or requesting entry to, the XR environment is determined. For example, the method may determine that ten virtual avatars need to be accommodated within the XR environment. It will be appreciated that any number of virtual avatars may be provided.
At step 106, the method comprises comparing the visual parameter of the virtual avatar to the at least one visual setting of the XR environment, and/or comparing the aural parameter of the virtual avatar to the at least one aural setting of the XR environment.
Optionally, the method then proceeds to step 108, wherein an electronic notification is output to a user. In some embodiments, the electronic notification is displayed on a computing device associated with the user, or it may be displayed virtually to the user within the XR environment (e.g. using an XR headset). In some embodiments, the user is associated with the virtual avatar in question. In some embodiments, the user may be responsible for inputting or specifying the aural or visual setting of the XR environment. It will be appreciated that the user may be any user that is interacting with the XR environment.
The electronic notification output at step 108 can be used as a warning, notifying the user that one or more adjustments will be made to the virtual avatar. It may also seek permission from the user to make the suggested adjustments to the virtual avatar, based on the comparison in step 106.
In other embodiments, step 108 may be skipped, and the method can proceed from step 106 to step 110.
In step 110, the method comprises adjusting the visual parameter and/or the aural parameter of the virtual avatar, based on the comparison in step 106.
It will be appreciated that in step 110, based on the comparison, in some instances no adjustment may be required, such that the adjustment can be considered to be zero.
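A minimal sketch of the flow of steps 102 to 110 is set out below, reusing the illustrative EnvironmentSettings and AvatarParameters structures sketched above. The notify_user callback stands in for the optional notification of step 108; all names are illustrative assumptions rather than a definitive implementation.

```python
def run_method(env: EnvironmentSettings, avatar: AvatarParameters,
               notify_user=None) -> AvatarParameters:
    """Minimal sketch of steps 102-110: compare avatar parameters to
    environment settings and adjust wherever they conflict."""
    adjustments = {}

    # Step 106: compare the aural parameter to the aural setting.
    if env.max_vocal_volume is not None and avatar.vocal_volume > env.max_vocal_volume:
        adjustments["vocal_volume"] = env.max_vocal_volume

    # Step 106: compare the visual parameter to the visual setting.
    if env.scale is not None and avatar.scale > env.scale:
        adjustments["scale"] = env.scale

    # Step 108 (optional): notify the user before applying the changes.
    if adjustments and notify_user is not None:
        notify_user(adjustments)

    # Step 110: apply the adjustments (the adjustment may be zero).
    for name, value in adjustments.items():
        setattr(avatar, name, value)
    return avatar
```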
The adjustments may ensure that the virtual avatar conforms to the requirements or limitations of the XR environment. For example, the XR environment may have a muted colour tone. In comparison, the predefined parameters of a given virtual avatar may be such that the virtual avatar comprises a plurality of bright colours. In order to adapt the virtual avatar to the XR environment, step 110 may comprise adjusting the colours of the virtual avatar such that they are more muted.
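Purely as an illustrative assumption of how such a colour adjustment could be performed, the following sketch desaturates a colour by blending it toward its grey (luma) value:

```python
def mute_colour(rgb: tuple[int, int, int], amount: float = 0.5) -> tuple[int, int, int]:
    """Desaturate an RGB colour by blending it toward its grey (luma) value.
    amount = 0.0 leaves the colour unchanged; 1.0 yields full greyscale."""
    r, g, b = rgb
    grey = 0.299 * r + 0.587 * g + 0.114 * b  # ITU-R BT.601 luma weights
    blend = lambda c: round(c + (grey - c) * amount)
    return (blend(r), blend(g), blend(b))

# e.g. mute_colour((255, 40, 40), 0.6) softens a bright red to roughly (165, 79, 79)
```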
In some embodiments the XR environment may be incapable of displaying the virtual avatar according to the predefined parameters.
In other embodiments, the XR environment may be capable of displaying the virtual avatar according to the predefined parameters, but doing so would result in a conflict between the virtual avatar and the XR environment. Taking the above example, the XR environment may be capable of generating the colours required for the virtual avatar, but doing so would result in a stark contrast between the rest of the XR environment and the virtual avatar that may reduce the immersivity and overall experience for the user(s) in the XR environment.
Optionally, the adjustment in step 110 may also be at least partially based on the number of virtual avatars determined in step 105 of the method. For example, the at least one visual setting obtained in step 102 may include a size or dimension of the XR environment. The visual parameter of the virtual avatar in step 104 may include a size or dimension of the avatar (e.g. height, width, etc.). The method may determine in step 105 that ten virtual avatars need to be accommodated within the XR environment.
Optionally, step 105 may include obtaining or determining a visual parameter and/or an aural parameter of each virtual avatar within or requesting entry to the XR environment. In the above example this may include determining a size or dimension (e.g. height, width, etc.) of each of the ten virtual avatars.
Thus, at step 110 the method may include adjusting the size or dimension of the virtual avatar to ensure that all ten of the virtual avatars will fit within the dimensions of the XR environment.
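A hypothetical sketch of such a size adjustment is given below: a uniform scale factor is computed so that the combined footprint of all avatars fits within the environment's width. The function name and the single-axis simplification are assumptions for illustration.

```python
def fit_avatars(avatar_widths: list[float], env_width: float,
                spacing: float = 0.0) -> float:
    """Return a uniform scale factor so that all avatars (plus any spacing
    between them) fit within the environment width; 1.0 means no change."""
    total = sum(avatar_widths) + spacing * (len(avatar_widths) - 1)
    if total <= env_width:
        return 1.0               # everything already fits: the adjustment is zero
    return env_width / total     # shrink all avatars proportionally

# e.g. ten avatars of width 1.2 in an environment of width 10.0:
# fit_avatars([1.2] * 10, 10.0) -> 0.833...
```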
It will be appreciated that the present disclosure is not limited to adjusting the size or dimensions of the virtual avatar(s). One or more other visual and/or aural properties of the virtual avatar(s) may be adjusted to fit within the limitations of the XR environment.
In some embodiments, the adjustments made to the virtual avatar may improve accessibility of the XR environment. For example, the adjustments made may mitigate problems associated with sensory sensitivities, colour-blindness, visual impairments, etc.
An example of the adjustment that may be made to the visual parameter and/or the aural parameter of the virtual avatar is described in more detail below.
The present disclosure provides a method for improving visual and/or aural representation of virtual avatars in an XR environment, which advantageously mitigates the problems described above.
In the example described below, a first virtual avatar 204 and a second virtual avatar 208 are present within an XR environment.
In step 104, the method comprises obtaining the pre-set parameters that define the first virtual avatar 204 and the second virtual avatar 208, which in this embodiment includes the size of the virtual avatar and the colour(s) of the virtual avatar.
In step 106, these pre-set parameters are compared to the at least one visual setting of the XR environment. In this example, the comparison identifies two conflicts: the second virtual avatar 208 exceeds the size or scale setting of the XR environment, and its colour provides insufficient contrast against the background.
In step 110, to remedy the conflicts identified by the comparison, the size or scale of the second virtual avatar 208 is decreased in line with (i.e., to conform with) the size or scale setting of the XR environment. In addition, the colour of the second virtual avatar 208 is changed such that a contrast level between the avatar and the background is increased. Optionally, the colour of the second virtual avatar 208 is adjusted to achieve a pre-set or minimum contrast level. The adjusted avatar 208′ reflects said changes.
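The pre-set or minimum contrast level could, for example, be evaluated with the WCAG relative-luminance contrast ratio. The sketch below is one possible approach, not the claimed method, and assumes a dark background so that lightening the avatar colour increases contrast; a fuller implementation would also try darkening.

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG 2.x relative luminance of an sRGB colour (components 0-255)."""
    def linearise(c: int) -> float:
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """Contrast ratio between two colours, ranging from 1:1 to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def lighten_until(fg, bg, target: float = 4.5, step: int = 5):
    """Lighten fg in small steps until the target contrast against bg is met."""
    fg = list(fg)
    while contrast_ratio(fg, bg) < target and max(fg) < 255:
        fg = [min(255, c + step) for c in fg]
    return tuple(fg)
```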
A graphical user interface (GUI) may be provided that allows a user to specify a range of permitted values for one or more visual and/or aural parameters of the virtual avatar. In one embodiment, the user sets the minimum and maximum permitted values for each parameter using sliders.
In other embodiments, sliders may not be used. For example, the user may simply enter the numerical values for the minimum and/or maximum permitted values for a parameter.
Obtaining a user-specified range of permitted values for one or more visual and/or aural parameters of the virtual avatar means that, at step 110 of the method described above, any adjustment to those parameters can be kept within the user-specified range.
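In that case the adjustment of step 110 reduces to a simple clamp, as in the following sketch (the function name and example values are assumptions):

```python
def clamp_to_range(value: float, minimum: float, maximum: float) -> float:
    """Constrain an adjusted parameter to the user-specified permitted range."""
    return max(minimum, min(value, maximum))

# e.g. the environment would reduce a vocal pitch to 80 Hz, but the user's
# permitted range is 120-300 Hz, so the value actually applied is 120 Hz:
# clamp_to_range(80.0, 120.0, 300.0) -> 120.0
```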
The visual and/or aural settings of the XR environment itself, by contrast, are not defined by the GUI described above.
In addition to ensuring that a virtual avatar fits within the constraints of an XR environment, the method of the present disclosure may also be used to moderate avatar behaviour in an XR environment. An embodiment of such a method 400 is described below.
Whilst the standard user settings for an XR environment may allow the user to control the volume of any music or other sounds generated by the XR environment, those settings may not allow the user to control the volume of other virtual avatars or users. This is addressed by the method 400 of the present disclosure.
At step 420 the method includes obtaining a user-specified maximum volume setting for an XR environment. The maximum volume setting may be set by a user that has sensory sensitivity, or by a user who wants a relaxing experience and does not want to hear loud arguments or other loud noises from other virtual avatars in the same environment. In some embodiments, the user may be a moderator or administrator for the XR environment.
At step 422, the method comprises monitoring the volume of all virtual avatars within the XR environment. Said monitoring may be ongoing or continuous for the duration that said user (that specified the setting) is within the XR environment.
Monitoring the volume of the virtual avatars includes, at step 424, determining whether a volume of a virtual avatar exceeds the maximum volume threshold. In some embodiments, the processor carrying out steps 422 and 424 may receive data obtained from each microphone associated with each virtual avatar in the XR environment.
If at step 424 an output volume associated with a virtual avatar is determined to exceed the maximum volume threshold, then the virtual avatar may be muted for a given period of time (step 426). This period of time may be specified by the user that specified the maximum volume setting. After the period of time has elapsed, the method may unmute the virtual avatar and re-check that the volume is now below the maximum volume threshold. If the virtual avatar (i.e. the user associated with the virtual avatar) is still exceeding the maximum volume setting then the virtual avatar is re-muted. Optionally, if a virtual avatar is muted twice within a given period of time then the duration of the mute may be increased (not shown).
Alternatively, at step 426 the volume of the virtual avatar may be reduced to below the maximum volume threshold, as opposed to muted entirely. In some embodiments, at step 426 the virtual avatar's sound output may be attenuated for a given period of time.
Optionally, if a virtual avatar is a repeat offender, such that they exceed the maximum volume setting a predetermined number of times within a given time period, then the virtual avatar may be removed and/or banned from the XR environment (not shown).
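A hypothetical sketch of steps 422 to 426, including the optional escalation and removal behaviours described above, is given below. The threshold, mute duration, and strike count are illustrative assumptions.

```python
import time
from typing import Optional

class VolumeModerator:
    """Illustrative sketch of method 400: mute avatars whose output volume
    exceeds a user-specified maximum. All thresholds, durations, and the
    strike-based escalation are assumptions for illustration."""

    def __init__(self, max_volume_db: float, mute_seconds: float = 30.0,
                 strikes_before_removal: int = 3):
        self.max_volume_db = max_volume_db
        self.mute_seconds = mute_seconds
        self.strikes_before_removal = strikes_before_removal
        self.muted_until: dict[str, float] = {}  # avatar id -> time at which to unmute
        self.strikes: dict[str, int] = {}        # avatar id -> offence count

    def check(self, avatar_id: str, volume_db: float,
              now: Optional[float] = None) -> str:
        """Steps 422-426: monitor one volume sample and mute or escalate."""
        now = time.monotonic() if now is None else now
        if now < self.muted_until.get(avatar_id, 0.0):
            return "muted"                       # still within the mute period
        if volume_db <= self.max_volume_db:
            return "ok"                          # below the maximum volume setting
        strikes = self.strikes.get(avatar_id, 0) + 1
        self.strikes[avatar_id] = strikes
        if strikes >= self.strikes_before_removal:
            return "removed"                     # optional repeat-offender removal
        # Optionally escalate the mute duration with each repeat offence.
        self.muted_until[avatar_id] = now + self.mute_seconds * strikes
        return "muted"
```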
In some embodiments, the muting of the virtual avatar at step 426 may only be applied to sound output to the user that specified the maximum volume setting. This may be particularly applicable if the user specified setting is due to a personal preference or sensory requirement.
In some embodiments, the muting of the virtual avatar may be applied to each instance of the XR environment, such that the virtual avatar is muted for all virtual avatars (and users) that are in the XR environment. This may be particularly applicable if the threshold setting is applied by a moderator or administrator responsible for moderating virtual avatar behaviour in the XR environment.
The method described above may be particularly beneficial for users with sensory sensitivities, for example a sensitivity to loud or high-pitched sounds.
In an embodiment, a user having such sensory sensitivities can specify one or more aural and/or visual settings for the XR environment that would improve their experience of said environment. For example, if the user does not like noises above a certain pitch, the user could specify a suitable threshold pitch for the XR environment, which would apply to all sounds in the environment including all virtual avatars in said environment as well as any background noise, music, cut scenes, or NPC speech. As this requirement is particular to the user in question, in some embodiments this setting is only applied to sound output to said user. Thus, only the particular user's experience of the XR environment may be affected. This allows a user to control their own experience in the XR environment and adapt said environment to fit their needs without impacting other users' experience of the same environment. In other embodiments, the setting may be applied throughout the environment (i.e. to all current users).
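Purely as an illustrative assumption, a per-user threshold pitch could be approximated by low-pass filtering the audio stream delivered to that user only. The sketch below uses SciPy's Butterworth filter; treating the threshold pitch as a filter cut-off is a simplification introduced here for illustration.

```python
import numpy as np
from scipy.signal import butter, lfilter

def apply_pitch_ceiling(samples: np.ndarray, sample_rate: int,
                        ceiling_hz: float) -> np.ndarray:
    """Attenuate frequency content above the user's threshold pitch by
    low-pass filtering the audio stream sent to that user only."""
    nyquist = sample_rate / 2.0
    b, a = butter(N=4, Wn=ceiling_hz / nyquist, btype="low")
    return lfilter(b, a, samples)

# e.g. 48 kHz audio with a 2 kHz ceiling for a sensitive user:
# filtered = apply_pitch_ceiling(raw_samples, 48_000, 2_000.0)
```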
The client computing device 503 may include, but is not limited to, a video game playing device (games console), a smart TV, a set-top box, a smartphone, laptop, personal computer (PC), USB-streaming device, etc. The client computing device 503 comprises, or is in communication with, at least one resource or device configured to obtain input data from the user. In this embodiment, the at least one resource comprises an extended reality display device 505, an input device or controller 504, and at least one sensor or camera 506. Optionally, only the extended reality display device 505 may be provided, or only the extended reality display device 505 and the input device or controller 504. Optionally, the at least one sensor or camera 506 may be integrated into the extended reality display device 505 or the input device or controller 504.
Information passes between the processing device 501 and the client computing device 503 in both directions.
The example computing device 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random-access memory (SRAM), etc.), and a secondary memory (e.g., a data storage device 618), which communicate with each other via a bus 630.
Processing device 602 represents one or more general-purpose processors such as a microprocessor, central processing unit, or the like. More particularly, the processing device 602 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 602 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processing device 602 is configured to execute the processing logic (instructions 622) for performing the operations and steps discussed herein.
The computing device 600 may further include a network interface device 608. The computing device 600 also may include a video display unit 610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 612 (e.g., a keyboard or touchscreen), a cursor control device 614 (e.g., a mouse or touchscreen), and an audio device 616 (e.g., a speaker).
The data storage device 618 may include one or more machine-readable storage media (or more specifically one or more non-transitory computer-readable storage media) 628 on which is stored one or more sets of instructions 622 embodying any one or more of the methodologies or functions described herein. The instructions 622 may also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computing device 600, the main memory 604 and the processing device 602 also constituting computer-readable storage media.
The various methods described above may be implemented by a computer program. The computer program may include computer code arranged to instruct a computer to perform the functions of one or more of the various methods described above. The computer program and/or the code for performing such methods may be provided to an apparatus, such as a computer, on one or more computer readable media or, more generally, a computer program product. The computer readable media may be transitory or non-transitory. The one or more computer readable media could be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, or a propagation medium for data transmission, for example for downloading the code over the Internet. Alternatively, the one or more computer readable media could take the form of one or more physical computer readable media such as semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disc, and an optical disk, such as a CD-ROM, CD-R/W or DVD.
In an implementation, the modules, components and other features described herein can be implemented as discrete components or integrated in the functionality of hardware components such as ASICs, FPGAs, DSPs or similar devices.
A “hardware component” is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner. A hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be or include a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC. A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations.
Accordingly, the phrase “hardware component” should be understood to encompass a tangible entity that may be physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein.
In addition, the modules and components can be implemented as firmware or functional circuitry within hardware devices. Further, the modules and components can be implemented in any combination of hardware devices and software components, or only in software (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium).
Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilising terms such as “providing”, “calculating”, “computing”, “identifying”, “detecting”, “establishing”, “training”, “determining”, “storing”, “generating”, “checking”, “obtaining” or the like, refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Accordingly, there has been described a computer-implemented method, comprising obtaining at least one visual setting and/or at least one aural setting of an extended reality environment, obtaining a visual parameter and/or an aural parameter of a virtual avatar, wherein the virtual avatar is within the extended reality environment or has requested entry to the extended reality environment, comparing the visual parameter of the virtual avatar to the at least one visual setting of the extended reality environment, and/or comparing the aural parameter of the virtual avatar to the at least one aural setting of the extended reality environment, and based on the comparison, adjusting the visual parameter and/or the aural parameter of the virtual avatar. The virtual avatar may be adjusted within a user-specified range of permitted values for the parameter.
Although particular embodiments of this disclosure have been described, it will be appreciated that many modifications/additions and/or substitutions may be made within the scope of the claims.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. Although the disclosure has been described with reference to specific example implementations, it will be recognised that the disclosure is not limited to the implementations described but can be practiced with modification and alteration within the scope of the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Number | Date | Country | Kind
---|---|---|---
2316631.7 | Oct 2023 | GB | national