The technology disclosed herein relates generally to the field of content filtering of computer-generated content creating an extended reality, and in particular to means and methods for rendering Extended Reality, XR, content to a user.
Extended reality (XR) is a term referring to all real-and-virtual environments and to human-machine interactions generated by computer technology and wearables. Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) are examples of XR, where the "X" of XR is a variable indicating any current or future spatial computing technology.
A person may be completely immersed in an XR environment and might in such situations run the risk of cognitive overload, in which the person receives too much information and/or too many tasks simultaneously and becomes unable to process the information. The person may then, depending on the physical environment he or she is in, be prone to injuries or accidents. It may therefore be desirable or even necessary to remove or filter content of digital media (computer graphics, sound, etc.) in XR. Such removal or filtering is often cumbersome, and failure thereof results in provision of irrelevant content, which in turn gives a poor user experience. In some use cases, XR glasses are needed in order to present necessary information to the user, and in these use cases the risk of cognitive overload is high, as is the risk of injuries and accidents.
Classic information filters, such as spam filters, are inadequate in XR environments, which are highly dynamic. The classic information filters are also inadequate when it comes to providing the information of interest for the user at a certain point in time.
There is a need for improving safety and user experience in XR environments.
An objective of embodiments herein is to address and improve various aspects for personal safety in XR experience, by preventing, for instance, accidents and cognitive overload from happening to the user during an XR event. This objective and others are achieved by the methods, devices, computer programs and computer program products according to the appended independent claims, and by the embodiments according to the dependent claims.
This objective and others are accomplished by using, inter alia, various policies and classification of content as a basis for dynamically determining what information to present to the user.
According to a first aspect there is presented a method for rendering Extended Reality, XR, content to a user. The method is performed in a device and comprises receiving, in the device, XR content; determining, in a policy entity, the XR content to be rendered based on one or more policies; classifying, in a classification service, the XR content; and rendering the XR content in an XR environment based on the one or more policies and the classification of the XR content.
In an aspect related to the above, a current cognitive load of the user is determined and taken into account when determining whether to render particular XR content.
According to a second aspect there is presented a computer program for rendering Extended Reality, XR, content to a user. The computer program comprises computer code which, when run on processing circuitry of a device, causes the device to perform a method according to the first aspect.
According to a third aspect there is presented a computer program product comprising a computer program as above, and a computer readable storage medium on which the computer program is stored.
According to a fourth aspect there is presented a device for providing Extended Reality, XR, content to a user. The device is configured to: receive XR content; determine which XR content to render based on one or more policies; classify the XR content; and render XR content in an XR environment based on the one or more policies and the classification of the XR content.
According to a fifth aspect there is presented a method for rendering Extended Reality, XR, content to a user. The method is performed in a system and comprises receiving XR content; determining the XR content to be rendered based on one or more policies; classifying the XR content; and rendering the XR content in an XR environment based on the one or more policies and the classification of the XR content.
According to a sixth aspect there is presented a system for rendering Extended Reality, XR, content to a user. The system is configured to: receive XR content; determine the XR content to be rendered based on one or more policies; classify the XR content; and render the XR content in an XR environment based on the one or more policies and the classification of the XR content.
Advantageously, these aspects enable an XR experience accounting for, and reducing, risks of personal injuries. By taking into account and using factors such as, for instance, various policies and classifications, cognitive load, proximity to objects and direction of user in the information filtering, the personal safety during XR events is improved. Further, XR content may be altered and/or displaced before rendering it if a policy is updated, or the currently displayed content may be adapted if a corresponding policy is updated. These aspects enable decisions to be made dynamically based on various factors, which factors may change during the XR experience.
Advantageously, these aspects are applicable in various scenarios, and in particular in virtual environments or in a real environment with augmented content presented as an overlay. In such scenarios, information is presented in, e.g., XR glasses, and the risk of cognitive overload is high. This may, for instance, be the case when a user is unable or unsuited to manually interact with the content in the XR environment, e.g., when driving a car or performing tasks wherein both hands are occupied.
Other objectives, features and advantages of the enclosed embodiments will be apparent from the following detailed disclosure, from the attached dependent claims as well as from the drawings.
Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to “a/an/the element, apparatus, component, means, module, action, etc.” are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, module, action, etc., unless explicitly stated otherwise. The actions of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
The inventive concept is now described, by way of example, with reference to the accompanying drawings, in which:
The inventive concept will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the inventive concept are shown. This inventive concept may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and will fully convey the scope of the inventive concept to those skilled in the art. Like numbers refer to like elements throughout the description. Any action or feature illustrated by dashed lines should be regarded as optional.
Briefly, in contrast to existing XR content filtering solutions, the present disclosure provides means and methods for determining the degree of cognitive load as a criterion for adapting information overlaid on XR glasses, e.g., in order to avoid accidents. Further, methods and means are provided for dynamically deciding what overlay information should be shown in a user's Field of View (FoV), peripheral view and/or on non-XR devices, in dependence on factors such as, for instance, policies, cognitive load and information priority. Still further, methods and means are provided for classifying unknown content, and for altering or displacing content before rendering it, or adapting the currently displayed content, if a policy is updated.
In a generalized set of embodiments, illustrated at the uppermost, right part of
The classification service 24 may be utilized to define how not yet consumed content should be classified. In the case where the content classification is unknown, the content payload (e.g., a 3D item, an image, a text, an audio recording or a video) is sent to be categorized by the classification service 24. The classification service 24 returns the most likely categories and their respective confidence values to the policy engine 22.
Based on the obtained information, the policy engine 22 decides what content is allowed to be rendered in the user-perceived reality and how it may be rendered (e.g., in FoV or in peripheral vision). The policy engine 22 may filter content to only render content deemed prioritized while not rendering unwanted information. The policy engine 22 may also instruct the XR rendering component 26 to alter and move content to reduce the impact on the user's cognitive load.
The policy engine 22 comprises a set of rules which may, e.g., have a minimum required confidence level for one or several content categories, and will only render content if the confidence level exceeds these thresholds. The set of rules is continuously updated and may be defined manually by the user, defined by historical interactions, current context and cognitive load, or any combination thereof.
The policy engine 22 may store content tags and confidence values from the classification service to speed up future classification of content.
The system 300 thus comprises a classification service 24, a policy engine 22, a monitoring service 28 and an XR rendering component 26. The system 300 may be a single device comprising some or all of these entities 24, 22, 28, 26. In other embodiments, the system 300 is distributed, wherein some or all entities are located in, for instance, a cloud environment. The mentioned entities 24, 22, 28, 26 of the system 300 will be described more in detail in the following.
The classification service 24 comprises one or more classification tools for determining a respective appropriate category for different types of incoming content. The classification service 24 may, for instance, comprise machine learning algorithms for text, image, speech and/or object detection. These algorithms may be based on techniques such as convolutional neural networks (CNN), or other types of Deep Neural Networks (DNN), such as Long Short Term Memory Networks (LSTM), Recurrent Neural Networks (RNN), Autoencoders and Random Forest. The classification service 24 may apply several different algorithms or the same algorithm trained with different datasets, in order to increase reliability and/or credibility of the classification. The algorithms take the content data or a part of the content data as input and provide as output which class(es) the particular content may adhere to. The output class(es) may in turn be classified to more specific classes by additional algorithms.
The classification service 24 may use one or more of the above-described tools and compile a list of categories and confidence values for each category based, for instance, on a weighted result.
An example of how to implement the classification service 24 is a convolutional neural network, which receives an image and outputs a number of classes with corresponding trust levels. For instance, (Cat—0.8, Dog—0.1, Wolf—0.1), wherein the image comprising a cat has a trust level of 0.8, etc. The combination of classes is fed to another classification algorithm, e.g., Random Forest, which determines what age the content is suitable for. The second algorithm outputs a second classification: Child Friendly—0.99. The combination of (Child Friendly—0.99, Cat—0.8, Dog—0.1, Wolf—0.1) may then be supplied to the user device.
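The two-stage classification above may be expressed, purely as an illustrative sketch, as follows; stub functions stand in for the trained CNN and Random Forest models, and all names and toy rules are assumptions rather than part of the disclosure:

```python
# Illustrative two-stage classification: a stub "CNN" yields class trust
# levels for an image; a stub second-stage classifier maps those to an
# age-suitability label. Stubs return the example values from the text.

def image_classifier(image):
    # First stage: a CNN would produce per-class trust levels here.
    return {"Cat": 0.8, "Dog": 0.1, "Wolf": 0.1}

def age_classifier(class_trust):
    # Second stage: e.g., a Random Forest deciding age suitability
    # from the combination of classes (toy rule for this sketch).
    if class_trust.get("Wolf", 0.0) < 0.5:
        return {"Child Friendly": 0.99}
    return {"Child Friendly": 0.10}

def classify(image):
    first = image_classifier(image)
    second = age_classifier(first)
    # The combined classification is supplied to the user device.
    return {**second, **first}

result = classify("cat.jpg")
# result: {"Child Friendly": 0.99, "Cat": 0.8, "Dog": 0.1, "Wolf": 0.1}
```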
The classification service 24 may comprise one or more searchable databases 21 in which content tags are stored together with classifications and confidence values. The confidence values may not only reflect the probability of content being of a certain category, but also the level of confidence in the information source which provided the classification. The classification service 24 may supply a content tag to search the database 21 for any stored categorizations.
In various embodiments, the database 21 may be updated with results of the machine learning algorithms described earlier. The database 21 may contain both global knowledge and user-specific entries, such as personal data. The database 21 may also be updated by human interactions, e.g., a user or another trusted human operator manually submitting classifications and trust levels.
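A minimal sketch of such a tag-keyed database, assuming a hash-based content tag and a source-trust weighting of confidence values (the structure and names are illustrative, not prescribed by the text):

```python
# Sketch of a searchable classification database keyed by content tag.
# The confidence stored per category is weighted by the trust placed
# in the source that supplied the classification (an assumption).

import hashlib

def content_tag(payload: bytes) -> str:
    # A content tag may, for instance, be computed by hashing the payload.
    return hashlib.sha256(payload).hexdigest()

database = {}

def store(tag, categories, source_trust=1.0):
    # Weight each confidence value by the trust in the source.
    database[tag] = {cat: conf * source_trust
                     for cat, conf in categories.items()}

def lookup(tag):
    # Returns stored categorizations, or None for unseen content.
    return database.get(tag)

tag = content_tag(b"some 3D item")
store(tag, {"Child Friendly": 0.99}, source_trust=0.9)
entry = lookup(tag)
```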
The classification service 24 may additionally (or alternatively) implement Non-fungible token (NFT)-based classification, utilizing a blockchain capable of handling NFTs for categorization. The content of the blockchain is publicly available and is useful when a user device 27 in an XR environment 29 is presented with content. In this context, the object/vendor presents the unique identity of the NFT associated with the content the object/vendor intends to deliver. The classification service 24 may utilize the blockchain in order to locate a picture or video and metadata bound to the NFT identity. In this context, the metadata may consist of classification data or tags. The classification service 24 may also verify who created an asset and who is currently in possession of it. An exemplary scenario may be that a design bureau creates, advertises and sells an asset to a company. When the design bureau sells the asset in the shape of an NFT, it transfers the asset to a wallet owned by the buying company. Allowing the classification service to verify who is in possession of the asset to be presented gives further possibilities to assess the credibility of the owning entity.
If the classification service 24 fails to detect what class the content belongs to, the content may be sent to a human operator for manual classification. The classification service 24 may have the addresses (for instance email addresses or Internet Protocol, IP, addresses) of one or several devices able to receive requests for categorizing content. In some embodiments, this may also be the preferred classification method, where a parent or other trusted party must allow content for children in the XR environment 29.
As is clear by the various examples given, the classification service 24 may be implemented in many different ways. Besides the described examples of using machine learning based classification, database-based classification, NFT-based classification and manual classification (and combinations thereof), various other methods may be used.
Next, the policy engine 22 is described in more detail. The policy engine 22 is a component whose function is to decide if and how content should be rendered for the user. A configuration of the policy engine 22 may have a permanent part comprising user prerequisites such as gender, age, etc.; a semi-permanent part comprising user preferences such as a shopping list, preferred ads, etc.; and a temporary part which is continuously updated in order to reflect the susceptibility to new content and the current cognitive load of the user. These factors may be combined for keeping and continuously updating a current set of rules for how to handle incoming content.
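The three-part configuration may be sketched as follows; the field names are hypothetical, and only the permanent / semi-permanent / temporary split is taken from the text:

```python
# Sketch of the three-part policy engine configuration.

from dataclasses import dataclass, field

@dataclass
class PolicyConfig:
    # Permanent part: user prerequisites (gender, age, etc.).
    age: int = 0
    # Semi-permanent part: user preferences.
    shopping_list: list = field(default_factory=list)
    preferred_ads: list = field(default_factory=list)
    # Temporary part: continuously updated to reflect susceptibility
    # to new content and the current cognitive load of the user.
    cognitive_load: float = 0.0
    susceptibility: float = 1.0

    def update_temporary(self, cognitive_load, susceptibility):
        self.cognitive_load = cognitive_load
        self.susceptibility = susceptibility

cfg = PolicyConfig(age=34, preferred_ads=["coffee"])
cfg.update_temporary(cognitive_load=0.7, susceptibility=0.3)
```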
As another, more specific example of user preferences that may be stored in the semi-permanent part, cognitive load can be mentioned. The settings for the cognitive load may be set such that they prevent, e.g., non-relevant content from being shown to a service technician who currently is in a stressful situation. Yet another example is the use of an in-car head-up display, on which the content to be shown may be based on speed, darkness, time, etc., according to set user preferences. Still another example is adapting the XR user interface according to settings: a user with XR glasses sees a restaurant, and the content presented to her may comprise menu, ranking, price indications and waiting time. The content may be adapted dynamically, e.g., if the user is short of time (which can be decided based on an available calendar/schedule), only dishes that can be provided rapidly are shown. Another example is use in a hospital: e.g., if one or more mistakes are made during an operation, upon which the cognitive load of the surgeon most likely increases, the policy engine may prioritize what information to show to the supposedly highly stressed (cognitively overloaded) personnel.
From the above it is realized that the various aspects of the present teachings may be implemented for and adapted to various situations.
Input to the policy engine 22 comprises both external overlay content, data from cognitive load monitoring service and data from classification service. Such data is processed by the policy engine 22 for various purposes, for instance: to determine overlay content priority and if said content may be modified by reducing and/or displacing the content; to analyze the cognitive load of the user and susceptibility to new content; and to evaluate historical data and determine whether present context or certain task may impact a user's focus.
The policy engine 22 contains a set of rules comprising one or several rules which apply to one or several content categories. The rules may comprise one or several categories, a rendering decision and optionally a threshold. The threshold may indicate a minimum confidence level (e.g., between 0 and 1) that the content classification needs to fulfill for the rendering decision to apply. As an example, the user only wants coffee offers to be shown if the confidence value is above 0.8. If the confidence value is below this set threshold, the coffee offers will not be shown to the user, even if the category/vendor is approved (whitelisted, described later).
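The coffee-offer example may be sketched as a threshold check against a whitelisted category; the names and values below are illustrative assumptions:

```python
# Sketch of the rule threshold check: content in an approved
# (whitelisted) category is rendered only if its classification
# confidence exceeds the rule's threshold.

def should_render(category, confidence, rules, whitelist):
    if category not in whitelist:
        return False                       # category/vendor not approved
    rule = rules.get(category)
    if rule is None:
        return False                       # no rule for this category
    return confidence > rule["threshold"]

rules = {"coffee_offer": {"threshold": 0.8}}
whitelist = {"coffee_offer"}

shown = should_render("coffee_offer", 0.85, rules, whitelist)   # True
hidden = should_render("coffee_offer", 0.75, rules, whitelist)  # False
```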
As another example, the category “child-friendly” may be used in order to filter out content that is unsuitable for children and may be set with a 0.99 threshold value to ensure that no false negatives get through, i.e., to ensure that the children do not get to see the unsuitable content.
As another, more advanced example, the policy may be set in dependence on, for instance, the amount of content currently rendered for the user, its priority, the user's cognitive load and an assessed concentration needed for a certain task. Exemplary policy may then be set to:
The process of dynamic rendering will enable a user, having both hands occupied, like a service technician, surgeon etc., to have relevant information available to execute a task in a secure and efficient manner. The policy manages the balance/ratio score related to input data, user cognitive load and context/task type.
The policy engine 22 may additionally store one or several white-/gray-/blacklists for previously seen content and/or objects. This allows the policy engine 22 to “remember” content known to be good or bad, which speeds up future rendering. Graylists may be used to list vendors which have previously provided content, but which are not yet classified as being known as good (i.e., on whitelists) or being known as bad (i.e., on blacklists). These lists may be updated based on output from the classification service 24.
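The list-based screening may be sketched as follows, assuming content tags as list entries; unknown tags fall through to the graylist and are forwarded to the classification service:

```python
# Sketch of white-/gray-/blacklist screening of previously seen
# content tags (the list structures are assumptions).

def screen(tag, whitelist, blacklist, graylist):
    if tag in blacklist:
        return "drop"                  # known bad: never render
    if tag in whitelist:
        return "render"                # known good: fast path
    graylist.add(tag)                  # unknown: remember and classify
    return "classify"

whitelist, blacklist, graylist = {"good-cafe"}, {"bad-actor"}, set()
decisions = [screen(t, whitelist, blacklist, graylist)
             for t in ("good-cafe", "bad-actor", "new-vendor")]
```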
Next, the monitoring service 28 arranged to monitor the cognitive load is described in more detail. The monitoring service 28 interprets the current state and context of the user based on data that it receives, e.g., from an XR rendering component 26. The cognitive load can be seen as how preoccupied the user is, either with XR content or with real-life objects, or both. The user device 27 may be arranged to measure and/or register various data relating to the user, e.g., by tracking the user's pupil diameter, gaze, etc., using, for instance, sensors. The XR rendering component 26 may then supply these values to the monitoring service 28, and the monitoring service 28 may utilize these values for determining if the user's cognitive load is above a threshold.
Susceptibility to new content is related to the cognitive load and is a measure of how much content a user is able to handle in a certain situation. The susceptibility may also be measured by sensors, such as, e.g., a camera and a velocity sensor. The monitoring service 28 may supply data on the user's cognitive load to an algorithm and determine if the user is in a situation in which distracting content is unsuitable, for instance when driving a car.
The monitoring service 28 continuously supplies values indicating current cognitive load and current content susceptibility to the policy engine 22, which in turn updates its temporary configuration and thereby its set of rules accordingly.
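How the monitoring service 28 might derive these two values is not prescribed by the text; the following sketch assumes simple illustrative formulas over pupil diameter, gaze fixations and movement speed:

```python
# Sketch of deriving cognitive load and content susceptibility from
# sensor values; the formulas and constants are illustrative
# assumptions, not taken from the text.

def estimate_cognitive_load(pupil_diameter_mm, gaze_fixations_per_s):
    # Larger pupils and more gaze fixations are taken to indicate a
    # higher load; clamped to [0, 1].
    return min(0.1 * pupil_diameter_mm + 0.2 * gaze_fixations_per_s, 1.0)

def susceptibility(cognitive_load, velocity_m_s):
    # High movement speed (e.g., driving) or high cognitive load
    # reduces susceptibility to new content.
    if velocity_m_s > 5.0:
        return 0.0
    return max(0.0, 1.0 - cognitive_load)

load = estimate_cognitive_load(pupil_diameter_mm=4.0,
                               gaze_fixations_per_s=2.0)
sus = susceptibility(load, velocity_m_s=1.0)
```

Both values would then be supplied continuously to the policy engine 22, which updates its temporary configuration accordingly.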
Finally, the XR rendering component 26 is described in more detail. The XR rendering component 26 is responsible for presenting the overlay content in a manner decided by the policy engine 22. The XR rendering component 26 receives content from the policy engine 22 and may also receive instructions to alter, displace or remove currently rendered content. It may also track certain real-time data indicating cognitive load (e.g., data such as pupillary diameter and number of gaze fixations) and susceptibility to new material (e.g., data on environment and velocity).
The system 300 is described to comprise a classification service 24, a policy engine 22, a monitoring service 28 and an XR rendering component 26 but may be implemented in various ways. As noted earlier, the system 300 may be a single device comprising some or all of these entities 24, 22, 28, 26, or the system 300 may be distributed, wherein some or all entities are located in, for instance, a cloud environment.
In a preferred embodiment, the entities 22, 24, 26, 28 of the system 300 are placed within a single device.
In other embodiments, the system 300 may be a distributed system, wherein the entities 22, 24, 26, 28 may be distributed over several devices controlled by the same user.
For instance, as a particular example, the classification service 24 may be embodied as a service within a smartphone, the cognitive load monitoring service 28 and the policy engine 22 may be embodied within a smartwatch, and the XR rendering component 26 may be placed in a pair of XR glasses. In such an embodiment, the policy engine 22 may additionally comprise a set of currently active user devices, such as a smartphone or a smartwatch. Content which is not rendered for the user may instead be displayed on a screen of a non-XR user device. This may, for instance, be an alternative when the user's cognitive load is high and new content should not be presented in front of the user.
From the description hitherto, the various embodiments can be summarized as comprising three different phases: receival, classification and rule fulfillment. During all of these phases, the XR rendering component may measure indications of the user's cognitive load (pupil diameter, gaze, etc.) and supply these to the cognitive load monitoring service 28. The monitoring service 28 returns an indication of the current cognitive load and current susceptibility to new content. This is weighed in as a factor in the rule fulfillment phase.
The flow starts in box B1, and continues to box B2, in which a policy engine 22 receives content and (in this case) metadata from an object. From box B2, there are two similar optional flows shown, a first optional flow from box B2 to boxes B3, B4, and a second optional flow from box B2 to boxes B5, B6.
In the first optional flow, in box B3, a content tag is received or calculated, e.g., by hashing the content. From box B3, flow continues to box B4, in which the content tag is checked against any blacklist(s) and/or whitelist(s).
In the second optional flow, in box B5, a classification and/or signature is extracted. From box B5, flow continues to box B6, in which the classification and/or signature are checked against any blacklist(s) and/or whitelist(s).
Flow then continues in three different flows, representing three different cases. In a first of these flows, the flow starting in box B7, all attributes are whitelisted. In this case, flow continues to box B10, from which flow continues to Start 4 (of
In a second of these flows, the flow starting in box B8, the attributes are neither blacklisted nor entirely whitelisted; instead, they may be seen as graylisted. Flow continues to box B13, in which the content and content tag are sent to the classification service, for being classified. Flow then continues to box B14, from which flow continues to Start 2 (of
In a third of these flows, the flow starting in box B9, at least one attribute is blacklisted. Flow continues to box B15, and the content is not rendered for the user. Flow then continues to box B16, from which flow returns to box B1, wherein receival of new content can be made.
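The receival-phase flow of boxes B1 to B16 may be condensed into the following sketch; the attribute representation is an assumption:

```python
# Condensed sketch of the receival phase: all attributes whitelisted
# -> proceed to rule fulfillment; any attribute blacklisted -> do not
# render; otherwise treat as graylisted and send to classification.

def receival_phase(attributes, whitelist, blacklist):
    if any(a in blacklist for a in attributes):
        return "not_rendered"            # boxes B9, B15
    if all(a in whitelist for a in attributes):
        return "rule_fulfillment"        # boxes B7, B10
    return "send_to_classification"      # boxes B8, B13

wl = {"vendor:cafe", "type:menu"}
bl = {"vendor:spam"}
ok = receival_phase({"vendor:cafe", "type:menu"}, wl, bl)
bad = receival_phase({"vendor:spam", "type:menu"}, wl, bl)
new = receival_phase({"vendor:new"}, wl, bl)
```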
In box C4, the content is supplied to a classification algorithm (described earlier).
In box C10, the content tag is searched for in the database/blockchain (described earlier).
The flows from C4 and C10 both continue to box C9, wherein it is determined whether or not the content is classified with high enough confidence, for instance determining if a threshold requirement is met.
If the outcome of determination box C9 is that the content is indeed classified with high enough confidence, flow continues to box C11, in which the category of content and the confidence rating are provided to the user device 27. From box C11, flow continues to box C12, which is an optional box: if there is a database, the database is updated with the newly classified content and its category. Flow ends in box C8, wherein a new flow may begin in box D1 (of
If the outcome of determination box C9 is instead that the content is not classified with high enough confidence, flow continues to box C5. In box C5 it is determined whether there are other, alternative classification mechanisms. If there is, flow returns to box C3 for attempting these alternative classifications. If the outcome of determination box C5 is “No”, i.e., that there are no further classification alternatives, flow continues to box C6, in which the user device is informed that the content is unknown. Flow may then continue to an optional box C7, in which the classification service 24 requests the user to make a manual classification of the object. Flow ends in box C8, wherein a new flow may begin in box D1 (of
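The classification phase may be condensed as follows, assuming each classification mechanism (database/blockchain lookup, machine learning algorithm, etc.) returns either None or a category-to-confidence mapping:

```python
# Condensed sketch of the classification phase: each available
# mechanism is tried in turn until one classifies the content with
# sufficient confidence; otherwise the content is reported unknown.

def classify_content(content, mechanisms, threshold=0.8):
    for mechanism in mechanisms:                  # box C5: alternatives
        result = mechanism(content)               # boxes C4 / C10
        if result and max(result.values()) >= threshold:   # box C9
            return result                         # box C11
    return None                                   # box C6: unknown

db_lookup = lambda c: None                 # nothing stored yet
ml_classifier = lambda c: {"Cat": 0.9}     # stub ML algorithm

known = classify_content("img", [db_lookup, ml_classifier])
unknown = classify_content("img", [db_lookup])
```

A None result would then trigger informing the user device that the content is unknown, optionally followed by a request for manual classification.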
If, in decision box D4, the content is blacklisted, then flow continues to box D13, and the content is not rendered to the user. From box D13, flow continues to an optional box D10, in which any white- and/or blacklists are updated with content tag or content signature or the vendor's/object's public key. Flow then ends in box D14, from which a new flow may begin in box B1 (
If, in decision box D4, the content is not on a blacklist, flow continues to box D5. In box D5, the content is checked against a dynamic rule set to see if (and how) any rule applies to the content. The rule set may be changed dynamically in dependence on, for instance, cognitive load, user context etc. The monitoring service 28 may continuously determine data related to the current environment and the state of the user and provide the data to policy engine 22. From box D5, flow continues to decision box D6, in which it is determined if there is or is not a rule for the content. If there is, flow continues to decision box D11, in which it is determined whether the content meets rule threshold(s) (e.g., a confidence threshold) set for the particular rule(s). If, for example, the confidence level for the content indeed exceeds the corresponding rule threshold, i.e., the outcome of decision box D11 is “yes” then flow continues to optional box D8, in which the content may be altered if a particular rule requires it. Flow then continues to box D9, wherein the content is rendered. Flow may go via box D10 (already described) before it ends in box D14, from which a new flow may begin in box B1 (
If, in decision box D11, the outcome is “no”, i.e., the confidence level for the rule is not met, then flow continues to box D12 and the content is not rendered to the user. Flow may go via the optional box D10 (already described) before it ends in box D14, from which a new flow may begin in box B1 (
If, in decision box D6, the outcome is "no", i.e., there is no rule applying to the content, flow continues to decision box D7, wherein it is determined if the content is typically accepted, i.e., where the fallback decision is to accept the content if nothing else has been specified. If no, flow continues to box D13, i.e., the content is not rendered for the user, and flow may go via box D10 (already described) before it ends in box D14, from which a new flow may begin in box B1 (
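The rule fulfillment phase of boxes D4 to D14 may be condensed into the following sketch; the content and rule representations are assumptions:

```python
# Condensed sketch of rule fulfillment: blacklist check, dynamic rule
# lookup, confidence threshold check, optional alteration, then render
# or drop.

def rule_fulfillment(content, rules, blacklist, accept_by_default=True):
    if content["tag"] in blacklist:                     # box D4
        return "not_rendered"                           # box D13
    rule = rules.get(content["category"])               # boxes D5, D6
    if rule is None:                                    # box D7: fallback
        return "rendered" if accept_by_default else "not_rendered"
    if content["confidence"] >= rule["threshold"]:      # box D11
        if "alter" in rule:                             # box D8 (optional)
            content = rule["alter"](content)
        return "rendered"                               # box D9
    return "not_rendered"                               # box D12

rules = {"offer": {"threshold": 0.8}}
hit = rule_fulfillment({"tag": "t1", "category": "offer",
                        "confidence": 0.9}, rules, blacklist=set())
miss = rule_fulfillment({"tag": "t1", "category": "offer",
                         "confidence": 0.5}, rules, blacklist=set())
```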
Upon starting the policy updating, box E1, a user device collects, in box E2, cognitive data. The data is preferably real-time data, such as the user's pupil diameter, gaze, pulse, heart rate, etc. Flow continues to box E3, wherein the data is supplied to the monitoring service 28. Next, in box E4, the user device collects sensor values which are indicative of susceptibility to new content. Examples of such sensor values may, for instance, be movement speed, a camera feed for the environment (a camera placed such as to capture the environment), and data for determining cognitive load, such as user context (time of day, task being performed, amount of data that is displayed, etc.), biological data (heart rate, pupil size, motion patterns, ventilation rate, breathing rate, etc.), and movements/task of the user (sitting in a sofa, walking, running, driving a car, etc.). Flow continues to box E5, wherein the monitoring service 28 provides, to the policy engine 22, values indicative of the cognitive load of the user and her susceptibility to new content. These values are based on the data collected in boxes E2 and E4. Next, in box E6, the policy engine 22 compares, for instance, the susceptibility to new material with the user's current cognitive load. Flow continues to decision box E7, wherein the policy engine 22 determines whether there is a need for one or more rules in the dynamic rule set to be updated. Next, in box E8, the policy engine 22 is updated. The updating of the dynamic rule set may, for instance, comprise changing thresholds, requiring content to be altered before rendering, only allowing certain types of content or blocking all content.
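The rule-set update decided in boxes E6 to E8 may be sketched as follows; the numeric thresholds and the specific update actions are illustrative assumptions:

```python
# Sketch of the rule-set update: very high cognitive load blocks all
# content; low susceptibility raises every confidence threshold so
# that only high-confidence content gets through.

def update_rules(rules, cognitive_load, susceptibility):
    if cognitive_load > 0.9:
        rules["block_all"] = True          # e.g., overloaded surgeon
    elif susceptibility < 0.3:
        for rule in rules.values():
            if isinstance(rule, dict) and "threshold" in rule:
                rule["threshold"] = min(1.0, rule["threshold"] + 0.1)
    return rules

rules = {"offer": {"threshold": 0.8}}
update_rules(rules, cognitive_load=0.5, susceptibility=0.2)
```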
The method 10 comprises:
The method 10 provides a number of advantages. For instance, the method 10 enables an XR experience that accounts for, and reduces, risks of personal injuries, as have been exemplified earlier. By accounting for and using factors such as, for instance, various policies and classifications, cognitive load, proximity to objects and direction of user in the information filtering, the personal safety during XR events is improved. Further, XR content may be altered and/or displaced before rendering it if a policy is updated, or the currently displayed content may be adapted if a corresponding policy is updated. These aspects enable decisions to be made dynamically based on various factors (examples of which have been given earlier), which factors may change during the XR experience.
Further, the method 10 may be used in a number of scenarios, and in particular in virtual environments or in a real environment with augmented content presented as an overlay. In such scenarios, a high amount of information may be presented in, e.g., XR glasses, and the risk of cognitive overload is then high. For instance, when a user is unable or unsuited to manually interact with the content in the XR environment, e.g., when driving a car or performing tasks wherein both hands are occupied, the method 10 reduces the risk of personal injuries by, for instance, rendering a smaller amount of content.
In an embodiment, the determining 12 of the XR content is performed dynamically during an XR event in the XR environment. This has been described earlier, and such embodiments enable adapting to an updated situation, such as the one exemplified in the above paragraph. Continuing this example: if the user stops the car, the XR content may be updated to suit this new situation, in which the car does not move.
In an embodiment, the method 10 comprises monitoring 15, in the monitoring service 28, data relating to the user. The monitoring 15 may be performed after the rendering 14, and comprises in an embodiment:
In variations of the above embodiment, the determining 12 is, at least partly, made based on the data provided by the monitoring service 28.
In various embodiments, the rendering 14 comprises determining whether to render particular XR content based on determined current cognitive load of the user.
In variations of the above embodiment, the current cognitive load of the user is determined continuously or intermittently, the cognitive load being determined based on one or more of: user location, user activity level, biometrics data, movement pattern of user and user's type of activity.
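The render decision of step 14 based on current cognitive load may be sketched as follows. The factor weights, the threshold and the notion of a "critical" content priority are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch of rendering step 14: render particular XR content
# only if the user's determined current cognitive load permits it.

LOAD_FACTORS = {
    "driving": 0.5, "walking": 0.2, "sitting": 0.0,  # user's type of activity
}

def current_cognitive_load(activity, heart_rate_bpm):
    """Determine cognitive load from activity type and biometrics data."""
    load = LOAD_FACTORS.get(activity, 0.1)
    if heart_rate_bpm > 100:              # biometrics contribution
        load += 0.3
    return load

def should_render(content_priority, activity, heart_rate_bpm, threshold=0.6):
    # Safety-critical content (e.g. a collision warning) is always rendered.
    if content_priority == "critical":
        return True
    return current_cognitive_load(activity, heart_rate_bpm) < threshold

print(should_render("normal", "driving", 110))    # load 0.8 >= 0.6 -> False
print(should_render("critical", "driving", 110))  # True
```

In practice, `current_cognitive_load` would be re-evaluated continuously or intermittently, as described above, so that the render decision tracks the user's changing situation.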
In various embodiments, the determining 12 is based on one or more of: trustworthiness of the XR content, the user reaching a threshold for cognitive load, a threshold for user susceptibility to XR content in view of current context and type of task, the user's location, the user's type of activity, user preferences, historical behaviour of the user interacting with other users, white-/blacklisted content, white-/blacklisted content metadata, white-/blacklisted cryptographic additions to the content, estimated relevance to the user, activity level of the user, the user being stationary, at rest or in motion, the user being in a vehicle, the user being outdoors or indoors, and time of day. It is noted that any combination of these factors (as well as others) may be applied.
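A combination of some of these factors may be sketched as below. The whitelist/blacklist representation, the relevance scoring and the night-time down-ranking are assumptions made for illustration only.

```python
# Minimal sketch of combining factors in the determining step 12:
# white-/blacklisted content, the user being in a vehicle, time of day,
# and estimated relevance to the user. All values are hypothetical.

WHITELIST = {"navigation", "safety"}
BLACKLIST = {"advertisement"}

def determine_content(candidates, time_of_day_hour, in_vehicle):
    selected = []
    for name, category, relevance in candidates:
        if category in BLACKLIST:
            continue                    # blacklisted content is never shown
        if in_vehicle and category not in WHITELIST:
            continue                    # only whitelisted content while driving
        if time_of_day_hour >= 22 or time_of_day_hour < 6:
            relevance -= 0.2            # down-rank content at night
        if relevance >= 0.5:            # estimated relevance to the user
            selected.append(name)
    return selected

items = [("route", "navigation", 0.9),
         ("sale", "advertisement", 0.9),
         ("news", "media", 0.8)]
print(determine_content(items, time_of_day_hour=23, in_vehicle=True))  # ['route']
```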
In various embodiments, the classifying 13 comprises determining one or more of: a category to which the XR content belongs, a confidence value for at least one specific category and a recommendation for future content with same origin.
In various embodiments, the classification service 24 comprises one or more of: an image detection algorithm, a text detection algorithm, an object detection algorithm, a database of content and respective related classification, non-fungible tokens with metadata stored on a blockchain, a speech detection algorithm.
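The classification result of step 13 may be sketched as follows. The keyword-based classifier is a stand-in assumption for the image/text/object-detection algorithms mentioned above; the categories, keywords and the blocking rule are likewise illustrative.

```python
# Illustrative sketch of the output of the classifying step 13: a category,
# a confidence value for that category, and a recommendation for future
# content with the same origin. The classifier itself is a toy stand-in.

CATEGORY_KEYWORDS = {
    "advertisement": {"sale", "discount", "buy"},
    "navigation": {"turn", "exit", "route"},
}

def classify(text):
    words = set(text.lower().split())
    best, confidence = "unknown", 0.0
    for category, keywords in CATEGORY_KEYWORDS.items():
        score = len(words & keywords) / len(keywords)
        if score > confidence:
            best, confidence = category, score
    # Recommendation for future content with the same origin.
    recommendation = "block" if best == "advertisement" and confidence > 0.5 else "allow"
    return {"category": best, "confidence": confidence, "recommendation": recommendation}

result = classify("Big sale today buy now at a discount")
print(result["category"], result["recommendation"])   # advertisement block
```

A deployed classification service 24 would instead consult, e.g., a trained detection model or a database of content with related classifications, as listed above.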
In various embodiments, the device 20 is or is included in a user device selected among: XR glasses, smart lenses, smart phone, tablet, laptop, hologram projectors, any content overlay-enabled screen, head-up display, see-through display.
A device 20 is also disclosed, the device 20 being suitable for providing Extended Reality, XR, content to a user. The device 20 may be configured to perform any of the embodiments of the method 10 as described herein. In an embodiment the device 20, for instance the user device 27, is configured to:
In an embodiment, the device 20 is configured to determine the XR content dynamically during an XR event in the XR environment.
In various embodiments, the device 20 is configured to:
In a variation of the above embodiment, the device 20 is configured to perform the determining at least partly based on the data provided by the monitoring service 28.
In various embodiments, the device 20 is configured to determine whether to render particular XR content, based on a determined current cognitive load of the user.
In various embodiments, the device 20 is configured to determine the current cognitive load of the user continuously or intermittently, wherein the cognitive load is determined based on one or more of: user location, user activity level, biometrics readouts, movement pattern of user and user's type of activity.
Particularly, the processing circuitry 110 is configured to cause the device 20 to perform a set of operations, or actions, as disclosed above. For example, the storage medium 130 may store the set of operations, and the processing circuitry 110 may be configured to retrieve the set of operations from the storage medium 130 to cause the device 20 to perform the set of operations. The set of operations may be provided as a set of executable instructions. The processing circuitry 110 is thereby arranged to execute methods as herein disclosed.
The storage medium 130 may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
The device 20 may further comprise a communications interface 120 for communications with other entities, functions, nodes, and devices, over suitable interfaces. As such the communications interface 120 may comprise one or more transmitters and receivers, comprising analogue and digital components.
The processing circuitry 110 controls the general operation of the device 20 e.g., by sending data and control signals to the communications interface 120 and the storage medium 130, by receiving data and reports from the communications interface 120, and by retrieving data and instructions from the storage medium 130. Other components, as well as the related functionality, of the device 20 are omitted in order not to obscure the concepts presented herein.
In the example of
In an embodiment, at least one step is performed in a cloud computing environment. For instance, in some embodiments one or both of determining 204 and classifying 206 are performed in such a cloud computing environment.
A system 300 for rendering Extended Reality, XR, content to a user is also provided. The system 300 is configured to perform any of the embodiments of the method 200 as described. For instance, in one embodiment, one or both of determining and classification are performed in a cloud computing environment.
The system 300 may be configured to perform any of the steps of the method 10 in the device 20, but in a distributed manner. These embodiments are not repeated here, as several examples of how to implement the device have been given, as well as examples of which entities may be external to the device 20.
The inventive concept has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the inventive concept, as defined by the appended patent claims.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/SE2021/051315 | 12/27/2021 | WO |