A display of a user device may display a user interface (e.g., a graphical user interface). A user interface may permit interactions between a user of the user device and the user device. In some cases, the user may interact with the user interface to operate and/or control the user device to produce a desired result. For example, the user may interact with the user interface of the user device to cause the user device to perform an action. Additionally, the user interface may provide information to the user.
Some implementations described herein relate to a system for a headless user interface architecture associated with an application. The system may include one or more memories and one or more processors communicatively coupled to the one or more memories. The one or more processors may be configured to receive, from a user device, a request to access information associated with the application. The request may include user device data indicating one or more characteristics associated with a particular use of the user device. The one or more processors may be configured to provide, as input to a machine learning model, the user device data. The machine learning model may be trained based on historical data associated with historical usage of the application by one or more of the user device or other user devices. The one or more processors may be configured to receive, as an output from the machine learning model, a target environment, of a plurality of target environments, associated with the user device. The one or more processors may be configured to identify a target user interface of a plurality of user interfaces associated with the information associated with the application. The target user interface may correspond to the target environment. The one or more processors may be configured to transmit, to the user device, user interface data corresponding to the target user interface.
Some implementations described herein relate to a method for a headless user interface architecture of an application. The method may include receiving, by a system having one or more processors and from a user device, user device data indicating one or more characteristics associated with a particular use of the user device. The method may include determining, by the system and based on the one or more characteristics, a target environment, of a plurality of target environments, associated with the particular use of the user device. The method may include identifying, by the system, a target user interface, of a plurality of target user interfaces associated with the application, wherein the target user interface may correspond to the target environment. The method may include transmitting, by the system and to the user device, user interface data indicating the target user interface.
Some implementations described herein relate to a user device. The user device may include one or more memories and one or more processors communicatively coupled to the one or more memories. The one or more processors may be configured to transmit, to a system, a request to access information associated with an application. The request may include user device data indicating one or more characteristics associated with the user device. The one or more characteristics may correspond to a target environment, of a plurality of target environments, associated with a particular use of the user device. The one or more processors may be configured to receive, from the system, a target user interface, of a plurality of target user interfaces associated with the application. The target user interface may correspond to the target environment. The one or more processors may be configured to display the target user interface on a display of the user device.
The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.
Different technologies may enable users to have different experiences with applications operating on user devices. For example, extended reality technologies, such as virtual reality (VR), augmented reality (AR), and mixed reality, provide users with immersive experiences via a particular application. However, in many cases, the different technologies implement specific user interfaces (UIs). Additionally, a particular user device may only be configured to employ a specific technology (e.g., VR or AR). Accordingly, different versions of an application often have been generated corresponding to the different technologies. However, generating, storing, managing, and/or operating multiple versions of the same application utilizes an excess amount of computing, network, and/or storage resources. Accordingly, it is desirable for a system to enable a single version of an application via which different UIs may be launched, thereby conserving computing, network, and/or storage resources.
Some implementations described herein provide a system that may determine a target environment (e.g., a standard web view environment, a VR environment, an AR environment, or a voice-based environment) associated with a user device or use of an application on the user device based on one or more characteristics received from the user device. The system then may identify a target UI (e.g., a standard web view UI, a VR UI, an AR UI, or a voice-based UI) that corresponds to the target environment, and the system may transmit the target UI to the user device in connection with a use of the application. In this manner, the UI architecture of the application is not attached to a specific UI and/or environment. Such headless architecture enables the system to dynamically provide a particular UI to the user device based on certain characteristics. Accordingly, multiple versions of the application are not needed, thereby conserving computing, networking, and/or storage resources that would otherwise be necessary for the multiple versions. Additionally, when the characteristics change (e.g., the use of the user device changes), the system may efficiently utilize computing and networking resources to quickly change the UI provided to the user device.
As shown in
As shown by reference number 110, the processing system may determine a target environment associated with the user device based on the user device data. A target environment refers to an environment in which the application is to operate on the user device (e.g., a standard web view for a web browser, a VR environment, an AR environment, or a voice-based environment). For example, if a characteristic is that a user device type is a VR headset, then the processing system may determine the target environment to be a VR environment. As another example, if a characteristic is that the user device type is smart glasses, then the processing system may determine the target environment to be an AR environment. As another example, if a characteristic is a global variable associated with a standard web view (e.g., webkit), then the processing system may determine the target environment to be a standard web view environment.
In some scenarios, the processing system may rely on multiple characteristics to determine the target environment. For example, a characteristic may be a global variable associated with a VR environment and/or an AR environment (e.g., navigation.xr). If another characteristic is that a screen orientation of the user device is a landscape orientation, then the processing system may determine the target environment to be a VR environment. If another characteristic is that the screen orientation is a portrait orientation, then the processing system may determine the target environment to be an AR environment. The processing system may analyze the multiple characteristics based on a hierarchy or ranking of the characteristics. For example, the ranking may be the user device type first, the global variable second, and the screen orientation third. If the processing system is able to determine the target environment from the user device type (e.g., if the user device type is smart glasses), then the processing system does not need to analyze any other characteristics indicated in the user device data. However, if the user device type is associated with more than one target environment, such as with a mobile device, then the processing system may proceed to analyze the next ranked characteristic(s).
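The ranked-characteristic analysis described above can be sketched as a fall-through decision function. This is an illustrative sketch only: the function name, the characteristic keys, and the environment labels are hypothetical and are not part of any implementation described herein.

```python
def resolve_target_environment(characteristics):
    """Resolve a target environment by checking ranked characteristics in
    order: user device type first, global variable second, screen
    orientation third, per the example hierarchy above."""
    device_type = characteristics.get("device_type")
    global_var = characteristics.get("global_variable")
    orientation = characteristics.get("screen_orientation")

    # Rank 1: a device type associated with exactly one target environment
    # decides immediately, with no need to analyze further characteristics.
    if device_type == "vr_headset":
        return "VR"
    if device_type == "smart_glasses":
        return "AR"

    # Rank 2: global variables. A standard web view variable (e.g., webkit)
    # is decisive; a VR/AR variable (e.g., navigation.xr) needs a tiebreaker.
    if global_var == "webkit":
        return "standard_web_view"
    if global_var == "navigation.xr":
        # Rank 3: screen orientation breaks the VR/AR tie.
        return "VR" if orientation == "landscape" else "AR"

    # Default when no characteristic is decisive.
    return "standard_web_view"
```

For example, under this sketch a mobile device reporting the navigation.xr global variable in a portrait orientation falls through the first two ranks and resolves to an AR environment.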
In some implementations, the processing system may use a machine learning model to determine the target environment, as described in more detail in connection with
As shown in
In some scenarios, the user may desire to have a different UI than the one determined and transmitted by the processing system. As shown in
As shown in
As described above, the processing system may determine a target environment (e.g., a standard web view environment, a VR environment, an AR environment, or a voice-based environment) associated with a user device or use of an application on the user device based on one or more characteristics received from the user device. The processing system then may identify a target UI (e.g., a standard web view UI, a VR UI, an AR UI, or a voice-based UI) that corresponds to the target environment, and the system may transmit the target UI to the user device to be displayed with the use of the application. In this manner, the UI architecture of the application is headless (e.g., is not attached to a specific UI and/or environment), which enables the system to dynamically provide a particular UI to the user device based on certain characteristics. Accordingly, multiple versions of the application are not needed, thereby conserving computing, networking, and/or storage resources that would otherwise be necessary for the multiple versions. Additionally, when the characteristics change (e.g., the use of the user device changes), the system may efficiently utilize computing and networking resources to quickly change the UI provided to the user device.
As indicated above,
As shown by reference number 205, a machine learning model may be trained using a set of observations. The set of observations may be obtained from training data (e.g., historical data), such as data gathered during one or more processes described herein. In some implementations, the machine learning system may receive the set of observations (e.g., as input) from the processing system 301 and/or the user device 330, as described elsewhere herein.
As shown by reference number 210, the set of observations includes a feature set. The feature set may include a set of variables, and a variable may be referred to as a feature. A specific observation may include a set of variable values (or feature values) corresponding to the set of variables. In some implementations, the machine learning system may determine variables for a set of observations and/or variable values for a specific observation based on input received from the processing system 201. For example, the machine learning system may identify a feature set (e.g., one or more features and/or feature values) by extracting the feature set from structured data, by performing natural language processing to extract the feature set from unstructured data, and/or by receiving input from an operator.
As an example, a feature set for a set of observations may include a first feature of a user device type, a second feature of a global variable, a third feature of a screen orientation, and so on. As shown, for a first observation, the first feature may have a value of “Mobile Phone”, the second feature may have a value of “navigation.xr”, the third feature may have a value of “portrait”, and so on. These features and feature values are provided as examples, and may differ in other examples.
As shown by reference number 215, the set of observations may be associated with a target variable. The target variable may represent a variable having a numeric value, may represent a variable having a numeric value that falls within a range of values or has some discrete possible values, may represent a variable that is selectable from one of multiple options (e.g., one of multiple classes, classifications, or labels), and/or may represent a variable having a Boolean value. A target variable may be associated with a target variable value, and a target variable value may be specific to an observation. In example 200, the target variable is a target environment associated with the user device.
The target variable may represent a value that a machine learning model is being trained to predict, and the feature set may represent the variables that are input to a trained machine learning model to predict a value for the target variable. The set of observations may include target variable values so that the machine learning model can be trained to recognize patterns in the feature set that lead to a target variable value. A machine learning model that is trained to predict a target variable value may be referred to as a supervised learning model.
As shown by reference number 220, the machine learning system may train a machine learning model using the set of observations and using one or more machine learning algorithms, such as a regression algorithm, a decision tree algorithm, a neural network algorithm, a k-nearest neighbor algorithm, a support vector machine algorithm, or the like. After training, the machine learning system may store the machine learning model as a trained machine learning model 225 to be used to analyze new observations.
As an example, the machine learning system may obtain training data for the set of observations based on historical data associated with historical usage of the application by one or more of the user device or other user devices. The processing system 201 may provide, as inputs to the machine learning system, input data indicating user device types, global variables, and/or screen orientations.
As shown by reference number 230, the machine learning system may apply the trained machine learning model 225 to a new observation, such as by receiving a new observation and inputting the new observation to the trained machine learning model 225. As shown, the new observation may include a first feature of user device type, which has a value of “VR Headset,” a second feature of a global variable, which has a value of “navigation.xr,” a third feature of a screen orientation, which has a value of “landscape,” and so on, as an example. The machine learning system may apply the trained machine learning model 225 to the new observation to generate an output (e.g., a result). The type of output may depend on the type of machine learning model and/or the type of machine learning task being performed. For example, the output may include a target environment.
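The train-then-predict flow described above can be sketched with a nearest-neighbor classifier (one of the algorithm families named above) over the three example features. The observations, labels, and function names below are invented for illustration and do not reflect any particular implementation described herein.

```python
def hamming_distance(a, b):
    """Count the categorical features on which two observations disagree."""
    return sum(x != y for x, y in zip(a, b))

def train(observations, labels):
    """'Training' a 1-nearest-neighbor model stores the labeled observations."""
    return list(zip(observations, labels))

def predict(model, new_observation):
    """Label a new observation with the label of its closest stored observation."""
    best = min(model, key=lambda item: hamming_distance(item[0], new_observation))
    return best[1]

# Feature order: (user device type, global variable, screen orientation).
observations = [
    ("mobile_phone", "navigation.xr", "portrait"),
    ("mobile_phone", "webkit", "portrait"),
    ("vr_headset", "navigation.xr", "landscape"),
    ("smart_glasses", "navigation.xr", "portrait"),
]
labels = ["AR", "standard_web_view", "VR", "AR"]

model = train(observations, labels)

# Apply the trained model to the new observation from the example above.
print(predict(model, ("vr_headset", "navigation.xr", "landscape")))  # prints VR
```

A production system would more likely use one of the other algorithm families listed above (e.g., a decision tree or neural network) trained on far more observations; the stored-observation model is used here only because it makes the train/predict steps explicit.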
As an example, the trained machine learning model 225 may predict a value of “VR” for the target variable of target environment for the new observation, as shown by reference number 235. Based on this prediction, the machine learning system may perform a first automated action, and/or may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action), among other examples. The first automated action may include, for example, identifying, obtaining, and/or transmitting, to a user device, a target UI corresponding to the target environment.
In some implementations, the trained machine learning model 225 may be re-trained using feedback information. For example, feedback may be provided to the machine learning model. The feedback may be associated with actions performed based on the recommendations provided by the trained machine learning model 225 and/or automated actions performed, or caused, by the trained machine learning model 225. In other words, the recommendations and/or actions output by the trained machine learning model 225 may be used as inputs to re-train the machine learning model (e.g., a feedback loop may be used to train and/or update the machine learning model). For example, the feedback information may be the change request received from the user device and/or whether the change request is received within a time threshold (e.g., less than 30 seconds or 1 minute) of transmitting the target UI. Based on the change request, the processing system may determine that an incorrect target environment was determined, and may re-train the model using the different target environment corresponding to the different UI in the change request.
In this way, the machine learning system may apply a rigorous and automated process to determine target environments associated with user devices. The machine learning system enables recognition and/or identification of tens, hundreds, thousands, or millions of features and/or feature values for tens, hundreds, thousands, or millions of observations, thereby increasing accuracy and consistency and reducing delay associated with determining target environments associated with user devices relative to requiring computing resources to be allocated for tens, hundreds, or thousands of operators to manually determine target environments associated with user devices using the features or feature values.
As indicated above,
The cloud computing system 302 may include computing hardware 303, a resource management component 304, a host operating system (OS) 305, and/or one or more virtual computing systems 306. The cloud computing system 302 may execute on, for example, an Amazon Web Services platform, a Microsoft Azure platform, or a Snowflake platform. The resource management component 304 may perform virtualization (e.g., abstraction) of computing hardware 303 to create the one or more virtual computing systems 306. Using virtualization, the resource management component 304 enables a single computing device (e.g., a computer or a server) to operate like multiple computing devices, such as by creating multiple isolated virtual computing systems 306 from computing hardware 303 of the single computing device. In this way, computing hardware 303 can operate more efficiently, with lower power consumption, higher reliability, higher availability, higher utilization, greater flexibility, and lower cost than using separate computing devices.
Computing hardware 303 may include hardware and corresponding resources from one or more computing devices. For example, computing hardware 303 may include hardware from a single computing device (e.g., a single server) or from multiple computing devices (e.g., multiple servers), such as multiple computing devices in one or more data centers. As shown, computing hardware 303 may include one or more processors 307, one or more memories 308, and/or one or more networking components 309. Examples of a processor, a memory, and a networking component (e.g., a communication component) are described elsewhere herein.
The resource management component 304 may include a virtualization application (e.g., executing on hardware, such as computing hardware 303) capable of virtualizing computing hardware 303 to start, stop, and/or manage one or more virtual computing systems 306. For example, the resource management component 304 may include a hypervisor (e.g., a bare-metal or Type 1 hypervisor, a hosted or Type 2 hypervisor, or another type of hypervisor) or a virtual machine monitor, such as when the virtual computing systems 306 are virtual machines 310. Additionally, or alternatively, the resource management component 304 may include a container manager, such as when the virtual computing systems 306 are containers 311. In some implementations, the resource management component 304 executes within and/or in coordination with a host operating system 305.
A virtual computing system 306 may include a virtual environment that enables cloud-based execution of operations and/or processes described herein using computing hardware 303. As shown, a virtual computing system 306 may include a virtual machine 310, a container 311, or a hybrid environment 312 that includes a virtual machine and a container, among other examples. A virtual computing system 306 may execute one or more applications using a file system that includes binary files, software libraries, and/or other resources required to execute applications on a guest operating system (e.g., within the virtual computing system 306) or the host operating system 305.
Although the processing system 301 may include one or more elements 303-312 of the cloud computing system 302, may execute within the cloud computing system 302, and/or may be hosted within the cloud computing system 302, in some implementations, the processing system 301 may not be cloud-based (e.g., may be implemented outside of a cloud computing system) or may be partially cloud-based. For example, the processing system 301 may include one or more devices that are not part of the cloud computing system 302, such as device 400 of
Network 320 may include one or more wired and/or wireless networks. For example, network 320 may include a cellular network, a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a private network, the Internet, and/or a combination of these or other types of networks. The network 320 enables communication among the devices of environment 300.
The user device 330 may include one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with a headless UI architecture associated with an application, as described elsewhere herein. The user device 330 may include a communication device and/or a computing device. For example, the user device 330 may include a wireless communication device, a mobile phone, a user equipment, a laptop computer, a tablet computer, a desktop computer, a gaming console, a set-top box, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, a head mounted display, or a virtual reality headset), or a similar type of device.
The UI database 340 may include one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with a headless UI architecture associated with an application, as described elsewhere herein. The UI database 340 may include a communication device and/or a computing device. For example, the UI database 340 may include a data structure, a database, a data source, a server, a database server, an application server, a client server, a web server, a host server, a proxy server, a virtual server (e.g., executing on computing hardware), a server in a cloud computing system, a device that includes computing hardware used in a cloud computing environment, or a similar type of device. As an example, the UI database 340 may store various target UIs corresponding to different target environments, as described elsewhere herein.
The number and arrangement of devices and networks shown in
Bus 410 may include one or more components that enable wired and/or wireless communication among the components of device 400. Bus 410 may couple together two or more components of
Memory 430 may include volatile and/or nonvolatile memory. For example, memory 430 may include random access memory (RAM), read only memory (ROM), a hard disk drive, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). Memory 430 may include internal memory (e.g., RAM, ROM, or a hard disk drive) and/or removable memory (e.g., removable via a universal serial bus connection). Memory 430 may be a non-transitory computer-readable medium. Memory 430 stores information, instructions, and/or software (e.g., one or more software applications) related to the operation of device 400. In some implementations, memory 430 may include one or more memories that are coupled to one or more processors (e.g., processor 420), such as via bus 410.
Input component 440 may enable device 400 to receive input, such as user input and/or sensed input. For example, input component 440 may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system sensor, an accelerometer, a gyroscope, and/or an actuator. Output component 450 may enable device 400 to provide output, such as via a display, a speaker, and/or a light-emitting diode. Communication component 460 may enable device 400 to communicate with other devices via a wired connection and/or a wireless connection. For example, communication component 460 may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna.
Device 400 may perform one or more operations or processes described herein. For example, a non-transitory computer-readable medium (e.g., memory 430) may store a set of instructions (e.g., one or more instructions or code) for execution by processor 420. Processor 420 may execute the set of instructions to perform one or more operations or processes described herein. In some implementations, execution of the set of instructions, by one or more processors 420, causes the one or more processors 420 and/or the device 400 to perform one or more operations or processes described herein. In some implementations, hardwired circuitry is used instead of or in combination with the instructions to perform one or more operations or processes described herein. Additionally, or alternatively, processor 420 may be configured to perform one or more operations or processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
The number and arrangement of components shown in
As shown in
As further shown in
As further shown in
As further shown in
Although
As shown in
As further shown in
As further shown in
Although
The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Modifications may be made in light of the above disclosure or may be acquired from practice of the implementations.
As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The hardware and/or software code described herein for implementing aspects of the disclosure should not be construed as limiting the scope of the disclosure. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein.
As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like.
Although particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination and permutation of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item. As used herein, the term “and/or” used to connect items in a list refers to any combination and any permutation of those items, including single members (e.g., an individual item in the list). As an example, “a, b, and/or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c.
No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).