The disclosure generally relates to machine learning. More particularly, the subject matter disclosed herein relates to improvements to the detection of interactions with a device using machine learning.
Devices (e.g., virtual reality (VR) devices, augmented reality (AR) devices, communications devices, medical devices, appliances, machines, etc.) may be configured to make determinations about interactions with the devices. For example, a VR or AR device may be configured to detect human-device interactions, such as specific hand gestures (or hand poses). The device may use information associated with the interactions to perform an operation on the device (e.g., changing a setting on the device). Similarly, any device may be configured to estimate different interactions with the device and perform operations associated with the estimated interactions.
To solve the problem of accurately detecting interactions with a device, a variety of machine learning (ML) models have been applied. For example, convolutional neural network- (CNN-) based models and transformer-based models have been applied.
One issue with the above approaches is that the accuracy of estimating interactions (e.g., hand poses) may be reduced in some situations due to self-occlusion, camera distortion, three-dimensional (3D) ambiguity of projection, etc. For example, self-occlusion may commonly occur in hand-pose estimation, where one part of a user's hand may be occluded by (e.g., covered by) another part of the user's hand from the viewpoint of the device. Thus, the accuracy of estimating the hand pose and/or distinguishing between similar hand gestures may be reduced.
To overcome these issues, systems and methods are described herein for improving the accuracy with which a device estimates interactions with the device by using a machine learning model with a pre-sharing mechanism, two-dimensional (2D) feature map extraction, and/or a dynamic-mask mechanism.
The above approaches improve on previous methods because estimation accuracy may be improved, and better performance may be achieved on mobile devices having limited computing resources.
Some embodiments of the present disclosure provide for a method for using an estimation model having 2D feature extraction between a backbone and an estimation-model encoder.
Some embodiments of the present disclosure provide for a method for using an estimation model having pre-sharing weights in an encoder layer of a Bidirectional Encoder Representations from Transformers (BERT) encoder of the estimation-model encoder.
Some embodiments of the present disclosure provide for a method for using an estimation model in which 3D hand joints and mesh points are estimated by applying camera intrinsic parameters, along with hand tokens from a previous BERT encoder, as inputs to one or more BERT encoders. For example, camera intrinsic parameters may be applied, along with hand tokens from a fourth BERT encoder, as inputs to a fifth BERT encoder. While embodiments involving hands and hand joints are discussed herein, it will be appreciated that the embodiments and techniques described are applicable, without limitation, to any mesh or model, including those of various other body parts.
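As a non-limiting illustration of this concatenation step, the following Python/PyTorch sketch shows how hand tokens from one BERT encoder might be combined with flattened camera intrinsic parameters before being provided to the next BERT encoder; the tensor shapes, token count, and intrinsic values are assumptions for illustration, not part of this disclosure.

```python
import torch

# Illustrative sketch only: hand tokens output by a fourth BERT encoder are
# concatenated with flattened camera intrinsic parameters and fed to a fifth
# BERT encoder. All sizes and values below are assumed, not disclosed.
batch_size, num_tokens, token_dim = 2, 21, 64
hand_tokens = torch.randn(batch_size, num_tokens, token_dim)   # from the previous encoder

K = torch.tensor([600.0, 0.0, 320.0,      # fx, skew, cx
                  0.0, 600.0, 240.0,      # 0,  fy,   cy
                  0.0, 0.0, 1.0])         # flattened 3x3 intrinsic matrix
intrinsics = K.view(1, 1, 9).expand(batch_size, num_tokens, 9)

# Every token now carries the camera intrinsic parameters as extra features.
fifth_encoder_input = torch.cat([hand_tokens, intrinsics], dim=-1)
print(fifth_encoder_input.shape)          # torch.Size([2, 21, 73])
```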
Some embodiments of the present disclosure provide for a method for using an estimation model with data generated based on a 2D feature map.
Some embodiments of the present disclosure provide for a method for using an estimation model with a dynamic-mask mechanism.
Some embodiments of the present disclosure provide for a method for using an estimation model trained with a data set generated based on 2D-image rotation and rescaling that is projected to 3D in an augmentation process.
Some embodiments of the present disclosure provide for a method for using an estimation model trained with two optimizers.
Some embodiments of the present disclosure provide for a method for using an estimation model having BERT encoders with more than four (e.g., twelve) encoder layers.
Some embodiments of the present disclosure provide for a method for using an estimation model having hyper-parameters that are mobile-friendly by using fewer transformers or smaller transformers in each BERT encoder than would be used in a large device with more computing resources.
Some embodiments of the present disclosure provide for a device on which an estimation model may be implemented.
According to some embodiments of the present disclosure, a method of estimating an interaction with a device includes configuring a first token and a second token of an estimation model according to one or more first features of a 3-dimensional (3D) object, applying a first weight to the first token to produce a first-weighted input token and applying a second weight that is different from the first weight to the second token to produce a second-weighted input token, and generating, by a first encoder layer of an estimation-model encoder of the estimation model, an output token based on the first-weighted input token and the second-weighted input token.
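As a non-limiting sketch of the token-weighting step described above, the following Python/PyTorch module applies a separate learnable weight to each input token before a generic transformer encoder layer; the module name, dimensions, and the use of nn.TransformerEncoderLayer are illustrative assumptions rather than the disclosed architecture.

```python
import torch
import torch.nn as nn

class PreSharedTokenWeighting(nn.Module):
    """Sketch of applying per-token weights before an encoder layer: each token
    position gets its own learned weight (initialized to 1), so the first and
    second tokens may be weighted differently after training."""

    def __init__(self, num_tokens: int, dim: int, num_heads: int = 4):
        super().__init__()
        # One learnable scalar weight per token position.
        self.token_weights = nn.Parameter(torch.ones(num_tokens, 1))
        self.encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim); weights broadcast over batch and dim.
        weighted = tokens * self.token_weights
        return self.encoder_layer(weighted)       # output tokens, same shape

tokens = torch.randn(2, 21, 64)                   # e.g., one token per hand joint
layer = PreSharedTokenWeighting(num_tokens=21, dim=64)
out = layer(tokens)
```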
The method may further include receiving, at a backbone of the estimation model, input data corresponding to the interaction with the device, extracting, by the backbone, the one or more first features from the input data, receiving, at a two-dimensional (2D) feature extraction model, the one or more first features from the backbone, extracting, by the 2D feature extraction model, one or more second features associated with the one or more first features, the one or more second features including one or more 2D features, receiving, at the estimation-model encoder, data generated based on the one or more 2D features, generating, by the estimation model, an estimated output based on the output token and the data generated based on the one or more 2D features, and performing an operation based on the estimated output.
The data generated based on the one or more 2D features may include an attention mask.
The first encoder layer of the estimation-model encoder may correspond to a first BERT encoder of the estimation-model encoder, and the method may further include concatenating a token, associated with an output of the first BERT encoder, with at least one of camera intrinsic-parameter data, three-dimensional (3D) hand-wrist data, or bone-length data to generate concatenated data, and receiving the concatenated data at a second BERT encoder.
The first BERT encoder and the second BERT encoder may be included in a chain of BERT encoders, the first BERT encoder and the second BERT encoder may be separated by at least three BERT encoders of the chain of BERT encoders, and the chain of BERT encoders may include at least one BERT encoder having more than four encoder layers.
A data set used to train the estimation model may be generated based on two-dimensional (2D) image rotation and rescaling that is projected to three dimensions (3D) in an augmentation process, and a backbone of the estimation model may be trained using two optimizers.
The device may be a mobile device, the interaction may be a hand pose, and the estimation model may include hyperparameters including at least one of an input feature dimension that is about equal to 1003/256/128/32 for estimating 195 hand-mesh points, an input feature dimension that is about equal to 2029/256/128/64/32/16 for estimating 21 hand joints, a hidden feature dimension that is about equal to 512/128/64/16 (4H, 4L) for estimating 195 hand-mesh points, or a hidden feature dimension that is about equal to 512/256/128/64/32/16 (4H, (1, 1, 1, 2, 2, 2)L) for estimating 21 hand joints.
The method may further include generating a 3D scene including a visual representation of the 3D object, and updating the visual representation of the 3D object based on the output token.
According to other embodiments of the present disclosure, a method of estimating an interaction with a device includes receiving, at a two-dimensional (2D) feature extraction model of an estimation model, one or more first features corresponding to input data associated with an interaction with the device, extracting, by the 2D feature extraction model, one or more second features associated with the one or more first features, the one or more second features including one or more 2D features, generating, by the 2D feature extraction model, data based on the one or more 2D features, and providing the data to an estimation-model encoder of the estimation model.
The method may further include receiving, at a backbone of the estimation model, the input data, generating, by the backbone, the one or more first features based on the input data, associating a first token and a second token of the estimation model with the one or more first features, applying a first weight to the first token to produce a first-weighted input token and applying a second weight that is different from the first weight to the second token to produce a second-weighted input token, calculating, by a first encoder layer of the estimation-model encoder, an output token based on receiving the first-weighted input token and the second-weighted input token as inputs, generating, by the estimation model, an estimated output based on the output token and the data generated based on the one or more 2D features, and performing an operation based on the estimated output.
The data generated based on the one or more 2D features may include an attention mask.
The estimation-model encoder may include a first BERT encoder including a first encoder layer, and the method may further include concatenating a token, corresponding to an output of the first BERT encoder, with at least one of camera intrinsic-parameter data, three-dimensional (3D) hand-wrist data, or bone-length data to generate concatenated data, and receiving the concatenated data at a second BERT encoder.
The first BERT encoder and the second BERT encoder may be included in a chain of BERT encoders, the first BERT encoder and the second BERT encoder may be separated by at least three BERT encoders of the chain of BERT encoders, and the chain of BERT encoders may include at least one BERT encoder having more than four encoder layers.
A data set used to train the estimation model may be generated based on 2D-image rotation and rescaling that is projected to three dimensions (3D) in an augmentation process, and a backbone of the estimation model may be trained using two optimizers.
The device may be a mobile device, the interaction may be a hand pose, and the estimation model may include hyperparameters including at least one of an input feature dimension that is about equal to 1003/256/128/32 for estimating 195 hand-mesh points, an input feature dimension that is about equal to 2029/256/128/64/32/16 for estimating 21 hand joints, a hidden feature dimension that is about equal to 512/128/64/16 (4H, 4L) for estimating 195 hand-mesh points, or a hidden feature dimension that is about equal to 512/256/128/64/32/16 (4H, (1, 1, 1, 2, 2, 2)L) for estimating 21 hand joints.
The method may further include calculating, by a first encoder layer of the estimation-model encoder, an output token, generating a 3D scene including a visual representation of the interaction with the device, and updating the visual representation of the interaction with the device based on the output token.
According to other embodiments of the present disclosure, a device configured to estimate an interaction with the device includes a memory, and a processor communicably coupled to the memory, wherein the processor is configured to receive, at a two-dimensional (2D) feature extraction model of an estimation model, one or more first features corresponding to input data associated with an interaction with the device, generate, by the 2D feature extraction model, one or more second features based on the one or more first features, the one or more second features including one or more 2D features, and send, by the 2D feature extraction model, data generated based on the one or more 2D features to an estimation-model encoder of the estimation model.
The processor may be configured to receive, at a backbone of the estimation model, the input data, generate, by the backbone, the one or more first features based on the input data, associate a first token and a second token of the estimation model with the one or more first features, apply a first weight to the first token to produce a first-weighted input token and apply a second weight that is different from the first weight to the second token to produce a second-weighted input token, calculate, by a first encoder layer of the estimation-model encoder, an output token based on receiving the first-weighted input token and the second-weighted input token as inputs, generate, by the estimation model, an estimated output based on the output token and the data generated based on the one or more 2D features, and perform an operation based on the estimated output.
The data generated based on the one or more 2D features may include an attention mask.
The estimation-model encoder may include a first BERT encoder including a first encoder layer, and the processor may be configured to concatenate a token, corresponding to an output of the first BERT encoder, with at least one of camera intrinsic-parameter data, three-dimensional (3D) hand-wrist data, or bone-length data to generate concatenated data, and receive the concatenated data at a second BERT encoder.
In the following sections, the aspects of the subject matter disclosed herein will be described with reference to exemplary embodiments illustrated in the figures.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. It will be understood, however, by those skilled in the art that the disclosed aspects may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail to not obscure the subject matter disclosed herein.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment disclosed herein. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “according to one embodiment” (or other phrases having similar import) in various places throughout this specification may not necessarily all be referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. In this regard, as used herein, the word “exemplary” means “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not to be construed as necessarily preferred or advantageous over other embodiments. Also, depending on the context of discussion herein, a singular term may include the corresponding plural forms and a plural term may include the corresponding singular form. Similarly, a hyphenated term (e.g., “two-dimensional,” “pre-determined,” “pixel-specific,” etc.) may be occasionally interchangeably used with a corresponding non-hyphenated version (e.g., “two dimensional,” “predetermined,” “pixel specific,” etc.), and a capitalized entry (e.g., “Counter Clock,” “Row Select,” “PIXOUT,” etc.) may be interchangeably used with a corresponding non-capitalized version (e.g., “counter clock,” “row select,” “pixout,” etc.). Such occasional interchangeable uses shall not be considered inconsistent with each other.
It is further noted that various figures (including component diagrams) shown and discussed herein are for illustrative purposes only and are not drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, if considered appropriate, reference numerals have been repeated among the figures to indicate corresponding and/or analogous elements.
The terminology used herein is for the purpose of describing some example embodiments only and is not intended to be limiting of the claimed subject matter. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood that when an element or layer is referred to as being “on,” “connected to” or “coupled to” another element or layer, it can be directly on, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly connected to” or “directly coupled to” another element or layer, there are no intervening elements or layers present. Like numerals refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The terms “first,” “second,” etc., as used herein, are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.) unless explicitly defined as such. Furthermore, the same reference numerals may be used across two or more figures to refer to parts, components, blocks, circuits, units, or modules having the same or similar functionality. Such usage is, however, for simplicity of illustration and ease of discussion only; it does not imply that the construction or architectural details of such components or units are the same across all embodiments or such commonly-referenced parts/modules are the only way to implement some of the example embodiments disclosed herein.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this subject matter belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the term “module” refers to any combination of software, firmware and/or hardware configured to provide the functionality described herein in connection with a module. For example, software may be embodied as a software package, code and/or instruction set or instructions, and the term “hardware,” as used in any implementation described herein, may include, for example, singly or in any combination, an assembly, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, but not limited to, an integrated circuit (IC), system on-a-chip (SoC), an assembly, and so forth.
Aspects of embodiments of the present disclosure may be used in augmented reality (AR) or virtual reality (VR) devices for high-accuracy 3D hand-pose estimation from a single camera so as to provide hand pose information in human-device interaction processes. Aspects of embodiments of the present disclosure may provide for accurate hand-pose estimation including 21 hand joints and hand meshes in 3D from a single RGB image and in real-time for human-device interaction.
Referring to the figures, a device 100 (e.g., a VR or AR device) may include an estimation model 101 for estimating an interaction with the device 100. The device 100 may correspond to the electronic device 601 described below.
Referring still to the figures, the estimation model 101 may include a backbone 110, a 2D feature-extraction model 115, and an estimation-model encoder 120. The backbone 110 may receive input data 12 corresponding to the interaction with the device 100 and may extract features from the input data 12.
The 2D feature-extraction model 115 may be located between the backbone 110 and the estimation-model encoder 120. The 2D feature-extraction model 115 may extract 2D features associated with the input data 12. The 2D feature-extraction model 115 may provide data generated based on the 2D features to the estimation-model encoder 120 to improve the accuracy of the estimation-model encoder 120, as is discussed in further detail below.
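A minimal Python/PyTorch sketch of this data flow is shown below, assuming stand-in modules for the backbone 110, the 2D feature-extraction model 115, and the estimation-model encoder 120; the layer choices and dimensions are illustrative only, not the disclosed implementation.

```python
import torch
import torch.nn as nn

class EstimationModelSketch(nn.Module):
    """Illustrative wiring only: backbone 110 -> 2D feature-extraction model 115
    -> estimation-model encoder 120. The real modules are more elaborate; these
    stand-ins simply show the data flow described above."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(               # stand-in for backbone 110
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.feature_2d = nn.Conv2d(64, 1, 1)        # stand-in for 2D model 115
        self.encoder = nn.TransformerEncoderLayer(   # stand-in for encoder 120
            d_model=64, nhead=4, batch_first=True)

    def forward(self, image):
        feats = self.backbone(image)                 # first features, (B, 64, H', W')
        feature_map_2d = self.feature_2d(feats)      # 2D feature map (second features)
        # Data generated from the 2D feature map (e.g., an attention mask) may
        # also be provided to the encoder; that step is omitted in this sketch.
        tokens = feats.flatten(2).transpose(1, 2)    # (B, H'*W', 64) tokens
        return self.encoder(tokens), feature_map_2d

model = EstimationModelSketch()
tokens, feature_map = model(torch.randn(1, 3, 64, 64))
```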
The estimation model 101 may process the input data 12 to generate an estimated output 32. The estimated output 32 may include a first estimation-model output 32a and/or a second estimation-model output 32b. For example, the first estimation-model output 32a may include an estimated 3D hand-joint output, and the second estimation-model output 32b may include an estimated 3D hand-mesh output. In some embodiments, the estimated output 32 may include 21 hand joints and/or a hand mesh with 778 vertices in 3D. The device 100 may use the estimated 3D hand-joint output to perform an operation associated with a gesture corresponding to the estimated 3D hand-joint output. The device 100 may use the estimated 3D hand-mesh output to present a user of the device with a virtual representation of the user's hand.
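As a toy, non-limiting example of using the estimated 3D hand-joint output to trigger a device operation, the following sketch checks the distance between two assumed fingertip joints; the joint indices and distance threshold are illustrative assumptions, not values from this disclosure.

```python
import torch

# Toy illustration of consuming the estimated output 32: a "pinch" gesture is
# assumed when the thumb tip and index fingertip are close together. The joint
# indices and threshold below are assumptions, not disclosed values.
THUMB_TIP, INDEX_TIP = 4, 8

def detect_pinch(hand_joints_3d: torch.Tensor, threshold: float = 0.02) -> bool:
    """hand_joints_3d: (21, 3) estimated 3D hand-joint output (e.g., in meters)."""
    distance = torch.linalg.norm(hand_joints_3d[THUMB_TIP] - hand_joints_3d[INDEX_TIP])
    return bool(distance < threshold)

joints = torch.randn(21, 3) * 0.05        # placeholder for the first estimation-model output 32a
if detect_pinch(joints):
    print("perform an operation on the device, e.g., change a setting")
```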
In some embodiments, the device 100 may generate a 3D scene including a visual representation of a 3D object (e.g., the estimated 3D hand-joint output and/or the estimated 3D hand-mesh output). The device 100 may update the visual representation of the 3D object based on an output token of the estimation-model encoder 120 (described in further detail below).
In some embodiments, the estimation model 101 may be trained using two optimizers to improve accuracy. For example, the training optimizers may include Adam with weight decay (AdamW) and stochastic gradient descent with weight decay (SGDW). In some embodiments, the estimation model 101 may be trained with GPUs for AR and/or VR device applications.
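One possible, non-limiting reading of the two-optimizer scheme is sketched below: the backbone is trained first with AdamW and then with SGD plus weight decay, the latter standing in for SGDW (which core PyTorch does not provide). The switch point and hyperparameter values are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Sketch: train the backbone with AdamW for the first epochs, then continue
# with SGD + weight decay (a stand-in for SGDW). All values are assumed.
backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 64, 3, padding=1))

opt_adamw = torch.optim.AdamW(backbone.parameters(), lr=1e-4, weight_decay=1e-2)
opt_sgdw = torch.optim.SGD(backbone.parameters(), lr=1e-2, momentum=0.9,
                           weight_decay=1e-4)
SWITCH_EPOCH = 30   # assumed point at which the second optimizer takes over

def train_epoch(epoch, batches, loss_fn):
    optimizer = opt_adamw if epoch < SWITCH_EPOCH else opt_sgdw
    for images, targets in batches:
        optimizer.zero_grad()
        loss = loss_fn(backbone(images), targets)
        loss.backward()
        optimizer.step()
```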
In some embodiments, a data set used to train the estimation model 101 may be generated based on 2D-image rotation and rescaling that is projected to 3D in an augmentation process, which may improve the robustness of the estimation model 101. That is, the data set used for training may be generated using 3D-perspective hand-joint augmentation or 3D-perspective hand-mesh augmentation.
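A simplified sketch of this style of augmentation is shown below: 3D joints are projected to 2D with the camera intrinsics, rotated and rescaled in the image plane, and lifted back to 3D by reusing each joint's original depth. The function name, rotation angle, and scale factor are illustrative assumptions.

```python
import math
import torch

def augment_2d_then_lift(joints_3d, K, angle_deg=15.0, scale=1.1):
    """Sketch: project 3D joints to pixels with intrinsics K, rotate/rescale in
    2D about the principal point, then back-project using the original depths."""
    z = joints_3d[:, 2:3]                               # per-joint depth, (N, 1)
    uv = ((joints_3d / z) @ K.T)[:, :2]                 # pixel coordinates, (N, 2)

    theta = math.radians(angle_deg)
    R = torch.tensor([[math.cos(theta), -math.sin(theta)],
                      [math.sin(theta),  math.cos(theta)]])
    center = K[:2, 2]
    uv_aug = (uv - center) @ R.T * scale + center       # rotate and rescale in 2D

    ones = torch.ones(uv_aug.shape[0], 1)
    rays = torch.cat([uv_aug, ones], dim=1) @ torch.linalg.inv(K).T
    return rays * z                                     # lifted back to 3D, (N, 3)

K = torch.tensor([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
joints = torch.randn(21, 3) * 0.05 + torch.tensor([0.0, 0.0, 0.5])
augmented = augment_2d_then_lift(joints, K)             # same shape as joints
```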
In some embodiments, the estimation model 101 may be configured to be mobile friendly by using parameters (e.g., hyper-parameters, including input feature dimensions and/or hidden feature dimensions) to provide real-time model performance (e.g., greater than 30 frames per second (FPS)) from limited computing resources. For example, each BERT encoder of the estimation-model encoder 120 in a mobile-friendly (or small model) design may include fewer transformers and/or smaller transformers than a large-model design, such that the small model may still achieve real-time performance with fewer computational resources.
For example, a first small-model version of an estimation model 101 may have the following parameters for estimating 195 hand-mesh points (or vertices). A backbone parameter size (in millions (M)) may be equal to about 4.10; an estimation-model encoder parameter size (M) may be equal to about 9.13; a total parameter size (M) may be equal to about 13.20; an input feature dimension may be equal to about 1003/256/128/32; a hidden feature dimension (head number, encoder layer number) may be equal to about 512/128/64/16 (4H, 4L); and a corresponding FPS may be equal to about 83 FPS.
A second small-model version of an estimation model 101 may have the following parameters for estimating 21 hand joints. A backbone parameter size (M) may be equal to about 4.10; an estimation-model encoder parameter size (M) may be equal to about 5.23; a total parameter size (M) may be equal to about 9.33; an input feature dimension may be equal to about 2029/256/128/64/32/16; a hidden feature dimension (head number, encoder layer number) may be equal to about 512/256/128/64/32/16 (4H, (1, 1, 1, 2, 2, 2)L); and a corresponding FPS may be equal to about 285 FPS.
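For reference, the parameters recited in the two preceding paragraphs can be restated as configuration dictionaries; only the numeric values come from the description above, while the key names are illustrative.

```python
# Hyper-parameters of the two small-model versions described above.
# Key names are illustrative; the numeric values come from the description.
SMALL_MODEL_HAND_MESH = {
    "output": "195 hand-mesh points",
    "backbone_params_M": 4.10,
    "encoder_params_M": 9.13,
    "total_params_M": 13.20,
    "input_feature_dims": [1003, 256, 128, 32],
    "hidden_feature_dims": [512, 128, 64, 16],
    "attention_heads": 4,
    "encoder_layers": 4,
    "approx_fps": 83,
}

SMALL_MODEL_HAND_JOINTS = {
    "output": "21 hand joints",
    "backbone_params_M": 4.10,
    "encoder_params_M": 5.23,
    "total_params_M": 9.33,
    "input_feature_dims": [2029, 256, 128, 64, 32, 16],
    "hidden_feature_dims": [512, 256, 128, 64, 32, 16],
    "attention_heads": 4,
    "encoder_layers": [1, 1, 1, 2, 2, 2],
    "approx_fps": 285,
}
```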
In some embodiments, mobile-friendly versions of the estimation model 101 may have a reduced number of parameters and floating-point operations (FLOPs) by shrinking the number of encoder layers and applying a VAN instead of HRNet-w64 in a BERT encoder block, which enables high-accuracy hand-pose estimation in mobile devices in real time.
Referring to the figures, the backbone 110 may generate backbone output data 22 that may be provided to the estimation-model encoder 120.
In some embodiments, extracted features from the backbone output data 22 may be provided as inputs to the first encoder layer 301 of the first BERT encoder 201. In some embodiments the extracted features from the backbone output data 22 may be provided as inputs to one or more linear operations LO (operations provided by linear layers) and positional encoding 251 before being provided to the first encoder layer 301. An output of the first encoder layer 301 may be provided to an input of a second encoder layer 302, and an output of the second encoder layer 302 may be provided to an input of a third encoder layer 303. That is, the first BERT encoder 201 may include a chain of encoder layers with L encoder layers total. In some embodiments, an output of the L-th encoder layer may be provided as an input to one or more linear operations LO before being sent to an input of the second BERT encoder 202. As discussed above, the estimation-model encoder 120 may include a chain of BERT encoders (e.g., including the first BERT encoder 201, the second BERT encoder 202, a third BERT encoder 203, etc.). In some embodiments, each BERT encoder in the chain of BERT encoders may include L encoder layers.
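A non-limiting sketch of one such BERT encoder block, with a linear projection, a learned positional encoding, a chain of L encoder layers, and an output projection, is shown below; the dimensions and the use of nn.TransformerEncoderLayer are placeholders rather than the disclosed implementation.

```python
import torch
import torch.nn as nn

class BertEncoderBlockSketch(nn.Module):
    """Sketch of one BERT encoder of the chain described above: linear operation
    LO, positional encoding, L stacked encoder layers, then another linear
    operation LO feeding the next BERT encoder. All sizes are placeholders."""

    def __init__(self, in_dim=128, hidden_dim=64, out_dim=32,
                 num_tokens=21, num_layers=4, num_heads=4):
        super().__init__()
        self.input_proj = nn.Linear(in_dim, hidden_dim)          # linear operation LO
        self.pos_encoding = nn.Parameter(torch.zeros(1, num_tokens, hidden_dim))
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=num_heads,
                                       batch_first=True)
            for _ in range(num_layers)])                         # chain of L layers
        self.output_proj = nn.Linear(hidden_dim, out_dim)        # linear operation LO

    def forward(self, tokens):
        x = self.input_proj(tokens) + self.pos_encoding
        for layer in self.layers:                                # layer i feeds layer i+1
            x = layer(x)
        return self.output_proj(x)

block = BertEncoderBlockSketch()
out = block(torch.randn(2, 21, 128))   # (2, 21, 32), passed to the next BERT encoder
```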
As an overview, each encoder layer of the estimation-model encoder 120 may include an attention mechanism 310. The attention mechanism 310 may receive first attention-mechanism inputs 41, second attention-mechanism inputs 42, and third attention-mechanism inputs 43.
The attention mechanism 310 may include a first multiplication function 312, a second multiplication function 318, a scaling function 314, and a softmax function 316. The first attention-mechanism inputs 41 and the second attention-mechanism inputs 42 may be provided to the first multiplication function 312 and the scaling function 314 to produce a normalized score 315. The normalized score 315 and an attention map AM may be provided to the softmax function 316 to produce an attention score 317. The attention score 317 and the third attention-mechanism inputs 43 may be provided to a second multiplication function 318 to produce an attention-mechanism output 321.
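A minimal sketch of this attention computation follows, with the attention map applied additively before the softmax; the additive form of the mask and the tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def masked_attention(q, k, v, attention_map):
    """Sketch of the attention mechanism 310: multiply queries and keys, scale,
    add an attention map AM (e.g., a dynamic mask derived from the 2D feature
    map), apply softmax, then multiply by the values."""
    d_k = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5     # multiply + scale -> normalized score
    scores = scores + attention_map                   # apply attention map AM
    attn = F.softmax(scores, dim=-1)                  # attention score
    return attn @ v                                   # attention-mechanism output

q = k = v = torch.randn(2, 21, 64)
mask = torch.zeros(2, 21, 21)          # large negative entries would suppress positions
out = masked_attention(q, k, v, mask)  # (2, 21, 64)
```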
Referring to the hand joint HJ structure illustrated in the figures, the estimated 3D hand-joint output may include hand joints HJ of a user's hand (e.g., 21 hand joints).

Referring to the figures, an electronic device 601 according to some embodiments may include a processor 620, a memory 630, an input device 650, a sound output device 655, a display device 660, and other components described in further detail below.
The processor 620 may execute software (e.g., a program 640) to control at least one other component (e.g., a hardware or a software component) of the electronic device 601 coupled with the processor 620 and may perform various data processing or computations.
As at least part of the data processing or computations, the processor 620 may load a command or data received from another component (e.g., the sensor module 676 or the communication module 690) in volatile memory 632, process the command or the data stored in the volatile memory 632, and store resulting data in non-volatile memory 634. The processor 620 may include a main processor 621 (e.g., a CPU or an application processor (AP)), and an auxiliary processor 623 (e.g., a GPU, an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor 621. Additionally or alternatively, the auxiliary processor 623 may be adapted to consume less power than the main processor 621, or execute a particular function. The auxiliary processor 623 may be implemented as being separate from, or a part of, the main processor 621.
The auxiliary processor 623 may control at least some of the functions or states related to at least one component (e.g., the display device 660, the sensor module 676, or the communication module 690) among the components of the electronic device 601, instead of the main processor 621 while the main processor 621 is in an inactive (e.g., sleep) state, or together with the main processor 621 while the main processor 621 is in an active state (e.g., executing an application). The auxiliary processor 623 (e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module 680 or the communication module 690) functionally related to the auxiliary processor 623.
The memory 630 may store various data used by at least one component (e.g., the processor 620 or the sensor module 676) of the electronic device 601. The various data may include, for example, software (e.g., the program 640) and input data or output data for a command related thereto. The memory 630 may include the volatile memory 632 or the non-volatile memory 634.
The program 640 may be stored in the memory 630 as software, and may include, for example, an operating system (OS) 642, middleware 644, or an application 646.
The input device 650 may receive a command or data to be used by another component (e.g., the processor 620) of the electronic device 601, from the outside (e.g., a user) of the electronic device 601. The input device 650 may include, for example, a microphone, a mouse, or a keyboard.
The sound output device 655 may output sound signals to the outside of the electronic device 601. The sound output device 655 may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or recording, and the receiver may be used for receiving an incoming call. The receiver may be implemented as being separate from, or a part of, the speaker.
The display device 660 may visually provide information to the outside (e.g., a user) of the electronic device 601. The display device 660 may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. The display device 660 may include touch circuitry adapted to detect a touch, or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of force incurred by the touch.
The audio module 670 may convert a sound into an electrical signal and vice versa. The audio module 670 may obtain the sound via the input device 650 or output the sound via the sound output device 655 or a headphone of an external electronic device 602 directly (e.g., wired) or wirelessly coupled with the electronic device 601.
The sensor module 676 may detect an operational state (e.g., power or temperature) of the electronic device 601 or an environmental state (e.g., a state of a user) external to the electronic device 601, and then generate an electrical signal or data value corresponding to the detected state. The sensor module 676 may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
The interface 677 may support one or more specified protocols to be used for the electronic device 601 to be coupled with the external electronic device 602 directly (e.g., wired) or wirelessly. The interface 677 may include, for example, a high-definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface.
A connecting terminal 678 may include a connector via which the electronic device 601 may be physically connected with the external electronic device 602. The connecting terminal 678 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector).
The haptic module 679 may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or an electrical stimulus which may be recognized by a user via tactile sensation or kinesthetic sensation. The haptic module 679 may include, for example, a motor, a piezoelectric element, or an electrical stimulator.
The camera module 680 may capture a still image or moving images. The camera module 680 may include one or more lenses, image sensors, image signal processors, or flashes. The power management module 688 may manage power supplied to the electronic device 601. The power management module 688 may be implemented as at least part of, for example, a power management integrated circuit (PMIC).
The battery 689 may supply power to at least one component of the electronic device 601. The battery 689 may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell.
The communication module 690 may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 601 and the external electronic device (e.g., the electronic device 602, the electronic device 604, or the server 608) and performing communication via the established communication channel. The communication module 690 may include one or more communication processors that are operable independently from the processor 620 (e.g., the AP) and support a direct (e.g., wired) communication or a wireless communication. The communication module 690 may include a wireless communication module 692 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 694 (e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network 698 (e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or a standard of the Infrared Data Association (IrDA)) or the second network 699 (e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single IC), or may be implemented as multiple components (e.g., multiple ICs) that are separate from each other. The wireless communication module 692 may identify and authenticate the electronic device 601 in a communication network, such as the first network 698 or the second network 699, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module 696.
The antenna module 697 may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device 601. The antenna module 697 may include one or more antennas, and, therefrom, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network 698 or the second network 699, may be selected, for example, by the communication module 690 (e.g., the wireless communication module 692). The signal or the power may then be transmitted or received between the communication module 690 and the external electronic device via the selected at least one antenna.
Commands or data may be transmitted or received between the electronic device 601 and the external electronic device 604 via the server 608 coupled with the second network 699. Each of the electronic devices 602 and 604 may be a device of a same type as, or a different type from, the electronic device 601. All or some of operations to be executed at the electronic device 601 may be executed at one or more of the external electronic devices 602, 604, or 608. For example, if the electronic device 601 should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device 601, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request and transfer an outcome of the performing to the electronic device 601. The electronic device 601 may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, or client-server computing technology may be used, for example.
Embodiments of the subject matter and the operations described in this specification may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification may be implemented as one or more computer programs, i.e., one or more modules of computer-program instructions, encoded on computer-storage medium for execution by, or to control the operation of data-processing apparatus. Alternatively or additionally, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer-storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial-access memory array or device, or a combination thereof. Moreover, while a computer-storage medium is not a propagated signal, a computer-storage medium may be a source or destination of computer-program instructions encoded in an artificially-generated propagated signal. The computer-storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices). Additionally, the operations described in this specification may be implemented as operations performed by a data-processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
While this specification may contain many specific implementation details, the implementation details should not be construed as limitations on the scope of any claimed subject matter, but rather be construed as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described herein. Other embodiments are within the scope of the following claims. In some cases, the actions set forth in the claims may be performed in a different order and still achieve desirable results. Additionally, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
As will be recognized by those skilled in the art, the innovative concepts described herein may be modified and varied over a wide range of applications. Accordingly, the scope of claimed subject matter should not be limited to any of the specific exemplary teachings discussed above, but is instead defined by the following claims.
This application claims the priority benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 63/337,918, filed on May 3, 2022, the disclosure of which is incorporated by reference in its entirety as if fully set forth herein.