This disclosure generally relates to computer modeling systems, and more specifically to a system and method for learning and modeling the movement of soft-tissue on a 3-dimensional computer model of a body or object, such as a human, animated character, computer avatar, or the like.
In computer graphics applications, the accurate and life-like modeling of bodies, such as human bodies, has been a long-standing goal and a key component of realistic character animation in video games, films, and other computer-modeling applications. For example, highly realistic 3D meshes representing the body of a person, which look and behave as the human body does in the computer application, are highly desirable. Such models must be able to represent different body shapes, deform naturally with pose changes, and incorporate non-linear surface dynamics that mimic the behavior and movement of soft skin in the outer shell of the body. For example, in a computer game application, such as an NFL Football simulation game, the models for different players would represent the body shapes typical of players at different positions: the model for a quarterback would typically have a smaller and more slender body shape than the model for a defensive lineman, which would have a bigger and stockier body shape. Ideally, the models for the different body shapes would behave differently for a given motion. For example, when simulating a jump, the slender body shape of a quarterback model should not exhibit much soft-tissue motion, whereas the larger body shape of a defensive lineman model, with its muscles and overall outer body shape, would be expected to bounce upon landing back on the ground.
In interactive applications, such as computer games or other real-time modeling of body movement, there is often an additional goal of simplicity and efficiency to provide real-time responses, often requiring control of the body model using only its skeletal movement or pose, with the animation of the surface around the skeletal body modeled as a function of the skeletal pose. Some computer animation methods define the body surface as a kinematic function of skeletal pose that blends rigid transformations of skeletal bones but do not provide an efficient approach to model the nonlinear soft-tissue dynamics and thus are not as compelling.
What is needed are more complex transformations that incorporate actual body surface data into the model, including the nonlinear dynamics of the body's surface caused by the oscillation of soft tissue under fast skeletal motion, and that can be used in existing efficient vertex-based animation pipelines suited for interactive applications.
According to various embodiments of the present invention, systems and methods for the learning and modeling of soft-tissue dynamics in a three-dimensional computer model of a body or object are provided.
According to one embodiment, a system comprises a surface skinning module for adding skin surface elements to a frame of skeletal input representative of a pose of the body. The system also includes a soft-tissue regression module configured to add nonlinear soft-tissue dynamics to the skin surface elements and provide an output mesh representative of the body at the pose in the skeletal input. In this embodiment, the soft-tissue regression module comprises a neural network trained from observations to predict 3-dimensional offsets.
In alternative embodiments the body may correspond to a human body, an animal body, a character in a movie, a character in a video game, or an avatar. For example, the avatar may represent a customer.
According to another embodiment, the system further comprises an autoencoder module configured to reduce by two or more orders of magnitude the dimensionality of a plurality of three-dimensional offsets for a plurality of vertices in the skin surface elements. In this embodiment, the autoencoder module comprises a combination of linear and non-linear activation functions. In one embodiment, the autoencoder module comprises at least three layers, wherein at least two non-successive layers comprise non-linear activation functions.
According to an aspect of various embodiments, the neural network may be trained from a set of observations in a set of three-dimensional input meshes representative of a plurality of poses for a reference body. The autoencoder module may also be trained from a set of observations in a set of three-dimensional input meshes representative of a plurality of poses for a reference body.
According to an aspect of various embodiments, the neural network in the soft-tissue regression module is trained to predict 3-dimensional offsets from velocities and accelerations derived from prior frames of the skeletal input. According to another aspect of various embodiments, the soft-tissue regression module is configured to add the nonlinear soft-tissue dynamics to the skin surface elements using the output of the one or more activation functions.
According to an alternative embodiment, the computer-based modeling may include adding skin surface elements to a frame of skeletal input representative of a pose of the body. The dimensionality of a plurality of three-dimensional offsets for a plurality of vertices in the skin surface elements is reduced by two or more orders of magnitude by applying at least one non-linear activation function. The resulting output mesh representative of the body at the pose in the skeletal input is provided.
According to this embodiment, nonlinear soft-tissue dynamics may be added to the skin surface elements. For example, adding the nonlinear soft-tissue dynamics may include a neural network trained from observations to predict 3-dimensional offsets.
According to another embodiment, the reducing step comprises applying at least three layers of activation functions, wherein at least two non-successive layers comprise non-linear activation functions.
According to another embodiment, the body corresponds to a human body, an animal body, a character in a movie, a character in a video game, or an avatar. For example, the avatar may represent a customer.
The figures depict various example embodiments of the present disclosure for purposes of illustration only. One of ordinary skill in the art will readily recognize from the following discussion that other example embodiments based on alternative structures and methods may be implemented without departing from the principles of this disclosure, and are encompassed within the scope of this disclosure.
The above and other needs are met by the disclosed methods, a non-transitory computer-readable storage medium storing executable code, and systems for 3D modeling of bodies and similar shapes in computer applications, including, for example, motion capture applications, biomechanics and ergonomics design and simulation, education, business, virtual and augmented reality shopping, and entertainment applications, including animation and computer graphics for digital movies, interactive gaming and videos, human, animal, or character simulations, virtual and augmented reality applications, robotics, and the like.
The Figures and the following description describe certain embodiments by way of illustration only. One of ordinary skill in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein. Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures.
The systems and methods according to the various embodiments described enrich existing vertex-based models, for example for human-body modeling, such as LBS or SMPL. One example of such vertex-based models is described in SMPL: A Skinned Multi-Person Linear Model by Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black, incorporated herein by reference. See ACM Trans. Graphics (Proc. SIGGRAPH Asia) 34, 6 (2015), 248:1-248:16. According to one embodiment, a method regresses dynamic blendshapes to add nonlinear soft-tissue dynamics to the traditional piece-wise rigid meshes. A neural network-based solution for real-time nonlinear soft-tissue regression is provided to enrich skinned 3D animated sequences. The neural network is trained to predict 3D offsets from joint angle velocities and accelerations, as well as earlier dynamic components. A loss function is tailored to learn soft-tissue deformations. The per-vertex rigidity is computed and leveraged to obtain a better-behaved minimization problem. For higher efficiency, in one embodiment, a novel autoencoder is provided for dimensionality reduction of the 3D vertex displacements that represent nonlinear soft-tissue dynamics in 3D mesh sequences. In one embodiment, the autoencoder is used to reduce the dimensionality of the per-vertex 3D offsets by two or more orders of magnitude. In alternative embodiments, the autoencoder can reduce the dimensionality in both a preset or a configurable manner, including dynamically changeable manner adaptable to the particular needs for the given embodiment.
After applying the described method, the resulting subspace for soft-tissue dynamics outperforms existing methods, such as those based on Principal Component Analysis ("PCA"), for example as described in SMPL (above) or Dyna (Gerard Pons-Moll, Javier Romero, Naureen Mahmood, and Michael J. Black. 2015. Dyna: A model of dynamic human shape in motion. ACM Transactions on Graphics (Proc. SIGGRAPH) 34, 4 (2015)). The resulting system better captures the nonlinear nature of soft-tissue dynamics.
According to one embodiment, nonlinear soft-tissue real-time dynamics in 3D mesh sequences are animated with a data-driven method based on just skeletal motion data. In one embodiment, skeletal motion data from the Carnegie Mellon University Mocap Database was used (CMU. 2003. CMU: Carnegie-Mellon Mocap Database. In http://mocap.cs.cmu.edu). In another embodiment, the “Total Capture” data set was used. See Matthew Trumble, Andrew Gilbert, Charles Malleson, Adrian Hilton, and John Collomosse. 2017. Total Capture: 3D Human Pose Estimation Fusing Video and Inertial Sensors. In BMVC17. The description of both of these datasets is incorporated herein by reference. In alternative embodiments, different skeletal motion data sets may be used within the scope of the invention for learning, training, or benchmarking among other functions.
According to one embodiment, the body surface of a target body, such as a virtual football player in a game, a character in a movie, a virtual shopper avatar in an online store, or the like, is defined as a kinematic function of skeletal pose. To accomplish this, first, linear blend skinning (LBS) methods are used to blend rigid transformations of skeletal bones. This technique, which is limited to a single human shape, attaches an underlying kinematic skeleton to a 3D mesh and assigns a set of weights to each vertex that define how the vertices move with respect to the skeleton. Despite being widely used in video games and films, LBS has two significant limitations: first, articulated areas often suffer from unrealistic deformations such as bulging or the candy-wrapper effect; second, the resulting animations are piece-wise rigid and therefore lack surface dynamics. LBS deformation artifacts have been addressed by different solutions, including dual quaternions [Kavan et al. 2008], implicit skinning [Vaillant et al. 2013], and example-based methods [Kry et al. 2002; Le and Deng 2014; Lewis et al. 2000; Wang and Phillips 2002], but these solutions ignore the LBS shortcomings due to shape and motion dynamics addressed in various embodiments of the present invention.
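The LBS blending described above can be sketched as follows. This is a minimal illustrative implementation, not the skinning pipeline of any particular embodiment; the function name and array shapes are assumptions:

```python
import numpy as np

def linear_blend_skinning(rest_vertices, weights, bone_transforms):
    """Deform rest-pose vertices by blending rigid bone transforms.

    rest_vertices:   (V, 3) rest-pose vertex positions
    weights:         (V, B) per-vertex skinning weights, each row sums to 1
    bone_transforms: (B, 4, 4) homogeneous rigid transform per bone
    """
    V = rest_vertices.shape[0]
    # Homogeneous coordinates: (V, 4)
    rest_h = np.hstack([rest_vertices, np.ones((V, 1))])
    # Per-vertex blended transform: (V, 4, 4)
    blended = np.einsum("vb,bij->vij", weights, bone_transforms)
    # Apply each vertex's blended transform
    posed_h = np.einsum("vij,vj->vi", blended, rest_h)
    return posed_h[:, :3]
```

Because the blend happens on the transforms before they are applied, vertices in articulated regions interpolate between bone motions, which is precisely what produces the piece-wise rigid (and dynamics-free) result noted above.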
Scanning-based models have more recently been adopted with the availability of 3D capturing systems. Using 3D scans of a body, data-driven models use scanning and registration methods that are more accurate [Bogo et al. 2014, 2017; Budd et al. 2013; Huang et al. 2017]. Allen et al. [2002] described how to deform an articulated model into a set of scans in different poses, and then to predict new poses by mesh interpolation. Different statistical body models such as SCAPE [Anguelov et al. 2005] and the follow-up works of Hasler et al. [2009], Hirshberg et al. [2012], and Chen et al. [2013] were described. These models learned from 3D scans are based on triangle transformations, which are more expensive to compute than vertex-based models and require more computing power. While capable of representing changes due to pose and shape, these models cannot cope with deformations due to non-rigid surface dynamics. More recently, Loper et al. [2015] proposed SMPL, a vertex-based method that computes pose and shape blendshapes that generate articulated 3D meshes by adding vertex displacements to a template mesh. Similarly, data-driven models capable of coping with some human body dynamics have been proposed, such as, for example, Dyna [Pons-Moll et al. 2015]. Dyna models shape, pose, and soft-tissue dynamics learned from thousands of 4D scans. However, like SCAPE, Dyna is based on triangle deformations, which hinders the implementation of its method in existing vertex-based pipelines such as LBS. DMPL, an extension of SMPL [Loper et al. 2015], also models dynamics. However, that solution relies on a PCA subspace that hinders the learning of nonlinear deformations. In contrast, in some embodiments of the present invention, animations with soft-tissue dynamics using skeletal data are provided from publicly available MoCap datasets [CMU 2003; Trumble et al. 2017].
In some embodiments, an autoencoder is provided to build a richer nonlinear subspace that significantly reduces the dimensionality of dynamic blendshapes to improve over prior approaches.
Further, a strong limitation of these prior data-driven models is the inherent difficulty to represent deformations far from the training set. Physics-based models overcome this limitation but are significantly more complex and usually require a volumetric representation of the model. For example, Kadlecek et al. [2016] compute a fully physics-based subject-specific anatomical model, including bones, muscle and soft-tissue; Kim et al. [2017] combine data-driven with physics-based models to create a layered representation that can reproduce soft-tissue effects. These physics-based approaches fit the model to captured 4D scans to find subject-specific physical parameters. The use of layered representations consisting of a skeleton that drives physics-based soft-tissue deformations has been proposed in earlier works [Capell et al. 2002]. Liu et al. [2013] propose a pose-based plasticity model to obtain skinned deformation around joints. Hahn et al. [2012; 2013] enrich standard LBS animations by simulating the deformation of fat and muscles in the nonlinear subspace induced by its rig. Xu and Barbič [2016] use secondary Finite Element Method (FEM) dynamics to efficiently add soft-tissue effects. Subspaces of deformations have also been explored for both characters [Kim and James 2012; Kry et al. 2002] and cloth [De Aguiar et al. 2010]. According to one embodiment, an enriched skinned model is provided with motion-dependent deformations of soft-tissues to simulate the body dynamics. However, instead of physics-based algorithms, which are computationally expensive, the soft-tissue deformations are automatically learned with a neural network trained purely from observations and can, for example, be produced in real-time applications without significant lag or delay.
Referring now to
Referring back to
Referring now also to
The nonlinear soft-tissue dynamics are then solved as a nonlinear regression 203. Modeling soft-tissue dynamics involves capturing the nonlinear interplay of surface displacements, velocities, and accelerations, with skeletal pose, velocity and acceleration. In one embodiment, this complex nonlinear function is modeled using a neural network. The neural network outputs the current dynamic soft-tissue displacement Δt, and it takes as input the skeletal pose of the current frame θt and a number of previous frames, such as for example the two prior frames θt-1 and θt-2, to capture skeletal velocity and acceleration. In addition, the neural network takes also as input the compact soft-tissue displacements of a corresponding number of previous frames, such as for example the two previous frames Δt-1 and Δt-2, to capture soft-tissue velocity and acceleration. In alternative embodiments, different numbers of previous frames may be used to derive skeletal and soft-tissue velocity and acceleration. Alternatively, the number of previous frames used to derive velocity and acceleration may be dynamically and adaptively modified at runtime depending on the specific application.
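The input assembly and regression step described above can be sketched as follows. This is a minimal illustration: the class name, layer sizes, and random initialization are assumptions, and the weights shown are untrained placeholders standing in for the trained network:

```python
import numpy as np

def build_regressor_input(theta_hist, delta_hist):
    """Concatenate the pose history (current and previous frames) with the
    compact soft-tissue displacements of previous frames into the regressor
    input gamma. Two previous frames are used here, as in the example above,
    but any history length works."""
    return np.concatenate(theta_hist + delta_hist)

class SoftTissueRegressor:
    """Minimal single-hidden-layer ReLU network (weights untrained here)."""
    def __init__(self, in_dim, hidden_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((hidden_dim, in_dim)) * 0.01
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.standard_normal((out_dim, hidden_dim)) * 0.01
        self.b2 = np.zeros(out_dim)

    def __call__(self, gamma):
        h = np.maximum(0.0, self.W1 @ gamma + self.b1)  # ReLU hidden layer
        return self.W2 @ h + self.b2                    # predicted offsets
```

At runtime, the predicted displacement is fed back into the history buffer for the next frame, which is how the network captures soft-tissue velocity and acceleration.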
Referring back to
In one embodiment, the skinning module 123 includes a data-driven vertex-based linear skinning model. For example, in one embodiment, an SMPL-based model may be used as further described by Loper et al. (2015) (incorporated herein by reference). In an SMPL-based model, corrective blendshapes may be learned from thousands of 3D body scans and may be used to fix well-known skinning artifacts such as bulging. Formally, SMPL defines a body model surface M=M(β, θ) as:
M(β, θ) = W(T(β, θ), J(β), θ, 𝒲)   [Eq. 1]
T(β, θ) = T̄ + Bs(β) + Bp(θ)   [Eq. 2]
where W(·) is a linear blend skinning function that poses the template mesh T̄ according to the joint locations J(β), the pose θ, and the skinning weights 𝒲; and Bs(β) and Bp(θ) are shape and pose corrective blendshapes, respectively, added as per-vertex displacements to the template.
According to another aspect of this embodiment, the vertices of T are deformed such that the resulting posed mesh reproduces realistic soft-tissue dynamics. Following the SMPL additive blendshape formulation, a set of per-vertex 3D offsets Δ = {δi}, i = 0, …, V−1 (referred to herein as a dynamic blendshape) is determined and added to the template model
M(β, θ, γ) = W(T(β, θ) + Md(γ), J(β), θ, 𝒲)   [Eq. 3]
where Md(γ)=Δ is a function that regresses the per-vertex offsets Δ given a history of previous frames motion and dynamics γ, as further described below. Unlike the use of corrective blendshapes mentioned in DMPL [Loper et al. 2015], the blendshapes according to this embodiment do not rely on a linear PCA subspace and generalize to arbitrary skeletal motions.
Further, unlike DMPL, this embodiment uses a nonlinear subspace, which is easier to train, allows real-time interactions, and has been successfully applied to existing motion capture datasets.
According to one embodiment, the dynamic blendshapes enable the computation of skin deformations resulting from interactions between the human body and external objects, such as cloth. These deformations are relevant, for example, in virtual try-on applications, such as online or remote e-commerce applications or garment design applications, where it is beneficial to have a realistic virtual fit of garments on a customer, for example using a model or avatar.
For example, according to one embodiment, a customer using an online shopping platform wants to get a preview of the fit of a garment before making a purchase decision. Dynamic blendshapes produce the soft-tissue deformations resulting from cloth-body contact.
According to this embodiment, to compute the interaction between body and cloth, a conservative contact potential is defined and forces generated by the dynamic motion of the skin on the cloth are computed as gradients of this potential. The per-vertex displacements caused by these forces are computed by integrating the resulting accelerations. For example, in every animation or simulation frame, a signed distance field of the body surface is computed with a small offset delta. For each cloth simulation node, the distance field is queried and a penetration value d is obtained. If the penetration is positive, a potential
is defined. Then, forces on cloth nodes and surface vertices are computed as
For each simulation node or cloth vertex, with mass m, its acceleration correction is computed as
Finally, a position correction
is computed by second-order integration of the acceleration, where dt is the simulation time step.
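The contact steps above can be sketched as follows, assuming a quadratic penalty potential φ(d) = ½·k·d² for positive penetration d acting along the surface normal. The exact potential and the stiffness k used in the embodiment are not reproduced here, so both are assumptions for illustration:

```python
import numpy as np

def contact_force(penetration, normal, k=1e3):
    """Penalty force from an assumed quadratic potential phi = 0.5*k*d^2.
    The force, -grad(phi), acts along the surface normal when d > 0."""
    if penetration <= 0.0:
        return np.zeros(3)
    return k * penetration * np.asarray(normal, dtype=float)

def position_correction(force, mass, dt):
    """Second-order integration of the acceleration correction a = F/m,
    over one simulation time step dt."""
    accel = force / mass
    return 0.5 * accel * dt ** 2
```

A node that does not penetrate receives no force, so the correction vanishes smoothly at the contact boundary.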
Referring back to Eq. 3, according to one embodiment, a supervised learning method is used to learn Md(γ) using a neural network. Ground truth annotated data for training the neural network may be obtained from observations, manual annotation, or physical simulations. According to one embodiment, data produced by recent methods in 4D capture [Bogo et al. 2017; Budd et al. 2013; Huang et al. 2017; Pons-Moll et al. 2015], which accurately fit and deform a 3D mesh template to reconstruct human performances, may be used as training data. For example, in one embodiment, the publicly available aligned 4D scan dataset of Dyna [Pons-Moll et al. 2015], which captures highly detailed surface deformations at 60 fps, is used as training data for the neural network. Assuming that such 4D scans reproduce the captured surface with negligible error, the soft-tissue dynamic component can be extracted by fitting the shape and pose parametric model defined in Eq. 1 to the scans, and subsequently evaluating the differences between the fitted model and the 4D scan [Kim et al. 2017]. To this end, the parameters β and θ are found by minimizing the following:
where unpose(·) is the inverse of the SMPL skinning function that puts the mesh in the rest pose and removes the pose and shape corrective blendshapes; Mi(·) is the ith vertex of the mesh; wi is a weight that is set to high values in rigid parts; and S ∈ ℝV×3 is the matrix of vertices of the captured scan. Unlike other approaches, such as Kim et al. [2017], according to this embodiment the minimization is performed in the unposed state. This achieves better results than minimizing the difference in the posed state because, ultimately, the fit has to be unposed to compute the ground truth dynamic blendshape. If the minimization happens in the posed state, it is likely that, despite achieving a close fit, unrealistic deformations appear when the scan S is unposed if the joint positions were not correctly fitted, as illustrated in
Δt=unpose(M(β,θt))−unpose(M(St,θt)) [Eq. 5]
The resulting offsets Δt ∈ ℝV×3 are due to soft-tissue deformation, i.e., they are the dynamic blendshapes. Such blendshapes, together with the extracted θt and β, are the ground truth data used for training the regressor Md(γ) from Eq. 3.
For a data-driven body model, an initial dimensionality reduction step may be used to reduce the complexity of the data representation. For example, Principal Component Analysis ("PCA") methods, such as those described in Anguelov et al. 2005, Feng et al. 2015, Loper et al. 2015, and Pons-Moll et al. 2015, provide a linear method that reproduces changes due to shape in a lower-dimensional space. Similar linear models can be used for other applications, such as cloth simulation (e.g., De Aguiar et al. 2010), skinning (e.g., James and Twigg 2005; Kavan et al. 2010), and physics-based simulation (e.g., Barbič and James 2005).
However, such PCA-based linear methods cannot properly represent soft-tissue deformations in detail, given the highly nonlinear nature of the dynamic soft-tissue data stored in Δ. Therefore, in one embodiment an autoencoder is used to provide a nonlinear method that has been shown to outperform PCA-based methods in dimensionality reduction in different fields, as illustrated in Hinton and Salakhutdinov 2006. Autoencoders according to various embodiments of the invention approximate an identity mapping by coupling an encoding block with a decoding block to learn a compact intermediate representation, which may be referred to as the latent space. In particular, each block consists of a neural network with different hidden layers and non-linear operators. After training the neural network, a forward pass of the encoder converts the input to a compact representation. For example, a vector Δ ∈ ℝ6890·3 is input to the encoder 401. The encoder 401 in this embodiment includes three layers with linear, nonlinear, and linear activation functions, respectively. In alternative embodiments, different numbers of layers with other combinations of linear and nonlinear activation functions may be used. The encoder 401 outputs a vector Δ̄ ∈ ℝ100, achieving a dimensionality reduction of several orders of magnitude. As further explained below, due to the nonlinear activation functions in the layers of the encoder 401, a latent space is obtained that is capable of better reproducing the complexity of soft-tissue dynamics.
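The linear/nonlinear/linear encoder (with a mirrored decoder) might be sketched as below. Only the layer pattern and the per-vertex-offsets-to-100-dimensional-latent mapping come from the description above; the hidden-layer width, the tanh nonlinearity, and the random (untrained) weights are assumptions:

```python
import numpy as np

def dense(rng, n_in, n_out):
    """One dense layer: random weight matrix and zero bias (untrained)."""
    return rng.standard_normal((n_out, n_in)) * 0.01, np.zeros(n_out)

class OffsetAutoencoder:
    """Three-layer encoder (linear, nonlinear, linear) mapping vectorized
    per-vertex offsets to a compact latent vector, plus a mirrored decoder."""
    def __init__(self, in_dim=6890 * 3, latent_dim=100, hidden=1000, seed=0):
        rng = np.random.default_rng(seed)
        self.enc = [dense(rng, in_dim, hidden),
                    dense(rng, hidden, hidden),
                    dense(rng, hidden, latent_dim)]
        self.dec = [dense(rng, latent_dim, hidden),
                    dense(rng, hidden, hidden),
                    dense(rng, hidden, in_dim)]

    @staticmethod
    def _forward(layers, x):
        (W1, b1), (W2, b2), (W3, b3) = layers
        x = W1 @ x + b1            # linear
        x = np.tanh(W2 @ x + b2)   # nonlinear
        return W3 @ x + b3         # linear

    def encode(self, offsets):
        return self._forward(self.enc, offsets)

    def decode(self, latent):
        return self._forward(self.dec, latent)
```

Training would minimize the reconstruction error between `offsets` and `decode(encode(offsets))`; the middle nonlinear layer is what lets the latent space capture structure a PCA projection cannot.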
According to another aspect of one embodiment, a neural network is provided that automatically learns from observations, such as 4D scans, the function Md(γ) = Δ shown in Eq. 3. In particular, in one embodiment, Md(γ) is parameterized by γ = {Δt-1, Δt-2, θt, θt-1, θt-2}, where Δt-1 and Δt-2 are the predicted dynamic blendshapes of previous frames. While two previous frames are used here for illustration, any number of previous frames may be used in alternative embodiments. Notice that Δt ∈ ℝ6890·3 is a prohibitively large size for an efficient neural network input, and therefore the dimensionality of the vectorized input is reduced using an autoencoder as illustrated in
According to another aspect of one embodiment, a neural network training method is provided. As described above, the dynamic blendshapes Δt and the pose and shape parameters (β, θt) are extracted from a given known set of 4D scans S = {St}, t = 1, …, T. Then a single-layer neural network is trained to regress Δt from γ. In one embodiment, each neuron in the network uses a Rectified Linear Unit (ReLU) activation function, which provides a fast-converging non-linear operator. In addition, a history of the previous dynamic components is fed to the network to predict the current dynamic blendshape, in order to learn a regressor that captures second-order dynamics. The blendshape predictions according to this embodiment are much more stable and produce an overall realistic nonlinear behavior of the soft-tissue simulations.
Another aspect of embodiments for training neural networks according to the invention includes an appropriate loss function. In one embodiment, it is desirable to minimize the Euclidean distance between the vertices of a ground truth dynamic blendshape ΔGT = {δiGT}, i = 1, …, V, and the predicted dynamic blendshapes Δ. To do so, the following 2-norm is minimized:
Loss=Σi=1V∥wirig·(δiGT−δi)∥2 [Eq. 6]
where wirig is the rigidity weight of the ith vertex, inversely proportional to the vertex stiffness. By adding such weights, the optimizer is forced to prioritize learning in the non-rigid areas, such as the breast and belly, over almost-rigid areas, such as the head. The weights wirig are precomputed automatically from data, also using the input 4D scans, as
where v̇i,t is the velocity of the ith vertex of the ground truth blendshape at frame t, and T is the number of frames.
Thus, according to one embodiment, to process a pose model parameterized by |θ| = 75 DOFs and an autoencoder latent space of 100 dimensions, a single-layer neural network takes an input vector γ ∈ ℝ350 (100+100+75+75 = 350) and produces an output vector Δ ∈ ℝ20670 (6890·3 = 20670). In this embodiment, the neural network includes √(|γ|·|Δ|) = 2689 neurons in the hidden layer.
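These dimensions can be checked directly. Assuming the 6890-vertex template cited earlier (so |Δ| = 6890·3) and taking the hidden width as the integer part of the geometric mean of input and output sizes:

```python
import math

gamma_dim = 100 + 100 + 75 + 75   # latent dynamics history + pose history
delta_dim = 6890 * 3              # per-vertex 3D offsets of the template
hidden = int(math.sqrt(gamma_dim * delta_dim))  # floor of geometric mean
print(gamma_dim, delta_dim, hidden)  # 350 20670 2689
```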
One embodiment of the present invention was qualitatively and quantitatively evaluated at the different stages of the system and method illustrated by this disclosure, including an autoencoder and a soft-tissue regressor. The inventors further generated a video of a simulation generated using one embodiment of the invention that shows compelling enriched animations with realistic soft-tissue effects. For training and testing both the autoencoder and the soft-tissue regressor in this experimental embodiment, the 4D dataset provided in the original Dyna paper [Pons-Moll et al. 2015] was used.
The performance of an autoencoder according to one embodiment was evaluated for dynamic blendshapes by leaving ground truth sequences 50002_running_on_spot and 50004_one_leg_jump out of the training set.
According to this embodiment,
To illustrate the qualitative evaluation of the embodiments described above,
The soft-tissue regression methodology according to the embodiments described above was also evaluated. A quantitative evaluation using a leave-one-out cross-validation strategy on the 4D scan dataset was performed. The autoencoder and the regressor were trained on all but one sequence of the Dyna dataset [Pons-Moll et al. 2015], and the regression method was then evaluated on the discarded sequence. These 4D scan datasets do not provide much pose redundancy across sequences (i.e., each sequence is a significantly different motion); therefore, leaving one sequence out of the training set potentially affects the generalization capabilities of the learned model. Despite this, the tested embodiment provided robust predictions of soft-tissue dynamics on unseen motions. For comparison, SMPL, another vertex-based skinning method, was tested and compared with the embodiments of the present invention.
The soft-tissue regression results according to the embodiments of the invention were also evaluated both visually comparing to ground truth scans and by creating new animations from just skeletal MoCap sequences.
As those in the art will understand, a number of variations may be made in the disclosed embodiments, all without departing from the scope of the invention, which is defined solely by the appended claims. It should be noted that although the features and elements are described in particular combinations, each feature or element can be used alone without the other features and elements or in various combinations with or without other features and elements. The methods or flow charts provided may be implemented in a computer program, software, or firmware tangibly embodied in a computer-readable storage medium for execution by a general-purpose computer, a GPU, a processor, or the like.
Examples of computer-readable storage mediums include a read only memory (ROM), a random-access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks.
Suitable processors include, by way of example, a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a graphics processing unit (GPU), a plurality of microprocessors, CPUs, GPUs, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine in any combination and number.
One or more processors in association with software in a computer-based system may be used to implement methods of training and modeling in real-time autoencoders and regressors, including neural networks, according to various embodiments, as well as data models for soft-tissue simulations according to various embodiments, all of which improves the operation of the processor and its interactions with other components of a computer-based system. The system according to various embodiments may be used in conjunction with modules, implemented in hardware and/or software, such as a camera, a video camera module, a videophone, a speakerphone, a vibration device, a speaker, a microphone, a television transceiver, a keyboard, a Bluetooth module, a radio unit, a liquid crystal display (LCD) display unit, an organic light-emitting diode (OLED) display unit, a digital music player, a media player, a video game player module, an Internet browser, and/or any wireless local area network (WLAN) module, or the like.
The following references include those cited above and are provided for background and are incorporated herein by reference for all purposes:
Related Applications: Parent: PCT/ES2018/070326, Apr 2018, US; Child: 17076660, US.