The present disclosure relates generally to data processing systems. More particularly, the present disclosure relates to implementing systems and methods for generating a refined three dimensional (“3D”) model using radar and optical camera data.
Clothing shoppers today are confronted with an expansive number of choices of clothing style, cut and size, yet have too little information regarding their own measurements and how their unique body proportions will fit into the current styles.
The present disclosure generally concerns systems and methods for generating a refined 3D model (which may, for example, be used to assist shoppers with garment fit). The methods comprise: constructing, by a processing device (e.g., a handheld scanner device and/or a computer remote from the handheld scanner device), a subject point cloud using at least optical camera data acquired by scanning a subject; using, by the processing device, radar depth data to modify the subject point cloud to represent an occluded portion of the subject's real surface; generating, by the processing device, a plurality of reference point clouds using (1) a first 3D model of a plurality of 3D models that represents an object belonging to a general object class or category to which the subject belongs and (2) a plurality of different setting vectors; identifying, by the processing device, a first reference point cloud from the plurality of reference point clouds that is a best fit for the subject point cloud; obtaining a principal setting vector associated with the first reference point cloud; and/or transforming the first 3D model into the refined 3D model using the principal setting vector. The radar depth data is used to better fit the refined 3D model to the shape of the subject.
In some scenarios, the methods also comprise: adding at least one physical feature to the subject's real surface to facilitate an improved creation of the subject point cloud; obtaining a 3D surface model from the refined 3D model; synthesizing the subject's appearance by morphing the 3D surface model in accordance with at least one of the optical camera data and the radar depth data; and/or outputting the synthesized subject's appearance from the processing device.
In those or other scenarios, the methods further comprise: obtaining physical measurement results for given characteristics of the subject using the setting vectors; generating refined metrics by refining the obtained physical measurement results using radar data associated with labeled regions of the refined 3D model; identifying at least one object (e.g., a garment) which fits on the subject based on the refined metrics; and/or outputting information specifying the at least one object.
In those or other scenarios, the methods comprise: obtaining physical measurement results for given characteristics of the subject using the setting vectors; generating refined metrics by refining the obtained physical measurement results based on geometries of labeled regions of the refined 3D model; identifying at least one garment which fits on the subject based on the refined metrics; and/or outputting information identifying the at least one garment.
In those or other scenarios, the methods may additionally involve saving and archiving the candidate setting vectors for later processing and retrieval. The set of vectors from the session can be ordinally ranked by goodness of fit and the list saved as a member of the class. This list of vectors can be used as a basis to support new and future scans of the same individual, or used to facilitate scanning of individuals believed to belong to that member class.
The present solution will be described with reference to the following drawing figures, in which like numerals represent like items throughout the figures.
It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The present solution may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the present solution is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Reference throughout this specification to features, advantages, or similar language does not imply that all the features and advantages that may be realized with the present solution should be or are in any single embodiment of the present solution. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present solution. Thus, discussions of the features and advantages, and similar language, throughout the specification may, but do not necessarily, refer to the same embodiment.
Furthermore, the described features, advantages and characteristics of the present solution may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the present solution can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the present solution.
Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present solution. Thus, the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
As used in this document, the singular form “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. As used in this document, the term “comprising” means “including, but not limited to”.
There are many systems available for producing a 3D surface scan or point cloud from a complex real-world object. An un-labeled 3D surface is useful for rendering static objects, but is less useful for measurement and interactive display of complex objects comprised of specific regions or sub-objects with known function and meaning. The present solution generally concerns systems and methods for interactively capturing 3D data using a hand-held scanning device with multiple sensors and providing semantic labeling of regions and sub-objects. A resulting model may then be used for deriving improved measurements of the scanned object, displaying functional aspects of the scanned object, and/or determining the scanned object's geometric fit to other objects based on the functional aspects.
In retail applications, the purpose of the present solution is to establish a set of body measurements by using optical and radar technology to create a 3D point cloud representation of a clothed individual. The garment is transparent to the radar signal, which operates at GHz frequencies, so the incident radar signal reflects off the body and provides a range measurement to the underlying surface. This feature of the radar is used to determine additional distance information so that an accurate set of measurements of the individual can be obtained. If the distance to the body and the distance to the garment are known, then a collection of accurate measurements can be obtained by taking the range differences into account.
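By way of a non-limiting numerical illustration (the function name, the simple cylindrical cross-section approximation, and the sample values below are assumptions introduced solely for explanation), the range difference between the garment surface and the underlying body surface can be applied as a correction to an optically derived girth estimate:

```python
import math

# Illustrative only: approximate a body cross-section as a circle and correct an
# optically measured (over-garment) circumference using the radar-to-optical range
# difference, i.e., the garment standoff.
def corrected_circumference(optical_circumference_m, optical_range_m, radar_range_m):
    """Subtract the garment standoff (radar range minus optical range) from the
    optically derived radius to approximate the underlying body circumference."""
    garment_standoff = radar_range_m - optical_range_m   # garment sits closer to the scanner
    optical_radius = optical_circumference_m / (2.0 * math.pi)
    body_radius = optical_radius - garment_standoff
    return 2.0 * math.pi * body_radius

# Example: a 100 cm garment circumference with a 1.5 cm standoff -> ~90.6 cm body estimate
print(corrected_circumference(1.00, 0.600, 0.615))
```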
Illustrative System
Referring now to
In
An illustrative architecture for the handheld scanner system 100 is shown in
As shown in
The handheld scanner system 100 is powered by a rechargeable battery 406 or other power supply 408. The rechargeable battery 406 can include, but is not limited to, a high energy density and/or lightweight battery (e.g., a Lithium Polymer battery). The rechargeable battery 406 can be interchangeable to support long-term or continuous operation. The handheld scanner system 100 can be docked in a cradle (not shown) when not in use. While docked, the cradle re-charges the rechargeable battery 406 and provides an interface for wired connectivity to external computer equipment (e.g., computing device 106). The handheld scanner system 100 supports both a wired and a wireless interface 410, 412. The housing 104 includes a physical interface 410 which allows for power, high-speed transfer of data over network 108 (e.g., Internet or Intranet), as well as device programming or updating. The wireless interface 412 may include, but is not limited to, an 802.11 interface. The wireless interface 412 provides a general operation communication link to exchange measurement data (radar and image data) with auxiliary computer equipment 106 (e.g., an external host device) for rendering of the image on the display of an operator's terminal. For manufacturing and testing purposes, an RF test port may be included for calibration of the RF circuitry.
The handheld scanner system 100 utilizes two modes of measurement, namely an optical module 414 mode of measurement and a radar module 416 mode of measurement. The data from both modules 414, 416 is streamed into a Central Processing Unit (“CPU”) 418. At the CPU 418 or on the camera module, the optical and radar data streams are co-processed and synchronized. The results of the co-processing are delivered to a mobile computing device or other auxiliary computer equipment 106 (e.g., for display) via the network 108. A Digital Signal Processor (“DSP”) 420 may also be included in the handheld scanner system 100.
Subsequent measurement extraction can operate on the 3D data and extracted results can be supplied to a garment fitting engine. Alternatively, the optical data is sent to the radar module 416. The radar module 416 interleaves the optical data with the radar data, and provides a single USB connection to the auxiliary computer equipment 106. The optical data and radar data can also be written to an external data store 110 to buffer optical data frames.
An electronic memory 422 temporarily stores range information from previous scans. The stored data from prior scans can augment processing with current samples as the radar module 416 moves about the subject to obtain a refined representation of the body and determine body features. In some scenarios, Doppler signal processing or Moving Target Indicator (“MTI”) algorithms are used here to obtain the refined representation of the body and determine body features. Doppler signal processing and MTI algorithms are well known in the art, and therefore will not be described herein. Any known Doppler signal processing technique and/or MTI algorithm can be used herein. The present solution is not limited in this regard. The handheld scanner system 100 allows the host platform to use both the optical and radar systems to determine two surfaces of an individual (e.g., the garment surface and the wearer's body surface). The radar module 416 may also parse the optical range data and use this information to solve for range solutions and eliminate ghosts or range ambiguity.
The optical module 414 is coupled to one or more cameras 424 and/or sensors 450. The cameras include, but are not limited to, a 3D camera, an RGB camera, and/or a tracking camera. The sensors include, but are not limited to, a laser and/or an infrared system. Each of the listed types of cameras and sensors is known in the art, and therefore will not be described in detail herein.
The 3D camera is configured such that the integrated 3D data structure provides a 3D point cloud (garment and body), regions of volumetric disparity (as specified by an operator), and a statistical representation of both surfaces (e.g., a garment's surface and a body's surface). Such 3D cameras are well known in the art, and are widely available from a number of manufacturers (e.g., the Intel RealSense™ 3D optical camera scanner system).
The optical module 414 may maintain an inertial state vector with respect to a fixed coordinate reference frame and with respect to the body. The state information (which includes orientation, translation and rotation of the unit) is used along with the known physical offsets of the antenna elements with respect to the center of gravity of the unit to provide corrections and update range estimates for each virtual antenna and the optical module. The inertial state vector may be obtained using sensor data from an optional on-board Inertial Measurement Unit (“IMU”) 426. IMUs are well known in the art, and therefore will not be described herein. Any known or to be known IMU can be used herein without limitation.
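By way of a non-limiting sketch (the rotation matrix, translation vector, and antenna offset below are assumed example values, not measured data), the state information can be applied to a known antenna offset to express that antenna's position in the fixed coordinate reference frame:

```python
import numpy as np

# Sketch only: correct an antenna element's position using the unit's inertial state
# (rotation R and translation t of the scanner with respect to the fixed frame); the
# offset of each antenna from the unit's center of gravity is assumed to be known.
def antenna_position_in_fixed_frame(R, t, antenna_offset):
    """Map a body-frame antenna offset into the fixed coordinate reference frame."""
    return R @ np.asarray(antenna_offset) + np.asarray(t)

# Example: antenna 3 cm to the right of the center of gravity, unit rotated 90 deg about z
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.5, 1.2, 1.0])
print(antenna_position_in_fixed_frame(R, t, [0.03, 0.0, 0.0]))   # -> [0.5, 1.23, 1.0]
```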
Such systems routinely achieve millimeter accuracy and resolution at close distances, degrading to centimeter resolution at greater distances. Despite this excellent resolution, obtaining the body dimensionality of a clothed individual is limited by any obstruction such as a garment. Camera systems which project a pattern on the subject provide adequate performance for this application.
As shown in
It is noted that a desired waveform is a Linear Frequency Modulated (“LFM”) chirp pulse. However, other waveforms may be utilized. To achieve high range resolution, the radar module 416 is a broadband system. In this regard, the radar module 416 may include, but is not limited to, a radar module with an X/Ku-band operation. The LFM system includes a delayed replica of the transmission burst to make a comparison with the return pulse. Due to the fact that the operator using the handheld scanner system 100 cannot reliably maintain a fixed separation from the subject, a laser range finder, optical system or other proximity sensor can aid in tracking this separation to the subject's outer garment. This information may be used to validate the radar measurements made using the LFM system and compensate the delay parameters accordingly. Since the optical 3D camera 424 or laser 450 cannot measure to the body which is covered by a garment, the Ultra Wide Band (“UWB”) radar module 416 is responsible for making this measurement.
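The following is a minimal simulation-level sketch of an LFM chirp, assuming a scaled-down baseband sweep (the sample rate and sweep parameters are assumptions; the GHz RF operation described herein is not sampled directly):

```python
import numpy as np
from scipy.signal import chirp

# Illustrative baseband LFM chirp (parameters are assumptions for simulation only).
fs = 10e6                        # sample rate, 10 MHz
t = np.arange(0, 1e-3, 1 / fs)   # 1 ms sweep
tx = chirp(t, f0=0.0, t1=t[-1], f1=1e6, method='linear')   # 0 -> 1 MHz linear ramp

# In an LFM radar, the received echo is compared against a delayed replica of this
# transmit ramp; the resulting beat frequency is proportional to target range.
```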
With the illustrated UWB radar module 416, the waveform generator 502 emits a low power non-ionizing millimeter wave (e.g., operating between 58-64 GHz) which passes through clothing, reflects off the body, and returns a scattered response to the radar receiving aperture. To resolve the range, the UWB radar module 416 consists of two or more antenna elements 428, 430 having a known spatial separation. In this case, one or more antenna pairs are used. Apertures 204 are used with associated transmitting elements 428. However, different arrangements are possible to meet both geometric and cost objectives. In the case of multiple transmit apertures 204, each element takes a turn as the emitter, and other elements are receivers.
A single aperture 204 can be used for both transmitting and receiving. A dual aperture can also be used to achieve high isolation between transmit and receive elements for a given channel. Additionally, the antennas 206 can be arranged to transmit with specific wave polarizations to achieve additional isolation or to be more sensitive to a given polarization sense as determined by the target.
The waveform emitted in the direction of the body is an LFM ramp which sweeps across several Gigahertz of bandwidth. The waveform can be the same for all antenna pairs or it can be changed to express features of the reflective surface. The bandwidth determines the unambiguous spatial resolution achievable by the radar module 416. Other radar waveforms and implementations can be used, but in this case an LFM triangular waveform is used.
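As a non-limiting illustration of this relationship, the ideal unambiguous range resolution of an LFM waveform is ΔR = c/(2B), so a sweep spanning several gigahertz of bandwidth yields centimeter-scale resolution (the 6 GHz figure below is an assumed example bandwidth):

```python
# Ideal LFM range resolution: delta_R = c / (2 * B)
c = 3.0e8                 # speed of light, m/s
B = 6.0e9                 # assumed 6 GHz sweep bandwidth (e.g., a 58-64 GHz sweep)
delta_R = c / (2 * B)     # = 0.025 m, i.e., 2.5 cm
print(f"range resolution: {delta_R * 100:.1f} cm")
```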
Referring now to
For all combinations of antenna pairs 428, 430, a range determination can be made to the subject via the process of trilateration (for a pair) or multilateration (for a set) of elements. Referring to
As the operator scans the individual, a display 800 is updated indicating the regions of coverage, as illustrated in
In the illustrative application, the handheld scanner system 100 will allow large volumes of fully clothed customers to be rapidly scanned. A significant benefit of this technology is that the handheld scanner system 100 will not be constrained in a fixed orientation with respect to the subject, so challenging measurements can be made to areas of the body which might otherwise be difficult to perform with a fixed structure. Additionally, the combination of two spatial measurement systems working cooperatively can provide a higher fidelity reproduction of the dimensionality of the individual.
While the handheld scanner system 100 is described herein in the context of an exemplary garment fitting application, it is recognized that the handheld scanner system 100 may be utilized to determine size measurements for other irregularly shaped objects and used in other applications that utilize size measurements of an irregularly shaped object.
Referring now to
Computing device 106 may include more or fewer components than those shown in
Some or all of the components of the computing device 106 can be implemented as hardware, software and/or a combination of hardware and software. The hardware includes, but is not limited to, one or more electronic circuits. The electronic circuits can include, but are not limited to, passive components (e.g., resistors and capacitors) and/or active components (e.g., amplifiers and/or microprocessors). The passive and/or active components can be adapted to, arranged to and/or programmed to perform one or more of the methodologies, procedures, or functions described herein.
As shown in
At least some of the hardware entities 914 perform actions involving access to and use of memory 912, which can be a RAM and/or a disk drive. Hardware entities 914 can include a disk drive unit 916 comprising a computer-readable storage medium 918 on which is stored one or more sets of instructions 920 (e.g., software code) configured to implement one or more of the methodologies, procedures, or functions described herein. The instructions 920 can also reside, completely or at least partially, within the memory 912 and/or within the CPU 906 during execution thereof by the computing device 106. The memory 912 and the CPU 906 also can constitute machine-readable media. The term “machine-readable media”, as used here, refers to a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions 920. The term “machine-readable media”, as used here, also refers to any medium that is capable of storing, encoding or carrying a set of instructions 920 for execution by the computing device 106 and that cause the computing device 106 to perform any one or more of the methodologies of the present disclosure.
In some scenarios, the hardware entities 914 include an electronic circuit (e.g., a processor) programmed for facilitating the generation of 3D models using data acquired by the handheld scanner system 100. In this regard, it should be understood that the electronic circuit can access and run one or more applications 924 installed on the computing device 106. The software application(s) 924 is(are) generally operative to: facilitate parametric 3D model building; store parametric 3D models; obtain acquired data from the handheld scanner system 100; use the acquired data to identify a general object class or category in which a scanned object belongs; identify a 3D model that represents an object belonging to the same general object class or category as the scanned object; generate point clouds; modify point clouds; compare point clouds; identify a first point cloud that is a best fit for a second point cloud; obtain a setting vector associated with the first point cloud; set a 3D model using the setting vector; obtain a full 3D surface model and region labeling from the set 3D model; obtain metrics for certain characteristics of the scanned object from the setting vector; refine the metrics; and/or present information to a user. An illustration showing these operations of the computing device is provided in
Illustrative Method For Generating A Refined 3D Model
Referring now to
Method 1100 comprises a plurality of operations shown in blocks 1102-1144. These operations can be performed in the same or different order than that shown in
As shown in
The synthetic 3D models represent different objects (e.g., a living thing (e.g., a human (male and/or female) or animal), a piece of clothing, a vehicle, etc.) to be sensed using the handheld scanner system. Methods for building synthetic 3D models are well known in the art, and therefore will not be described herein. Any known or to be known method for building synthetic 3D models can be used herein without limitation. For example, in some scenarios, the present solution employs a Computer Aided Design (“CAD”) software program. CAD software programs are well known in the art, and therefore will not be described herein. Additionally or alternatively, the present solution uses a 3D modeling technique described in a document entitled “A Morphable Model For The Synthesis Of 3D Faces” which was written by Blanz et al. (“Blanz”). An illustrative synthetic 3D model of a human 1200 generated in accordance with this 3D modeling technique is shown in
Each synthetic 3D model has a plurality of changeable characteristics. These changeable characteristics include, but are not limited to, an appearance, a shape, an orientation, and/or a size. The characteristics can be changed by setting or adjusting parameter values (e.g., a height parameter value, a weight parameter value, a chest circumference parameter value, a waist circumference parameter value, etc.) that are common to all objects in a given general object class or category (e.g., a human class or category). In some scenarios, each parameter value can be adjusted to any value from −1.0 to 1.0. The collection of discrete parameter value settings for the synthetic 3D model is referred to herein as a “setting vector”. The synthetic 3D model is designed so that for each perceptually different shape and appearance of the given general object class or category, there is a setting vector that best describes the shape and appearance. A collection of setting vectors for the synthetic 3D model describes a distribution of likely shapes and appearances that represent the distribution of those appearances and shapes that are likely to be found in the real world. This collection of setting vectors is referred to herein as a “setting space”.
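By way of a non-limiting sketch (the particular parameter names below are assumptions), a setting vector may be represented as a fixed collection of named parameters, each constrained to the −1.0 to 1.0 range noted above, and a setting space as a collection of such vectors:

```python
from dataclasses import dataclass

# Illustrative setting vector for a parametric 3D model (parameter names are assumptions).
@dataclass
class SettingVector:
    height: float = 0.0
    weight: float = 0.0
    chest_circumference: float = 0.0
    waist_circumference: float = 0.0

    def __post_init__(self):
        # Each discrete parameter value is constrained to the [-1.0, 1.0] range.
        for name, value in vars(self).items():
            if not -1.0 <= value <= 1.0:
                raise ValueError(f"parameter '{name}' must lie in [-1.0, 1.0]")

# A "setting space" is then simply a collection of such vectors describing likely
# shapes and appearances within the general object class or category.
setting_space = [SettingVector(height=0.2, weight=-0.1, chest_circumference=0.4)]
```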
A synthetic 3D model may also comprise a plurality of sub-objects or regions (e.g., a forehead, ears, eyes, a mouth, a nose, a chin, an arm, a leg, a chest, hands, feet, a neck, a head, a waist, etc.). Each meaningful sub-object or region is labeled with a key-value pair. The key-value pair is a set of two linked data items: a key (which is a unique identifier for some item of data); and a value (which is either the data that is identified or a pointer to a location where the data is stored in a datastore). The key can include, but is not limited to, a numerical sequence, an alphabetic sequence, an alpha-numeric sequence, or a sequence of numbers, letters and/or other symbols.
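By way of a non-limiting sketch (the keys, region names, and vertex indices shown are assumptions introduced solely for illustration), such key-value labeling of sub-objects or regions may be represented as follows:

```python
# Illustrative region labeling: each semantically meaningful sub-object of the model
# maps a unique key to its data (or to a reference indicating where that data is
# stored in a datastore).
region_labels = {
    "region_001": {"name": "chest",    "vertex_indices": [1024, 1025, 1026]},
    "region_002": {"name": "waist",    "vertex_indices": [2048, 2049, 2050]},
    "region_003": {"name": "left_arm", "vertex_indices": [3072, 3073, 3074]},
}

# Looking up a labeled region by its key
chest_region = region_labels["region_001"]
```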
Referring again to
Next in 1108, an operator of the handheld scanner system scans an object (e.g., an individual) in a continuous motion. Prior to the scanning, additional physical features may be added to the subject's real surface to facilitate better and faster subsequent creation of a subject point cloud using scan data. For example, the subject may wear a band with a given visible pattern formed thereon. The band is designed to circumscribe or extend around the object. The band has a width that allows (a) visibility of a portion of the object located above the band and (b) visibility of a portion of the object located below the band. The visible pattern of the band (e.g., a belt) facilitates faster and more accurate object recognition in point cloud data, shape registration, and 3D pose estimation. Accordingly, the additional physical features improve the functionality of the implementing device.
As a result of this scanning, the handheld scanner system performs operations to acquire radar depth data (e.g., via radar module 416 of
The acquired data is used in 1109 to construct a 3D point cloud of the subject. The term “point cloud”, as used herein, refers to a set of data points in some coordinate system. In a 3D coordinate system, these points can be defined by X, Y and Z coordinates. More specifically, optical keypoints are extracted from the optical data. In one scenario, these keypoints can be extracted using, but not limited to, ORB features. The keypoints are indexed and used to concatenate all scanned data into a 3D point cloud of the subject. These keypoints are also used to (continuously) track the position (rotation and translation) of the handheld scanner system 100 relative to a fixed coordinate frame. Since the number and quality of keypoints depends on the scan environment, the handheld scanner system 100 may be unable to collect enough keypoints and thus fail to construct the 3D point cloud. To resolve this issue, an additional layer can be applied to the subject's real surface to guarantee the amount and type of keypoints. This layer can be a pattern (e.g., on a belt) worn by the subject or projected onto the subject.
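The following is a minimal sketch of extracting ORB keypoints from a single optical frame using OpenCV; the file name is a placeholder, and the downstream tracking and concatenation steps described above are not shown:

```python
import cv2

# Illustrative ORB keypoint extraction from one optical frame (a sketch only).
frame = cv2.imread("scan_frame.png")            # placeholder frame from the scanner
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create(nfeatures=2000)
keypoints, descriptors = orb.detectAndCompute(gray, None)

# The descriptors can be matched across successive frames to estimate the scanner's
# rotation and translation relative to a fixed coordinate frame and to concatenate
# the scanned data into a single 3D point cloud of the subject.
```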
In 1110, the acquired data is used to identify a general object class or category (e.g., human) in which the scanned object belongs. For example, image processing is used to recognize an object in an image and extract feature information about the recognized object (e.g., a shape of all or a portion of the recognized object and/or a color of the recognized object) from the image. Object recognition and feature extraction techniques are well known in the art, and therefore will not be described herein. Any known or to be known object recognition and feature extraction technique can be used herein without limitation. The extracted feature information is then compared to stored feature information defining a plurality of general object classes or categories (e.g., a human, an animal, a vehicle, etc.). When a match exists (e.g., by a certain degree), the corresponding general object class or category is identified in 1110. The present solution is not limited to the particulars of this example. The operations of 1110 can be performed by the handheld scanner system 100 and/or the remote computing device (e.g., computing device 106 of
Next in 1112, an initial 3D model is identified from the plurality of parametric 3D models that represents an object belonging to the same general object class or category (e.g., human) of the scanned object. Methods for identifying a 3D model from a plurality of 3D models based on various types of queries are well known in the art, and will not be described herein. Any known or to be known method for identifying 3D models can be used herein without limitation. One such method is described in a document entitled “A Search Engine For 3D Models” which was written by FunkHouser et al. (“FunkHouser”). The operations of 1112 can be performed by the handheld scanner system 100 and/or the remote computing device (e.g., computing device 106 of
The initial 3D model and a plurality of different setting vectors are used in 1114 to generate a plurality of reference point clouds. Methods for generating point clouds are well known in the art, and therefore will not be described herein. Any known or to be known method for generating point clouds can be used herein without limitation. An illustrative reference point cloud 1200 is shown in
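A non-limiting sketch of this step is shown below; `apply_settings` and `sample_surface_points` are hypothetical stand-ins for whatever parametric model evaluation and surface sampling routines are actually employed:

```python
import numpy as np

# Sketch only: pose/shape the parametric 3D model with each setting vector and sample
# a reference point cloud from the resulting surface.
def generate_reference_clouds(model, setting_vectors, points_per_cloud=4096):
    reference_clouds = []
    for vector in setting_vectors:
        posed = model.apply_settings(vector)                    # hypothetical API
        cloud = posed.sample_surface_points(points_per_cloud)   # hypothetical API
        reference_clouds.append(np.asarray(cloud, dtype=float))
    return reference_clouds
```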
Thereafter in 1116, a partial or whole point cloud of the scanned object (“subject point cloud”) is constructed using the acquired optical camera depth data. An illustrative subject point cloud is shown in
The subject point cloud can be optionally modified using the radar depth data as shown by 1118, the color data as shown by 1120, and/or the spectral data as also shown by 1120. Methods for modifying point clouds using various types of camera based data are well known in the art, and therefore will not be described herein. Any known or to be known method for modifying point clouds using camera based data can be used herein without limitation. The operations of 1116-1120 can be performed by the handheld scanner system 100 and/or the remote computing device (e.g., computing device 106 of
1118 is an important operation since it improves the accuracy of the subject point cloud. As noted above, a radar can “see through” the clothes since GHz frequencies reflect off the water in the skin. Radar depth measurements are registered with optical camera data to identify spots or areas on the subject being scanned to which the radar distance was measured from the device. As a result, the final scanned point cloud comprises such identified radar spots in addition to the optical point cloud. The radar distances at these spots can then be used to modify the optical point cloud obtained with something covering at least a portion of the object (e.g., clothing). Since the modified cloud better represents the subject's real shape (e.g., the human body without clothing) compared to the optical cloud (with clothing), it improves the process of identifying and morphing the reference point cloud to obtain a better refined 3D model as illustrated in 1122-1132, and thus better fits the subject's real shape. The difference between the optical distances and radar distances in labeled regions can be further utilized to obtain more accurate metrics for certain characteristics of the scanned object (e.g., chest circumference and/or waist circumference) as illustrated in 1134.
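The following is a minimal sketch of this modification, assuming each registered radar spot supplies the index of its nearest optical point, the radar-measured depth, and the optical depth along the scanner's viewing direction at that spot (the data layout is an assumption):

```python
import numpy as np

# Sketch only: push optical points inward to the radar-measured (under-garment) surface.
# `optical_cloud` is an (N, 3) array, `view_dirs` holds unit viewing directions per point,
# and `radar_spots` is an iterable of (point_index, radar_depth, optical_depth) tuples.
def apply_radar_depths(optical_cloud, radar_spots, view_dirs):
    refined = optical_cloud.copy()
    for point_idx, radar_depth, optical_depth in radar_spots:
        correction = radar_depth - optical_depth          # positive: body lies beyond garment
        refined[point_idx] += correction * view_dirs[point_idx]
    return refined
```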
In 1122, the subject point cloud is then compared to the plurality of reference point clouds. A reference point cloud is identified in 1124 based on results of these comparison operations. The identified reference point cloud (e.g., reference point cloud 1200 of
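A non-limiting sketch of one possible comparison is shown below, using a symmetric nearest-neighbor (chamfer-style) distance; the actual comparison metric employed by the system may differ:

```python
import numpy as np
from scipy.spatial import cKDTree

# Sketch only: score each reference cloud against the subject cloud and pick the best fit.
def chamfer_distance(cloud_a, cloud_b):
    d_ab, _ = cKDTree(cloud_b).query(cloud_a)   # subject -> reference nearest neighbors
    d_ba, _ = cKDTree(cloud_a).query(cloud_b)   # reference -> subject nearest neighbors
    return d_ab.mean() + d_ba.mean()

def best_fit_reference(subject_cloud, reference_clouds):
    distances = [chamfer_distance(subject_cloud, ref) for ref in reference_clouds]
    return int(np.argmin(distances))            # index of the best-fitting reference cloud
```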
Once the best fit reference point cloud is identified, method 1100 continues with 1126 of
In 1130, a full 3D surface model and region labeling is obtained from the refined 3D model. Surface modeling techniques are well known in the art, and therefore will not be described herein. Any known or to be known surface modeling technique can be used herein without limitation. In some scenarios, a CAD software program is employed in 1130 which has a surface modeling functionality. An illustrative surface model is shown in
The scanned object's appearance is synthesized in 1132. The synthesis is achieved by fitting and mapping the 3D surface model to the optical data and optionally the radar data from the subject point clouds (optical and/or radar clouds). The phrase “fitting and mapping” as used herein means finding correspondence between points in the 3D surface model and points in the subject point cloud, and optionally modifying the 3D surface model to best fit the subject point clouds. Metrics for certain characteristics of the scanned object (e.g., height, weight, chest circumference, and/or waist circumference) are obtained in 1134 from the setting vector for the refined 3D model. The metrics may optionally be refined (a) using radar data associated with labeled regions of the refined 3D model, and/or (b) based on the geometries of regions labeled in the refined 3D model, as shown by 1136. The synthesized appearance of the scanned object is presented to the operator of the handheld scanner system and/or to a user of another computing device (e.g., computing device 106 of
The synthesized scanned object's appearance and/or metrics (unrefined and/or refined) can optionally be used in 1140 to determine the scanned object's geometric fit to at least one other object (e.g., a shirt or a pair of pants) or to identify at least one other object which fits on the scanned object. In optional 1142, further information is presented to the operator of the handheld scanner system and/or to a user of another computing device (e.g., computing device 106 of
The present solution is not limited to the particulars of
The following operations of the above described method 1100 are considered novel: using the object class to select a 3D parametric model from a plurality of 3D parametric models; finding a setting vector that fits the 3D parametric model to the sensed unstructured data using the setting vector's association with a reference point cloud found by a spatial hash; using a neighborhood of found setting vectors to further refine the shape and appearance of the 3D parametric model; and/or returning an appearance model as well as semantic information for tagged regions in the 3D parametric model.
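By way of a non-limiting sketch (the occupancy-signature hash and cell size below are assumptions, not the particular spatial hash used by the system), a spatial hash may associate reference point clouds, and hence their setting vectors, with the subject point cloud as follows:

```python
import numpy as np

# Sketch only: reduce each cloud to a coarse voxel-occupancy signature; reference clouds
# sharing the most cells with the subject's signature yield the candidate setting vectors.
def occupancy_hash(cloud, cell=0.05):
    cells = np.unique(np.floor(np.asarray(cloud) / cell).astype(int), axis=0)
    return frozenset(map(tuple, cells))

def candidate_vectors(subject_cloud, reference_clouds, setting_vectors, cell=0.05):
    subject_sig = occupancy_hash(subject_cloud, cell)
    scored = []
    for ref_cloud, vector in zip(reference_clouds, setting_vectors):
        overlap = len(subject_sig & occupancy_hash(ref_cloud, cell))
        scored.append((overlap, vector))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [vector for _, vector in scored]     # ordered by goodness of fit
```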
Although the present solution has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In addition, while a particular feature of the present solution may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Thus, the breadth and scope of the present solution should not be limited by any of the above described embodiments. Rather, the scope of the present solution should be defined in accordance with the following claims and their equivalents.
The present application claims the benefit of U.S. Provisional Patent Application having Ser. No. 62/647,114 and filing date Mar. 23, 2018. The foregoing U.S. Provisional Patent Application is incorporated herein by reference in its entirety.