This disclosure generally relates to robotics, and in particular relates to hardware and software for smart robotic sensing.
Artificial intelligence (AI) is the intelligence of machines or software. AI technology is widely used throughout industry, government, and science, in applications such as advanced web search engines, recommendation systems, understanding human speech, self-driving cars, generative or creative tools, and competing at the highest level in strategic games.
Robotics is an interdisciplinary branch of electronics and communication, computer science, and engineering. Robotics involves the design, construction, operation, and use of robots. The goal of robotics is to design machines that can help and assist humans. Robotics integrates the fields of mechanical engineering, electrical engineering, information engineering, mechatronics engineering, electronics, biomedical engineering, computer engineering, control systems engineering, software engineering, mathematics, etc. The field of robotics develops machines that can automate tasks and do various jobs that a human might not be able to do. Certain robots require user input to operate, while other robots function autonomously.
Touch is a sensing modality that may provide rich information about object properties and interactions with the physical environment. Humans and robots both benefit from using touch to perceive and interact with the surrounding environment. However, no existing systems may provide rich, multimodal digital touch-sensing capabilities while retaining the form factor of a human finger. The embodiments disclosed herein may improve the digitization of touch with technological advances embodied in an artificial finger-shaped sensor with enhanced sensing capabilities. In particular embodiments, the artificial fingertip may comprise high-resolution sensors (e.g., approximately 8.3 million taxels) that respond to omnidirectional touch, capture multimodal signals, and use on-device artificial intelligence to process the data in real time. For example, in particular embodiments, evaluations show that the artificial fingertip can resolve spatial features as small as 7 μm, sense normal and shear forces with resolutions of, e.g., 1 mN and 1.3 mN, respectively, perceive vibrations up to, e.g., 9-11 kHz, sense odor, and even sense heat. Furthermore, the on-device AI neural-network accelerator may act as a peripheral nervous system on a robot and mimic the reflex arc found in humans. These results demonstrate that the embodiments disclosed herein may digitize touch with enhanced performance. The embodiments disclosed herein may be applied in fields including robotics (industrial, medical, agricultural, and consumer-level), virtual reality and telepresence, prosthetics, and e-commerce. Although this disclosure describes digitizing a particular modality in a particular manner, this disclosure contemplates digitizing any suitable modality in any suitable manner.
In particular embodiments, a system for touch digitization may comprise a silicone hemispherical dome comprising a surface comprising a reflective silver-film layer. The system may additionally comprise an omnidirectional optical system comprising a lens comprising a plurality of lens elements and an image sensor configured to generate image data from data captured by the lens. In particular embodiments, a first lens element of the plurality of lens elements may be in direct contact with the silicone hemispherical dome without an air gap. The lens may be configured to capture scattering of internal incident light generated by the reflective silver-film layer. The system may also comprise one or more non-image sensors disposed underneath the omnidirectional optical system. The system may further comprise one or more processors and a non-transitory memory coupled to the processors comprising instructions executable by the processors. In particular embodiments, the processors may be operable when executing the instructions to access the image data from the omnidirectional optical system and sensing data from the one or more non-image sensors and generate the touch digitization based on the accessed image and sensing data by one or more machine-learning models.
In particular embodiments, an artificial fingertip for touch digitization may comprise a silicone hemispherical dome. The artificial fingertip may also comprise an omnidirectional optical system comprising a lens comprising a plurality of lens elements and an image sensor configured to generate image data from data captured by the lens. In particular embodiments, a first lens element of the plurality of lens elements may be in direct contact with the silicone hemispherical dome without an air gap. The artificial fingertip may further comprise one or more non-image sensors disposed underneath the omnidirectional optical system.
Certain embodiments disclosed herein may provide one or more technical advantages. A technical advantage of the embodiments may include improved sensitivity to input stimuli by using dynamic lighting with variable wavelength and direction to reconstruct touch-surface topology signals (images). The system disclosed herein moves beyond the traditional Lambertian scattering paradigm toward a surface with a controlled degree of scattering, together with a ground-up approach to optimizing the material properties for the highest sensitivity to spatial features, achieved by developing a new process for chemically growing a thin-film layer of silver onto the fingertip surface. Another technical advantage of the embodiments may include high spatial resolution in capturing minute details on the surface of the fingertip, which allows traditional marker-based methods of capturing normal and shear forces to be bypassed. The embodiments disclosed herein utilize a three-region solid immersion hyperfisheye lens design for touch-digitization systems in a non-airgap configuration against a PDMS material. Due to the design and the specific requirements of non-human imaging, the embodiments disclosed herein are able to maintain superior performance of the modulation transfer function (MTF) through the entire field of the lens, thereby ensuring spatial performance along the tip, side, and edges of the fingertip. In addition, the omnidirectional surface using a single camera and lens system may achieve a full field of view of over 200 degrees. Another technical advantage of the embodiments may include high temporal resolution based on an interchangeable and modular electronic stack-up for touch digitization (vision capture, multimodal capture, on-device AI processing), which combines all electronics and sensing elements into the familiar size and shape of a human thumb. Each individual system of the stack-up can be interchanged with a newer design to reduce the hardware development lifecycle; furthermore, components in the stack-up can be excluded to further reduce the size of the sensor based on the individual requirements of the application. Another technical advantage of the embodiments may include digitizing touch signals for on-device AI processing to provide human-like reflex-arc actions to robotic manipulators: a neural-network processor inside the fingertip performs direct inference on input data, and a processor can provide direct outputs to control a secondary device such as a robotic end effector. Certain embodiments disclosed herein may provide none, some, or all of the above technical advantages. One or more other technical advantages may be readily apparent to one skilled in the art in view of the figures, descriptions, and claims of the present disclosure.
The embodiments disclosed herein are only examples, and the scope of this disclosure is not limited to them. Particular embodiments may include all, some, or none of the components, elements, features, functions, operations, or steps of the embodiments disclosed herein. Embodiments according to the invention are in particular disclosed in the attached claims directed to a method, a storage medium, a system and a computer program product, wherein any feature mentioned in one claim category, e.g. method, can be claimed in another claim category, e.g. system, as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However, any subject matter resulting from a deliberate reference back to any previous claims (in particular multiple dependencies) can be claimed as well, so that any combination of claims and the features thereof are disclosed and can be claimed regardless of the dependencies chosen in the attached claims. The subject-matter which can be claimed comprises not only the combinations of features as set out in the attached claims but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein can be claimed in a separate claim and/or in any combination with any embodiment or feature described or depicted herein or with any of the features of the attached claims.
Of all human senses, touch may be the most critical in how humans interact with the world. Touch may enable humans to measure forces and recognize object properties, e.g., shape, weight, density, textures, friction, elasticity. Touch may also play an important role both in social relationships and in cognitive development. By comparison, the earliest efforts to impart that sense to robots came no closer than crude approximations. No solutions have emerged for digitizing touch with the same rich sensorial spectrum humans take for granted. Toward the advancement of robotic in-hand manipulation, the embodiments disclosed herein may mimic familiar features of the human hand: fingers. The touch digitization in the embodiments disclosed herein may enable intelligent systems to discern significantly higher levels of physical information during environmental interaction.
Digitizing touch may depend on two classes of features, one temporal in nature and one spatial. Temporal features may process information from basic signals of time variation, whereas spatial features may process a discrete multidimensional array of temporal signals. The embodiments disclosed herein combine these methods within a unified platform to improve the capabilities of touch digitization, i.e., a modular, finger-shaped, multimodal tactile sensor with on-device artificial intelligence (AI) capabilities and superhuman performance.
In particular embodiments, the artificial fingertip may comprise a multimodal modular sensor. The multimodal modular sensor may comprise a main body mechanical housing. As an example and not by way of limitation, the main body mechanical housing may have a similar shape and size to the human thumb. In particular embodiments, the silicone hemispherical dome, the omnidirectional optical system, the one or more non-image sensors, the one or more processors, and the non-transitory memory may be disposed in the housing. The multimodal modular sensor may also comprise a soft silicone solid body fingertip. As an example and not by way of limitation, the soft silicone may be comprised of a polydimethylsiloxane (PDMS) material. In other words, the silicone hemispherical dome may be based on a polydimethylsiloxane (PDMS) material. The solid body fingertip may comprise a chemically grown metallic silver reflective layer and an outer layer to protect the reflective layer. The multimodal modular sensor may additionally comprise multimodal electronics. The multimodal electronics may extend past the traditional limits of the vision-based modality, which may be limited by the capture rate of the CMOS. Instead, the optical system disclosed herein may capture at a variable frame rate (e.g., 240 fps, depending on the capability of the CMOS), whereas the multimodal inclusion may allow for capturing temporal data up to 10,000 Hz. This is a system-level contribution that uses off-the-shelf components but assembles them in a manner that allows for directly sampling or capturing signals due to input stimuli on the fingertip surface. In particular embodiments, the multimodal electronics may be enabled by a custom overmolding and molding technology: placing the electronics directly into the mold, injecting silicone, and placing the mold under vacuum to ensure the liquid silicone can enter all portholes and crevices around the off-the-shelf sensors.
In particular embodiments, the one or more non-image sensors may comprise one or more of an inertial measurement unit (IMU) sensor, a microphone, an environmental sensor, a gas sensor, a pressure sensor, or a temperature sensor. The multimodal modular sensor may comprise inertial measurement units (IMUs) in the main housing for processing electronics. The IMU may be rigidly coupled to the enclosure and fingertip. The IMU may be used for sensing vibrations, rotations, and position of the fingertip. The multimodal modular sensor may also comprise MEMS-based microphones in the multimodal sensing printed circuit board (PCB), which may be overmolded directly to the soft silicone solid body fingertip. In one example embodiment, there may be two microphones with a top porthole, and two with a bottom porthole. Two microphones may be digital and two may be analog, differing in their sensing frequency ranges to cover a larger bandwidth. These microphones may provide surface audio textures similar to what a human fingertip would perceive when scratching the surface of an object, sampling object-object or object-environment interactions.
In particular embodiments, the multimodal modular sensor may additionally comprise an environmental sensor in the main housing. Internal air flow may be provided by a micro-sized inlet fan, which creates airflow through the device. The environmental sensor may be capable of sampling local air for air pressure, temperature, moisture, and humidity. The multimodal modular sensor may also comprise a gas sensor in the main housing, supplied by the same fan-driven internal airflow. The gas sensor may identify chemical compounds and compositions of local air samples to provide object state information and for classification of different objects. The multimodal modular sensor may further comprise a pressure/temperature sensor in the multimodal sensing PCB, which may be overmolded directly to the soft silicone solid body fingertip. The pressure/temperature sensor may measure absolute values of compression force being applied to the fingertip and capture heat gradients caused by a heat source on the fingertip and heat flow due to the metallic reflective layer.
In particular embodiments, the multimodal modular sensor may comprise processing electronics. The system may further comprise a stack-up comprising a plurality of printed circuit boards for the one or more processors, the omnidirectional optical system, and a data transfer system, wherein the plurality of printed circuit boards share a common electrical interface and connector stack. In other words, the processing electronics may be in a stack-up that combines, in a limited space, five separate and unique PCBs which share a common electrical interface and connector stack. The processing electronics may comprise a microprocessor, a neural-network accelerator, an image capture system, and a data transfer system.
In particular embodiments, the one or more processors may comprise one or more of a microprocessor or an accelerator. The microprocessor may be responsible for acquiring all the data from the sensors, lightweight processing, and configuration of processing and sampling parameters. In particular embodiments, the one or more machine-learning models may comprise one or more neural-network models. Correspondingly, the one or more processors may comprise one or more neural-network accelerators which are configured for accelerating real-time inference on the accessed image and sensing data by the one or more neural-network models. The neural-network accelerator may have direct access to the microprocessor to obtain a selectable stream of data, whether image data or non-image data. The neural-network accelerator may provide neural-network acceleration for models which can be uploaded based on offline training to perform real-time inference on the input data streams. The neural-network accelerator may also have direct access to providing output control signals to a secondary device. In one example embodiment, the secondary device may comprise a robotic end effector. As an example and not by way of limitation, the secondary device may be an individual finger on a robotic hand, which may allow for rapid reflexes of the finger based on input data. The image capture system may be responsible for capturing image frames from the complementary metal-oxide-semiconductor (CMOS) sensor and configuring the sampling parameters for different frame rates and resolutions. The data transfer system may provide a USB interface to a host computer which combines all multimodal information and image/vision information over a single connector; this connector also powers the device.
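As an illustrative, non-limiting sketch of the reflex-like control path just described, the loop below shows how on-device inference may bypass the host entirely. Every identifier here (read_frame, read_all, infer, command) is a hypothetical placeholder rather than an actual device API; the sketch only illustrates the data flow.

```python
# Hedged sketch of the on-device reflex loop; all names are hypothetical
# placeholders, not an actual device API.

def reflex_loop(camera, sensors, accelerator, effector, force_limit_n=0.5):
    """Run inference on the fingertip and drive the end effector directly,
    avoiding a round trip through the host."""
    while True:
        frame = camera.read_frame()   # image data from the omnidirectional optics
        aux = sensors.read_all()      # IMU, microphones, pressure, gas, ...
        # The accelerator runs a network uploaded after offline training;
        # here it is assumed to regress contact force from the fused inputs.
        force = accelerator.infer(frame, aux)
        if force > force_limit_n:
            # Reflex-like response issued without host involvement.
            effector.command("release")
```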
The embodiments disclosed herein may be used in a variety of applications. As an example and not by way of limitation, these applications may include medical applications such as prosthetics, palpation, sensing and localization (e.g., breast, testicular cancers, etc.), remote surgery, etc. As another example and not by way of limitation, these applications may include advertisements such as e-commerce, capturing the feeling of clothes, understanding the feeling of skin and the application of cosmetics, etc. As yet another example and not by way of limitation, these applications may include agriculture such as fruit/vegetable picking, food quality determination, food handling, etc. As yet another example and not by way of limitation, these applications may include haptics such as using artificial fingertips to capture the physical world accurately.
In deploying a high-end modular research platform for investigating touch and its digitization, the embodiments disclosed herein introduce a novel approach. The platform disclosed herein, identified as an artificial fingertip, may belong to the family of vision-based tactile sensors.
The platform disclosed herein may have an elastomer that serves as the touch-sensing interface, with a subcutaneous camera that measures the deformation of the elastomer through structured light. But in addition to that ability, the platform disclosed herein may be capable of sensing multiple tactile modes (see the accompanying figures).
To effectively capture the nuances in touch interactions with the world, the artificial fingertip disclosed herein may be sensitive in both temporal and spatial domains. These domains may be obtained as modal signals encompassed within the artificial fingertip through visual, audio, vibration, pressure, heat, and gas sensing. Commonly, conventional vision-based touch sensors using off-the-shelf imaging systems may be bound by slow visual capture rates, which reduce the amount of sequential information as a result of frame-encoding time and therefore limit the temporal fidelity of non-static touch interactions encountered during manipulation. Increasing the temporal frequency of the visual system may not be beneficial without an increase in spatial resolution for dynamic movements. The embodiments disclosed herein encompass the touch digitization system in the form of an artificial fingertip with similar geometry to a human finger. The surface of the fingertip may encode touch information from depressions in a reflective layer, of which internal light reflections are captured by the camera. A visual tactile system may be utilized to resolve the minimum possible spatial features presented by an object interacting with the fingertip at high temporal rates. The artificial fingertip disclosed herein may advance spatial, temporal, and multimodal performance through the embodiment of a modular platform for research into the digitization of touch. To achieve the high spatial and temporal performance demonstrated by the artificial fingertip disclosed herein, the embodiments disclosed herein leverage methodological breakthroughs in five different subsystems: elastomer interface, optical system, illumination system, multimodal sensing, and on-device AI processing.
When the reflective fingertip surface layer is subject to impression stimuli, the surface layer material properties may directly relate to the spatial resolving capabilities. The embodiments disclosed herein developed a design-of-experiments technique to identify the six material parameters which affect sensor sensitivity to input stimuli: Rg (fingertip radius), Tc (surface reflective coating layer thickness), Tg (surface layer thickness), h (height), Ec (coating Young's modulus), and Eg (fingertip volume Young's modulus). If Tc and Tg are too thick or exhibit low compliance, a low-pass-filter effect may be evident on discrete object edges. Similarly, if the object is rich in spatial information and fractal dimension, the fingertip surface may resolve fewer features due to local gradients from material compliance. Particular embodiments may avoid specifying any constraints on the coating thickness layer to best capture small input stimuli while maintaining a suitable parameter range for the general size of the fingertip. Developing this layer onto the fingertip surface may involve manual hand painting, airbrushing, or dip-coating techniques. However, while these techniques can produce a touch image, they may be far from optimal, resulting in large coating thicknesses and inconsistent yield from manufacturing variance. The embodiments disclosed herein solve this issue by developing a new chemical deposition technique for growing a silver thin film directly onto the surface of the fingertip, which produces coating thicknesses far smaller than previous methods and thus achieves better sensitivity.
Common visuo-tactile sensors may capture input stimuli at a planar surface, use multiple cameras that are difficult to integrate and process together, or default to common off-the-shelf cameras optimized for human-centered imaging, which result in downgraded optical performance in touch. Particular embodiments may refrain from the use of standard image-sensor features such as automatic exposure control, automatic white balance, and automatic focus, which are designed for responding to changes in the natural environment, as the fingertip chamber disclosed herein may be an enclosed and controlled environment. For modeling an isotropic representation of similar dimensions to a human fingertip, a new approach may be required to optimize the capture of the hemispherical surface. In optimizing for input stimuli from the touch interaction layer, the imaging system disclosed herein may not limit the performance within the finite element method simulation of the material properties. Hence, for example, particular embodiments may determine the optical system requirements to best suit capturing images related to tactile sensing with a CMOS pixel size of 1.1 μm. Parameters may be chosen for converging spot size to increase spatial resolution, intentionally allowing chromatic aberration, introducing a shallow depth of field to allow for defocus proportional to object indentation depth, and removing anti-reflective coatings to allow capture and interpretation of reflections and scattering inside the fingertip. However, such parameters may require a non-standard lens. Therefore, particular embodiments may utilize a custom solid immersion hyperfisheye lens to tackle the unique environment of visuo-tactile sensing, rather than an off-the-shelf lens catering toward general-purpose imaging, thus enabling full control over lens geometry and optical parameters.
The embodiments disclosed herein describe two metric parameters on the illumination performance within the volume: background uniformity, i.e., how evenly the light is distributed, and image-to-background uniformity contrast, i.e., how well impressions on the surface of the fingertip stand out compared to the background. A common approach may be the embodiment of an internal structure which serves as a hemispherical light pipe and provides fingertip rigidity. However, an internal light pipe structure may produce illumination artifacts in the form of glint and hotspots from the convex geometry, which contribute to a degradation in image metrics. To reduce these artifacts, conventional approaches may make use of a textured surface to induce Lambertian scattering of incident light rays. Particular embodiments may model the reflective layer surface properties with controlled degrees of scattering, from polished to Lambertian, where the entire hemispherical surface may act as an integrating sphere, to show that a Lambertian scattering surface may not be the optimal approach to achieve high performance. Particular embodiments may use a rigid solid volume, instead of the more common hollow volume or use of an internal support structure, in conjunction with controlled reflective surface scattering parameters that move away from Lambertian surfaces.
The embodiments disclosed herein disclose a new platform and show that these advancements far outperform conventional visuo-tactile techniques in spatial and force sensitivity. While visual information may provide insight into environmental and object contact, such as textures and surface deformations, this may only provide a subset of fingertip-to-object-environment understanding.
Particular embodiments may further evolve the capabilities of the platform to include sensitivity to non-vision-based modalities. As an example and not by way of limitation, when in contact with the environment, dynamic forces and signals may be experienced: swiping the fingertip across a surface, or the very moment a contact transient or slip occurs. Particular embodiments may capture this information through in-fingertip audio microphones and MEMS-based pressure sensors and show the ability to determine the level of liquid inside an opaque bottle (see the accompanying figures).
For the human reflex arc, quick reaction to input stimuli on the fingertip may benefit from processing in the spinal cord instead of a round trip to the brain. Particular embodiments may utilize a similar local processing response on the artificial fingertip. As an example and not by way of limitation, particular embodiments may include within the form factor of the fingertip a neural-network accelerator to process the sensory reading and allow for direct control, providing actions to a robotic end effector for controlling the phalanges of a robot finger. While this may be a new era of on-device fingertip processing, the embodiments disclosed herein disclose two main effects that contribute to faster response, i.e., latency and jitter. Latency may result from the average time required to process a signal of interest, and jitter may be the variation about that mean time based on system overhead, which may occur due to host processing or bandwidth constraints. Compared to conventional methods of using an artificial fingertip with an external host, where 2× the round-trip latency is required to perform an action, the embodiments disclosed herein show that by processing data directly on device through an onboard neural-network accelerator, a 2× reduction in latency and jitter toward performing an action may be achieved in an example embodiment.
In particular embodiments, a 3D finite element method (FEM) model using COMSOL Multiphysics may be utilized for analyzing and characterizing the fingertip material stack-up. The 3D FEM model may identify the sensitivity and resolution of the sensor. First, a FEM model may be used to identify the key parameters that presented the largest change in sensitivity and resolution. Since the fingertip may be isotropic and rotationally symmetric about the origin, only a quarter of the sensor may be modeled for faster computation, using a multi-layer based model. The multi-layer model may comprise the base gel, polymer, and coating layers.
Particular embodiments may use a particular system to generate nano-mechanical characterization of the fingertip polymer Young's modulus, E. This system may have in-situ high-resolution imaging, dynamic nanoindentation, and a high-precision motion stage with high-resolution force-sensing tips. As an example and not by way of limitation, for characterization, a 30 μN force may be applied with a 10 μm probe tip. The corresponding force-displacement curve may be measured, yielding a Young's modulus of, e.g., E=2.86 MPa. Using the experimental E value, the FEM models may be updated to correct the simulations. Furthermore, with the same force value applied, the maximum displacement, Dmax, may be measured, verifying the simulation: for example, a simulated Dmax=2.1 μm against an experimental measurement of Dmax=2.2 μm, with an example error ≤5%. Additionally, multiple measurements may be taken across varying samples of the fingertip. For example, an average value for E may be measured at E=2.6±0.74 MPa. Both Emean and Estd may be used in detailed analysis for the total E range in the FEM models.
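As an illustrative sanity check of these numbers, and not by way of limitation, a Hertzian-contact estimate may be computed as follows. The Hertz model itself, the assumption that the 10 μm probe tip has a roughly 5 μm radius, and the omission of the Poisson-ratio correction are all simplifying assumptions not specified by the measurement above.

```python
import math

# Hedged sanity check: Hertzian contact for a rigid spherical indenter,
# F = (4/3) * E_eff * sqrt(R) * d**1.5, so E_eff = 3F / (4 sqrt(R) d**1.5).
# Assumes the 10 um probe tip is ~5 um in radius; the actual nanoindenter
# analysis may differ.
F = 30e-6      # applied force, N (30 uN, from the disclosure)
R = 5e-6       # assumed tip radius, m
d = 2.2e-6     # measured maximum displacement, m

E_eff = 3 * F / (4 * math.sqrt(R) * d**1.5)
print(f"E ~ {E_eff / 1e6:.1f} MPa")  # ~3 MPa, consistent with E = 2.86 MPa
```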
Particular embodiments may further employ design-of-experiments techniques to identify the key parameters affecting sensor sensitivity. As an example and not by way of limitation, six different parameters may be used: Rgel (gel radius), Tc (coating layer thickness), Tg (gel layer thickness), h (height), Ec (coating Young's modulus), and Eg (gel Young's modulus). In particular embodiments, materials associated with the silicone hemispherical dome may be determined based on a plurality of material parameters comprising one or more of gel radius, coating layer thickness, gel layer thickness, height, coating Young's modulus, or gel Young's modulus. Entertaining a full factorial design of 6 parameters may lead to 64 models; thus, a quarter-factorial design method may be used to reduce the design to 16 models. Analysis of variance and prediction analysis may identify Ec and Eg as main effects, with interactions with the coating and gel thicknesses. Hence, the parameters height h and gel radius Rgel may be removed from the model. To analyze the effect of gel and coating thickness, Tg and Tc, on the sensor performance, design-of-experiments methods may be used by sweeping the Young's modulus parameters, Ec and Eg, and the thickness parameters, Tg and Tc. For example, for the protective fingertip layer's Young's modulus, values of Ec,g=0.5, 1.0, 3.0, 5.0 MPa may be used. As another example, values of coating thickness Tc=0.1, 0.5, 1.0, 2.0, 3.0 mm and gel thickness Tg=0.5, 1.0, 5.0, 10, 15 mm may be used.
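As an example and not by way of limitation, the quarter-factorial (2^(6-2)) screening design described above may be generated as in the following Python sketch. The particular generators chosen (E=ABC, F=BCD) are one standard resolution-IV option and are an assumption; no specific generators are mandated above, and the mapping of factor names to columns is likewise illustrative.

```python
from itertools import product

# Quarter-factorial 2**(6-2) screening design: 6 two-level factors
# in 16 runs instead of the 64 of a full factorial.
factors = ["Rgel", "Tc", "Tg", "h", "Ec", "Eg"]

runs = []
for a, b, c, d in product((-1, 1), repeat=4):   # full factorial in 4 factors
    e = a * b * c                               # assumed generator E = ABC
    f = b * c * d                               # assumed generator F = BCD
    runs.append(dict(zip(factors, (a, b, c, d, e, f))))

print(len(runs))   # 16 FEM models, each factor at its low (-1) or high (+1) level
```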
Particular embodiments may determine the average and maximum forces which can be applied to the fingertip before the soft hemispherical surface is damaged. Particular embodiments may fix the sensor to a force-torque sensor and apply an increasing force until the fingertip detaches from the body, which may occur at, for example, 40 N and 20 N for normal and shear forces, respectively.
Reflecting on the FEM simulations, Young's modulus may be an important parameter in sensor performance to input stimuli and may require precisely controlled measurement. In addition to nanoindenter measurement, which is point-based measurement, a set of dynamic mechanical thermal analysis (DMTA) measurements may be performed to obtain the global Young's modulus of the gel. With this method, particular embodiments may measure the viscoelastic properties of polymers. During DMTA measurements, an oscillating force may be applied to the material, and its response may be recorded to calculate the viscosity and stiffness of the material. The oscillating stress and strain measurements may be important in determining the viscoelastic properties of the material.
When oscillating force is applied, sinusoidal stress and strain values may be measured. The phase difference between sinusoidal stress and strain may provide information about the viscous and elastic properties of the material. Ideal elastic systems may have a 0° phase angle while viscous systems may have a phase angle of 90°. Additionally, the elastic response of a material corresponds to stored energy and may be captured by the storage modulus, while the viscous response may be considered a loss of energy, captured by the loss modulus. Thus, the overall modulus of the viscoelastic material may be the combination of elastic and viscous components; in other words, the summation of the storage modulus and loss modulus. Another value, tan δ, may be used to compare the viscous and elastic moduli. DMTA may measure the change in the elastic modulus, loss modulus, and tan δ with respect to temperature. As the viscosity of the material is affected by temperature and time, DMTA experiments may usually be performed at different temperatures and frequencies. Particular embodiments may use a common approach to select the operational conditions of the materials. Hence, for ideal sensitivity, the fingertip may be expected to be used at room temperature and low frequency. As such, DMTA measurements may be taken at 25° C. with a frequency of 5 Hz in an example embodiment.
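As an illustrative sketch of the relations just described, the storage modulus, loss modulus, and tan δ may be computed from the measured stress and strain amplitudes and phase lag as follows; the example numbers are illustrative only and are not measurement data from this disclosure.

```python
import math

# Standard DMTA relations: |E*| = stress amplitude / strain amplitude,
# E' = |E*| cos(delta) (storage), E'' = |E*| sin(delta) (loss),
# and tan(delta) = E'' / E'.
def dmta_moduli(stress_amplitude_pa, strain_amplitude, phase_deg):
    e_star = stress_amplitude_pa / strain_amplitude   # complex modulus magnitude
    delta = math.radians(phase_deg)
    e_storage = e_star * math.cos(delta)              # elastic (stored) component
    e_loss = e_star * math.sin(delta)                 # viscous (lost) component
    return e_storage, e_loss, e_loss / e_storage      # tan(delta)

# Purely elastic: phase = 0 gives tan(delta) = 0; purely viscous: phase = 90.
print(dmta_moduli(2.6e4, 0.01, 10.0))   # illustrative numbers only
```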
During fingertip manufacturing, different combinations of polymers with varying shore values may be evaluated. For example, to identify the global Young's modulus and the effect of different gel mixtures, DMTA measurements may be done at 25° C. with a frequency of 5 Hz, shown in Table 1. Fingertip materials with lower Young's modulus may be preferred in order to optimize for higher sensitivity. Particular embodiments may select a fabrication using a particular silicone encapsulating rubber with, e.g., a 0.8:1 (part A to B) ratio for the gel fingertip base material, and another particular cured silicone rubber as a thin protective layer for the thin-film layer.
In particular embodiments, the silicone hemispherical dome may be generated based on manufacturing a mold from aluminum, finishing the mold with a machine polishing pass, preparing the mold for gel casting through a silanization process in a desiccator, preparing a gel material using a cured silicone rubber compound, combining the gel material in a speed mixer under vacuum, casting the gel material into the mold, curing the cast gel material at a first temperature for a first amount of time, and removing a gel hemispherical dome from the mold once the cast gel material is cured. As an example and not by way of limitation, particular embodiments may manufacture the fingertip molds from 6061 aluminum and finish them with a machine polishing pass with a 3 mm diameter tool and a 50 μm step-over. The molds may then be prepared for gel casting through a silanization process in a desiccator with 50 μL silane under vacuum for 30 min. Following this, the gel material may be prepared using a 1:1 ratio of the aforementioned silicone encapsulating rubber and combined in a speed mixer for 3 minutes under vacuum to release any captured air in the sample. The gel material may then be cast into the mold and allowed to cure at 23° C. for 12 h. Once cured, the gel fingertip may be removed from the mold using tweezers for transfer to a glass slide.
In particular embodiments, the reflective silver-film layer may be generated based on preparing a glucose solution by dissolving a first amount of glucose in a second amount of H2O and adding a third amount of KOH, preparing an AgNO3 solution by dissolving a fourth amount of AgNO3 in a fifth amount of H2O and adding a sixth amount of NH3, preparing a plating solution by mixing the glucose solution and AgNO3 solution, cleaning the gel hemispherical dome using oxygen plasma for a second amount of time, activating the gel hemispherical dome in a solution of a seventh amount of SnCl2 in an eighth amount of H2O for a third amount of time, suspending the gel hemispherical dome in the plating solution for a fourth amount of time, rinsing the gel hemispherical dome with H2O, and air drying the gel hemispherical dome. As an example and not by way of limitation, the steps for preparing the thin-film metallic reflective layer on the gel fingertip through silver plating may be as follows. First, a glucose solution may be prepared by dissolving 2.035 g glucose in 160 mL H2O and then adding 0.224 g KOH. This may be set aside, and the AgNO3 solution may be prepared by dissolving 1.02 g AgNO3 in 120 mL H2O and then adding 1.2 g NH3 25%. The plating solution which is used to silver coat the gel fingertip may then be prepared by mixing 2 parts glucose solution, 80 mL total, to 1 part AgNO3 solution, 40 mL total. The silvering solution may then be set to gently stir. Prior to silver coating, the gel fingertip may be cleaned using oxygen plasma for 3 minutes. The gel fingertip may then be activated in a solution of 6.181 g SnCl2 in 98 mL H2O for 10 s. Once the gel fingertip is activated, it may be suspended in the silvering solution for a total of 3 minutes, then rinsed with H2O and air dried. This process may create a silvered reflective layer with 6 μm thickness. For robotics applications, and for increasing resilience against the intrusion of ambient light, particular embodiments may coat the silvered layer in a white or black layer. This layer may be produced by using the aforementioned cured silicone rubber with a part A to part B mixing ratio of 1:1, adding 3% silicone color pigments to part A. Part B of the cured silicone rubber may then be mixed in by weight according to the mixing ratio specified previously and then mixed in the speed mixer for 3 minutes under vacuum. The silvered gel fingertip may then be dipped into the pigmented cured silicone rubber and set to cure for 6 hours.
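As an example and not by way of limitation, the example recipe above may be scaled to other plating volumes with the following bookkeeping sketch; the function is purely illustrative, only restating the stated quantities and the 2:1 glucose-to-AgNO3 mixing ratio, and is not part of the chemistry itself.

```python
# Illustrative helper scaling the silver-plating recipe to a target
# plating-solution volume, preserving the 2:1 glucose:AgNO3 mix.
def plating_recipe(total_ml=120.0):
    glucose_ml = total_ml * 2 / 3          # 2 parts glucose solution
    agno3_ml = total_ml * 1 / 3            # 1 part AgNO3 solution
    return {
        "glucose_g": 2.035 * glucose_ml / 160.0,   # 2.035 g per 160 mL H2O
        "KOH_g": 0.224 * glucose_ml / 160.0,
        "AgNO3_g": 1.02 * agno3_ml / 120.0,        # 1.02 g per 120 mL H2O
        "NH3_25pct_g": 1.2 * agno3_ml / 120.0,
        "glucose_solution_ml": glucose_ml,
        "agno3_solution_ml": agno3_ml,
    }

print(plating_recipe(120.0))   # reproduces the 80 mL + 40 mL example above
```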
Common vision-based tactile sensors may make use of a static illumination configuration, whereas some other sensors may use a single light color with colored acrylic to simulate multiple colors. Static illumination may not be ideal for promoting a modular system. Rather, the illumination system should adapt to the needs of extracting information from the touch surface. Some conventional tactile sensors used a gel coated with a Lambertian scattering layer, in which volume illumination may produce an image by means of scattering light off the surface and into the vision system. In the case of the monolithic hemispherical gel dome used in the artificial fingertip disclosed herein, the embodiments disclosed herein determine that Lambertian scattering may not be ideal for producing and optimizing force and spatial sensitivity. Additionally, the embodiments disclosed herein introduce a dynamic illumination system that provides volume illumination with configurable wavelength, intensity, and positioning. As an example and not by way of limitation, the illumination system may comprise 8 fully controllable RGB LEDs that emit Lambertian diffuse light, equally spaced around a circle of radius 9 mm.
Using a Gaussian scatter distribution, particular embodiments may model a range of scatter. The scattering parameter may be chosen to achieve half-width-half-max angles, α, of the bi-directional scatter distribution function (BSDF) at normal incidence from, e.g., α=1° to 25°, along with a Lambertian scattering model.
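As an illustrative sketch, for a Gaussian scatter lobe proportional to exp(−θ²/2σ²), the width parameter σ follows from the half-width-half-max angle α via σ = α/√(2 ln 2). The exact BSDF parameterization used in the optical simulation is not specified above, so this mapping is an assumption.

```python
import math

# Assumed Gaussian lobe exp(-theta**2 / (2 sigma**2)): the half-width-half-max
# angle alpha satisfies exp(-alpha**2 / (2 sigma**2)) = 1/2, giving
# sigma = alpha / sqrt(2 ln 2).
def sigma_from_hwhm(alpha_deg):
    return alpha_deg / math.sqrt(2 * math.log(2))

for alpha in (1, 5, 10, 20, 25):     # the swept HWHM angles, in degrees
    print(alpha, round(sigma_from_hwhm(alpha), 2))
```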
With a fully Lambertian scattering model, the hemispherical surface of the gel fingertip may act as an integrating sphere. While Lambertian scattering provides uniform background illumination, the high scattering illumination from nearby interactions may reduce the overall indentation contrast. The embodiments disclosed herein optimized for high image contrast while maintaining uniform background illumination, to better image impressions produced by gel indentations and minimize the amount of glints that would saturate the image sensor.
Two non-uniformity metrics were evaluated over the fingertip hemispherical surface: Std/Mean and (Max−Min)/Mean. The embodiments disclosed herein demonstrated that low scatter yields images with a large variation in image signal, requiring the camera to handle high dynamic range. If the image is allowed to saturate in order to resolve the variations due to the indentations, the saturated areas of the image may be lost. Thus, in these cases the stray light may be more likely to cause objectionable artifacts. The contrast in the image caused by spherical indentations may be high in certain areas, due to bright glint reflections, but areas with large gradients in the background may make the indentations hard to detect. High scatter may give images with low variation in image signal, and no areas may be lost due to saturation. In the image caused by spherical indentations, the contrast may be low in certain areas, but the uniform background may make the indentations easier to detect.
The embodiments disclosed herein define a contrast-to-noise ratio (CNR) metric and study three regions of interest on the hemispherical surface for background uniformity noise and indentation contrast. Plotting the calculated CNR across the hemispherical surface for the different scatter angles, it is observed that the CNR is generally higher for less scatter, but the CNR is more uniform across the FOV for more scatter. Therefore, the embodiments disclosed herein determine that for a hemispherical fingertip surface the desired texture scattering profile may be constrained between, e.g., half-width-half-max angles of 20° to 25°.
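As an example and not by way of limitation, the two non-uniformity metrics and a CNR of the general kind described above may be computed as in the following sketch. The precise CNR definition is not given above, so the common form used here (indentation-to-background contrast divided by background standard deviation) is an assumption.

```python
import numpy as np

# Background non-uniformity metrics evaluated over the hemispherical surface.
def uniformity_metrics(background):
    b = background.astype(np.float64)
    return b.std() / b.mean(), (b.max() - b.min()) / b.mean()

# Assumed CNR form: mean contrast of the indentation region over the
# standard deviation (noise) of the background region.
def cnr(indentation_roi, background_roi):
    contrast = abs(indentation_roi.float().mean() if hasattr(indentation_roi, "float")
                   else indentation_roi.mean() - background_roi.mean())
    contrast = abs(indentation_roi.mean() - background_roi.mean())
    return contrast / background_roi.std()

# Usage on synthetic data: a darker indentation patch on a uniform background.
rng = np.random.default_rng(0)
bg = 100 + rng.normal(0, 2, size=(64, 64))
indent = 80 + rng.normal(0, 2, size=(16, 16))
print(uniformity_metrics(bg), cnr(indent, bg))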
The embodiments disclosed herein comprise a platform based on modular principles to provide an omnidirectional vision-based, multimodal sensing system and on-device AI capabilities. Particular embodiments may achieve modularity through isolating each part of the system into an electronic assembly with a small surface area. Specifically, these modules may include the sensing, optics, vision capture, processing, and communications sub-systems shown in the accompanying figures.
In particular embodiments, the system disclosed herein may provide communications with the host device over the USB 3.0 standard interface. Three separate streams may be provided for data transfer, supporting video, audio, and multimodal data. For example, these streams may collectively output at rates up to a maximum of 148 MBps, depending on the configuration sent to the device. In particular embodiments, tactile sensors may be used in open-loop control, providing information to the host device for processing and additional actions to manipulators. In particular embodiments, edge AI may be added at the fingertip for the following reasons. First, edge AI may help create a latent representation of the data and reduce the overall bandwidth sent to the host device. Second, edge AI may help enable fast local decisions for transmitting actions to manipulators. Third, edge AI may help improve the overall latency of the system while reducing variance in jitter, which is the change in latency. Particular embodiments may model the tactile fingertip system with a manipulator, where both systems may be connected to the host device in a star configuration. This may result in decisions and actions derived from tactile information being processed through the host device and disseminated to the manipulators. For more data-intensive designs capturing data from multiple fingers at once, this arrangement may result in unstable control schemes where information and action latency cannot be guaranteed. To accommodate and expand the terrain of tactile sensing research, particular embodiments may integrate a particular neural-network accelerator, for example, a 9-core RISC-V compute cluster with AI acceleration, for on-device processing of selected data streams.
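As a worked example, the stated 148 MBps ceiling can be checked against the vision stream alone, taking the 640×480 sensor resolution and 240 fps capture rate noted elsewhere in this disclosure; the 8-bit raw readout is an assumption.

```python
# Bandwidth budget sanity check for the vision stream (8-bit raw assumed).
width, height, bytes_per_px, fps = 640, 480, 1, 240
video_mbps = width * height * bytes_per_px * fps / 1e6
print(f"video: {video_mbps:.1f} MBps of 148 MBps budget")   # ~73.7 MBps,
# leaving headroom for the audio and multimodal streams on the same link.
```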
Following high-level abstractions of the human reflex arc, the embodiments disclosed herein develop a fast reflex-like control loop using edge AI for local processing. The conventional paradigm of transferring the sensory input to a central control computer for processing, then sending back the control signals, may require high bandwidth while introducing communication latency. In contrast, the paradigm disclosed herein may locally process the sensory input inside the fingertip using edge AI. This may allow drastic reductions in the required bandwidth while greatly reducing communication latency and jitter. The embodiments disclosed herein performed an experimental comparison of these two paradigms, shown in Table 2, by measuring the end-to-end latency of the systems using a PCI-e based precision time measurement tool. The embodiments disclosed herein evaluated this experiment on a Linux machine with 64 GB memory and a GPU. First, to ensure granularity in the measurements, each section of the system was isolated, and samples were collected in repeated trials. Second, the embodiments disclosed herein verified these results by subjecting the entire system to repeated measurements and comparing the timing results to the sum of the isolated components. This determined the areas that produce deterministic timing results, as well as highlighted the areas that sustained increased latency and jitter. Furthermore, these results indicated areas of performance improvements and design for future tactile sensors. The results for the entire control loop show how the edge AI paradigm results in a reduction of latency from 4 ms to 1 ms with a desirable smaller variance. In particular embodiments, appropriate edge-AI processing may be extended to further exploit the sequential nature of the camera FIFO memory to parallelize the data capture with the processing, thus yielding even lower latency. In this case, instead of processing the entire image, selected horizontal lines may be sent for processing in the configured region of interest. This may be applicable when touch interactions are most likely to appear in certain regions on the omnidirectional fingertip disclosed herein. The system disclosed herein may support this region-of-interest data output selection for increased resolution and image-capture frequency.
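As an illustrative sketch, and not a reproduction of the PCI-e precision time measurement tool actually used, the per-stage latency and jitter measurement may be mocked up as follows, with time.perf_counter_ns() standing in for the hardware timer and stage_fn standing in for any isolated pipeline stage.

```python
import statistics
import time

# Hedged harness: time an isolated pipeline stage repeatedly and report mean
# latency and jitter (standard deviation), mirroring the isolate-then-sum
# methodology described above.
def measure(stage_fn, trials=1000):
    samples_ms = []
    for _ in range(trials):
        t0 = time.perf_counter_ns()
        stage_fn()                                   # one isolated stage
        samples_ms.append((time.perf_counter_ns() - t0) / 1e6)
    return statistics.mean(samples_ms), statistics.stdev(samples_ms)
```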
The output of the sub-sampling module 1110 may be input to the SPI transfer module 1114. The output of the SPI transfer module 1114 may be input to an on-device inference module 1116 of the host inference module 1118. The output of the on-device inference module 1116 may be input to the finger action transfer module 1120 based on I2C to finger transfer. The output of the finger action transfer module 1120 may be used to generate a finger action 1122 for an Allegro Hand 1124a to execute.
In particular embodiments, the output of the image buffer 1108 may also be provided to a USB transfer module 1126 of the artificial fingertip (host) 1128. The output of the USB transfer module 1126 may be input to the sub-sampling module 1130 of the host inference module 1132. The output of the sub-sampling module 1130 may be input to the inference module 1134. The output of the inference module 1134 may be input to the finger action transfer module 1136 based on USB-CAN. The output of the finger action transfer module 1136 may go through a palm processing module 1138 of the Allegro Hand 1124b. The output of the palm processing module 1138 may be input to an I2C palm-to-finger transfer module 1140, the output of which may be used to generate a finger action 1142 for the Allegro Hand 1124b to execute.
The embodiments disclosed herein studied the effects of the vision system because this may be the most common modality used in touch sensors, with impacts on overall system latency for host and on-device configurations. The constraining factor on system latency between real-world input and data-processing input is the capture rate of the vision system. This may be limited by the frame rate, which may impose a delay of 1/fps, and the internal processing of the image-signal processor. For this reason, for example, particular embodiments may incorporate a CMOS sensor with 240 fps and a pixel size of 1.1 μm in the system disclosed herein, which may yield a shorter delay of 4.17 ms as opposed to conventional sensors, which may operate at 60 Hz and thus have delays no shorter than 16.7 ms.
The embodiments disclosed herein evaluated the inference latency with two deep neural networks, an MLP and MobileNetV2, for two scenarios: on-device and host inference. The two largest sources of latency arise from the transfer of tactile data from device to host and the transfer of action data from host to robotic end effector. For example, within the available headroom between the differences in pipeline latency, Tlatency, an upper limit of Tlatency ≤2.463 ms was established. With an MLP-based network, the embodiments disclosed herein increased the layer depth and observed the latency cost for both scenarios. Table 3 shows an approximately 4× decrease in action latency for dynamic tasks which involve high-velocity movements, thereby reducing the total time to action to less than 1 ms.
Observing a more suitable use case for the tactile research domain, the embodiments disclosed herein deployed a MobileNetV2 model and determined the total system pipeline latency.
However, in practical robotics environments the host system may be running a plethora of control and processing applications, with additional overhead of communication between other sensors and devices that introduces overhead to Tlatency of, e.g., 1.2 ms. Comparing this to a conventional artificial fingertip, the embodiments disclosed herein observed an overhead of, e.g., 4.7 ms. These differences may be attributed to the frame rate of the artificial fingertip disclosed herein, e.g., 240 fps, and data transfer over USB 3.0, whereas the example conventional tactile sensor may be limited to 60 fps using USB 2.0. The system disclosed herein may enable on-device tactile inference with low-latency control, with the ability to exert reflex-like control of the device to which the system is connected, as well as providing to the host abstractions of lower-level touch signals. An example may be training an on-device model to regress force from multimodal data to introduce touch and manipulation force limits for objects. Another example may be using the on-device AI capabilities to recognize slip and, with low latency, provide actions to the robotic end effector to reconfigure the grasp.
The embodiments disclosed herein design a controllable robot indenter capable of applying precisely measured 3-axis force at any spatial position on the sensor.
The embodiments disclosed herein start with normal-force collection. For example, for high precision, particular embodiments may use a single-axis force sensor that can measure up to 250 mN. As another example, for each region, the robotic indenter may spatially sample 0.5 mm-spaced grid points on the tangential plane. As yet another example, for each point, the probe may move perpendicular to the plane, pressing into the sensor until the normal force reaches 200 mN. During the contact between the probe and gel (defined as normal force Fnorm >0.2 mN), both sensor images and measured normal force may be collected synchronously. The embodiments disclosed herein collect about 550 image-force pairs per spatial point. For a 7 mm×6 mm region, the embodiments disclosed herein obtain approximately 12,000 points. This point data is randomly split into training (70%) and testing (30%) sets.
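As an example and not by way of limitation, the 0.5 mm sampling grid over a 7 mm×6 mm region and the random 70/30 split may be sketched as follows; the array names, the fixed random seed, and splitting at the level of collected samples are all illustrative choices.

```python
import numpy as np

# Spatial sampling grid: 0.5 mm spacing over a 7 mm x 6 mm region
# (spacing and region size from the text above).
xs = np.arange(0.0, 7.0 + 0.5, 0.5)        # 15 positions along 7 mm
ys = np.arange(0.0, 6.0 + 0.5, 0.5)        # 13 positions along 6 mm
grid = [(x, y) for x in xs for y in ys]    # tangential-plane grid points

# Random 70/30 train/test split over the collected point data.
rng = np.random.default_rng(0)
idx = rng.permutation(len(grid))
n_train = int(0.7 * len(idx))
train_idx, test_idx = idx[:n_train], idx[n_train:]
print(len(grid), len(train_idx), len(test_idx))
```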
For shear-force data collection, particular embodiments may select a 3-axis force sensor to simultaneously measure normal force and shear force. Particular embodiments may apply sufficient friction while varying shear force.
Contact-force prediction on vision-based tactile sensors such as the system disclosed herein may be achieved using an image-to-force regression model. The model may be calibrated from reference data. Once calibrated, particular embodiments may evaluate the sensor-model combination as a system for force-sensing performance on a testing dataset. The embodiments disclosed herein collected the dataset for training and evaluating the model to benchmark normal- and shear-force-sensing performance. Particular embodiments may use a modified ResNet50 deep neural network for the image-to-force regression model. For example, the network may take an input image of 224×224×3 and output 1024-way object-classification probabilities. Particular embodiments may replace the classification head with a scalar-output linear layer predicting the force. Particular embodiments may use mean-square error as the loss, then optimize with Adam with an initial learning-rate search. In one example embodiment, the raw images from the sensors are 640×480, which may be downscaled to 224×224 with 20-pixel spatial jitter to improve spatial invariance. Particular embodiments may pool training data from all three regions to train a single model and obtain the prediction performance (median error) breakdown by regions as described in the accompanying figures.
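As an illustrative PyTorch sketch of the regression model just described: the scalar head, mean-square-error loss, and Adam optimizer follow the text above, while the specific learning rate shown is an assumption standing in for the initial learning-rate search, and training from scratch (no pretrained weights) is likewise an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet50 backbone with the classification head replaced by a
# scalar-output linear layer for image-to-force regression.
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)   # scalar force prediction

criterion = nn.MSELoss()                        # mean-square error, per the text
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed lr

def train_step(images, forces):
    """images: (B, 3, 224, 224) downscaled sensor frames (with spatial jitter
    applied upstream); forces: (B, 1) reference forces from the force sensor."""
    optimizer.zero_grad()
    loss = criterion(model(images), forces)
    loss.backward()
    optimizer.step()
    return loss.item()
```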
The embodiments disclosed herein evaluated additional normal-force resolution performance with two kinds of gel surface finish: specular and Lambertian. Lambertian surface scattering, typically considered preferable for vision-based tactile sensors, was outperformed by its specular counterpart. This may come from the enhancement of surface texture contrast due to specular reflection, which helps the imaging system track gel deformation.
Conventionally, the general view has held that a tracking pattern (e.g., dots) may be required for shear-force measurement. However, the optical-flow result reported above suggests that this requirement may be relaxed, owing to the increasing resolution and quality of images. Such advancements may facilitate using the natural fingertip surface texture to observe gel deformation and, in turn, to estimate shear force, as sketched below.
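As an illustrative sketch of pattern-free shear estimation from natural surface texture, dense optical flow may be computed between consecutive tactile frames; the use of OpenCV's Farneback algorithm here is an assumption for illustration, not necessarily the method of this disclosure.

```python
import cv2
import numpy as np

def mean_surface_displacement(prev_gray: np.ndarray,
                              curr_gray: np.ndarray) -> np.ndarray:
    """Estimate gel surface motion between two grayscale tactile frames
    using dense optical flow on the natural surface texture (no dot
    pattern). Returns the mean (dx, dy) displacement in pixels, a proxy
    that a calibrated model could map to shear force."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=21,
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)
    return flow.reshape(-1, 2).mean(axis=0)
```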
Particular embodiments may establish a modality within the artificial fingertip to determine object state and obtain clues for object classification. Particular embodiments may identify two key performance metrics for fingertip gas sensing: accuracy and signal-acquisition time. The embodiments disclosed herein observe 6 different materials, from liquid to solid, commonly found in a household environment. These materials are coffee powder, liquid coffee, a nondescript rubber material, cheese, and spreads of soap and of butter on a surface. All the materials were sampled at room temperature with a robotic arm and the disclosed artificial fingertip approaching the samples to near contact, within 1 cm, for a duration of 90 s. The embodiments disclosed herein record multimodal data and isolate the humidity, temperature, pressure, and gas oxidation resistance datapoints at the maximum output frequency of each sampling modality. Over 100 approaches to each material are collected during a 3-hour sampling period. Between approaches, the embodiments disclosed herein sample air from the local environment. The raw data for the modalities of interest listed above are provided as inputs to a multi-layer perceptron network with a single 64-node hidden layer, as sketched below. The embodiments disclosed herein train the network with cross-entropy loss using an Adam optimizer with learning rate 0.1. The embodiments disclosed herein show that the final accuracy of the model is not sensitive to the size of the hidden layer or the learning rate. For example, the embodiments disclosed herein show a classification accuracy of 91% across these 6 materials. Furthermore, as another example, the embodiments disclosed herein show the signal-acquisition time required to reach 66% accuracy.
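A minimal sketch of the described network, assuming PyTorch: four input channels (humidity, temperature, pressure, gas oxidation resistance), a single 64-node hidden layer, six output classes, cross-entropy loss, and Adam with learning rate 0.1; the ReLU activation is an assumption.

```python
import torch
import torch.nn as nn

# Multi-layer perceptron over 4 sensor channels with a single
# 64-node hidden layer and 6 material classes.
model = nn.Sequential(
    nn.Linear(4, 64),
    nn.ReLU(),          # activation choice is an assumption
    nn.Linear(64, 6),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

def train_step(x: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step over a batch of raw sensor readings."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```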
The embodiments disclosed herein evaluate the performance of the disclosed artificial fingertip with respect to spatial resolution, shear and normal forces, illumination, vibrations, heat, and local gas sensitivity.
The embodiments disclosed herein model the fingertip surface as a two-layer stack formed by an external diffusive material adhered to an internal reflective thin film, which is grown onto the non-rigid solid silicone body of the fingertip. In other words, the silicone hemispherical dome may further comprise a protective diffusive layer coated over the reflective silver-film layer. The embodiments disclosed herein then explore the effects of the mechanical properties and texture of the non-rigid solid silicone surface and the degree of controlled light scattering to find an optimal trade-off between background uniformity and image contrast. The embodiments disclosed herein show that increasing the controlled surface-texture scatter from 1-degree scattering toward Lambertian scattering results in an increase in background illumination uniformity but, near the Lambertian limit, a decrease in image impression contrast. With low degrees of scattering, intense hotspot artifacts may dominate the background, whereas as the degree of scattering approaches Lambertian scattering, these artifacts may decrease along with a decrease in image contrast, which may directly reduce sensitivity to impression stimuli. With little or no scatter, as with a polished surface, minimal background illumination may be present, so detection relies on shadows created by indentations against the fingertip surface; furthermore, glint reflections off such indentations may be minimal to non-existent and may not produce a consistent appearance across the surface. Conversely, with the conventional approach of visual-tactile sensors using Lambertian scattering surfaces, the embodiments disclosed herein show that the hemispherical sensing surface may act as an integrating sphere, in which shadows cast by direct illumination striking the indentations are washed out by scattered illumination from other areas; even when imaging occurs at far off-axis angles, their contrast may be low. The embodiments disclosed herein therefore introduce a controlled degree of scattering with which an optimized uniform background illumination may be achieved that lends itself well to contrast between indentations and the surrounding surface; furthermore, all indentations are imaged (see
The embodiments disclosed herein evaluate the normal-force sensitivity by first collecting tuples of normal forces applied by a micro-indenter and corresponding outputs from the sensors, and then training a deep-learning model on this dataset. For example, the trained model (see
To carry out spatial-resolution evaluations, the embodiments disclosed herein define the spatial resolution of an artificial fingertip sensor as the minimum feature size that can be resolved with an MTF ≥0.5; this may be determined by how well contrast is preserved, quantified in line pairs per millimeter. The embodiments disclosed herein first simulate the imaging system from the design, which yields that on-axis contacts are resolvable for features of size ≥6 um for region 1, ≥8 um for region 2, and ≥22 um for region 3. The embodiments disclosed herein then validate these results by collecting data with a two-pronged micro-indenter depressed onto the fingertip, varying the distance between the two prongs and observing the taxel intensity line profile; both the visual validation and the inspection of the taxel intensity profile confirmed that the embodiments disclosed herein may clearly distinguish features as small as 7 um for region 1 (see
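For illustration, the contrast preserved in a taxel intensity line profile may be quantified as Michelson contrast and compared against the 0.5 criterion; the choice of Michelson contrast as the metric is an assumption for this sketch.

```python
import numpy as np

def profile_contrast(profile: np.ndarray) -> float:
    """Michelson contrast of a taxel intensity line profile across a
    two-pronged indentation; a feature is taken to be resolved when
    this value is >= 0.5, per the criterion above. Using Michelson
    contrast here is an illustrative assumption."""
    i_max, i_min = float(profile.max()), float(profile.min())
    return (i_max - i_min) / (i_max + i_min)

# Example: two dark prong impressions separated by a brighter ridge.
profile = np.array([200, 80, 150, 80, 200], dtype=float)
print(f"contrast = {profile_contrast(profile):.2f}")  # ~0.43 -> unresolved
```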
Multimodal information such as vibrations up to 10 kHz, auditory cues, and sensitivity to heat and smells may play an important role in human touch. However, typical vision-based tactile sensors may not have a broad enough range of multimodal capabilities to capture this information, or may operate at lower sensing frequencies such as 60 Hz. Even with the fast camera of the disclosed artificial fingertip, which operates at, e.g., 240 Hz, highly dynamic movements may not be fully captured. The embodiments disclosed herein evaluate capturing vibrations up to 10 kHz, which may be enough to distinguish between different materials upon a simple light sliding of the finger. Furthermore, the embodiments disclosed herein show that these multimodal features can be used to detect the amount of liquid inside a bottle by simply tapping it with a fingertip (see
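As an illustrative sketch, the spectral content of a tap or slide recording may be examined with a simple FFT; the sampling rate, windowing, and peak-picking rule below are assumptions, not parameters from this disclosure.

```python
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate_hz: float) -> float:
    """Return the strongest spectral component of a vibration recording;
    shifts in this spectrum could distinguish materials or fill levels.
    The specific decision rule is an illustrative assumption."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    spectrum[0] = 0.0  # ignore the DC component
    return float(freqs[np.argmax(spectrum)])

# Illustrative use at an assumed 20 kHz sampling rate (enough headroom
# for ~10 kHz content): a decaying 3 kHz ring mimicking a bottle tap.
fs = 20_000.0
t = np.arange(0, 0.1, 1.0 / fs)
tap = np.sin(2 * np.pi * 3_000 * t) * np.exp(-t * 50)
print(dominant_frequency(tap, fs))  # ~3000.0
```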
As inspired by the human reflex arc, the embodiments disclosed herein demonstrate a fast reflex-like control loop using the on-device AI neural-network accelerator for local processing. Compared to conventional sensors using an external computer for processing, on-device processing on the artificial fingertip disclosed herein may reduce latency, for example, from 6 ms to 1.2 ms (see
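For illustration only, the two processing paths may be contrasted as follows; all function names are hypothetical stubs, and the latency figures in the comments are the example values from the text rather than outputs of this code.

```python
# Conceptual contrast between host-side and on-device processing.
def transfer(data):
    """Stub for moving data over a transport link such as USB."""
    return data

def infer(frame):
    """Stub for neural-network inference on a tactile frame."""
    return "hold"  # placeholder action

def host_side_path(frame):
    """Conventional path: sensor -> host -> inference -> actuator
    (example end-to-end latency: ~6 ms)."""
    payload = transfer(frame)   # sensor to host
    action = infer(payload)     # inference on the host
    return transfer(action)     # command back to the device

def on_device_path(frame):
    """Reflex-arc-style path: the on-device accelerator acts locally,
    avoiding the host round trip (example latency: ~1.2 ms)."""
    return infer(frame)
```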
The embodiments disclosed herein may advance the state of artificial fingertip sensing towards digitizing fingertip interactions between the environment and objects. The embodiments disclosed herein disclose an artificial fingertip that may offer greater spatial and force sensitivity than conventional methods, with the additional technical advantages of multimodal sensing features and local processing ability. Experimental results demonstrate the digitization of touch with capabilities that outperform a human fingertip. The richness of touch digitized by the disclosed modular platform may open promising new avenues for studying the nature of touch in humans and investigating key questions around the digitization and processing of touch as a sensory modality. Moreover, the embodiments disclosed herein may open the door to wider adoption of touch sensors beyond conventional niche fields: in robotics, to improve sensing and manipulation capabilities, with benefits for applications in manufacturing and logistics, medical robotics, agricultural robotics, and consumer-level robotics; in artificial intelligence, to investigate the learning of appropriate tactile and multimodal representations and corresponding computational models that can better exploit the active, spatial, and temporal nature of touch. Further potential applications may include virtual reality and telepresence, prosthesis, and e-commerce.
Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context.
The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.
This application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 63/383,069, filed 9 Nov. 2022, which is incorporated herein by reference.
Number | Date | Country
---|---|---
63383069 | Nov 2022 | US