Various embodiments of the present technology generally relate to robotics and prosthetics. More specifically, some embodiments of the present technology relate to multi-modal fingertip sensors with proximity, contact, and force localization capabilities.
Traditional tactile sensors in both the robotics and the prosthetics fields still face many barriers to integration into self-contained prosthetic hands. In robotics, contact information is useful for a variety of grasping-related tasks such as object identification through haptic exploration/palpation and object manipulation that involves gentle interaction. Proximity information is used primarily for pre-grasp improvement, reactive grasping, and point-cloud construction of objects. Dynamic force patterns are useful in detecting slip and other such disturbances from grasped objects, as well as for providing sensory feedback to users of prosthetic devices. This in turn informs the grasp stability associated with an object and allows reactions to unpredicted disturbances. The ability to estimate the position and orientation of an object in hand is an important skill for effective object manipulation.
Traditional robotic and prosthetic sensors present a number of challenges and inefficiencies. For example, traditional tactile sensors are unable to detect the spatial location of loads and the angle of incidence of the force, and cannot detect zero-force contact/release events. Pre-shaping the prehensor in advance of making contact with an object is not possible without proximity information. Thus, it can be difficult to create biomimetic sensory-feedback paradigms like Discrete Event Sensory Control (DESC). It is with respect to these and other problems that embodiments of the present invention have been made.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Various embodiments of the present technology generally relate to robotics and prosthetics. More specifically, some embodiments of the present technology relate to multi-modal fingertip sensors with proximity, contact, and force localization capabilities. In some embodiments, a fingertip sensor can include a proximity sensor, a pressure sensor, a circuit with various digital electronics, a viscoelastic compressible material, and/or other components. The proximity sensor (e.g., an infrared emitter-detector) can be used to detect the distance from the proximity sensor to an object, produce a proximity signal, and detect initial contact. The pressure sensor (e.g., a barometer) can be used to detect contact with the object and produce a pressure signal indicative of the force being applied. The pressure sensor may provide new readings at a much slower rate than the proximity sensor. For example, the pressure sensor in some embodiments may only provide a new reading every half second while the IR sensor can be sampled at up to 1 kHz. The circuit with digital electronics can be configured to receive the proximity signal from the proximity sensor and the pressure signal from the pressure sensor to identify the spatial position and angular orientation of the object relative to the fingertip sensor. The viscoelastic compressible material can enclose the proximity sensor, the pressure sensor, and the circuit.
Embodiments of the present invention also include computer-readable storage media containing sets of instructions to cause one or more processors to perform the methods, variations of the methods, and other operations described herein.
While multiple embodiments are disclosed, still other embodiments of the present invention will become apparent to those skilled in the art from the following Detailed Description, which shows and describes illustrative embodiments of the invention. As will be realized, the invention is capable of modifications in various aspects, all without departing from the scope of the present invention. Accordingly, the drawings and Detailed Description are to be regarded as illustrative in nature and not restrictive.
Embodiments of the present technology will be described and explained through the use of the accompanying drawings.
The drawings have not necessarily been drawn to scale. Similarly, some components and/or operations may be separated into different blocks or combined into a single block for the purposes of discussion of some of the embodiments of the present technology. Moreover, while the technology is amenable to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and are described in detail below. The intention, however, is not to limit the technology to the particular embodiments described. On the contrary, the technology is intended to cover all modifications, equivalents, and alternatives falling within the scope of the technology as defined by the appended claims.
Various embodiments of the present technology generally relate to robotics and prosthetics. More specifically, some embodiments of the present technology relate to multi-modal fingertip sensors with proximity, contact, and force localization capabilities. Numerous tactile sensors have been designed with application to both robotics and prosthetics. However, many barriers remain for these sensors to be integrated into self-contained prosthetic hands. Some of these barriers include the digital communication systems, the multiplexing of multiple sensors, and the wiring of the sensors throughout the device. Simple off-the-shelf pressure sensors (e.g., FlexiForce, Tekscan Inc., South Boston) are widely used but lack the ability to detect the spatial location of loads and the angle of incidence of the force. None of the traditional sensors can detect zero-force contact/release events, which are important signals for the recreation of biomimetic sensory-feedback paradigms like Discrete Event Sensory Control (DESC), or the proximity of objects with respect to the prehensor.
In robotics, contact information is useful for a variety of grasping-related tasks such as object identification through haptic exploration/palpation and object manipulation that involves gentle interaction. Proximity information is used primarily for pre-grasp improvement, reactive grasping, and point-cloud construction of objects. Dynamic force patterns are useful in detecting slip and other such disturbances from grasped objects. This in turn informs the grasp stability associated with an object and allows reactions to unpredicted disturbances. The ability to estimate the position and orientation of an object in hand is an important skill for effective object manipulation. However, only a few sensors combine all of this information into a single package, and few if any have been effectively translated to address the unique challenges of prosthetic limb design.
In contrast, various embodiments of the present technology include a sensor (e.g., for a prosthetic or robotic fingertip) which integrates both an infrared emitter-detector and barometer to form a proximity, contact, and force sensor (see, e.g.,
Various embodiments of the present technology provide for a novel multi-modal tactile sensor which comprises an infrared proximity sensor and a barometric pressure sensor embedded in an elastomer layer. Signals from both of these sensors can be fused to measure proximity (0-10 mm), contact (0 N), and force (0-50 N) and to localize impact at five spatial locations and three angles of incidence. Gaussian processes in a regression setting can be used to obtain calibrated force measurements with an R-squared value of 0.99. Supervised machine learning approaches can be used to localize the position and direction of probing with classification accuracies of 96% and 89%, respectively. Preliminary experiments show the complementary nature of the two sensors, which yields several sensing modalities that neither sensor can provide on its own, with potential uses in prosthetics and robotics.
Various embodiments of the present technology provide for a wide range of technical effects, advantages, and/or improvements to computing systems and components. For example, various embodiments include one or more of the following technical effects, advantages, and/or improvements: 1) tactile sensor including multiple sensor modalities allowing simulation of biomimetic responses; 2) integrated use of machine learning to identify contact, forces, and angles of interactions with an object; 3) use of tactile sensors to provide pre-shaping of an artificial hand to reduce crushing, tipping, or other unwanted interactions with an object; 4) use of unconventional and non-routine computer operations to improve grasping interactions; 5) cross-platform integration of machine learning to more efficiently operate artificial hands and limbs; 6) changing the manner in which an artificial hand interacts with environmental situations; 7) changing the manner in which an artificial hand reacts to user interactions and feedback; and/or 8) improving sensory feedback signals used to restore sensation in prosthetic device users.
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present technology. It will be apparent, however, to one skilled in the art that embodiments of the present technology may be practiced without some of these specific details.
The techniques introduced here can be embodied as special-purpose hardware (e.g., circuitry), as programmable circuitry appropriately programmed with software and/or firmware, or as a combination of special-purpose and programmable circuitry. Hence, embodiments may include a machine-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, ROMs, random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing electronic instructions.
The phrases “in some embodiments,” “according to some embodiments,” “in the embodiments shown,” “in other embodiments,” and the like generally mean the particular feature, structure, or characteristic following the phrase is included in at least one implementation of the present technology, and may be included in more than one implementation. In addition, such phrases do not necessarily refer to the same embodiments or different embodiments.
One or more of these fingers 120 can be formed via 3D printing around the sensor assembly, or the finger(s) can be 3D printed to allow for the sensor assembly to be inserted into the 3D-printed finger after creation. Alternatively, an elastomer, such as a liquid silicone polymer (e.g., Dragon Skin 10), can be poured into a mold containing the sensor assembly such that the finger is “overmolded” over the sensor assembly. The elastomer preferably has low viscosity when poured into molds and mechanical robustness post-curing. In some embodiments, a vacuum can be applied before pouring the elastomer into the mold to completely remove air from the polymer.
In accordance with various embodiments, tactile sensor 110 can include a logic circuit (e.g., a PCB with a logic circuit printed thereon) that can be used to multiplex the sensor assembly's communication signals (e.g., using the Inter-Integrated Circuit (I2C) protocol) for access by a host computing device. The host computer can be separate from the prosthetic or robot or can be incorporated into the prosthetic or robot (e.g., a central controller board). For instance, the host computer can be worn on other anatomy of a user of the prosthetic. A microcontroller (e.g., Arduino) can be used to perform the multiplexing. In some embodiments, the multiplexing can include two signals per finger (one from the pressure sensor and one from the proximity sensor), such that the total number of signals to be multiplexed is n*2, where n is the number of fingers. The microcontroller firmware can perform the proximity calculation for the proximity sensor as well as the calibration and temperature compensation for the pressure sensor (e.g., using algorithms provided by the sensor manufacturer). The firmware can then send calibrated proximity and pressure data to the host computer through a serial USB interface. Some embodiments use a custom LabVIEW (National Instruments Inc.) program to visualize real-time signals from the sensor assembly and can store data off-line for processing and analysis.
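By way of illustration, the following is a minimal host-side sketch in Python (using the pyserial library) for reading such a multiplexed sensor stream over the serial USB interface. The port name, baud rate, and line format (one comma-separated "finger,proximity,pressure" record per sample) are hypothetical assumptions for illustration only and do not reflect the actual firmware protocol.

    # Hypothetical host-side reader; assumes the firmware emits one
    # comma-separated line per sample: "<finger>,<proximity>,<pressure>".
    import serial  # pyserial

    PORT = "/dev/ttyUSB0"  # hypothetical serial device name
    BAUD = 115200

    def read_samples():
        """Yield (finger, proximity, pressure) tuples from the serial stream."""
        with serial.Serial(PORT, BAUD, timeout=1.0) as ser:
            while True:
                line = ser.readline().decode("ascii", errors="ignore").strip()
                if not line:
                    continue
                try:
                    finger, prox, press = line.split(",")
                    yield int(finger), float(prox), float(press)
                except ValueError:
                    continue  # skip malformed lines

    for finger, prox, press in read_samples():
        print(f"finger {finger}: proximity={prox:.1f}, pressure={press:.1f}")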
To experimentally characterize the performance of the sensors, multiple fingertip sensors were fabricated and tested. An Instron material testing machine (MTS Insight II—low capacity: 2 kN maximum) applied calibrated loads to various spatial positions and angles of incidence on the fingertip as detailed below. The loads were applied using a probe with a flat circular tip (15 mm diameter) and monitored using a 250 N load cell (model: M569326-06, sensitivity: 2.016 mV/V). The MTS machine applied prescribed loads ranging from 1 N to 50 N at a rate of 1 mm/s with a sampling rate of 16 Hz. Additional fingertip “pillows” were prototyped in order to locate the fingertip sensor in the prescribed spatial and angular orientations with respect to the probe. The spatial dataset measured contact events at the center, 2.5 mm distally, 2.5 mm proximally, 2.5 mm medially, and 2.5 mm laterally, and the angular orientation dataset measured contact events at 0 degree, 20 degree, and −20 degree angles.
These spatial and angular conditions were chosen in order to span the entire range of the detectable volume of the fingertip sensor. The center location was defined as directly above the midpoint of the PCB. The angular orientations were defined with respect to the normal vector of the PCB. In each condition, a sequence of 10 contact events at each maximum load took place. Each contact event was separated by a 1 second delay. The maximum loads tested were 1 N, 5 N, 30 N, and 50 N. These loads were chosen to span a typical range of loads seen by fingers in everyday use.
The sensor fusion study followed this procedure: the direction of the probing angle was fixed at 0 degrees to obtain the mapping from the analog proximity and pressure readings to true force in Newtons. Ten dynamic loading and unloading cycles were performed on the finger using the same Instron machine described above. To generalize these loading and unloading cycles to everyday forces that the sensor would experience, various embodiments perform this test with multiple maximum load forces (1 N, 5 N, and 50 N). Note that the finger 300 and the probing location were kept constant for this calibration. In total, 10 curves for each maximum load force from the barometer sensor, the IR sensor, and the load cell were created, for a total of 90 curves (10×3×3).
To collect data for classifying the direction of probing, 10 dynamic loading and unloading cycles were performed with the Instron machine for maximum peak forces of 1 N, 5 N, 30 N, and 50 N at 0, 20, and −20 degrees of probing direction. Custom-made 3D-printed pillows were used to align the finger 300 at the various angles with respect to the probe. In total, 120 combined loading and unloading curves were produced.
To determine the spatial location of impact on the finger, the data were collected by probing the finger 300 at different locations with respect to the center of the finger 300 (see, e.g.,
The calibration of multi-modal fingertip data to measure force is non-trivial. The combined signals from the fingertip vary based on the position and orientation of contact. Therefore, it is challenging to estimate a single function with a fixed number of parameters that will map the raw barometer and IR readings to true force in Newtons. To help solve this problem, various embodiments of the present technology use a Gaussian process (GP) in a regression setting to map the sensor input to a calibrated force measurement.
The GP approach is non-parametric in that it finds a distribution over the possible functions f(x) that are consistent with the observed data. In a regression setting, one aims at finding a function such that y = f(x) + ε, with y being the observations, x a set of independent variables, and ε an error term. A GP is defined by a mean function m(x) and a covariance function k(x, x′), otherwise known as a kernel function. The GP defines a prior over the possible functions, which can be converted to a posterior once data are available. In other words, there are some known inputs x for which there are observed outcomes f(x). Suppose there are some points x* for which one would like to estimate f(x*).
An estimate of the conditional probability p(f*|x, x*, f) can be obtained on the assumption that the functions f and f* are drawn from a joint distribution defined by the GP. A specific advantage of Gaussian processes in this case is that they are computationally affordable on small datasets and have a well-tuned smoothing property.
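For reference, the standard GP predictive equations (textbook form; the original text does not reproduce them) give this conditional distribution in closed form. With training inputs X, targets y, test inputs X*, kernel matrix K, and an assumed observation-noise variance σn²:

    \bar{f}_* = K(X_*, X)\,\left[K(X, X) + \sigma_n^2 I\right]^{-1} y
    \operatorname{cov}(f_*) = K(X_*, X_*) - K(X_*, X)\,\left[K(X, X) + \sigma_n^2 I\right]^{-1} K(X, X_*)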
Various embodiments frame the problem of localizing external loads on the finger into two separate supervised-learning problems: 1) classification of the spatial location of load, and 2) classification of the angle of incidence of the force at 0 degree, 20 degree, and −20 degree angles (see, e.g.,
Support vector machines (SVM), k-nearest neighbors (kNN), dynamic time warping (DTW), naive Bayes, and the like are very popular due to their high computational efficiency and high resistance to noise. However, it is inherently difficult to design good features that capture the intrinsic properties embedded in various time-series data. Several deep learning frameworks perform better in such cases, as they do not need any handcrafted features; instead, they can learn a hierarchical feature representation from raw data automatically. To compare these two supervised learning frameworks, an SVM classifier and a convolutional neural network (CNN) were trained for each of the supervised learning problems.
Pressure sensor 610 can provide a measurement of the pressure within the fingertip sensor. For example, pressure sensor 610 may provide a linear measurement of the applied force after a minimum range has been crossed. In some embodiments, pressure sensor 610 may be a single element or an array of pressure sensors to provide an array of measurements. Pressure sensor 610 may be a barometric pressure sensor whose casing flexes inward under load, causing an increase in the air pressure within the sensor. This change in pressure can be sensed by the device's internal barometer and translated to an analog output signal (e.g., a voltage signal). The analog output signal can then be converted to a digital signal using A/D converter 620, which microprocessor 630 can map into an estimate of the touch force on the fingertip. I2C communications port 640 allows multiple pressure sensors to communicate via address bus 690 with other integrated circuits such as a central controller board (not shown). In some embodiments, data may be transferred at a rate between 100 kHz and 400 kHz.
IR or distance sensor 650 can be an infrared (IR) emitter-detector to detect the distance between the sensor and the object. The measurement can be provided as an analog output which A/D converter 660 can convert to a digital signal which microprocessor 670 can use to create an estimate of the distance. I2C communications module 680 allows the output of microprocessor 670 to be communicated to other integrated circuits or controllers (e.g., a central controller board).
Cotton was chosen because of its light weight and to show that the infrared sensor can detect contact forces close to 0N that the barometer cannot measure. The proximity signal includes some nonlinear elements which are visible in the curve at the time force is applied on the cotton. In the embodiments illustrated in
To study the relationship between the proximity and pressure readings and true force, various embodiments fixed the direction of the probing angle at 0 degrees. Ten loading and unloading cycles were performed on the finger using the same Instron machine described above. These loading and unloading cycles were generalized to everyday forces that the sensor would experience by performing them with multiple maximum load forces (1 N, 5 N, and 50 N). Note that the finger and the probing location were kept constant for this experiment. This gives ten peaks for each maximum load force from the barometer sensor, the IR sensor, and the load cell, for a total of 90 peaks (10×3×3).
Some embodiments may include data preprocessing steps including passing the raw sensor signals (proximity and pressure) through a low-pass filter to remove unwanted noise from the signal. To segment out an individual curve consisting of loading and unloading cycles at a particular maximum peak load force, the peaks were first located from each contact. After locating the peaks, a window of 180 samples (90 samples on each side of the peak) was taken to segment out the individual loading and unloading curves. Individual peaks from each sensor at peak load forces of 1 N, 5 N, and 50 N were then concatenated into a single array. This gives a 3×10 set of data: three sensors (two on the finger and the external force sensor) and ten measured contact events.
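A minimal sketch of this preprocessing pipeline follows, using SciPy for the low-pass filter and peak detection. The cutoff frequency and peak-detection parameters are assumptions; only the 16 Hz sampling rate and the 180-sample window come from the description above.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    FS = 16.0  # Hz, sampling rate used during characterization

    def lowpass(x, cutoff=2.0, fs=FS, order=2):
        """Zero-phase low-pass filter to remove unwanted noise (assumed cutoff)."""
        b, a = butter(order, cutoff / (fs / 2.0), btype="low")
        return filtfilt(b, a, x)

    def segment_contacts(signal, half_window=90):
        """Cut a 180-sample window (90 per side) around each located peak."""
        filtered = lowpass(np.asarray(signal, dtype=float))
        peaks, _ = find_peaks(filtered, distance=half_window)
        return np.array([filtered[p - half_window:p + half_window]
                         for p in peaks
                         if half_window <= p <= len(filtered) - half_window])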
The kernel of the Gaussian process was trained by providing it a set of inputs Xtrain and targets Ytrain (normalized). Inputs correspond to concatenated raw IR and barometer values, and targets correspond to forces in Newtons from the external load cell. The Gaussian kernel used is a radial-basis function (RBF) kernel (also known as a squared-exponential kernel) implemented in the Scikit-learn library. After the kernel has learned the relationships within the data (Xtrain and Ytrain), the kernel is presented with the testing dataset to predict the labels Ypred given Xtest. The accuracy of the fit is determined using the root-mean-square error (RMSE) and R-squared (R2) score.
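The following is a minimal sketch of this calibration step using Scikit-learn, assuming that the rows of X hold the concatenated raw IR and barometer values and that y holds the load-cell forces in Newtons; the kernel hyperparameters and noise level are illustrative assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.metrics import mean_squared_error, r2_score

    def calibrate_force(X_train, y_train, X_test, y_test):
        kernel = RBF(length_scale=1.0)  # squared-exponential kernel
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True, alpha=1e-3)
        gp.fit(X_train, y_train)                  # learn from Xtrain/Ytrain
        y_pred, y_std = gp.predict(X_test, return_std=True)
        rmse = np.sqrt(mean_squared_error(y_test, y_pred))
        r2 = r2_score(y_test, y_pred)             # accuracy of the fit
        return y_pred, y_std, rmse, r2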
The interaction between the elastomer shell enclosing the sensors and the sensors themselves is difficult to study. This interaction leads to proximity and pressure signals of varying nature from the sensor when it is impacted from different directions and at different locations. To localize impact on such a dynamic sensor, various embodiments break the problem down into two smaller sub-problems. First, the method can identify the angular direction of probing, and second, the spatial location of impact with respect to the center of the fingertip. This can be framed as a classification problem in a supervised learning framework, training an SVM and a CNN for each of the subproblems: 1) probing direction; and 2) spatial location.
To collect data for the probing direction classification, a plurality (e.g., 10) of loading and unloading cycles were performed with the Instron machine for a plurality of maximum peak forces (e.g., 1 N, 5 N, 30 N, and 50 N) and at a plurality of probing directions (e.g., 0, 20, and −20 degrees). Some embodiments may use custom-made 3D-printed pillows for the finger that align it at various angles with respect to the probe. Assuming 10 loading and unloading cycles, four maximum peak forces, and three different probing directions, 120 combined loading and unloading curves were produced. Data preprocessing steps can include locating the peaks from every data collection trial. After locating the peaks, some embodiments can take a window size of X samples (e.g., 150), with half the samples on each side of the peak, and segment out the individual loading and unloading curves. Some embodiments then standardize the individual loading and unloading curves to have zero mean and unit variance.
Some embodiments can use an SVM as a baseline classifier since the amount of data collected for classification is small. An advantage of such a model is that fewer parameters need to be learned and the user has greater control over the model itself. A couple of variations of the barometer and IR sensor values can be explored to create features for the SVM. The most promising feature was the ratio of the IR and barometer values, which gave a significant rise in testing accuracy. Some embodiments also included the data points of maximum force and minimum force from the sensor in the feature vector. In some embodiments, a polynomial kernel can be used with a penalty factor of C=1. In order to avoid overfitting of the models to the data, cross-validation was performed on all of the models described below. The accuracy obtained after 6-fold cross-validation is shown in Table II.
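A minimal sketch of this feature construction and classifier is shown below; the exact feature layout is an assumption, since the text does not fully specify it.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def make_features(ir_curve, baro_curve, eps=1e-6):
        """One feature vector per loading/unloading curve."""
        ratio = ir_curve / (baro_curve + eps)        # IR-to-barometer ratio
        extremes = [ir_curve.max(), ir_curve.min(),
                    baro_curve.max(), baro_curve.min()]  # max/min force points
        return np.concatenate([ratio, extremes])

    def evaluate_direction_classifier(X, y):
        clf = SVC(kernel="poly", C=1)                # polynomial kernel, C = 1
        return cross_val_score(clf, X, y, cv=6)      # 6-fold cross-validation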
In addition to this, some embodiments may train a small neural network to classify probing direction. Since convolution inherently captures the relation between the signals it convolves across, features may not have to be hand-engineered; the raw data can be fed directly into the network in some embodiments. The network may include two 2D-convolution layers followed by a flatten layer and finally a dense output layer of 3 neurons with softmax activation. The accuracy obtained after 6-fold cross-validation on the training and testing dataset is shown in Table II.
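A sketch of such a network in Keras follows; the filter counts, kernel sizes, and input shape are assumptions, since the text specifies only the layer sequence (two 2D-convolution layers, a flatten layer, and a 3-neuron softmax output).

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_direction_classifier(window=150, channels=2):
        # Input: a (window x channels) patch holding the IR and barometer curves.
        model = models.Sequential([
            layers.Input(shape=(window, channels, 1)),
            layers.Conv2D(16, (5, 2), activation="relu", padding="same"),
            layers.Conv2D(32, (5, 1), activation="relu", padding="same"),
            layers.Flatten(),
            layers.Dense(3, activation="softmax"),  # 0, 20, and -20 degrees
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model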
To determine the spatial location of impact on the finger, some embodiments can use the same supervised learning models described above with different parameters. The data can be collected by probing the finger at different locations with respect to the center of the finger. Custom-made 3D-printed pillows may be used to align/offset the finger with respect to the center of the probe. As mentioned earlier, the signal from the IR sensor has greater variations than the barometer. This variation may be an important factor behind the unexpectedly greater classification accuracy obtained by using both a proximity and a pressure sensor as opposed to just a pressure sensor.
The data collection procedure can include Y number of trials (e.g., Y=10) of loading and unloading for each of X number of maximum forces (e.g., 1 N, 5N, 30N, and 50N) for Z spatial locations (e.g., Z=5) with respect to the barometer. The data is segmented into a single combination of loading and unloading curves summing to a total of Y×X×Z (e.g., 10×4×5=200 curves). The data can then again be standardized before feeding it into the models.
The features extracted for training the SVM may be similar to those described previously. A radial-basis function (RBF) kernel with a penalty factor of C=8 was experimentally found to give a mean classification accuracy of 0.959 (+/−0.0354) after 6-fold cross-validation (Table III).
The neural network architecture is also the same as described previously, except for an increase in the number of output neurons on the dense layer from 3 to 5, as now there are 5 labels to classify. The number of filters, their size, and kernel parameters were kept constant to compare the results. The accuracy obtained after 6-fold cross validation on the training and testing dataset is shown in Table III.
The herein disclosed sensor assembly has a variety of applications in robotic/prosthetic grasping and manipulation due to its ability to estimate proximity, contact, force, location, and direction of impact. These signals are important when, for example, the object is to be reoriented in the hand or the object is intended to be used as a tool.
The Gaussian processes method used in some embodiments can enable fusing of the pressure and proximity sensor data into force (e.g., Newtons). For the classification task, the SVM outperforms the CNN approach, which is believed to be due to overfitting. Although the numerical values are a good fit, the proposed methods might not generalize over different probing shapes and materials, since the shape of the indentation on the elastomer drives the signals in an unpredictable manner. Even though Gaussian processes regression is the most accurate regression method, it has an exceptionally high computational complexity, which prevents its usage for large numbers of samples or for learning online. The infrared proximity sensor has a strong dependence on the surface properties (e.g., color, texture, and reflectivity) of an object, which can throw off the calibration for some objects. However, it is believed that the sensor's multiple sensing modalities may help to mitigate some of the challenges discussed above. The linear behavior of the barometer could help calibrate the sensor against objects with a variety of surface properties, and the nonlinear response of the infrared sensor could be used to identify those surface properties.
Further, there are currently no standardized benchmarks for tactile sensing in robotics/prosthetics and manipulation. Here, system-level performance for specific tasks might become more important than characterization of individual sensor characteristics. At the same time, deep reinforcement learning is emerging as a promising technique to learn task-level behaviors. Given their demonstrated ability to identify task-relevant patterns in data, these techniques might strongly benefit from multi-modal tactile sensing information such as that provided by the sensor presented here. Similar thinking applies to using the sensor in a myoelectric-prosthetic control context. Various embodiments of the present technology may include a myoelectric interface to detect voluntary muscular contractions from a patient and generate the volitional signal. The volitional signal can be a myoelectric signal collected from electrodes positioned on a limb of a subject.
Various embodiments of the sensor assembly's extended spatial capabilities will provide relevant force feedback to amputees even when an object is not centered against each digit. This will provide a better sensor for advanced neural interfaces, since one can ensure a reliable source of force feedback during the complex activities of daily life. This is possible due to the effectiveness of two distinct signals: 1) the reflectance of IR light off a reflecting surface and 2) the change in pressure due to the compression of an elastomer.
Various embodiments provide for a multi-modal fingertip sensor which can include an infrared proximity sensor and a barometer embedded in an elastic polymer. The compact sensors include all of the instrumentation, analog-to-digital conversion, and control circuitry, which ensures reliable signal quality using the standard I2C communication protocol. The molded elastomer fingertip surface provides a durable interface to manipulate objects while allowing reliable measurements of those interactions. The fingertip sensor can be mapped to actual loads. For instance, some embodiments characterized the fingertip sensor over loads varying between 1 N and 50 N, and measured the system's response to loads applied spatially about the center and angled with respect to the normal surface of the fingertip. This characterization encompassed 28 distinct loading scenarios. Some embodiments use a Gaussian processes model to fuse the raw barometer and IR sensor readings to determine the applied force with an R-squared value of 0.99.
The location of loading was then identified using supervised learning methods, with classification accuracies of 96% and 92% obtained using a support vector machine and a convolutional neural network, respectively. The probing angle was similarly classified, with accuracies of 89% and 83%, respectively.
These sensors can also be integrated with neural interfaces to provide rich sensory information to upper-limb amputees and robots. The calibrated force signal can provide a reliable tactile signal while the proximity and contact signals can allow for investigations of new sensory paradigms. The proximity signal can be mapped to non-physiological percepts while the contact signal can be utilized in a DESC-based manner. Furthermore, real-time sensor-fusion classification can be implemented. Once accomplished, the spatial and angular information may be relevant to certain neural interfaces and/or may be used in shared control paradigms of the prosthetic limb.
In some embodiments, pre-shaping module 1530 can pre-shape the hand, causing each digit of the prehensor to settle at the same constant distance from the object. Contact detection module 1535 can detect the initial contact of each digit with the object (e.g., using the derivative of the IR sensor data 1505) and generate an indication of contact. This initial contact information, along with the raw pressure data 1510, can be used by force control loop 1540 to set the pressure of each digit to a desired level.
Visual feedback provides a great deal of information about the environment and objects necessary for object selection, grasp planning, and manipulation. Tactile feedback, on the other hand, helps interpret the physical interactions of the object with the hand. Visual data, however, inherently suffers in low lighting conditions and from occlusion by the hand itself. Hence, it is not suitable for accurately tracking the shape and position of the object during the pre-grasp and grasp phases. Moreover, controlling a robot hand with a high number of degrees of freedom (DOF) is challenging given such inaccurate information from a vision sensor.
As such, some embodiments provide for pre-grasp shaping of the robot hand using proximity sensors on the fingertips to reduce the complexity of controlling the hand to adapt to objects of varying sizes and shapes. Moreover, measuring the magnitude and location of contact eliminates the possibility of moving or damaging the object with imperfect contact forces. Various embodiments of the present technology create a reflex-like behavior for the pre-grasp and grasp phases using the upgraded design of the disclosed multimodal proximity-contact-force (PCF) sensor on a five-fingered robot hand, with the proximity signals used for pre-grasp shaping and for gently touching objects of unknown shape.
As illustrated in
As long as no contact 1634 is detected, the system will stay in pre-shaping state 1630. If a volitional open signal 1638 is detected, the system will transition to open state 1640. If a contact signal 1636 is detected, then the system will transition to grasping state 1650, where event 1652 initiates a grasping protocol causing the prehensor to grasp the object. If a volitional open signal 1654 is detected, the system will transition from grasping state 1650 to open state 1640.
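A minimal sketch of this state machine follows. The transition logic is a simplified reading of the description above; in particular, the trigger that moves the hand from the open state into pre-shaping is assumed here to be a volitional close signal.

    from enum import Enum, auto

    class State(Enum):
        OPEN = auto()
        PRE_SHAPING = auto()
        GRASPING = auto()

    def next_state(state, contact=False, volitional_open=False,
                   volitional_close=False):
        if volitional_open:
            return State.OPEN             # an open signal always releases
        if state is State.OPEN and volitional_close:
            return State.PRE_SHAPING      # assumed trigger into pre-shaping
        if state is State.PRE_SHAPING and contact:
            return State.GRASPING         # contact initiates the grasping protocol
        return state                      # otherwise remain in the current state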
An experimental setup consisted of a five-fingered Bebionic V2 prosthetic hand (RSL Steeper Inc.) equipped with the upgraded PCF sensors as shown in
The robot hand had six DOFs: one DOF for each finger to open and close, and one additional DOF in the thumb joint for abduction. The original electronics of the hand were replaced with custom-built motor controller boards from Sigenics Inc. The motor controller boards had a built-in PID position controller. The motors can also be driven by a pulse-width modulated (PWM) signal.
In order to measure the effectiveness of the embodiments implemented, a motion camera system was used to track the 6D pose of the object to provide an absolute change in its position before and after a grasp. Seven markers were attached to a cup, and the markers were placed such that their 6D position was tracked by four cameras.
For pre-grasp shaping, a simple Proportional-Integral-Derivative (PID) controller was used to control the position of the fingers based on the proximity signals; other embodiments may use different controllers. Inputs to the controller were normalized proximity values from the PCF sensor, and the output of the controller was the pulse-width modulated (PWM) control signal for the finger motors. The PID gains were tuned for each of the fingers individually such that all fingers maintain a constant distance from an object.
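A minimal per-finger PID sketch is shown below; the gains, setpoint, and update rate are placeholders, since each finger was tuned individually and the actual values are not given in the text.

    class PID:
        """Normalized proximity reading in, PWM command out."""
        def __init__(self, kp, ki, kd, setpoint, dt):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.setpoint, self.dt = setpoint, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, measurement):
            error = self.setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return (self.kp * error + self.ki * self.integral
                    + self.kd * derivative)

    # e.g., hold one fingertip at a normalized proximity of 0.5 at ~100 Hz
    controller = PID(kp=1.0, ki=0.1, kd=0.05, setpoint=0.5, dt=0.01)
    pwm_command = controller.update(measurement=0.42)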
In the contact detection phase, the fingers are slowly moved toward the object with a constant PWM signal, and once contact is detected the finger motors are stopped. In the implemented embodiments, contact was measured by averaging (smoothing) the raw proximity signal with an exponential averaging filter and subtracting the original signal from this smoothed signal. Both controllers were written in the C programming language to avoid delays associated with transferring data over the USB serial bus to the host computer. With this, a good response rate of around 100 Hz was obtained.
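A sketch of this contact detector follows (the original controllers were written in C; Python is used here for illustration). The smoothing factor and the threshold are assumptions.

    class ContactDetector:
        def __init__(self, alpha=0.1, threshold=0.05):
            self.alpha = alpha          # exponential-averaging factor (assumed)
            self.threshold = threshold  # transient size treated as contact (assumed)
            self.smoothed = None

        def step(self, raw):
            if self.smoothed is None:
                self.smoothed = raw
            # update the exponential average, then compare raw and smoothed signals
            self.smoothed = self.alpha * raw + (1 - self.alpha) * self.smoothed
            return abs(raw - self.smoothed) > self.threshold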
The experiment started by placing a cup at a fixed location within the aperture of the hand. The performance of both controllers (for pre-grasp shaping and contact) was tested against the case where no controller was used. A 10 g dead weight was placed in the cup initially to balance the torque created by the markers. Weights were then incrementally added to the cup. Each trial began with the hand fully open. An input from the experimenter set the hand in the pre-grasp shaping mode, in which the fingers dynamically maintain a constant distance from the object. Once all the fingers stopped moving, another input from the experimenter set the hand in the grasp mode, in which the fingers gently move until contact is detected. The next trial started by replacing the weight in the cup with the next larger weight; the robot hand was fully opened and the cup was placed in the fixed location. The same steps were repeated for the case where no controller was used. In this case, the robot fingers were position-controlled to move to a set location where the cup was positioned.
Data from the motion capture system were continuously recorded from the start of a trial until the fingers came in contact with the cup. The MATLAB function findchangepts was used to find the point where there was a significant change in the signal. This allowed calculation of the abrupt change in the resultant translational position of the object by averaging the position over the periods before and after the grasp and then subtracting the two averages. This change in the resultant position of the cup with and without the use of the controller is shown in
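The displacement computation itself reduces to differencing the mean positions on either side of the detected change point; a small sketch follows, with the change index supplied by any change-point detector analogous to findchangepts.

    import numpy as np

    def grasp_displacement(position, change_idx):
        """Absolute change in resultant position across the change point."""
        before = np.mean(position[:change_idx])
        after = np.mean(position[change_idx:])
        return abs(after - before)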
Various embodiments provide a simple reflex behavior to pre-shape a five-fingered robot hand and gently touch objects based on the proximity signals from the PCF sensor. Some embodiments may include a model or an on-the-fly calibration routine that encodes the color dependence of the infrared proximity sensor, which would allow some embodiments to be extended to objects with any surface reflectivity. Some embodiments may use a grip force control strategy in order to pick up objects with optimal force without damaging them. The motor friction of the robot fingers is not consistent across the entire range of the finger from fully open to fully closed; therefore, a single set of PID gains or a single PWM value for each finger does not allow the intended function of the finger, sometimes resulting in excessive motion and sometimes in no motion at all. Some embodiments may address this issue either by using different sets of gains for different functional regions of the finger or by using some form of model-predictive approach. Development of such reflex-like control of a multi-fingered robot hand is expected to dramatically improve grasp success and also help in effortless control of upper-limb prosthesis devices.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application.
Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
The above Detailed Description of examples of the technology is not intended to be exhaustive or to limit the technology to the precise form disclosed above. While specific examples for the technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the technology, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative implementations may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed or implemented in parallel, or may be performed at different times. Further any specific numbers noted herein are only examples: alternative implementations may employ differing values or ranges.
The teachings of the technology provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various examples described above can be combined to provide further implementations of the technology. Some alternative implementations of the technology may include not only additional elements to those implementations noted above, but also may include fewer elements.
These and other changes can be made to the technology in light of the above Detailed Description. While the above description describes certain examples of the technology, and describes the best mode contemplated, no matter how detailed the above appears in text, the technology can be practiced in many ways. Details of the system may vary considerably in its specific implementation, while still being encompassed by the technology disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the technology should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the technology with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the technology to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the technology encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the technology under the claims.
To reduce the number of claims, certain aspects of the technology are presented below in certain claim forms, but the applicant contemplates the various aspects of the technology in any number of claim forms. For example, while only one aspect of the technology is recited as a computer-readable medium claim, other aspects may likewise be embodied as a computer-readable medium claim, or in other forms, such as being embodied in a means-plus-function claim. Any claims intended to be treated under 35 U.S.C. § 112(f) will begin with the words “means for”, but use of the term “for” in any other context is not intended to invoke treatment under 35 U.S.C. § 112(f). Accordingly, the applicant reserves the right to pursue additional claims after filing this application to pursue such additional claim forms, in either this application or in a continuing application.
This application is a national phase of International Application No. PCT/US2019/040724 filed on Jul. 5, 2019, which claims priority to U.S. Provisional Application Ser. No. 62/694,278 filed Jul. 5, 2018, which are incorporated herein by reference in their entireties for all purposes.
This invention was made with government support under grant number FA9550-15-1-0238 awarded by the United States Air Force. The government has certain rights in the invention.