This disclosure is generally related to robots. More specifically, this disclosure is related to artificial intelligence (AI)-enabled high-precision robots that can be used for manufacturing in the electronics industry.
Automation (e.g., the use of robots) has been widely adopted and is transforming manufacturing in the automotive and industrial-equipment industries. More specifically, the robot density has reached 1,414 (per 10,000 workers) in the automotive industry in Japan and 1,141 in the United States. However, the rapidly growing electrical/electronics industries have been lagging in the implementation of robots in their production lines. The robot density in the electronics industry is merely 318 in the United States and just 20 in China. In particular, when producing consumer electronics (e.g., smartphones, digital cameras, tablet or laptop computers, etc.), the assembly work is still largely performed by human workers. This is because there are many challenges in adopting robotics in the manufacturing of consumer electronics. The primary challenges include short product life cycles, rapid change of products, low direct labor costs, poor dexterity of the robots, complexity of implementation and maintenance of the robots, and the lack of robot reusability.
Various low-cost collaborative robots have been developed to address the cost and reusability issues, and various intelligent systems have been developed in attempts to make industrial robots smarter. For example, multiple sensors can be added to a robotic system to allow the robot to recognize work pieces, detect and avoid foreign objects, and detect collisions. Moreover, some existing industrial robots can communicate with humans using voice and dialogue, and can be taught kinematic movements by demonstration. However, current robots are still far from matching the capability of humans in terms of the flexibility to execute various tasks and learn new skills.
One embodiment can provide an intelligent robotic system. The intelligent robotic system can include at least one multi-axis robotic arm, at least one gripper attached to the multi-axis robotic arm for picking up a component, a machine vision system comprising at least a three-dimensional (3D) surface-imaging module for obtaining 3D pose information associated with the component, and a control module configured to control movements of the multi-axis robotic arm and the gripper based on the detected 3D pose of the component.
In a variation on this embodiment, the 3D surface-imaging module can include a camera and a structured-light projector.
In a further variation, the structured-light projector can include a digital light processing (DLP) chip, a mirror array, or an independently addressable VCSEL (vertical-cavity surface-emitting laser) array.
In a further variation, the 3D surface-imaging module can be configured to perform one or more of: generating a low-resolution 3D point cloud using a spatial-codification technique and generating a high-resolution 3D point cloud using a spatial and time-multiplexing technique.
In a variation on this embodiment, the machine vision system can be configured to apply a machine-learning technique while detecting the 3D pose information of the component.
In a further variation, applying the machine-learning technique comprises training one or more convolutional neural networks (CNNs).
In a further variation, training the CNNs can include using a plurality of images of the component generated based on a computer-aided design (CAD) model of the component as training samples.
In a further variation, the CNNs can include a component-classifying CNN and a pose-classifying CNN.
In a variation on this embodiment, the machine vision system can further include an ultrasonic range finder configured to estimate a distance between the gripper and the component.
In a variation on this embodiment, the multi-axis robotic arm has at least six degrees of freedom.
In a variation on this embodiment, the intelligent robotic system can further include at least one two-dimensional (2D) imaging module configured to obtain wide-field visual information associated with the component.
In the figures, like reference numerals refer to the same figure elements.
The following description is presented to enable any person skilled in the art to make and use the embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
Embodiments described herein solve the technical problem of providing highly intelligent robots capable of performing light-load but complex operations (e.g., connecting cables) typically required for assembling consumer electronics. The robot can include one or more multi-axis arms equipped with grippers for picking up rigid or flexible components. The robot can also include a guiding system that implements novel three-dimensional (3D) machine-visioning technologies, and an artificial intelligence (AI)-enabled human-machine interface (HMI) capable of transfer learning and expert-guided reinforcement learning. Transfer learning uses a pre-trained neural network to accelerate learning by taking advantage of previously taught heuristics. Expert-guided reinforcement learning uses experts (e.g., humans) to initially teach the robot how to achieve a stated goal; the robot then experiments in the vicinity of this solution space to find improved solutions. In some embodiments, the 3D machine visioning can be accomplished via the implementation of a structured-light source and can be trained using images generated based on computer-aided design (CAD) models. After accurately recognizing a work piece and its location and pose using the guiding system, the robot can perform motion planning and move its arm toward the recognized work piece at an optimal angle. In some embodiments, the robot can also include a tactile feedback module that includes multiple 3-axis force sensors. The tactile feedback module can enable touch sensing in order to provide feedback on the gripper's position and force. While the robot's arm is approaching the work piece, the guiding system (via vision and tactile feedback) continues to acquire images to fine-tune the arm's grip approach.
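For illustration only, the following Python sketch shows one way the transfer-learning step could look in practice, assuming a PyTorch/torchvision environment; the backbone choice, class count, and data loader are hypothetical placeholders rather than part of the disclosed embodiments.

```python
import torch
import torch.nn as nn
import torchvision

# Assumed number of component classes on the new product line (placeholder).
NUM_NEW_COMPONENT_CLASSES = 12

# Load a backbone pretrained on a generic image dataset (requires a recent torchvision).
backbone = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)

# Freeze the previously learned feature extractor ("previously taught heuristics").
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for the new component classes.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_NEW_COMPONENT_CLASSES)

# Only the new head is optimized, so retraining for a product change is fast.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(dataloader, epochs=5):
    """Fine-tune the new head on labeled images of the new components."""
    backbone.train()
    for _ in range(epochs):
        for images, labels in dataloader:  # dataloader is a hypothetical placeholder
            optimizer.zero_grad()
            loss = loss_fn(backbone(images), labels)
            loss.backward()
            optimizer.step()
```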
As discussed previously, there are challenges for implementing robots in the manufacturing assembly lines of electronics, particularly consumer electronics. One challenge is the poor dexterity of existing robots. Assembling consumer electronics often involves handling many small (e.g., in the range of millimeters or less) components in a confined space, and currently available robots often lack the ability to perform such a task. A robot that can mimic, to a certain degree, human arm and hand movements is needed to meet the requirements of manufacturing consumer electronics.
Intelligent robot 100 can also include one or more grippers (e.g., gripper 114) attached to a wrist joint 116 of multi-joint arm 104. Gripper 114 can be used to pick up and maneuver components (e.g., electronic components) during operation. In some embodiments, gripper 114 can be a mechanical gripper. In alternative embodiments, gripper 114 can be a vacuum gripper. Depending on the needs of the assembly line, the mechanical gripper can be a parallel gripper.
In order to pick up and maneuver components in a precise manner, a guiding system is needed to guide the movements of the multi-joint arm and the gripper. Machine-vision-based guiding systems have been used to guide the operation of robots. Current machine-vision systems typically rely on two-dimensional (2D) images for feature recognition and for locating components. However, 2D machine visioning cannot provide accurate position and orientation information for a component that is not lying flat on a work surface. Assembling consumer electronics often requires a robot to handle a flexible component that cannot lie flat, such as a connector with cables. To accurately locate a flexible component when it is suspended in midair, depth information is needed. In some embodiments, the intelligent robot can include a 3D machine-vision system that can not only identify a component but also acquire its 3D pose, including its position and orientation. In addition to locating and acquiring the pose of a component, the 3D machine-vision system can also be used for assembling the component.
In the assembly line of a consumer electronics product (e.g., a smartphone), various components may be randomly placed on a work surface. The machine-vision system needs to first find a desired component and then accurately acquire the pose of that component. These two operations require very different resolutions from the vision system. To accelerate the operation of the guiding system, in some embodiments, the guiding system can include multiple cameras. For example, the guiding system can include at least a wide-field-of-view camera and a close-range camera. The wide-field-of-view camera can be installed on the body of the robot, whereas the close-range camera can be installed at a location closer to the gripper. In some embodiments, the wide-field-of-view camera can be installed at a location close to the base of the intelligent robot.
Once wide-field-of-view camera system 202 has found component 212, or the area of work surface 210 where component 212 resides, the guiding system can guide the movement of the robot arm to bring its gripper close to component 212. In some embodiments, wide-field-of-view camera system 202 may also acquire the pose (including position and orientation) of component 212 in order to allow for more accurate motion planning. Not only can the gripper be moved closer to component 212; it can also be configured to approach component 212 at an optimal angle.
Close-range camera 222 can be attached to the stem or base of the gripper, which is attached to wrist joint 224 of the robot arm. Hence, close-range camera 222 can be closer than wide-field-of-view camera system 202 to component 212 during the operation of the robot and can provide feedback to the robot control system. To provide depth information, a structured-light projector 226 can also be installed in the vicinity of close-range camera 222 (e.g., attached to the same base of the gripper). Moreover, an ultrasonic sensor 228 can also be included. More specifically, ultrasonic sensor 228 can be used as a range finder to measure the rough distance to component 212. Note that, in order to achieve an accuracy of a few microns, close-range camera 222 and structured-light projector 226 need to be placed very close to the object (e.g., within 150 mm) due to the pixel-size limitations of both the image sensor and the light projector. On the other hand, an ultrasonic range finder has the advantages of a longer sensing range and a fast response, and can be suitable for detecting the distance to a moving object. The ultrasonic wave emitted by ultrasonic sensor 228 can be amplitude modulated or frequency modulated.
In some embodiments, the robot can turn on structured-light projector 226 in response to ultrasonic sensor 228 detecting that the distance to component 212 is less than a predetermined value (e.g., 150 mm). More specifically, structured-light projector 226 can project multiple patterns onto the small area of work surface 210 that includes component 212, and close-range camera 222 can then capture a sequence of images synchronously with the projected patterns. Imaging at close range can provide high-resolution images as well as a high-accuracy computation of depth information. In some embodiments, a 3D point cloud can be generated, which includes 3D data of objects within the field of view of close-range camera 222. The 3D data can be used to guide the movement of the robot. For example, the 3D data can be used to infer the exact 3D pose of component 212, thus enabling the gripper to accurately pick up component 212.
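The proximity-triggered acquisition described above can be summarized by the short Python sketch below; the driver functions are hypothetical stubs standing in for the robot's actual ultrasonic-sensor, projector, and camera interfaces, and the pattern count is an assumption.

```python
import time

TRIGGER_DISTANCE_MM = 150.0   # threshold from the disclosure
NUM_PATTERNS = 8              # assumed number of projected patterns

# The functions below are hypothetical stubs standing in for the robot's
# actual ultrasonic, projector, and camera drivers.
def read_ultrasonic_distance_mm():
    return 140.0  # stub: pretend the gripper is already within range

def project_pattern(index):
    print(f"projecting pattern {index}")

def capture_image():
    return f"image_{time.time():.3f}"  # stub: stand-in for a camera frame

def capture_structured_light_sequence():
    """Turn on the projector only once the part is close enough, then
    capture one image per projected pattern (synchronous acquisition)."""
    images = []
    if read_ultrasonic_distance_mm() < TRIGGER_DISTANCE_MM:
        for i in range(NUM_PATTERNS):
            project_pattern(i)              # projector shows pattern i
            images.append(capture_image())  # camera exposes while pattern is shown
    return images

if __name__ == "__main__":
    frames = capture_structured_light_sequence()
    print(f"captured {len(frames)} frames")
```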
In some embodiments, each gripper can also include one or more multi-axis force sensors that can be used to provide touch feedback. The touch feedback can facilitate path adjustment of the robotic arm and the gripper while the gripper is approaching or in contact with the component of interest.
In some embodiments, the ultrasonic sensor can be omitted. Instead, the system can use a 2D wide-field imaging system to first find the component of interest and then move the gripper, along with a 3D surface-imaging system, closer to the component of interest. The 3D surface-imaging system can combine both spatial codification and time multiplexing when acquiring the 3D pose of a desired component. In the spatial-codification paradigm, a field of dots is projected onto the scene, and a set of dots is encoded with the information contained in a neighborhood (called a window) around them. In spatial codification, only one image is taken. Consequently, it can provide a fast response and can be suitable for moving objects. Spatial codification also has the advantages of being able to use a video stream as input and of supporting a far field of view. The drawback of spatial codification is its lower accuracy. Hence, spatial codification is more applicable to the far field and rapidly moving objects, and can be used for rough estimation of the 3D pose of a component. On the other hand, as discussed previously, spatial and time-multiplexed 3D surface imaging can provide higher accuracy and is more applicable to the near field of view prior to the gripper picking up the component. In some embodiments, the spatial codification can be performed using a far-field-of-view camera and a corresponding structured-light projector, and the spatial and time-multiplexed 3D surface imaging can be performed using a near-field-of-view camera and a corresponding structured-light projector. In alternative embodiments, the 3D surface-imaging system can use the same camera/structured-light projector pair to perform both the spatial codification and the spatial and time-multiplexed 3D surface imaging, as long as the focal range of the camera is large enough and the structured-light projector has sufficient resolution.
In practice, the 3D surface-imaging system can first employ spatial codification to obtain a far-field 3D description of the work surface, which can be used for component searching, identification, and tracking. Subsequent to moving the gripper closer to the component, the 3D surface-imaging system can switch to time-multiplexing in order to obtain the accurate 3D pose of the component to guide the gripper to pick up the component.
The robot then determines whether the distance is less than a predetermined value (e.g., 150 mm) (operation 512). If not, the robot continues to move the gripper (operation 504). If so, the structured-light projector projects a sequence of frames of various predetermined patterns (e.g., striped patterns) onto the object and its close vicinity and the camera synchronously captures images (operation 514). More specifically, an image can be captured for each pattern. In other words, the 3D imaging system uses fast spatial and temporal patterns to encode and decode depth information. In some embodiments, the structured-light projector can project 60 frames per second. Accordingly, the camera can be configured to capture images at a rate of 60 frames per second.
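As one concrete example of temporal (time-multiplexed) decoding, the NumPy sketch below recovers a per-pixel stripe index from a stack of binary Gray-code stripe images; the Gray-code scheme, pattern count, and image size are illustrative assumptions, not the only patterns the projector could use.

```python
import numpy as np

def decode_gray_code(bit_images):
    """Decode a stack of thresholded binary stripe images (one per projected
    Gray-code pattern) into a per-pixel stripe index.

    bit_images: array of shape (num_patterns, H, W) with values 0 or 1,
    most significant pattern first.
    """
    bits = np.asarray(bit_images, dtype=np.uint32)
    # Convert Gray code to binary: b[0] = g[0]; b[i] = b[i-1] XOR g[i].
    binary = np.empty_like(bits)
    binary[0] = bits[0]
    for i in range(1, bits.shape[0]):
        binary[i] = binary[i - 1] ^ bits[i]
    # Pack the bit planes into an integer stripe index per pixel.
    index = np.zeros(bits.shape[1:], dtype=np.uint32)
    for plane in binary:
        index = (index << 1) | plane
    return index  # stripe index; triangulation against the projector yields depth

# Illustrative use with synthetic data: 8 patterns, 480x640 images.
rng = np.random.default_rng(0)
fake_bits = rng.integers(0, 2, size=(8, 480, 640), dtype=np.uint32)
stripe_index = decode_gray_code(fake_bits)
print(stripe_index.shape, stripe_index.max())
```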
Many electronic components have a wide range of reflectivity: some components exhibit specular reflection, while others are highly light-absorbent. To improve the image quality, and thus the quality of the 3D point cloud, in some embodiments, the camera can include a set of polarizers in order to optimize the amount of specular light rejected for the best signal-to-noise ratio and contrast. In further embodiments, HDR (high dynamic range) techniques can be used to improve the dynamic range of the input data. For example, each projected pattern can have two exposures (i.e., two images are captured for each projected pattern). Based on the captured images, the robot can further generate a high-resolution 3D point cloud of the component and its close vicinity and recommend a pick-up position for the gripper (operation 516).
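A minimal sketch of the two-exposure HDR idea follows, assuming 8-bit images and an illustrative exposure ratio; the saturation threshold and merge rule are simple placeholders for whatever HDR technique an embodiment actually uses.

```python
import numpy as np

def merge_two_exposures(short_exp, long_exp, saturation=250):
    """Merge a short- and a long-exposure image of the same projected pattern.

    Pixels that are saturated (or nearly so) in the long exposure are taken
    from the short exposure, scaled by the exposure ratio; all other pixels
    come from the better-lit long exposure.
    """
    short_exp = short_exp.astype(np.float32)
    long_exp = long_exp.astype(np.float32)
    exposure_ratio = 4.0  # assumed ratio between the two exposure times
    return np.where(long_exp >= saturation, short_exp * exposure_ratio, long_exp)

# Illustrative use with synthetic 8-bit images.
rng = np.random.default_rng(1)
long_img = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
short_img = (long_img // 4).astype(np.uint8)
hdr = merge_two_exposures(short_img, long_img)
print(hdr.dtype, hdr.max())
```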
Each image sensor can be part of a camera system. In some embodiments, each camera system can be a far-field-of-view camera or a near-field-of-view camera. Alternatively, 3D surface-imaging system 600 can include a single camera system with a wide range of focal lengths that can capture both wide-angle images and close-up images. Note that the wide-angle images are used for the initial searching for and locating of a component, whereas the close-up images are used for extracting accurate 3D pose information of the component. In some embodiments, the resolutions of the image sensors can be 1280×1240, 2080×1552, or 4000×3000, and the resolutions of the projectors can be 608×648, 912×1140, or 1920×1080. Moreover, the projectors can include various image projection devices, including but not limited to: digital light processing (DLP) chips, mirror arrays, and independently addressable VCSEL (vertical-cavity surface-emitting laser) arrays; and the light source can include LEDs or lasers (e.g., a VCSEL or a VCSEL array). In some embodiments, if the light sources include a laser, the laser can operate in multiple oscillation modes and in short-pulse mode to minimize speckle. The light sources can emit visible light, infrared light, or ultraviolet light. In further embodiments, the wavelength of the emitted light can be tunable according to the surface condition of the component of interest, in order to obtain high image contrast.
Control module 614 can control both the image sensors (i.e., the cameras) and the projectors, including the light sources. For example, control module 614 can tune the emitting wavelength of the light sources based on the surface condition of the illuminated component. Control module 614 can further control the patterns to be projected by the projectors and the pattern-projecting frequency. For example, control module 614 can control a projector to project patterns at a frequency of 60 frames per second. The patterns can include dot arrays, parallel lines, grids, etc. In the meantime, control module 614 can control the image sensors (more precisely, the shutters of the cameras) such that the image sensors record images synchronously with the projected patterns. At least one image is recorded for each projected pattern.
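The sketch below generates two of the pattern types mentioned above (parallel lines and dot arrays) as frames that a control module could hand to a projector; the resolution ordering, stripe periods, and dot pitch are assumptions, and a grid can be formed by combining the two stripe orientations.

```python
import numpy as np

PROJECTOR_RESOLUTION = (1140, 912)  # assumed (height, width) for one listed projector

def stripe_pattern(period_px, vertical=True, shape=PROJECTOR_RESOLUTION):
    """Binary parallel-line pattern; vertical=True makes stripes vary along the width."""
    h, w = shape
    ramp = np.arange(w if vertical else h)
    line = ((ramp // max(period_px // 2, 1)) % 2).astype(np.uint8) * 255
    return np.tile(line, (h, 1)) if vertical else np.tile(line[:, None], (1, w))

def dot_pattern(pitch_px, shape=PROJECTOR_RESOLUTION):
    """Dot-array pattern with one bright pixel per pitch x pitch cell."""
    frame = np.zeros(shape, dtype=np.uint8)
    frame[::pitch_px, ::pitch_px] = 255
    return frame

# Example: a coarse-to-fine sequence of stripe frames for time multiplexing,
# to be projected at the configured frame rate (e.g., 60 frames per second).
sequence = [stripe_pattern(p) for p in (256, 128, 64, 32, 16, 8)]
sequence.append(dot_pattern(16))
print(len(sequence), sequence[0].shape)
```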
3D-point-cloud-generation module 616 can generate a 3D point cloud based on the captured images. Depending on the resolution of the captured images, 3D-point-cloud-generation module 616 can generate a low- or high-resolution 3D point cloud. For example, 3D-point-cloud-generation module 616 can generate a 3D point cloud based on a single image; such a point cloud will have low resolution and be less accurate. On the other hand, 3D-point-cloud-generation module 616 can also generate a high-resolution 3D point cloud based on a sequence of time-multiplexed images. In some embodiments, to increase accuracy, the 3D point cloud generated using structured light can be fitted to a pre-defined CAD model of the component in order to achieve full-scale dimensions.
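One way to fit a measured point cloud to a CAD model is an iterative-closest-point style alignment; the NumPy/SciPy sketch below is a minimal illustration under the assumption that the CAD model is available as a sampled point set, and it is not the specific fitting method of the disclosure.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping src -> dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def fit_cloud_to_cad(measured_cloud, cad_points, iterations=20):
    """Iteratively align a measured 3D point cloud to points sampled from a CAD model."""
    tree = cKDTree(cad_points)
    cloud = measured_cloud.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(cloud)                # nearest CAD point for each measured point
        R, t = best_rigid_transform(cloud, cad_points[idx])
        cloud = cloud @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total                       # pose of the measured part in CAD coordinates

# Illustrative use with a synthetic part (CAD points and a known offset).
rng = np.random.default_rng(2)
cad = rng.normal(size=(500, 3))
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
measured = cad @ R_true.T + np.array([0.05, -0.02, 0.01])
R_est, t_est = fit_cloud_to_cad(measured, cad)
print("mean residual:", np.linalg.norm(measured @ R_est.T + t_est - cad, axis=1).mean())
```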
As discussed before, another challenge facing the implementation of robots in the assembly line of consumer electronics is the short product life cycle of the consumer electronics. Typical consumer electronics products (e.g., smartphones, tablet computers, digital cameras, etc.) can have a relatively short life cycle, such as less than a year, and the manufacturers of consumer electronics are constantly upgrading their products. Each product upgrade can require the upgrade of the assembly line. This also means that the assembly line workers (humans or robots) need to learn new skills. Although training a human to recognize a new component may not be difficult, training a robot to do the same task may be a challenge. In some embodiments, machine learning techniques can be used to train the robot, especially the visioning system of the robot, to recognize various components. More specifically, a convolutional neural network (CNN) can be trained to recognize components as well as their 3D poses.
To increase the training efficiency, in some embodiments, instead of using images of real-life components, CAD-generated images can be used to train the CNN. More specifically, the CAD system can generate images of a particular component (e.g., a connector with cable) at various positions and orientations under various lighting conditions. For example, various light sources (e.g., a point source, diffuse light, or parallel light) can be simulated in CAD. Based on the CAD models of the component and the light sources, one can generate realistic representations of how an object can appear to a vision camera in all combinations of pose, location, and background. These CAD-generated images can become training input data for robust training of the CNN. In some embodiments, the CAD-generated images are 2D images. However, perspective projection and image scale may help with estimation of the first-order depth value. In alternative embodiments, the system can also train the CNN using structured-light-based 3D images. More specifically, when generating the images, the CAD system can use structured light as the light source to obtain 3D data (e.g., depth, contour, etc.) associated with a component. The 3D data can be used as additional input for the training of the CNN, complementary to the 2D shading data generated by the CAD system.
In addition to CAD-generated images, in some embodiments, the training images can also be obtained by performing 3D scanning on real life components. More specifically, the 3D scan can use structured light as a light source.
In some embodiments, thousands, or hundreds of thousands, of images can be generated with labels that indicate the component's position and orientation. These labeled images can then be used to train a CNN to generate a transformation from images to positions and orientations (poses). The transform recipe obtained through machine learning can be loaded onto a processing module (e.g., an image processor) of the robot. During operation, a camera installed on the robot can capture one or more images of a component and send the captured images to the image processor, which can then generate the pose of the component based on the transform recipe. Using CAD-generated images for training can reduce the amount of manual labor needed to acquire and label images, thus increasing training speed. Moreover, the training can be done offline, thus reducing the downtime of robots.
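The sketch below illustrates, in PyTorch, what learning such a transform recipe could look like: a small CNN regresses a six-parameter pose from an image. The network architecture, pose parameterization, and the random tensors standing in for CAD-generated images and labels are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class PoseRegressionCNN(nn.Module):
    """Small CNN mapping a grayscale image to a 6-DOF pose estimate
    (x, y, z translation plus three orientation angles); the architecture
    and pose parameterization are illustrative, not the disclosed design."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PoseRegressionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholders for CAD-generated training images and their pose labels.
images = torch.randn(64, 1, 128, 128)   # stand-in for rendered images
poses = torch.randn(64, 6)              # stand-in for (x, y, z, rx, ry, rz) labels

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), poses)
    loss.backward()
    optimizer.step()

# The trained weights act as the "transform recipe" loaded onto the robot's
# image processor; at run time, model(image) returns the estimated pose.
```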
In some embodiments, the robot system can further implement a calibration process to further improve the detection accuracy and to correct for the variations between a CAD-generated image and an image captured by the cameras. The calibration process can be performed using a reference component. More specifically, during operation, the robot can generate a transform recipe for the reference component based on CAD-generated training images of that component. Moreover, the robot's camera system can obtain reference images of the reference component in various known positions and poses. The robot system can then use the transform recipe and a reference image to compute the position and pose of the reference component, and compare the computed position and pose to the known position and pose. The difference can then serve as a correction factor for the transformation result of a real component. For example, the difference can be used to modify the transform recipe.
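A minimal sketch of the correction step follows, assuming poses are expressed as 4×4 homogeneous transforms and, for brevity, only a single in-plane rotation; the numeric values are invented for illustration.

```python
import numpy as np

def pose_to_matrix(x, y, z, yaw):
    """Simplified pose: translation plus a single rotation about Z
    (a stand-in for a full 6-DOF pose)."""
    T = np.eye(4)
    c, s = np.cos(yaw), np.sin(yaw)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, z]
    return T

# Known pose of the reference component and the pose the vision system computed.
T_known = pose_to_matrix(100.0, 50.0, 0.0, np.deg2rad(30.0))
T_computed = pose_to_matrix(100.8, 49.5, 0.2, np.deg2rad(31.1))

# Correction factor: the transform that maps computed poses onto true poses.
T_correction = T_known @ np.linalg.inv(T_computed)

def correct(T_measured):
    """Apply the calibration correction to a pose measured for a real component."""
    return T_correction @ T_measured

# Any later measurement of a real component is corrected the same way.
T_real_measured = pose_to_matrix(210.3, 75.1, 0.1, np.deg2rad(12.4))
print(np.round(correct(T_real_measured), 3))
```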
Moreover, the reference images can be used to correct distortion of the camera system. More specifically, a specially designed reference object (e.g., a grid) can be used for calibration purposes. Images of the reference object taken by the camera system may include lens distortion. By comparing the grid pitches in the images with the known pitches, one can infer the amount of distortion caused by the camera system and generate corrected images that account for the camera distortion.
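Below is a hedged sketch of this grid-based distortion correction using OpenCV's standard chessboard-calibration routines as a stand-in for the specially designed reference grid; the grid size and file names are assumptions.

```python
import glob
import cv2
import numpy as np

# Assumed inner-corner count of the reference grid and assumed image file pattern.
GRID_SIZE = (9, 6)

# Ideal (undistorted) grid coordinates on the z = 0 plane, in grid units.
objp = np.zeros((GRID_SIZE[0] * GRID_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:GRID_SIZE[0], 0:GRID_SIZE[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("reference_grid_*.png"):   # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, GRID_SIZE, None)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

if obj_points:
    # Compare detected grid corners against the known grid to infer distortion.
    ret, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    # Undistort later images of real components with the recovered model.
    img = cv2.imread("component_view.png")        # hypothetical component image
    if img is not None:
        corrected = cv2.undistort(img, camera_matrix, dist_coeffs)
```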
In some embodiments, there are two types of CNNs: one for classifying components and one for classifying poses. Note that the pose-classifying CNNs can be component-specific. More specifically, a two-step approach can be used to identify a component and its pose. In the first step, based on wide-angle images of the work surface and a CNN trained for classifying components, the vision system of the robot can identify and locate a desired component on the work surface. Once the component is identified and located, a pose-classifying CNN that is specific to the identified component can be used to recognize the pose and location with higher resolution and confidence. Note that inputs to the pose-classifying CNN can include high-resolution 2D or 3D images. This two-step approach requires less complex modeling, thus resulting in fast and efficient training.
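The two-step classification can be sketched as follows in PyTorch; the component list, pose discretization, and untrained placeholder networks are assumptions used only to show the control flow.

```python
import torch
import torch.nn as nn

COMPONENT_CLASSES = ["cable_connector", "camera_module", "screw"]  # assumed classes
POSE_BINS = 24  # assumed number of discretized pose classes per component

def tiny_classifier(num_classes):
    """Placeholder CNN classifier; the real networks would be trained models."""
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, num_classes))

component_cnn = tiny_classifier(len(COMPONENT_CLASSES))
# One pose-classifying CNN per component type (component-specific models).
pose_cnns = {name: tiny_classifier(POSE_BINS) for name in COMPONENT_CLASSES}

def identify_component_and_pose(wide_angle_crop, close_up_image):
    """Step 1: classify the component from the wide-angle view.
       Step 2: classify its pose with the component-specific network."""
    with torch.no_grad():
        component_idx = component_cnn(wide_angle_crop).argmax(dim=1).item()
        component = COMPONENT_CLASSES[component_idx]
        pose_bin = pose_cnns[component](close_up_image).argmax(dim=1).item()
    return component, pose_bin

# Illustrative call with random stand-ins for the two camera views.
component, pose_bin = identify_component_and_pose(
    torch.randn(1, 1, 64, 64), torch.randn(1, 1, 128, 128))
print(component, pose_bin)
```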
In alternative embodiments, instead of relying on a transform recipe to recognize the pose of a component, the system can generate a template image for the component. The template image describes the position and pose of the component when it is ready to be picked up by the robot gripper using a pre-defined trajectory and stroke. During operation, after identifying and locating a component, the gripper, along with a camera, can be brought toward the component. During this process, the camera continues to capture images at a predetermined frequency (e.g., 24 frames per second). The image processor of the robot can calculate, for each captured image, a confidence factor by comparing the captured image with the template image. Higher similarity results in a larger confidence factor. The variation of the calculated confidence factors can be used to calculate the movement of the robot. Once the confidence factor reaches a threshold value (i.e., once the current location and pose of the gripper and camera match their presumed location and pose), the robot can move toward the component using the pre-defined trajectory and stroke. In other words, the robot plans in advance a trajectory and stroke from a particular location to pick up a component and computes the expected location and pose of the component as seen by the camera, i.e., it generates the template image. The robot can then adjust the position and pose of the gripper and camera until an image captured by the camera matches the template image, indicating that the gripper has reached the planned location. In further embodiments, subsequent to acquiring the pose and location and computing the motion plan, the gripper can move toward the component at the optimized attack angle. The camera can capture an image at this point. The captured image can be compared with the template image to allow for fine-tuning of the gripper's approach. Maximizing the template matching can ensure high confidence in the gripper's approach.
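One simple way to compute the described confidence factor is normalized cross-correlation between the current frame and the template, as in the OpenCV sketch below; the similarity measure and threshold are assumptions rather than the disclosure's specific choice.

```python
import cv2
import numpy as np

CONFIDENCE_THRESHOLD = 0.92  # assumed threshold for "gripper has reached the planned pose"

def confidence_factor(captured, template):
    """Similarity between the current camera frame and the template image,
    computed here with normalized cross-correlation (one possible choice)."""
    result = cv2.matchTemplate(captured, template, cv2.TM_CCOEFF_NORMED)
    return float(cv2.minMaxLoc(result)[1])

# Illustrative use with synthetic frames; in practice the frames come from the
# gripper-mounted camera at, e.g., 24 frames per second.
template = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
frame = template.copy()  # pretend the gripper is exactly at the planned pose

if confidence_factor(frame, template) >= CONFIDENCE_THRESHOLD:
    # Execute the pre-defined trajectory and stroke to pick up the component.
    print("template matched; executing planned pick-up motion")
```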
In a typical assembly line, even a simple assembly operation can involve multiple (e.g., two) components, such as engaging two or more components with one another. For example, an assembly operation of a consumer electronics product may involve mating two electrical connectors, such as inserting a male connector into the corresponding female connector. A connector can be attached to a cable, causing the connector to be suspended in midair. As discussed previously, this presents a challenge for a robot trying to accurately locate the connector. On the other hand, structured-light-based surface imaging and CNN-based machine learning allow the intelligent robot to obtain accurate depth information, thus allowing the intelligent robot to perform such difficult tasks.
Prior to operating on the assembly line, the robot needs to acquire a knowledge base 802, which can include various modules, such as 2D-component module 804, 3D-point-cloud module 806, and assembly-strategy module 808. More specifically, 2D-component module 804 can store 2D models of various components that are used in the assembly line. In some embodiments, 2D-component module 804 can include a CNN previously trained by an offline training process 810. In further embodiments, training the CNN can involve generating 2D images of the various components using their CAD models. 3D-point-cloud module 806 can include 3D models of the various components in the form of a point cloud. Assembly-strategy module 808 can include a number of pre-determined assembly strategies based on the locations and poses of the various components. More specifically, the assembly strategy can include calculated trajectories and angles of the gripper of the robot. In some embodiments, the robot can also have the capability of imitation learning, where a single assembly task or a sequence of assembly tasks can be modeled and learned by the robot. In a further embodiment, the robot can include an AI-enabled HMI that allows the robot to communicate with a human worker. For example, the human worker can input verbal or gesture-based commands to control movements of the robot. Moreover, a human worker can also demonstrate to the robot how to perform a certain task.
During operation, the robot can first locate the base plate and one or more target components based on 2D images captured by its camera system and the 2D model maintained by 2D-component module 804 (operation 812). Note that the robot can further determine the orientation (e.g., angle tilt or position shift) of the base plate based on a number of identified key components or points on the base plate.
Once the 3D pose of the component is determined, the robot can obtain an assembly strategy (which can include motion plans) from assembly-strategy module 808 and perform the assembly operation (operation 818). For example, the robot can pick up the cable connector.
In addition to detecting the 3D pose of a component prior to performing the assembly task, the robot can also use real-time vision coordination to facilitate the assembly task. In some embodiments, the robot can use a reinforcement learning technique to learn how to optimally assemble mating components (e.g., cable connectors). For example, one can use expert guidance to initially teach the robot how to assemble with the best known procedures, and the robot can then further refine its assembly technique by exploring variations, guided by an action-and-reward feedback loop. More specifically, real-time machine visioning can be used to guide the progress of the assembly, where real-time video or images can be used as input to the action-and-reward feedback loop.
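The sketch below illustrates the expert-guided refinement idea with a toy hill-climbing loop around an expert-demonstrated parameter vector; the parameters, the stub reward (standing in for the vision-derived reward signal), and the search strategy are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Expert demonstration: an initial insertion parameter vector (e.g., approach
# angle, offsets, insertion force) -- values here are illustrative only.
expert_params = np.array([15.0, 0.5, -0.3, 2.0])

def assembly_reward(params):
    """Stub reward standing in for the real-time vision feedback that scores
    how well the mating components engaged; here it simply prefers a
    hypothetical optimum near the expert demonstration."""
    hidden_optimum = expert_params + np.array([1.0, -0.1, 0.2, 0.1])
    return -np.sum((params - hidden_optimum) ** 2)

def refine_around_expert(params, iterations=200, step=0.1):
    """Explore small variations around the expert's solution and keep
    whichever variation the (vision-derived) reward scores higher."""
    best, best_reward = params.copy(), assembly_reward(params)
    for _ in range(iterations):
        candidate = best + rng.normal(scale=step, size=best.shape)
        r = assembly_reward(candidate)
        if r > best_reward:
            best, best_reward = candidate, r
    return best

refined = refine_around_expert(expert_params)
print("expert:", expert_params, "refined:", np.round(refined, 2))
```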
In addition to guiding the assembly operation, in some embodiments, the real-time vision can also be used for quality control. More specifically, each captured image and generated 3D point cloud can be used to determine the acceptance of the components and the assembly result.
During operation, the robot can first find the exact locations of the first and second components (operation 1102). The robot can then approach the first component to acquire its 3D pose (operation 1104). Note that acquiring the 3D pose of the first component can involve using a structured-light-based 3D surface-imaging system to generate a 3D point cloud.
In addition to engaging components, the robot can also be used for bin-picking tasks. More specifically, the robot can be trained to search for, identify, and track components when multiple components are stacked. The robot can also obtain a 3D point cloud of each individual component and apply a CNN previously trained using CAD-generated images to detect the 3D pose of each individual component. The robot can then pick up each individual component based on the detected 3D pose.
In general, embodiments of the present invention can provide an intelligent robotic system that can be used for the light-load, precise, and complex assembly operations required for manufacturing consumer electronics. The intelligent robotic system combines structured-light-based 3D surface-imaging technology and CNN-based machine-learning technology to achieve accurate 3D pose detection of components. Moreover, by training the CNN using CAD-generated images, the robotic system can experience minimal downtime when the assembly task is upgraded. The intelligent robotic system also includes a multi-axis arm having six degrees of freedom.
The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium.
Furthermore, the methods and processes described above can be included in hardware modules or apparatus. The hardware modules or apparatus can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), dedicated or shared processors that execute a particular software module or a piece of code at a particular time, and other programmable-logic devices now known or later developed. When the hardware modules or apparatus are activated, they perform the methods and processes included within them.
The foregoing descriptions of embodiments of the present invention have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.
This claims the benefit of U.S. Provisional Patent Application No. 62/539,926, Attorney Docket No. ENOV17-1001PSP, entitled “INTELLIGENT ROBOTS,” filed Aug. 1, 2017, the disclosure of which is incorporated herein by reference in its entirety for all purposes.